IMEC challenge 1: Triple-B


The core objective of the TripleB project is to develop a superior, human-relevant in vitro organ-on-chip model of the blood-brain barrier (BBB) to de-risk and accelerate drug development for neurological diseases. Developing effective therapies for neurodegenerative diseases is one of modern medicine’s greatest challenges, largely because of the physiological hurdle posed by the BBB. Traditional preclinical models, particularly animal models, are costly, ethically complex, and have consistently proven to be poor predictors of human response, leading to high failure rates in clinical trials.

The TripleB project directly tackles this issue by combining two key innovations:

  1. Advanced Organ-on-Chip Hardware: A next-generation microphysiological system (MPS) platform designed for improved manufacturability, reproducibility, and integration of online sensors.
  2. A Foundational AI Stack: A suite of AI and data science tools designed to extract actionable insights from the high-volume data produced by the chip, turning raw data into predictive knowledge.

The strategic goal of this PoC project is to create a powerful synergy between the hardware and the AI, resulting in a predictive platform that significantly improves translation from the lab to human clinical outcomes.

Updates

Contributions in 2024 have focused on building the intelligent software layer of the TripleB platform.

Data Centralization, Cleaning, and Exploration

A robust and scalable data pipeline was engineered to serve as the single source of truth for all experimental data. This was a critical first step to manage the variety and volume of data generated. The process involved:

  • Centralizing data from disparate sources, including instrument readouts and image files.
  • Cleaning and pre-processing the data to handle noise, outliers, and experimental variability.
  • Structuring and exploring data from various chip design experiments to identify initial trends and correlations.
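The cleaning step above can be sketched as a small pandas routine. This is a minimal illustration, not the project's actual pipeline: the column name, the z-score outlier rule, and the source names are all assumptions.

```python
import pandas as pd

def centralize(frames, value_col, z_thresh=3.0):
    """Merge per-source readouts into one table, drop missing
    values, and filter gross outliers by z-score."""
    # Tag each record with its source so provenance survives the merge
    tagged = [df.assign(source=name) for name, df in frames.items()]
    data = pd.concat(tagged, ignore_index=True).dropna(subset=[value_col])
    # Remove readouts that deviate wildly from the pooled distribution
    z = (data[value_col] - data[value_col].mean()) / data[value_col].std()
    return data[z.abs() < z_thresh].reset_index(drop=True)
```

In practice each frame would come from an instrument export or image-analysis output; keeping a `source` column makes experiments traceable after centralization.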

Modeling of Chip Attributes vs. Target Variables

We developed a suite of machine learning models to predict BBB performance from chip design parameters. Our predictive modeling approach combined gradient-boosted decision trees (XGBoost) for capturing complex non-linear relationships with regularized regression models (e.g., LASSO) for feature selection and interpretability. These models provide a quantitative understanding of how physical attributes of the chip influence biological function. The models focused on predicting three critical target variables:

  • Barrier Integrity (TEER): Modeling transendothelial electrical resistance as a function of chip geometry and cell culture conditions.
  • Compound Permeability: Quantifying the passage of key reference molecules (transferrin and dextran) across the cellular barrier.
  • Cell Coverage & Morphology: Using automated image analysis to extract quantitative metrics of cell layer confluence and health.
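The two-model pattern described above can be sketched on synthetic data. Everything here is illustrative: the feature names, the TEER response surface, and the use of scikit-learn's `GradientBoostingRegressor` as a dependency-light stand-in for XGBoost.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n = 200
# Hypothetical chip parameters: channel width, pore size, flow rate, seeding density
X = rng.uniform(0.0, 1.0, size=(n, 4))
# Synthetic TEER readout: depends (non-linearly) on only two of the four parameters
y = 300 * X[:, 0] + 150 * X[:, 2] ** 2 + rng.normal(0, 10, n)

# Gradient-boosted trees capture the non-linear response
gbt = GradientBoostingRegressor(random_state=0).fit(X, y)
# LASSO zeroes out uninformative parameters, aiding interpretability
lasso = LassoCV(cv=5).fit(X, y)

print("boosted-tree R^2:", round(gbt.score(X, y), 3))
print("LASSO-selected features:", np.flatnonzero(lasso.coef_).tolist())
```

The boosted model gives accuracy on the non-linear terms; the sparse linear model shows which parameters matter at all, which is the interpretability trade-off the text describes.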

To complement our predictive models, we applied explainable AI (XAI) techniques to ‘look inside the black box’ and gain deeper insights. Specifically, we employed methods such as SHAP (SHapley Additive exPlanations) to quantify the impact of each feature on model predictions and generated partial dependence plots to visualize the marginal effect of individual chip parameters. This allowed us to identify the most influential chip design parameters driving successful outcomes, providing a clear, data-driven rationale for future design improvements.

These models are the first step toward creating a true in silico twin of the chip, enabling us to predict performance before committing to costly and time-consuming fabrication and experiments.

Data-Driven Optimal Experiment Design Framework 

Moving beyond a traditional trial-and-error approach, we introduced an innovative framework for data-driven optimal experiment design. This framework leverages the predictive models described above to create a powerful feedback loop:

  1. The AI models analyze the results of all past experiments.
  2. Based on this analysis, the system identifies the areas of greatest uncertainty or potential improvement. 
  3. The framework then proposes the next set of experimental parameters that will be most informative, maximizing the knowledge gained from each subsequent experiment.

This approach dramatically accelerates our progress toward an optimized BBB model by ensuring our experimental resources are always focused on the most critical questions.
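One common way to realize such a loop is uncertainty sampling: fit a surrogate model to past runs and propose the candidate design where the model is least certain. The sketch below assumes that strategy, using a Gaussian process surrogate; the one-dimensional design space and the experiment function are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def propose_next(X_seen, y_seen, candidates):
    """Fit a GP surrogate on past experiments and return the candidate
    design with the largest posterior predictive uncertainty."""
    gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-3),
                                  normalize_y=True).fit(X_seen, y_seen)
    _, std = gp.predict(candidates, return_std=True)
    return candidates[int(np.argmax(std))], std

rng = np.random.default_rng(2)
# Past experiments cluster at low parameter values...
X_seen = rng.uniform(0.0, 0.4, size=(20, 1))
y_seen = np.sin(6 * X_seen[:, 0]) + rng.normal(0, 0.02, 20)
# ...so the most informative next design lies in the unexplored region.
candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
best, std = propose_next(X_seen, y_seen, candidates)
print("proposed next design:", best[0])
```

Because the posterior standard deviation grows with distance from observed data, the loop naturally steers each new experiment toward the least-explored, most informative region of the design space.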
