First Confirmed Sessions
Stay tuned for the full agenda

Optimization of Immunotherapies via Machine Learning

Summary:

Using machine learning to find new molecules is an exciting but challenging task. In his presentation, Stefan will share his experience of designing immunotherapies with machine learning. He will review the latest work in the field and discuss the challenges of working with omics data. Deep learning can be used to understand the function of immune cells from large repositories of labeled data; however, the enormous variety of immune cell structures and functions makes this search problem extremely difficult to solve. Stefan will show how Remissio uses reinforcement learning to optimize proteins for desired properties.
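
As a rough illustration of the kind of search involved, the sketch below mutates a protein sequence toward a higher score from a toy reward function. It is a simple greedy stand-in for the reinforcement learning approach mentioned above, not Remissio's method, and both the starting sequence and the scoring rule are invented for illustration.

```python
# Minimal sketch: greedy mutation of a protein sequence toward a higher
# score from a hypothetical reward model (toy stand-in for a learned
# property predictor such as binding affinity).
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def reward(sequence: str) -> float:
    """Hypothetical scoring function; in practice this would be a learned model."""
    return sequence.count("K") - 0.5 * sequence.count("P")

def optimize(sequence: str, steps: int = 1000) -> str:
    """Greedy mutation loop: a simplified stand-in for RL-based sequence search."""
    best, best_score = sequence, reward(sequence)
    for _ in range(steps):
        pos = random.randrange(len(best))
        candidate = best[:pos] + random.choice(AMINO_ACIDS) + best[pos + 1:]
        score = reward(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

print(optimize("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```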

Improving Mental Health Treatment by Balancing Privacy, Interpretability and Predictive Power: a Distributed K-Nearest Neighbors Example

Summary:

Besides having good predictive power, data science applications in healthcare need to satisfy strict privacy regulations and should preferably be interpretable to both the healthcare professional and the patient. As satisfying all of these criteria simultaneously is not straightforward, successful data science applications in healthcare generally require going beyond the standard machine learning approach. Our case study provides a practical example of how to augment the standard machine learning approach in order to satisfy the different criteria relevant to clinical practice.

Five mental healthcare institutions in the Netherlands developed a tool to predict treatment outcomes for anxiety and depression, supporting healthcare professionals and patients in deciding whether to continue or adjust treatment. Distributed learning was applied to help satisfy GDPR requirements while preserving as much detail as possible in the available data. K-Nearest Neighbors (KNN) was chosen as a simple, intuitive, visualizable prediction algorithm to promote interpretability for healthcare professionals and patients, and was then adjusted to increase its sophistication and therefore its predictive power. In addition, several measures were taken to minimize the possibility of recognizing individual patients.
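
To make the distributed idea concrete, here is a minimal sketch of how KNN can be run across institutions without pooling raw records. It is a hypothetical illustration, not the consortium's actual protocol: each site computes its own nearest neighbors to a query profile and shares only distances and outcomes, which are then merged centrally.

```python
# Hypothetical federated KNN sketch: patient records stay local;
# only (distance, outcome) pairs of each site's nearest neighbors are shared.
import numpy as np

def local_neighbors(site_X, site_y, query, k=5):
    """Runs inside one institution; raw patient data never leave the site."""
    dists = np.linalg.norm(site_X - query, axis=1)
    idx = np.argsort(dists)[:k]
    return list(zip(dists[idx], site_y[idx]))  # (distance, outcome) pairs only

def federated_knn_predict(site_results, k=5):
    """Central step: merge per-site candidates, keep the global top-k,
    and average their outcomes as the predicted treatment outcome."""
    merged = sorted(pair for site in site_results for pair in site)[:k]
    return float(np.mean([outcome for _, outcome in merged]))

# Toy example with two 'institutions' and synthetic data
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(100, 4)), rng.integers(0, 2, 100)) for _ in range(2)]
query = rng.normal(size=4)
results = [local_neighbors(X, y, query) for X, y in sites]
print(federated_knn_predict(results))
```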

Unlocking the Potential of FAIR Data Using AI at Roche

Summary:

For life science companies, healthcare providers, patients and consumers, AI offers great potential to streamline processes and achieve better treatment results. On the one hand, findings from data generated in the real-world setting could take personalized medicine to a new level by individually tailoring diagnosis and treatment to the patient in terms of effectiveness and safety. On the other hand, linking clinical study data and real-world data with the enormous advances in biology and medicine is a prerequisite for more targeted research and more efficient development processes. In her talk, Dr. Anna Bauer-Mehren describes the role of real-world data and data science in pharmaceutical research and the resulting new opportunities for personalized medicine. In particular, she addresses the importance of high-quality data and Roche's efforts to make data FAIR, which in Roche's view is essential for the success of AI methods in R&D. Using several examples, she shows in which areas of pharmaceutical research AI is already being used successfully, but also discusses which areas still face major challenges. Her examples include the use of deep learning for process optimization in the development of therapeutic antibodies and for the automatic annotation of tumor biopsy images in digital pathology. She also discusses how AI is used to develop new digital or image-based biomarkers to differentiate between different tumor immunophenotypes.

Translational Data Science or “Digitalization with Laminated Pocket Cards”

Summary:

Particularly in the domain of clinical decision support, digitalization efforts in healthcare have lagged far behind expectations. To be effective, any decision support needs to be adapted to both the data structure of the decision problem and the decision ecology of the end-user. Translational Data Science (TDS) is a novel approach to clinical decision support development in which the latest insights of the decision and data sciences are combined to move quickly and efficiently "from data to decision" (D2D). Dr. Niklas Keller will introduce the concepts and key methods of TDS and present a use case on the development of a decision support tool for post-operative patient allocation. The new approach kept all of the complexity of the decision problem at the "back-end" while maintaining a high degree of simplicity at the "front-end". The resulting decision support tool has high predictive accuracy, respects the constraints of the end-user's decision ecology, is action-oriented, and can easily be integrated into various clinical settings as a laminated pocket card.
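
To give a feel for what such a "simple front-end" can look like in practice, here is a hypothetical pocket-card-style rule written as code. The variables and cut-offs are invented for this sketch and are not the tool presented in the talk; the point is only that the bedside rule stays simple while the modelling that justifies it happens offline.

```python
# Hypothetical pocket-card-style allocation rule (invented variables/thresholds):
# a few sequential yes/no questions, as they might appear on a laminated card.
def allocate(asa_class: int, age: int, emergency_surgery: bool) -> str:
    """Simple front-end rule; the complex modelling stays at the back-end."""
    if emergency_surgery:
        return "ICU"
    if asa_class >= 3:
        return "intermediate care"
    if age >= 75:
        return "intermediate care"
    return "normal ward"

print(allocate(asa_class=2, age=68, emergency_surgery=False))
```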

Product Demand Forecast for Off-Patent Drugs Using Machine Learning at PUREN Pharma

Summary:

The German generics industry faces a big challenge: it needs to deliver drugs to patients immediately, but due to high price pressure it has very long lead times from suppliers. An accurate long-range demand forecast is therefore essential. Fortunately, the data basis for the whole market is very good, which makes it possible to identify which factors drive demand. To address this, PUREN Pharma implemented a new demand forecast process more than a year ago, based on time series algorithms (used to determine the market size) and regression algorithms (used to determine PUREN Pharma's future market share). The process also includes a web front end for the planners responsible. Based on this new process, demand forecast accuracy improved significantly and error rates were reduced. Technologies used for this project were R for the time series models, Python for the regression models, Azure SQL for storage, and GAPTEQ Forms for the web front end.
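
The two-step structure of the forecast can be sketched in a few lines. The production setup described above uses R, Python, Azure SQL and GAPTEQ; the single-language sketch below only illustrates the idea with invented data: a time-series model forecasts total market volume, a regression model estimates the future market share, and expected demand is the product of the two.

```python
# Simplified sketch of the two-step demand forecast (all data invented):
# 1) time-series forecast of total market volume,
# 2) regression-based estimate of future market share,
# expected demand = forecast volume x predicted share.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from sklearn.linear_model import LinearRegression

# 1) Market size: monthly market volume (toy data), forecast 6 months ahead
market_volume = np.array([100, 102, 105, 103, 108, 111,
                          115, 113, 118, 121, 124, 127], dtype=float)
ts_model = ExponentialSmoothing(market_volume, trend="add").fit()
market_forecast = ts_model.forecast(6)

# 2) Market share: regress observed share on hypothetical drivers
#    (e.g. relative price position, tender win yes/no)
X = np.array([[0.95, 1], [0.97, 1], [1.02, 0], [0.99, 1], [1.05, 0], [0.96, 1]])
share = np.array([0.22, 0.23, 0.18, 0.21, 0.16, 0.22])
share_model = LinearRegression().fit(X, share)
future_share = share_model.predict(np.array([[0.98, 1]]))[0]

# Expected demand for the next 6 months
expected_demand = market_forecast * future_share
print(np.round(expected_demand, 1))
```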

Newsletter

Knowledge is everything!
Sign up for our newsletter to receive:

  • an extra 10% off your ticket!
  • insights, interviews, tips, news, and much more about Predictive Analytics World
  • price break reminders