We will present a recent research project in which we aimed to identify novel targets for schizophrenia. Current treatments are based on the assumption of excessive dopaminergic neurotransmission. This hypothesis is now 70 years old and has for many years helped us treat positive symptoms of schizophrenia, such as hallucinations and delusions. Schizophrenia also has negative symptoms, which include lack of attention and motivation, as well as cognitive problems. The negative symptoms are not improved by current medication. A genetically modified mouse was developed as a model for schizophrenia. Small samples of cortical neurons were grown on plates, allowing easy measurement of electric signalling in these networks using a calcium imaging assay. Two questions arose from analysing data from these networks: is neuronal communication altered in the mouse model, and, if so, can the alteration be reversed? The answer was not obvious when looking at the data. However, a machine learning approach combined with traditional statistical analysis helped us see through the data.
Current personalized cancer treatment is based on biomarkers that allow assigning each patient to a subtype of the disease for which a treatment has been established. Such stratified patient treatment represents a first important step away from one-size-fits-all treatment. However, this disease classification falls short in the granularity of the personalization: it assigns patients to one of a few classes, within which heterogeneity in response to therapy is usually still very large. In addition, the combinatorial explosion of possible combinations of cancer drugs, doses and regimens makes exhaustive clinical testing impossible. We propose a new strategy for personalized cancer therapy, based on producing a copy of the patient's tumour in a computer and exposing this synthetic copy to multiple potential therapies. We show how mechanistic mathematical modelling and simulation can be used to predict the effect of combination therapies in a solid cancer. The model accounts for complex interactions at the cellular and molecular level and is able to bridge multiple spatial and temporal scales. It combines ordinary and partial differential equations, cellular automata and stochastic models. The model is personalized by estimating multiple parameters from routinely acquired individual patient data, including histopathology, imaging and molecular profiling. The results show that mathematical models can be personalized to predict the effect of therapies in each specific patient. The model is tested with data from five breast tumours collected in a recent neoadjuvant phase II clinical trial. It correctly predicted the outcome after 12 weeks of treatment and showed by simulation how alternative treatment protocols would have produced different, and sometimes better, outcomes.
This study is possibly the first towards personalized computer simulation of breast cancer treatment that incorporates relevant biologically specific mechanisms and multi-type individual patient data in a mechanistic and multiscale manner: a first step towards virtual treatment comparison.
Xiaoran Lai, Oliver Geier, Thomas Fleischer, Øystein Garred, Elin Borgen, Simon Funke, Surendra Kumar, Marie Rognes, Therese Seierstad, Anne-Lise Børresen-Dale, Vessela Kristensen, Olav Engebråten, Alvaro Köhn-Luque, and Arnoldo Frigessi. Towards personalized computer simulation of breast cancer treatment: a multi-scale pharmacokinetic and pharmacodynamic model informed by multi-type patient data. Cancer Research, 22 May 2019.
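The published model couples ODEs, PDEs, cellular automata and stochastic components across scales. As a minimal, purely illustrative sketch of the ODE ingredient alone, consider logistic tumour growth with a simple pharmacodynamic kill term; every function name and parameter value below is a hypothetical stand-in, not taken from the paper:

```python
def simulate_tumour(days=84, dt=0.1, growth_rate=0.05, capacity=1.0,
                    kill_rate=0.08, dose=lambda t: 0.0, v0=0.1):
    """Forward-Euler integration of dv/dt = r*v*(1 - v/K) - k*dose(t)*v.

    v is the (normalized) tumour burden, dose(t) the drug concentration
    schedule. All values are illustrative; the published model is far
    richer (PDEs for drug transport, cellular automata for cell states).
    """
    v = v0
    for step in range(int(days / dt)):
        t = step * dt
        dv = growth_rate * v * (1.0 - v / capacity) - kill_rate * dose(t) * v
        v += dt * dv
    return v

untreated = simulate_tumour()                   # no drug: grows towards capacity
treated = simulate_tumour(dose=lambda t: 1.0)   # constant unit dose: shrinks
```

Swapping in a different `dose` schedule is the toy analogue of the paper's in-silico comparison of alternative treatment protocols.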
Real-world data are increasingly used by the pharmaceutical industry and regulatory authorities to supplement randomized clinical trials. Where the latter are usually conducted in controlled settings, with inclusion and exclusion criteria that ensure a homogeneous trial population, real-world data are based on a broader population of patients in routine care, who better represent everyday life with the disease or medicine under study. Real-world data are typically obtained from anonymized electronic medical records, insurance claims, or non-interventional studies conducted in routine clinical practice. These data may be abundant in terms of the number of patients and follow-up time, but they are often also of lower quality than clinical trial data, and the analyses may be of a more exploratory nature. Over the last couple of years, Novo Nordisk has increased its focus on using modern data science methods to gain knowledge from this type of data. In the talk I will present how we are building a data science community in Novo Nordisk and give examples of methods and applications.
Latent variable models (LVMs) estimate (often) low-dimensional representations of data, which can aid both analysis and interpretation of data. When the relation between latent representations and the data is affine, we recover classic models such as PCA, while in the more general nonlinear case LVMs correspond to autoencoders and related models. In this talk, I will discuss the geometry of the recovered latent space and argue that it should be viewed as being equipped with a random Riemannian metric. I'll discuss limitations of classic Riemannian geometry for coping with this scenario, and present both theoretical and practical tools for data analysis under random Riemannian metrics.
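A concrete way to see the geometric point above: if f is the decoder mapping latent codes to data space, the pullback metric at a latent point z is G(z) = J(z)ᵀJ(z), where J is the Jacobian of f. For an affine decoder (the PCA case) this metric is constant, while a nonlinear decoder yields a z-dependent metric. The following toy sketch (the decoders are arbitrary illustrations, not from the talk) estimates G numerically:

```python
import numpy as np

def pullback_metric(decoder, z, eps=1e-6):
    """Numerically estimate G(z) = J(z)^T J(z) for a decoder f: R^d -> R^D."""
    z = np.asarray(z, dtype=float)
    f0 = decoder(z)
    J = np.empty((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (decoder(z + dz) - f0) / eps   # forward-difference column
    return J.T @ J

affine = lambda z: np.array([2.0 * z[0], z[0] + 1.0])            # PCA-like decoder
nonlin = lambda z: np.array([np.sin(z[0]), np.cos(2.0 * z[0])])  # nonlinear decoder

G_affine_0 = pullback_metric(affine, [0.0])
G_affine_1 = pullback_metric(affine, [1.0])   # equal to G_affine_0: flat metric
G_nonlin_0 = pullback_metric(nonlin, [0.0])
G_nonlin_1 = pullback_metric(nonlin, [1.0])   # differs: metric varies with z
```

The talk's further step, treating the decoder (and hence the metric) as random, is not captured by this deterministic sketch.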
This talk is divided into two parts. In the first part I will give a short presentation of the new data science programme at Aarhus University: organisational structure, courses, people. The second part of the talk is on one of my own research interests: classification for high-dimensional data. The focus will be on trying to understand the limits where classification is possible when the underlying signal is sparse. I will mainly consider Fisher's linear discriminant and the independence classifier.
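To make the two classifiers concrete: both use the linear rule "assign class 1 if (x − (m₀+m₁)/2)ᵀ Σ̂⁻¹(m₁ − m₀) > 0", but Fisher's discriminant plugs in the pooled covariance estimate while the independence classifier keeps only its diagonal. The simulation below (an illustrative setup of my own, not the talk's actual experiments) uses a sparse mean shift in high dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse signal: only the first 5 of 50 coordinates separate the classes.
p, n = 50, 200
mu = np.zeros(p)
mu[:5] = 1.0
cov = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)      # equicorrelated noise

X0 = rng.multivariate_normal(np.zeros(p), cov, n)  # class 0 training sample
X1 = rng.multivariate_normal(mu, cov, n)           # class 1 training sample

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
pooled = 0.5 * (np.cov(X0.T) + np.cov(X1.T))

def classify(x, sigma):
    """Linear rule with covariance estimate `sigma`; returns 0/1 labels."""
    w = np.linalg.solve(sigma, m1 - m0)
    return ((x - 0.5 * (m0 + m1)) @ w > 0).astype(int)

Xtest = rng.multivariate_normal(mu, cov, 500)      # test points, all class 1
acc_fisher = classify(Xtest, pooled).mean()                  # Fisher's LDA
acc_indep = classify(Xtest, np.diag(np.diag(pooled))).mean() # independence rule
```

Under correlated noise, Fisher's rule can exploit the off-diagonal structure, whereas the independence classifier trades that information for a much easier estimation problem when p is large relative to n.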
Patients with Alzheimer’s disease are typically grouped into a set of discrete classes such as cognitively normal, subjective cognitive decline, mild cognitive impairment, and dementia. Alzheimer’s disease is, however, a progressive neurodegenerative disease, so grouping patients by cognitive ability is an artificial dichotomization of the continuous disease course. Within each class, patients’ decline in cognitive ability is often compared on equal terms, without considering the potentially large variation in where patients sit on the continuous disease-progression timeline. The aim of the talk is to describe a model framework for estimating a common continuous biological time scale for Alzheimer’s disease based on nonlinear mixed-effects modelling. In the presented model, subject-specific random time shifts are included in addition to random effects that model systematic within-patient effects. The predicted random time shifts describe the location of a patient on the biological disease scale, introducing a continuous alignment of subjects. This continuous alignment results in a less biased comparison of cognitive decline across patients and in the identification of factors associated with faster disease progression. The framework is applied to the ADNI database, where it is shown how the common biological time scale can be used to compare the sensitivity of items across three cognitive measures: MMSE, CDR Sum of Boxes, and ADAS-cog.
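A toy version of the time-shift idea (purely illustrative; the talk's framework is a full nonlinear mixed-effects model estimated jointly across subjects): observed scores follow a common decline curve f evaluated at subject-specifically shifted times t + δᵢ, and δᵢ can then be recovered from the data. The curve and all parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def decline(t):
    """Common disease-progression curve (an MMSE-like logistic decline, hypothetical)."""
    return 30.0 / (1.0 + np.exp((t - 5.0) / 2.0))

# Simulate one subject observed yearly over 4 visits, already 2 years
# into the biological disease course at study entry.
true_shift = 2.0
t_obs = np.array([0.0, 1.0, 2.0, 3.0])                    # study time (years)
y_obs = decline(t_obs + true_shift) + rng.normal(0.0, 0.3, t_obs.size)

# Recover the subject's time shift by least squares over a grid of candidates.
grid = np.linspace(-5.0, 10.0, 1501)
sse = [np.sum((y_obs - decline(t_obs + d)) ** 2) for d in grid]
est_shift = grid[np.argmin(sse)]
```

Aligning subjects by their estimated shifts is what puts everyone on a common biological time scale, instead of comparing them at the same study time.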
In this talk I will focus on hands-on examples of data science in industry, ranging from issues with data quality and statistical versus business understanding, to how deep learning is enabling new ways of understanding drug absorption and making wind energy more economically sustainable. Lastly, I'll provide some insights into how much, and how little, AI is part of daily operations in an AI-first company like Wind Power Lab.
We have been talking about e-learning, digital learning, and distance learning for many years now, but we have not yet found a new learning paradigm. How can we explore new ways of learning (and teaching) statistics and mathematics? I will give some examples and discuss the future of digital learning technology.