My research focuses on the indirect relationship between the encoding of neural networks and their function. In particular, I am interested in how structural and self-organizing mechanisms shape network topology, and how topology in turn contributes to the function of a network.
I mostly use computational models of neural networks in my research, but I am also looking for description languages that allow the artificial development of neural networks in order to generate scalable and evolvable structures.
The hippocampus is well suited to these lines of research because the anatomy of its network is unique and well conserved across species. Its context-dependent functional role in spatial navigation and memory formation, however, points to an underlying universal function of high evolutionary importance. Identifying this function will not only let us understand how the hippocampus does what it does; it will also teach us how the function of neuronal networks in general emerges from structural encodings, self-organizing mechanisms, and external input.
I completed my diploma in computer science with an emphasis on artificial intelligence and linguistics at the University of Münster. During my PhD and my first postdoc, I investigated the influence of cognitive tasks on the Default Mode Network during subsequent rest. In my current position, I use computational methods to investigate the neural mechanisms underlying hippocampal functions.
A complete list of my publications can be found on my personal website.
Here is a summary of my more recent work:
In this paper, we investigate the functional influence of a few generic properties of the cortex-hippocampus loop. In particular, we focus on the high degree of connectivity between cortex and hippocampus, which leads to converging and diverging forward and backward projections, and on the heterogeneous synaptic transmission delays that result from the detached location of the hippocampus and its multiple loops. We found that in a model incorporating these properties, each cortical pattern evokes a unique spatio-temporal spiking pattern in hippocampal neurons. This hippocampal response facilitates a reliable disambiguation of learned associations and bridges time intervals larger than the time window of spike-timing-dependent plasticity in the cortex. Moreover, we found that repeated retrieval of a stored association compresses the interval between cue presentation and retrieval of the associated pattern from the cortex. We conclude that generic connectivity properties between cortex and hippocampus implement mechanisms for representing and consolidating temporal information. These functions are robust to evolutionary changes, consistent with the preserved function of the hippocampal loop across different species.
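The core idea that heterogeneous transmission delays turn cortical input patterns into distinct spatio-temporal hippocampal responses can be illustrated with a deliberately minimal toy sketch. This is not the model from the paper: the neuron counts, the delay range, and the first-spike readout are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cortex, n_hipp = 20, 30

# Heterogeneous axonal delays (ms) from every cortical to every
# hippocampal neuron -- a stand-in for the varying path lengths
# through the cortex-hippocampus loop (illustrative values only).
delays = rng.uniform(1.0, 15.0, size=(n_cortex, n_hipp))

def hippocampal_response(active_cortex):
    """First-spike time of each hippocampal neuron when the given
    cortical neurons fire synchronously at t = 0."""
    return delays[list(active_cortex)].min(axis=0)

# Two overlapping cortical patterns...
pattern_a = {0, 1, 2, 3, 4}
pattern_b = {3, 4, 5, 6, 7}

resp_a = hippocampal_response(pattern_a)
resp_b = hippocampal_response(pattern_b)

# ...evoke distinct spike-time vectors in the hippocampal
# population, so the temporal response alone disambiguates the
# two patterns despite their overlap.
assert not np.allclose(resp_a, resp_b)
```

Because the delay matrix differs per connection, even strongly overlapping input patterns produce separable temporal codes, which is the intuition behind the disambiguation result described above.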
PAM provides a foundation for studying generic and specific coding principles of neural networks. The idea is to create artificial neural networks whose connectivity properties and connection lengths result from reconstructed anatomical data. Using a 3D environment to model and describe real neural networks, we can describe complex relationships between layers of neurons. With PAM, we can account for connectivity properties that are influenced by global and local anatomical axes and for non-linear relationships between the distances of neurons and their connection lengths, and we can incorporate many kinds of experimental data, such as images from tracer studies, directly into our model.
A short introduction to PAM is available on YouTube.
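To make the idea of anatomy-derived connectivity concrete, here is a minimal sketch of distance-dependent wiring between two neuron layers. The uniform 3D positions and the Gaussian fall-off kernel are assumptions for illustration; in PAM, positions come from reconstructed anatomical surfaces and the distance-to-connection mapping may be non-linear and axis-dependent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3D positions for a pre- and a post-synaptic layer
# (arbitrary units; stand-ins for reconstructed anatomy).
pre = rng.uniform(0.0, 1.0, size=(50, 3))
post = rng.uniform(0.0, 1.0, size=(40, 3))

# Pairwise Euclidean distances between all cell pairs.
dist = np.linalg.norm(pre[:, None, :] - post[None, :, :], axis=-1)

# Connection probability falls off with distance
# (assumed Gaussian kernel with illustrative width).
sigma = 0.2
p_connect = np.exp(-((dist / sigma) ** 2))

# Sample the actual synapses from those probabilities.
adjacency = rng.uniform(size=dist.shape) < p_connect

# Connection lengths are read off the same geometry, e.g. to
# derive transmission delays; unconnected pairs get NaN.
lengths = np.where(adjacency, dist, np.nan)
```

The point of the sketch is the workflow, not the numbers: geometry determines both which cells connect and how long those connections are, so the resulting network inherits its structure from the anatomy.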