Tutorials

1. Introduction to Probabilistic Modeling and Rational Analysis

Organizer: Frank Jäkel (University of Osnabrück)

 

The first part of the course is a basic introduction to probability theory from a Bayesian perspective, covering conditional probability, independence, Bayes' rule, coherence, calibration, expectation, and decision-making. We will also discuss how Bayesian inference differs from frequentist inference. In the second part of the course we will discuss why Bayesian Decision Theory provides a good starting point for probabilistic models of perception and cognition. The focus here will be on rational analysis and ideal observer models, which provide an analysis of the task, the environment, the background assumptions, and the limitations of the cognitive system under study. We will go through several examples, from signal detection to categorization, to illustrate the approach.
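As a flavour of the material, Bayes' rule in a binary signal-detection setting can be sketched in a few lines of Python. The prior and likelihood values below are purely illustrative, not taken from the tutorial:

```python
# Bayes' rule: P(H | D) = P(D | H) * P(H) / P(D)
# Illustrative question: given a "yes" response, was a signal actually present?
prior_signal = 0.2        # assumed prior probability of a signal trial (illustrative)
p_yes_given_signal = 0.8  # hit rate (illustrative)
p_yes_given_noise = 0.1   # false-alarm rate (illustrative)

# Marginal probability of a "yes" response (law of total probability)
p_yes = (p_yes_given_signal * prior_signal
         + p_yes_given_noise * (1 - prior_signal))

# Posterior probability that a signal was present
posterior_signal = p_yes_given_signal * prior_signal / p_yes
print(round(posterior_signal, 3))  # → 0.667
```

Despite the high hit rate, the low prior keeps the posterior at two thirds rather than 0.8 — the kind of prior-sensitivity that distinguishes the Bayesian from the frequentist reading of the data.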

 Slides

2. Modeling Vision

Organizer: Heiko Neumann (University of Ulm)

 

Models of neural mechanisms underlying perception can provide links between experimental data from different modalities, such as psychophysics, neurophysiology, and brain imaging. Here we focus on visual perception.

The tutorial is structured into three parts. In the first part, the role of models in vision science is motivated. Models can be used to formulate hypotheses and knowledge about the visual system that can subsequently be tested in experiments, which, in turn, may lead to model improvements. Vision can be modeled at various levels of abstraction and using different approaches (first-principles approaches, phenomenological models, dynamical systems). In the second part, specific models of early and mid-level vision are reviewed, addressing topics such as contrast and motion detection, perceptual grouping, motion integration, figure-ground segregation, surface perception, and optical flow. The third part focuses on higher-level form and motion processing and on building learning-based representations. In particular, object recognition, biological/articulated motion perception, and attentional selection are considered.
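To illustrate the kind of early-vision model component discussed in the second part, a center-surround (difference-of-Gaussians) contrast detector can be sketched in a few lines of Python. This is a generic textbook sketch, not a model from the tutorial; all parameter values are illustrative:

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians (center-surround) kernel, a classic model of
    early contrast detection. Mean-subtracted so that uniform illumination
    produces zero response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    k = center - surround
    return k - k.mean()

def contrast_response(image, kernel):
    """Naive 'valid' 2-D correlation: responds to local luminance contrast."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A uniform field evokes no response; a luminance edge evokes a clear one.
uniform = np.ones((41, 41))
edge = np.zeros((41, 41)); edge[:, 20:] = 1.0
print(np.allclose(contrast_response(uniform, dog_kernel()), 0, atol=1e-9))  # → True
print(np.abs(contrast_response(edge, dog_kernel())).max() > 0.05)           # → True
```

The zero response to uniform input and the strong response at the edge capture, in minimal form, why such filters are used as first-principles building blocks for contrast detection.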

 

 

3. Visualization of Eye Tracking Data

Organizer: Michael Raschke (University of Stuttgart)

 

Beyond measuring completion times and recording the accuracy of answers given during visual tasks, eye tracking experiments provide an additional technique for analyzing how an observer's attention shifts across a presented stimulus. Besides statistical comparisons of eye tracking metrics, visualization techniques allow us to visually analyze different aspects of the recorded data. In practice, however, usually only a few standard visualization techniques are used, such as scan paths or heat maps.

 

In this tutorial we will present an overview of further existing visualization techniques for eye tracking data and demonstrate their application in different user experiments and use cases. The tutorial will cover three topics of eye tracking visualization:

 

1.) Visualization for supporting the general analysis process of a user experiment.

2.) Visualization for static and dynamic stimuli.

3.) Visualization for understanding cognitive and perceptual processes and refining parameters for cognition and perception simulations.
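As background for these topics, the standard heat-map technique can be sketched in a few lines of Python: each fixation contributes a duration-weighted Gaussian to an attention map. This is a toy illustration of the general idea, not the API of any particular eye tracking tool; all names and values are ours:

```python
import numpy as np

def fixation_heat_map(fixations, width, height, sigma=20.0):
    """Build a simple attention heat map from (x, y, duration) fixations by
    splatting a duration-weighted Gaussian at each fixation position."""
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width))
    for x, y, duration in fixations:
        heat += duration * np.exp(-((xs - x)**2 + (ys - y)**2) / (2 * sigma**2))
    if heat.max() > 0:
        heat /= heat.max()  # normalize to [0, 1] for display
    return heat

# Toy data: two fixations on a 120 x 80 stimulus, the second twice as long.
fix = [(30, 40, 100), (90, 60, 200)]
hm = fixation_heat_map(fix, width=120, height=80)
print(hm.shape, round(float(hm[60, 90]), 2))  # → (80, 120) 1.0
```

The normalized peak sits at the longer fixation, which is exactly the property that makes heat maps an intuitive (if lossy) summary of where attention dwelled — and why the tutorial goes beyond them.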

 

This tutorial is designed for researchers who are interested in eye tracking in general or in applying eye tracking techniques in user experiments. It may also be of interest to psychologists and cognitive scientists who would like to evaluate and refine cognition and perception simulations. It is suitable for PhD students as well as experienced researchers. The tutorial has minimal prerequisites; fundamental concepts of eye tracking and visualization will be explained during the tutorial.

 

Contributors: Tanja Blascheck, Michael Burch, Kuno Kurzhals, Hermann Pflüger

 

4. Dynamic Field Theory: from Sensorimotor Behaviors to Grounded Spatial Language

Organizers: Yulia Sandamirskaya and Sebastian Schneegans (Ruhr University Bochum)

 

Dynamic Field Theory (DFT) is a conceptual and mathematical framework in which cognitive processes are grounded in sensorimotor behaviour through the dynamics of Dynamic Neural Fields (DNFs), which are continuous in time and in space. DFT originates in Dynamical Systems thinking, which postulates that the moment-to-moment behaviour of an embodied agent is generated by attractor dynamics, driven by sensory inputs and interactions between dynamic variables. Dynamic Neural Fields add representational power to the Dynamical Systems framework: they formalise the dynamics of neuronal populations in terms of activation functions defined over behaviourally relevant parameter spaces. DFT has been successfully used to account for the development of visual and spatial working memory, executive control, scene representation, spatial language, and word learning, as well as to guide the behaviour of autonomous cognitive robots.

In the tutorial, we will cover the basic concepts of Dynamic Field Theory in several short lectures. The topics will be: the attractors and instabilities that model elementary cognitive functions; the couplings between DNFs and multidimensional DNFs; coordinate transformations and coupling DNFs to sensory and motor systems; and autonomy within DFT. Using an example architecture for the generation of flexible spatial-language behaviours, we will show how DNF architectures may be linked to sensors and motors and generate real-world behaviour autonomously. The same architecture may be used to account for behavioural findings on spatial language. The tutorial will include a hands-on session to familiarise participants with COSIVINA, a MATLAB software framework that makes it possible to build complex DNF architectures with little programming overhead.
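The core dynamics the tutorial builds on can be sketched as a one-dimensional neural field (the Amari equation): a localized input drives the field through an instability into a self-stabilized activation peak. The Python sketch below is our own minimal illustration with hand-picked parameters, not the COSIVINA (MATLAB) implementation used in the hands-on session:

```python
import numpy as np

# Minimal 1-D Dynamic Neural Field (Amari equation):
#   tau * du/dt = -u + h + S(x) + integral w(x - x') f(u(x')) dx'
n, dt, tau, h = 100, 1.0, 10.0, -5.0      # field size, time step, time scale, resting level
x = np.arange(n)
f = lambda u: 1.0 / (1.0 + np.exp(-u))    # sigmoid output nonlinearity

# Lateral interaction kernel: local excitation, broader inhibition ("Mexican hat"),
# with circular distances to avoid boundary effects (illustrative parameters).
d = np.minimum(np.abs(x[:, None] - x[None, :]), n - np.abs(x[:, None] - x[None, :]))
w = 15.0 * np.exp(-d**2 / (2 * 3.0**2)) - 5.0 * np.exp(-d**2 / (2 * 10.0**2))

S = 8.0 * np.exp(-(x - 50)**2 / (2 * 5.0**2))  # localized input centered at x = 50
u = np.full(n, h)                               # field starts at the resting level

for _ in range(500):                            # Euler integration to the attractor
    u += dt / tau * (-u + h + S + w @ f(u))

peak = int(np.argmax(u))
print(peak, u[peak] > 0)  # a self-stabilized activation peak at the input site
```

The supra-threshold peak at the input location is the elementary "detection" attractor; the instabilities between such states are what DFT uses to model elementary cognitive functions.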

Slides Schneegans1 Slides Schneegans2

5. Introduction to Cognitive Modelling with ACT-R

Organizers: Nele Rußwinkel, Sabine Prezenski, Fabian Joeres, Stefan Lindner and Marc Halbrügge (Technische Universität Berlin)

 

Contributors: Fabian Joeres, Maria Wirzberger  (Technische Universität Berlin)

 

ACT-R is the implementation of a theory of human cognition. It has a very active and diverse community that uses the architecture both in laboratory tasks and in applied research. ACT-R is oriented toward the organization of the brain and is called a hybrid architecture because it contains symbolic and subsymbolic components. The aim of working on cognitive models with a cognitive architecture is to understand how humans produce intelligent behaviour.

In this tutorial the cognitive architecture ACT-R is introduced (Anderson, 2007). We will begin with a short introduction to the background, structure, and scope of ACT-R, and then move on to hands-on examples of how cognitive models are written in ACT-R.

At the end of the tutorial we will give a short overview of recent work and its benefits for applied cognitive science. No prior experience or programming knowledge is required. Please bring a laptop.

 

Anderson, J. R. (2007). How can the human mind occur in the physical universe? New York: Oxford University Press.

 

N.B.: capacity is limited to 30 participants. To register for this tutorial, please mail your interest to sabine.prezenski(at)zmms.tu-berlin.de