Cognitive Science Colloquium
Thursdays 15:30 to 17:00 - Building 57, Room 508
November 16, 2017
Speaker: Dr. Garvin Brod (Center for Individual Development and Adaptive Education of Children at Risk (IDeA) & German Institute for International Educational Research (DIPF), Frankfurt; invited by Daniela Czernochowski)
Topic: Is asking students to make predictions an effective technique to activate prior knowledge and improve learning?
Abstract: It is well known that activating students' prior knowledge of a subject improves their learning performance. But what are simple techniques to reliably activate knowledge? And are these techniques equally effective in university students and school children? In two experiments, we tested whether asking university students and school children (grades 4–5) to make a prediction about a specific outcome is a viable technique to activate their knowledge and improve memory performance. We hypothesized that making a prediction would particularly benefit memory for events that violate expectancies, because making a wrong prediction should yield a surprise reaction, which in turn should boost memory for an event. The surprise reaction was measured using pupillometry. In short, findings were in line with our hypothesis but additionally pointed to age-related differences in the way that surprise is leveraged for learning. Implications for theory and educational practice will be discussed.
November 23, 2017
Speaker: Dr. Jana Jarecki (Basel University - Switzerland, invited by Tandra Ghose)
Topic: Class-conditional independence in human classification learning
Abstract: Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference problem, allows for informed inferences about novel feature combinations, and performs robustly across different statistical environments. We designed a new Bayesian classification learning model (the dependence-independence structure and category learning model, DISC-LM) that incorporates varying degrees of prior belief in class-conditional independence, learns whether or not independence holds, and adapts its behavior accordingly. Theoretical results from two simulation studies demonstrate that classification behavior can appear to start simple, yet adapt effectively to unexpected task structures. Two experiments – designed using optimal experimental design principles – were conducted with human learners. Classification decisions of the majority of participants were best accounted for by a version of the model with very high initial prior belief in class-conditional independence, before adapting to the true environmental structure. Class-conditional independence may be a strong and useful default assumption in category learning tasks.
Reading (optional): Jarecki, J. B., Meder, B., & Nelson, J. D. (2017). Naive and robust: class-conditional independence in human classification learning. Cognitive Science, 1–39. doi:10.1111/cogs.12496
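The class-conditional independence assumption at the heart of this talk can be illustrated with a minimal naive-Bayes sketch (my own toy example, not the DISC-LM model itself): given independence, the likelihood of a feature vector factorises into per-feature terms, so even a never-observed feature combination can be scored from marginal statistics alone.

```python
import math

def naive_bayes_posterior(features, class_priors, feature_likelihoods):
    """P(class | features), assuming features are independent given the class.

    feature_likelihoods[c][i][v] = P(feature i takes value v | class c)
    """
    log_post = {}
    for c, prior in class_priors.items():
        lp = math.log(prior)
        for i, v in enumerate(features):
            # Independence: the joint likelihood is a product of marginals.
            lp += math.log(feature_likelihoods[c][i][v])
        log_post[c] = lp
    # Normalise in a numerically stable way.
    m = max(log_post.values())
    unnorm = {c: math.exp(lp - m) for c, lp in log_post.items()}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

# Two binary features; suppose the combination (1, 1) was never observed
# during learning - it can still be scored from the per-feature marginals.
priors = {"A": 0.5, "B": 0.5}
likelihoods = {
    "A": [{0: 0.8, 1: 0.2}, {0: 0.7, 1: 0.3}],
    "B": [{0: 0.3, 1: 0.7}, {0: 0.2, 1: 0.8}],
}
posterior = naive_bayes_posterior((1, 1), priors, likelihoods)
```

The simplification is what makes the default powerful: with n binary features, the learner tracks 2n marginals per class instead of 2^n joint cell probabilities.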
December 07, 2017
Speaker: Dr. Rebecca Förster (Bielefeld University - Germany, invited by Tandra Ghose)
Topic: Innovative techniques for measuring visual attention
Abstract: Visual selective attention – the ability to preferentially process task-relevant visual input reaching our eyes – is indispensable for purposeful adaptive behavior in our crowded visual world. Unsurprisingly, visual selective attention is a highly studied and influential topic across research fields, ranging from basic research through clinical research to robotics. I will present how interdisciplinary approaches and innovative techniques such as static and mobile eye tracking, head-mounted displays, and G-Sync technology can be used to reveal interesting new insights into the mechanisms of visual attention and may foster the development of efficient and convenient applications.
December 11, 2017
SPECIAL TALK - (Monday, 13:00, Building 57, Room 215)
Speaker: Prof. Dr. Koichi Kise (Dept. of Computer Science and Intelligent Systems – Osaka, Japan, invited by Andreas Dengel and Thomas Lachmann)
Topic: Quantified Reading and Learning for Sharing Experiences
Abstract: In my talk, I will present two topics. The first is an overview of our recently started project called "experiential supplement", which aims to transfer human experiences by recording and processing them so that they can be accepted by others. The second is sensing technologies for producing experiential supplements in the context of learning. Because a basic activity of learning is reading, we also deal with the sensing of reading. Methods for quantifying reading in terms of the number of words read, the period of reading, the type of documents read, and the identification of read words are presented together with experimental results. As for learning, we propose methods for estimating English ability, confidence in answers to English questions, and unknown words. The above are sensed by various sensors, including eye trackers, EOG, EEG, and first-person vision.
December 11, 2017
SPECIAL TALK - (Monday, 17:15, Building 42, Lecture Hall 110 - Biologisches Kolloquium)
Speaker: Prof. Dr. Dr. h.c. Erwin Neher (MPI für biophysikalische Chemie, Göttingen; Nobel Prize in Physiology or Medicine, 1991; invited by Eckhard Friauf)
Topic: Modulation of Short-term Plasticity at a Glutamatergic Synapse
Abstract: Synaptic plasticity is held to be at the basis of most signal processing capabilities of the central nervous system. Long-term plasticity receives the most attention from neuroscientists, since it underlies learning and memory. Short-term plasticity (STP), on the other hand, is no less important, since it mediates basic signal processing tasks such as filtering, gain control, and adaptation, among many others. My laboratory has studied STP at the calyx of Held, a glutamatergic nerve terminal in the auditory pathway, which is large enough to be voltage-clamped in the ‘whole-cell mode’ using patch pipettes. STP is highly modulated by second messengers, such as Ca++ and diacylglycerol. In particular, it was shown that such modulators accelerate a process called ‘superpriming’, a slow transition of release-ready vesicles from a ‘normally primed’ state to a faster, ‘superprimed’ one (Lee et al. 2013; PNAS 110, 15079). Recently, we could demonstrate that this same process also mediates post-tetanic potentiation, a medium-term form of synaptic plasticity (Taschenberger et al., 2016; PNAS 113, E4548-57). These findings will be discussed in the framework of literature data on various forms of short-term plasticity.
December 14, 2017
Speaker: Dr. Pieter Moors (Leuven University - Belgium, invited by Sven Panis)
Topic: Processing invisible stimuli during continuous flash suppression: stimulus fractionation vs. stimulus integration
Abstract: Continuous flash suppression (CFS) is a perceptual suppression technique introduced by Tsuchiya and Koch (2005) about ten years ago. It relies on the phenomenon of binocular rivalry, where dissimilar visual input presented to the two eyes leads to perceptual alternations of the stimuli presented to each eye. In CFS, an ensemble of rapidly flickering geometrical patterns is presented to one eye, whilst another stimulus is presented to the other eye, yielding prolonged perceptual suppression of that stimulus. Because CFS was presented as a highly effective suppression technique, researchers readily began searching for the boundaries of unconscious processing under such deep perceptual suppression. A particularly promising paradigm proved to be breaking CFS, in which the time it takes for the initially suppressed stimulus to enter awareness is measured. A series of studies was published from which the converging conclusion seemed to be that the perceptually suppressed stimulus was processed in a fully integrated manner, up to the semantic level. For example, semantic congruency violations in sentences could be detected, or arithmetic operations could be performed on invisible stimuli. In this talk, I will present three different studies, all challenging the stimulus integration account. Based on our findings, as well as those of other authors, we propose that stimuli suppressed through CFS are represented in a fractionated way, and that processing is limited to elementary parts of the stimulus. We outline some predictions such a model makes, and evidence consistent with it.
January 18, 2018
Speaker: Dr. Christoph Scheepers (Glasgow University - UK, invited by Leigh Fernandez)
Topic: Pupillometric work on emotional resonance in L1 vs. L2 - Pupil dilation as an indicator of reduced emotional resonance in one’s second language
Abstract: A number of behavioural and physiological studies suggest that late bilinguals ‘feel less’ in their second language (L2) than in their first (L1) – a phenomenon dubbed reduced emotional resonance in L2 (e.g., Pavlenko, 2006; Dewaele, 2010). However, few studies to date have carefully controlled for participants’ proficiency in L2 or for variables affecting word recognition (e.g., length, frequency). The present pupillometry experiments were designed to overcome these shortcomings.
In Experiment 1, 32 Finnish-English and 32 German-English late bilinguals (all highly proficient in English) were tested both in their first language (L1) and in English (L2). An additional control group (32 English monolinguals) was tested only in English. In each language version of the experiment, we presented 30 high-arousal (e.g., “alarm”) and 30 low-arousal (e.g., “swamp”) words, alongside 30 emotionally neutral distractors. Word length, frequency, valence, and abstractness were controlled for both by design and analytically. Participants were shown the stimuli while their pupillary responses were continuously monitored (eye-tracking). The experiment confirmed reliably enhanced pupil dilation in response to high- vs. low-arousal words, but only when participants were tested in their respective L1, despite their being able to recognise the words in L2.
In Experiment 2, 240 English words (80 high-arousal, 80 low-arousal, and 80 distractors, carefully matched on a number of lexical variables) were presented to 116 participants from various language backgrounds (92 bilinguals with English as L2, and 24 monolingual English speakers). All participants were pre-assessed in terms of English proficiency (LexTALE). Again, participants’ pupillary responses were continuously monitored during the main task. There was no difference in pupillary responses to high- vs. low-arousal words in bilinguals (English L2), but clear pupillary effects in English monolinguals (English L1). Importantly, this word type × group interaction remained significant even when differences in English proficiency were analytically controlled for. We conclude that reduced emotional resonance in L2 is real, and that it is not due to word recognition difficulties or differences in language proficiency.
January 25, 2018
Speaker: Prof. Dr. Rosana Tristão (University of Brasília - Brazil, invited by Thomas Lachmann)
Topic: Auditory Processing Disorder and Cognitive Profile in Children with Specific Learning Disorder
Abstract: We have been studying the development of auditory processing (AP) in infants and children with neurodevelopmental disorders of different causes, such as Down syndrome and prematurity. In this presentation I will focus on AP disorder, its relation to specific learning disorder (SLD) in children, and its impact on cognitive profile and visuomotor skills. We investigated 25 children (7-14 years old) using intelligence and visuomotor tests and an audiologic evaluation that encompassed auditory threshold; brainstem auditory evoked response (BERA); event-related potentials (ERP) P3/N2; and behavioral auditory processing tests (APE): dichotic digits (DD), speech in noise (SN), sound localization (LOC), and staggered spondaic words (SSW). Multiple linear regressions were used, and effects were found between AP disorder and WISC tests, IQs and indexes, and visuomotor performance. We concluded that children with altered auditory processing present a specific cognitive profile, including lower verbal and spatial reasoning performance, that is sensitive to parental education level, and that they should undergo a complete multimodal examination for better investigation of their specificities.
February 01, 2018
Speaker: Dr. Evan Kidd (MPI for Psycholinguistics - Australian National University, invited by Shanley Allen)
Topic: Individual differences in language acquisition
Abstract: Language acquisition is a developmental process characterised by significant yet stable individual differences. While we should expect individual differences to predict growth within domains (e.g., vocabulary at 12 months predicting vocabulary at 24 months), cross-domain predictive relations are particularly insightful because they can provide important insights into the process of acquisition, serving to constrain our theoretical models by revealing patterns of representation and drivers of developmental change across time. In this talk I will discuss an ongoing individual differences project being conducted in the ANU Language Lab (https://anulanguagelab.wordpress.com/). The Canberra Longitudinal Child Language Project is a large-scale longitudinal individual differences study of children’s language processing. The study is tracking children’s language processing skills across time and linking them to their subsequent language acquisition, with the aim of moving towards more dynamic mechanistic explanations of the acquisition process. I will discuss the initial phase of the project, which investigated how children’s segmentation skills relate to later vocabulary development. Using ERPs, we found robust individual differences in 9-month-old children’s ability to extract words from running speech, which subsequently predicted vocabulary development and the children’s ability to learn novel labels.
February 08, 2018
Speaker: Prof. Dr. Sonja A. Kotz (Maastricht University - The Netherlands & Max Planck Institute for Human Cognitive and Brain Sciences - Germany, invited by Patricia Wesseling)
Topic: Multimodal emotional speech perception
Abstract: Social interactions rely on multiple verbal and non-verbal information sources and their interaction. Crucially, in such communicative interactions we can obtain information about the current emotional state of others (‘what’) but also about the timing of these information sources (‘when’). However, the perception and integration of multiple emotion expressions is prone to environmental noise and may be influenced by a specific situational context or learned knowledge. In our work on the temporal and neural correlates of multimodal emotion expressions we address a number of questions by means of ERPs and fMRI within a predictive coding framework. In my talk I will focus on the following questions: (1) How do we integrate verbal and non-verbal emotion expressions; (2) How does noise affect the integration of multiple emotion expressions; (3) How do cognitive demands impact the processing of multimodal emotion expressions; (4) How do we resolve interferences between verbal and non-verbal emotion expressions?
April 27, 2017
Speaker: Radha Nila Meghanathan (Leuven University, invited by Thomas Lachmann)
Topic: Memory accumulation across sequential eye movements and related electrical brain activity
Abstract: Visual short-term memory for items presented at fixation has been studied extensively, informing us about memory capacity for features and objects, the fidelity of accumulated memory, and the neural correlates of memory load. However, during free viewing, information is accumulated in working memory across sequences of fixations and saccades. We attempted to understand the accumulation of memory across sequential eye movements. We proposed that memory load would be reflected in electrical brain activity (EEG) during fixation intervals. To find this EEG correlate of memory accumulation, we conducted a combined multiple-target visual search and change detection experiment with simultaneous eye movement and EEG recording. Participants were asked to search for and memorize the orientations of 3, 4 or 5 targets in a visual search display in order to perform a subsequent change detection task in which one of the targets changed orientation in half of the cases. We studied eye movement properties, pupil size, fixation-related brain potentials, and EEG of participants during the task. In my talk, I will present the analyses we performed, the obstacles we faced and the solutions we found, the results that followed, and our new understanding of working memory and information accumulation.
May 11, 2017
Speaker: Katherine Messenger (Warwick University, invited by Shanley Allen)
Topic: The persistence of priming: Exploring long-lasting syntactic priming effects in children and adults
Abstract: Syntactic priming, the unconscious repetition of syntactic structure across speakers and utterances, has been a key method in demonstrating the psychological reality of abstract syntactic representations that adults recruit in their language processing. Syntactic priming has therefore been applied to test children's knowledge of syntactic structures but more recently it has been framed as a mechanism that can also explain how these structures are acquired. This theory has been instantiated in computational models but support from behavioural evidence is still needed. In this talk I will present research investigating whether syntactic priming effects in children (and adults) are indicative of language learning.
May 18, 2017
Speaker: Bertram Opitz (Surrey University - UK, invited by Thomas Lachmann)
Topic: The mysteries of second language acquisition: a neuroscience perspective
Abstract: One of the most intense debates in second language acquisition regards the critical period hypothesis. This hypothesis claims that there is a critical period during someone's development that enables the acquisition of any human language. Based on research utilising an artificial language learning paradigm I will demonstrate that the same cognitive processes and highly similar brain regions are involved in first and second language acquisition, at least in the areas of the acquisition of syntax, orthography and emotional semantics. I will also demonstrate that cognitive and environmental constraints of the learning process determine individual differences in second language learning.
June 08, 2017
Speaker: Yaïr Pinto (Amsterdam University, invited by Thomas Schmidt)
Topic: What can permanent and temporary split-brain teach us about conscious unity?
Abstract: A healthy human brain only creates one conscious agent. In other words, under normal circumstances consciousness is unified. However, the brain is made up of many, semi-independent modules. So how is this conscious unity possible? Current leading consciousness theories differ on the answer to this question, but intuitively, it seems that informational integration between modules is the key. In the current talk, I will present data that challenges this intuition, and suggests that even without massive communication between modules conscious unity can persist. I will discuss why our intuition may be mistaken, and I will present alternative explanations of conscious unity.
June 22, 2017
Speaker: Norbert Jaušovec (Maribor University - Slovenia, invited by Saskia Jaarsveld)
Topic: Increasing intelligence
Abstract: The “Nürnberger Trichter” – a magic funnel used to pour knowledge, expertise and wisdom into students – demonstrates that the idea of effortless learning and the power of intelligence was “cool” even 500 years ago. Today, noninvasive brain stimulation (NIBS), which includes transcranial direct and alternating current stimulation (tDCS and tACS) as well as transcranial random noise stimulation (tRNS) and transcranial magnetic stimulation (TMS), could be regarded as a contemporary replacement for the magic funnel. These methods represent an extension of the more classical methods for cognitive enhancement, such as behavioral training and computer games. On the other hand, there are still a number of alternative approaches that can affect cognitive function. Among the most prominent are nutrition, drugs, exercise, meditation-related reduction in psychological stress, and neurofeedback. The presentation will provide a concise overview of methods claiming to improve cognitive functioning – psychological constructs such as intelligence and working memory. I will discuss changes in behavior and brain activation patterns observed with the electroencephalogram (EEG), functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), and will examine the usefulness of brain training for the man or woman in the street, as well as additional methods that can verify and bring causation into the relations between brain activity and cognition. Modulating brain plasticity, and thereby changing network dynamics crucial for intelligent behavior, can be a powerful research tool that can elucidate the neurobiological background of intelligence, working memory and other psychological constructs.
June 29, 2017
Speaker: Grégory Simon (Caen Basse-Normandie University - France, invited by Thomas Lachmann)
Topic: Inhibition: a key factor in cognitive development
Abstract: In the Laboratory for the Psychology of Child Development and Education (LaPsyDÉ - https://www.lapsyde.com/), our research projects mostly focus on the key role of inhibitory processing in cognitive development. After presenting our major work in this field, I will focus more precisely on applied results from both behavioral and imaging (MRI, ERPs) assessments that deal with reading acquisition.
October 27, 2016
Speaker: Prof. Dr. Petra Hendriks (invited by Prof. Allen and Ms. Azpiroz)
Topic: Production may precede comprehension in children’s development of language
Abstract: It is generally assumed that comprehension precedes production in children’s development of language (e.g., Clark, 2003). That is, children first learn to comprehend a particular linguistic form and only later learn how this linguistic form must be used. However, several studies have found that, for some linguistic forms, adult-like production seems to be ahead of adult-like comprehension. For example, English and Dutch 4-year-olds know when to use a personal pronoun (him) and when to use a reflexive pronoun (himself), but make errors in their comprehension of personal pronouns until age 6 or even later (de Villiers, Cahillane & Altreuter, 2006; Spenader, Smits & Hendriks, 2009). Such production/comprehension asymmetries present a challenge to most linguistic theories. I will discuss several of these asymmetries and present a possible explanation for the existence of these asymmetries in terms of a constraint-based direction-sensitive grammar.
November 03, 2016
Speaker: Dr. Jan Hirtz (TU Kaiserslautern) (invited by Prof. Lachmann)
Topic: Two-photon imaging of neuronal activity in mouse neocortex
November 10, 2016
Speaker: Dr. Diana Peppoloni (University of Siena) (invited by Prof. Lachmann)
Topic: Learning Complexity of English Language for Dyslexics: a Proposal for a Multisensory and Interactive Approach
Abstract: English proficiency is a basic requirement for undergraduates to complete their academic career and move into the world of work. This goal is difficult to achieve for dyslexic students, since dyslexia is a specific learning disorder that affects not only literacy skills in students’ first language but also foreign language learning. This is even more evident when the target language has an opaque orthography, as English does. For this reason, educational research must identify new teaching-learning strategies which, through personalized pathways, can enhance the potential of all students and promote their educational success. If not all students learn in the same way, then we have to understand their cognitive styles in order to maximize their performance.
Research results suggest that dyslexics prefer to learn in a multisensory, creative way; that is why language lessons should integrate action and emotion at the same time. Complementarity between the right and left brain hemispheres seems to be the key to enhancing their learning process (GG hypothesis). The latest findings in the field of neuroscience give new and robust scaffolding to our belief that drama activities boost language learning. Therefore, theatrical didactics seems to be the most suitable practice for conveying linguistic knowledge to dyslexics. By involving many sensory channels, it promotes the development of strong and persistent cortical and subcortical bindings, responsible for the storage of both linguistic and affective information. In this talk, I will present an original didactic method based on theatrical didactics, conceived to overcome the most common problems related to the acquisition of oral skills in second language learning, in which “mind” and “body” are both fully involved, as happens in any real communication situation. Through a holistic approach, it aims to provide dyslexic college students with repeatable mechanisms for acquiring linguistic and communicative knowledge, which can then be put in place and reused in different learning situations, making these learners self-confident and enabling them to build their own set of skills autonomously.
November 24, 2016
Speaker: Dr. Huaiyong Zhao (Department of Psychology - TU Darmstadt) (invited by Prof. Ghose)
Topic: How do people steer a car to intercept a moving target? The constant target‑heading strategy
Abstract: Successful interaction with moving targets in an environment is vital to human survival, and locomotor interception of a moving target is one such interaction. Three strategies have been proposed for locomotor interception: in the pursuit strategy, the target is kept in the heading direction; in the constant target‑heading strategy, it is kept at a constant angle relative to the heading direction; in the constant bearing strategy, it is kept at a constant bearing angle relative to an allocentric reference axis. In my study, I examine how drivers steer a car to intercept a moving target in virtual environments. Steering in a virtual environment does not have the spatial or temporal limitations of a real environment. Moreover, a virtual environment makes it possible to manipulate the availability of visual information. These advantages may help reveal the strategy used in locomotor interception. In this talk, I will present three experiments and show participants’ interceptive steering in different environments. The results suggest that locomotor interception is best accounted for by the constant target‑heading strategy.
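The geometry of the constant target-heading strategy can be sketched in a few lines of code (a toy simulation of my own, with hypothetical parameter names, not the author's experimental setup): at each step the agent turns so that the target sits at a fixed angular offset from its heading, then moves forward. Pursuit is simply the special case of a zero offset.

```python
import math

def simulate_interception(agent_speed, target_pos, target_vel,
                          target_heading_angle, dt=0.01, steps=5000):
    """Agent starts at the origin. Each step it re-orients so the target
    lies at `target_heading_angle` radians from its heading, then moves
    forward at `agent_speed`. Returns the closest distance achieved."""
    ax, ay = 0.0, 0.0
    tx, ty = target_pos
    tvx, tvy = target_vel
    best = math.hypot(tx - ax, ty - ay)
    for _ in range(steps):
        bearing = math.atan2(ty - ay, tx - ax)      # direction to the target
        heading = bearing - target_heading_angle    # keep target at fixed offset
        ax += agent_speed * math.cos(heading) * dt
        ay += agent_speed * math.sin(heading) * dt
        tx += tvx * dt
        ty += tvy * dt
        best = min(best, math.hypot(tx - ax, ty - ay))
    return best

# Pursuit (zero target-heading angle) against a crossing target:
# agent speed 2, target starts 10 units away moving at speed 1.
closest = simulate_interception(2.0, (10.0, 0.0), (0.0, 1.0), 0.0)
```

Varying `target_heading_angle` turns the same loop into the constant target-heading strategy; the constant bearing strategy would instead hold the allocentric angle `bearing` itself constant.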
December 08, 2016
Speaker: Dr. Andrey R. Nikolaev (Laboratory for Perceptual Dynamics, Brain & Cognition Research Unit, KU Leuven, Belgium) (invited by Prof. Lachmann)
Topic: EEG-eye movement co-registration: method and application to free viewing behavior
Abstract: Information about the surrounding space is visually sampled with saccadic eye movements. Eye movements are tightly coupled with brain activity, which reflects the perceptual and cognitive processes. Recent advances in eye-tracking technology have allowed researchers to use eye movements as markers for the segmentation of ongoing EEG activity into episodes relevant to sequential steps of information processing. Consequently, the simultaneous recording of EEG and eye movements has become increasingly popular in various fields of vision research. The co-registration of EEG and eye movements is particularly advantageous for the investigation of processes associated with free visual exploration of the environment. In my talk I will discuss methodological aspects of EEG-eye movement co-registration in free viewing and give examples of its application. In particular, I will focus on the problem of the overlapping effects of successive eye movements on the EEG and show ways of solving it. Then, I will describe several studies from our laboratory concerning visual memory encoding, visual search, and saccade guidance. My talk will contribute to a better understanding of the range of research questions that can be approached with co-registration, the requirements for experimentation, and methodological solutions for simultaneous EEG and eye movement recording and data processing.
December 12, 2016
Speaker: Dr. Sonja Eisenbeiss (Köln University) (invited by Prof. Allen)
Topic: Studying Child Language in India
Abstract: In this talk, I will present initial results and new tools from cross-linguistic studies of Indian languages developed together with colleagues Pori Saikia (University of Essex), Benu Pareek (JNU, Delhi), and Ayesha Kidwai (JNU, Delhi). I will first present results of our studies on grammatical markers in Hindi and Assamese child language. Then, I will discuss new tools that we are currently developing for comparative studies on children's and adults' use of grammatical markers in different regions of India. The focus of this discussion will be on the challenges of conducting cross-linguistic and comparative research in a culturally and linguistically diverse region.
January 19, 2017
Speaker: Prof. Dr. Alfred Effenberg (Leibniz University Hannover) (invited by Dr. Schinauer)
Topic: Auditory Modulation of Multisensory Representations
Abstract: Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has been increasingly recognized. A growing number of studies indicates an enhanced impact of multimodal stimuli on motor perception, motor control and motor learning, in terms of better precision and higher reliability of the related actions. Our own research is dedicated to real-world actions: it will be shown that kinematic real-time acoustics can be used (1) to enhance perceptual-motor representations, (2) to enhance perceptual and motor performance, and (3) to modulate perception systematically by changing the kinematic-acoustic mapping stepwise.
January 26, 2017
Speaker: Prof. Dr. Stefan Koelsch (University of Bergen) (invited by Prof. Lachmann)
Topic: Brain correlates of music-evoked emotions
Abstract: Music is a universal feature of human societies, partly owing to its power to evoke strong emotions and influence moods. During the past decade, the investigation of the neural correlates of music-evoked emotions has been invaluable for the understanding of human emotion.
Functional neuroimaging studies on music and emotion show that music can modulate activity in brain structures that are known to be crucially involved in emotion, such as the amygdala, nucleus accumbens, hypothalamus, hippocampus, insula, cingulate cortex and orbitofrontal cortex. The potential of music to modulate activity in these structures has important implications for the use of music in the treatment of psychiatric and neurological disorders.
February 02, 2017
Speaker: Dr. Katharina Nimz (Bielefeld University) (invited by Dr. Leigh Fernandez and Prof. Shanley Allen)
Topic: Second language speech learning: the case of vowel perception and production in L2 German
Abstract: Learning to speak a foreign language is hard. When it comes to German vowels, L2 learners have to master various phonetic dimensions which most likely differ from those in their L1. For example, German differentiates between long, tense vowels and short, lax vowels, and it is crucial for an L2 learner to perceive and produce the difference between, for example, “Rate” (“instalment”) and “Ratte” (“rat”). It has been hypothesized that both Turkish and Polish learners have difficulties acquiring this difference, but – up until the present study – this had not been tested experimentally. By means of various production and perception experiments, we investigated whether and how learners have problems learning important acoustic dimensions such as duration and spectral features. Furthermore, we investigated the influence of orthography in second language speech learning. This factor has recently received considerable attention in the field, and the present study is the first to experimentally investigate the influence of orthographic markers in L2 German speech.
February 09, 2017
Speaker: Prof. Dr. Kielan Yarrow (City University London) (invited by Prof. Ghose)
Topic: Can classification videos reveal the information used to respond to an opponent’s tennis stroke?
Abstract: Experts are able to predict the outcome of their opponent’s next action (e.g. a tennis stroke) based on kinematic cues “read” from preparatory body movements. This ability has been revealed in occlusion experiments, which present sporting scenarios with different possible outcomes (e.g. a cross-court or down-the-line tennis return) but remove segments of video (e.g. by stopping the video at ball contact) in order to assess how behaviour is affected. Here, we instead used classification-image techniques to find out how participants discriminate sporting scenarios as they unfold.
We filmed tennis players serving and hitting forehands, each with two possible directions. These videos were presented to novices and club-level amateurs, running from 800ms before to 200ms after racquet-ball contact. During practice, participants reported shot direction under a time limit targeting 90% accuracy. Participants then viewed videos through Gaussian windows ("Bubbles") placed at random in the temporal (E1), spatial (E2) or spatiotemporal (E3) domains. Comparing Bubbles from correct and incorrect trials revealed the contribution of information from different regions toward a correct response.
Temporally, two regions supported accurate responding (from ~50 ms before ball contact to 100+ ms afterwards, and, for forehands, at around the time of swing initiation, ~300 ms before ball contact). Spatially, information was accrued from the ball trajectory and from the opponent’s head, perhaps reflecting their gaze direction. Spatiotemporal bubbles again highlighted ball trajectory information, but seemed susceptible to an attentional cuing artefact. Overall, there seems to be potential to help players improve by showing them when and where they read information, but our results so far have been dominated by the information accrued from the ball trajectory, rather than earlier kinematic cues. This may reflect the competent, but not elite, standard of our tennis players. We are now analysing data from experiments that focus on the period before ball contact, and hope to present our preliminary findings from these newer experiments in order to promote discussion of collaborative follow-up work.
April 28, 2016
Speaker: Haaß-Talk: Mark R. Runco (University of Georgia, Athens, USA) (invited by Prof. Lachmann)
Topic: 10 Key Findings from Creativity Research and Implications for Education, Business and Everyone
Abstract: Research on creativity is expanding at an unprecedented pace. Our understanding of creativity is growing, with a multitude of implications for education, business, and various other applied areas. This presentation points to a set of the more important findings from the recent research. It explores implications of each. The main points to be made include the following: (1) There are domain differences in creativity, though some evidence of universals as well, and some discussion of new domains (e.g., Technological Creativity). (2) The idea of brain localization no longer plays a large role in studies of creativity. Now the interest is in Systems. (3) It is not easy to study Intuition, but several creative methodologies confirm that Intuition can contribute to the creative process. (4) Problem finding is more important than problem solving, at least for many creative performances. (5) There are lifespan changes in creativity, including several slumps, both in childhood and adulthood, but it may be that these can be minimized. (6) Creativity does not depend on IQ or intelligence. In fact, there is often a “cost of expertise.” (7) One important indicator of creative potential is the capacity to generate and judge ideas, and ideas may be useful in every domain, though the idea itself may take different forms. (8) Settings and context have a huge impact on the fulfillment of creative potential and on the expression of creative ideas. There are several commonalities that apply to all levels–the home, the school, the organization, culture. (9) Paradoxes are good. They often stimulate creative thinking. (10) Dopamine has been related to creative potential, in a limited way, which does imply a genetic basis for creativity.
May 12, 2016
Speaker: Dr. Marcus Heldmann (Universität Lübeck) (invited by Prof. Lachmann)
Topic: The impact of cognitive control on the maintenance of dyslexia: implications and electrophysiological evidence
Abstract: One critical aspect in the maintenance of dyslexia is the inability of affected persons to detect errors in their own writing. This impaired error sensitivity is assumed to promote the consolidation of memory representations of incorrect word spellings. In our present research we are able to show that impaired error sensitivity affects event-related components in the EEG which are assumed to reflect cognitive control processes. We are also able to show that cognitive control processes vary with the development of reading and writing skills in children. Based on these findings we would like to argue for a training program for children with impaired writing abilities that is based on the principles of errorless learning procedures.
May 19, 2016
Speaker: Dr. Bilge Sayim (Universität Leuven) (invited by Prof. Lachmann)
Topic: Crowding and Appearance in Peripheral Vision
Abstract: To perceive and navigate in complex environments, humans rely strongly on information from the visual periphery. A severe limit of peripheral vision is crowding – the inability to identify objects in clutter that are easily identified in isolation. For example, a letter presented in the periphery that can be identified when presented alone is indiscernible when flanked by close-by letters. Crowding does not only deteriorate performance but also changes target appearance. However, only few studies have addressed the appearance of crowded stimuli, even though the specific kinds of appearance changes may be key to understanding crowding. Here, I will present results that show how crowded appearance predicts performance, introduce appearance-based methods that reveal error characteristics of crowding that are not revealed in standard crowding paradigms, and show how visual artists may play a key role in shedding light on the underlying mechanisms of crowding.
June 2, 2016
June 9, 2016
Speaker: Dr. Jens Schwarzbach (University of Regensburg) (invited by Prof. Schmidt)
June 16, 2016
Speaker: Leor Roseman, PhD (Imperial College London) (invited by Prof. Allen)
Topic: Neural correlates of LSD-induced, eyes-closed, psychedelic imagery
Abstract: Lysergic acid diethylamide (LSD) is a psychedelic drug that induces an altered state of consciousness characterized by visual hallucinations. Healthy participants were injected with LSD (75 μg) or placebo and underwent 14-min fMRI resting-state scans. During LSD sessions (but not in placebo sessions) two distinct patterns were observed. First, subjective ratings of eyes-closed psychedelic imagery were positively correlated with significant increases in functional connectivity between the primary visual cortex (V1) and other regions: bilateral striatum, insular cortex, operculum cortex, orbitofrontal cortex, inferior frontal gyrus, superior and middle temporal gyrus, supramarginal gyrus, angular gyrus, paracingulate gyrus and medial posterior thalamus. Second, eyes-closed functional connectivity within the visual network (between V1 and V3) mimicked patterns characteristic of the perceptual localization of actually presented stimuli.
As the first modern LSD neuroimaging study, this work has been widely reported (and frequently misinterpreted) in the media in recent weeks. I will attempt to clarify recurrent misconceptions and compare our findings to psychedelic imagery research conducted in the 1950s and 1960s.
June 30, 2016
Speaker: Prof. Dr. Andreas Nieder (Universität Tübingen) (invited by Prof. Friauf)
Topic: Intelligence without a cerebral cortex - lessons from crows
July 7, 2016
July 14, 2016
Speaker: Prof. Dr. Kepa Paz-Alonso (BCBL) (invited by Prof. Czernochowski)
Topic: Neural correlates of the testing effect
Abstract: Extensive behavioral evidence has demonstrated that retrieval practice is highly beneficial for long-term memory. However, the neural mechanisms underlying the testing effect remain relatively elusive. Here we sought to investigate the role of the hippocampus and cortical regions typically associated with retrieval success on the testing effect using functional and structural MRI. Thirty-seven adults studied 100 Swahili-Spanish word pairs, under repeated retrieval or repeated study conditions, and underwent MRI scanning 48 hours after encoding. fMRI results revealed that although similar brain regions were recruited for study- and retrieval-practice conditions, differential MTL activation and functional connectivity with MTL profiles emerged for successful retrieval as a function of these conditions. Structural analyses showed that total left hippocampus and left CA3/4 and dentate gyrus subfield volumes predicted successful retrieval only for information learned via study practice. Our findings showed differential hippocampal involvement and MTL-cortical neural dynamics as a function of the learning strategy.
November 19, 2015
Speaker: Dr. Nora Schaal (Heinrich-Heine-Universität Düsseldorf) (invited by Prof. Czernochowski)
Topic: Modulating pitch memory using non-invasive brain stimulation methods
Abstract: Memory for pitch is an important factor for music and language processing. Behavioural research suggests a specific storage mechanism for pitch information, and brain imaging studies have highlighted a complex neural network underlying pitch memory. As functional brain imaging studies cannot reveal causal involvements of neural areas of interest in cognitive tasks, non-invasive brain stimulation methods are better suited. In my talk I will introduce three non-invasive brain stimulation techniques (transcranial direct current stimulation, transcranial alternating current stimulation and transcranial magnetic stimulation) and will discuss three studies using these techniques in order to modulate pitch memory abilities and to investigate the significance of targeted brain areas for the pitch memory process. Furthermore, I will talk about whether neural specificities for pitch memory can be found for musicians, as experts in the musical domain, and amusics, who have a pitch memory deficit.
November 26, 2015
Speaker: Prof. Dr. Christine Schiltz (COSA Institute, University of Luxemburg) (invited by Prof. Lachmann)
Topic: How do space and language influence number concept learning?
Abstract: A major challenge in math classes is the fact that numerical concepts and symbols are abstract. Especially for young children, this abstractness stands in contrast to their preference of concrete situations and problems.
Recently, it has been proposed that even the most abstract concepts, such as numbers, are rooted in concrete body-related processes – an idea termed “embodied cognition”. Here we will analyze how number concepts relate to the concrete aspect of space and consider recent evidence providing insights into the mechanisms underlying number-space associations and their development (e.g. Goffaux et al., 2012; Hoffmann et al., 2013).
Besides the sensori-motor influence on number concepts, the learner’s context and especially his/her language environment also plays a critical role in shaping number concepts. Investigations into the relation between language and numerical cognition - in particular the question how multi-lingual persons conceive and process numbers - have lately regained interest. Here we will present and discuss recent findings from studies on the influence of language on numerical cognition in a multilingual context such as Luxembourg (e.g. Van Rinsveld et al., 2015).
Taken together we hope that this work on the relationship between the domain-general aspects of space and language and number concepts will help us understand how number symbols are represented and why and how people differ in their numerical understandings.
January 14, 2016
Speaker: Dr. Mathias Vukelic (Fraunhofer IAO, Stuttgart) (invited by Dr. Christmann)
Topic: Utilizing brain-based interaction between humans and machines for neuro-adaptive technologies
Abstract: In the last four decades brain-based interaction between humans and machines – known as Brain-Computer or Brain-Robotic Interfaces - has been investigated extensively. Using such innovative technology, relevant information from the user can be continuously detected by recognizing users’ mental, cognitive and emotional state. Based on this extracted information, the attributes of interactive digital systems and technical environments can be adjusted accordingly – leading to neuro-adaptive systems. While most research is aimed at the design of assistive, supportive or restorative systems for severely disabled persons, the last decade additionally showed new research towards applications for people without physical impairments in the field of Human-Technology Interaction (HTI). I will present recent and ongoing research of two categories of brain-based interaction in which neuro-adaptive systems can be used for medical applications, i.e. to restore motor functions after stroke and for HTI in general, i.e. to design assistive technologies which are more user-oriented.
January 21, 2016
Speaker: Prof. Dr. Laura Winther Balling (Copenhagen Business School) (invited by Prof. Allen)
Topic: Text and Sentence Processing in the lab and in the wild
Abstract: I will present my work on sentence and text processing, two lines of research which are at the same time closely related and fundamentally different. The close relation lies, among other things, in the fact that sentence and text processing are adjacent levels of processing, with considerable impact of sentence level phenomena on text processing. The fundamental difference is in the approach: I study sentence processing in strictly controlled experiments and text processing in a much more naturalistic setup. In addition to talking about the findings in these two lines of research, I will present my view of pros and cons of the different approaches, and hope to discuss with the audience ways of making research on language processing in the lab as informative as possible about language processing in the wild.
February 11, 2016
Speaker: Prof. Dr. Juhani Järvikivi (University of Alberta, Canada) (invited by Prof. Allen)
Topic: Preschoolers’ processing of pronouns in speech – Snapshots from the visual world
Abstract: How people assign reference to pronouns has received a lot of attention in psycholinguistics. Even though we understand many of the linguistic and contextual cues that direct attention to referents in adult language comprehension, relatively little is still known about how young children go about this feat given the time constraints of normal conversation.
The advent of the visual world eye-tracking paradigm has advanced the study of children’s language processing. These studies typically ask whether children show sensitivity to the same sources of information and/or similar parsing strategies as adults. Studies using the look-and-listen variant of the paradigm offer a series of snapshots into young children’s pronoun processing, suggesting that their online (and offline) comprehension is sensitive to many of the same cues as adult processing (e.g., Arnold et al., 2005; Pyykkönen et al., 2010; Hartshorne et al., 2015). However, these results differ markedly with respect to the time course of the effects – children seem to be consistently slower. These differences can be (and have been) attributed either to children’s language knowledge (experience) or cognitive maturation (memory, cognitive control).
In this light, I will briefly review some of the recent literature on children’s online processing of reference and discuss some of our recent and ongoing work investigating preschoolers’ comprehension of ambiguous pronouns – in particular, how sentence/information structure and visual cues affect children’s (and adults’) pronoun comprehension.
April 23, 2015
Speaker: Prof. Dr. Caterina Gawrilow (Universität Tübingen) (invited by Prof. Lachmann)
Topic: Assessment and Intervention of Self-Regulation
Abstract: The ability to self-regulate thoughts, emotions, and actions is pivotal for success in daily life: Individuals with good self-regulation skills achieve better grades in school, are healthier, have more satisfactory relationships, etc. This talk therefore presents data on (new) reliable methods to assess different aspects of self-regulation in various developmental phases – also from a day-to-day perspective (using, for instance, smartphones). Next to the measurement of self-regulation, evidence-based interventions aimed at fostering self-regulation are important, in particular for individuals with low levels of self-regulation (e.g., individuals with ADHD). Thus, a second focus of this talk will be on empirical studies investigating the effects of such interventions. Finally, a theoretical classification of the facets of self-regulation will be suggested.
April 30, 2015
Speaker: Prof. Dr. Matthew Crocker (Universität des Saarlandes) (invited by Prof. Allen)
Topic: The interplay of gaze and speech: Eye-tracking in virtual settings
Abstract: When speech pertains to the objects and events in the world around us, gaze and speech become closely intertwined: Speakers typically look at objects about 1sec before they mention them, while listeners fixate relevant objects within 250msec of hearing them mentioned. In face-to-face interaction, listeners can "short-cut" this process by following the speaker's gaze directly to get cues about which objects he/she is planning to mention. Thus gaze serves as a useful visual cue for both grounding and efficient disambiguation of spoken referents. In this talk I will discuss recent research which seeks to further understand the nature, importance and dynamics of gaze in visually-situated human-computer interaction. I will first focus on how eye gaze of artificial agents influences the way people understand their speech, with the aim of determining whether speaker gaze merely offers a visual cue, or more meaningful information regarding referential intentions. I will then discuss recent research which exploits the real-time gaze of human users in order to improve the automatic generation of spoken directions, as users seek to navigate their way through a virtual environment. I will conclude by summarizing the importance of eye-gaze as a real-time channel for situated spoken interaction, and discuss the implications for both human-human and human-computer interaction.
May 21, 2015
Speaker: Prof. Dr. Andrea Kiesel (University of Freiburg) (invited by Prof. Schmidt)
Topic: It is not only words - Exploring instructed stimulus-response associations
Abstract: Previous studies on item-specific priming have established the independence of two distinct components of acquired stimulus-response associations: Stimulus-Action (S-A) and Stimulus-Classification (S-C) associations. Here we show that merely instructing S-A and S-C mappings leads to associative learning and influences later behavior. More specifically, we demonstrate that item-specific switches in S-A and S-C mappings between a prime and a later corresponding probe trial independently affect reaction time and accuracy both when participants act upon prime stimuli as well as when participants are merely instructed about the correct action and classification associated with prime stimuli. In a number of experiments, we elaborate on the stability and durability of S-A and S-C associations instantiated by mere instruction.
May 28, 2015
Speaker: Dr. David Vinson (University College London) (invited by Prof. Allen)
Topic: Making sense of the hands: integration of speech and gestures in comprehension
Abstract: When we speak, we cannot help but gesture. It is now well established by researchers on "nonverbal" communication that large amounts of meaningful information are conveyed by the gestures we make when we speak, that representational gestures and speech are tightly temporally synchronised and exhibit other properties of mutual dependence. Researchers in psycholinguistics, however, tend to regard gesture as secondary or irrelevant to the processes constituting comprehension, in large part due to the use of printed text or auditory speech, both of which eliminate the visual cues that are present in face-to-face communication. In recent years there have been increased efforts to bring the two fields together, particularly in studies examining the behavioural and neural consequences that arise when speech and gesture mismatch. Behavioural studies suggest that speech and gesture are integrated and weighted equally in providing access to meaning. Neuroimaging studies implicate bilateral temporal regions (posterior STS, middle temporal gyrus) and left inferior frontal gyrus (pars triangularis) in semantic integration. However, all existing studies of speech-gesture mismatch have used video clips of speakers whose heads/faces are not visible - not only due to the technical challenges of producing realistic mismatching speech-gesture combinations with a visible face, but also to avoid tapping into integration between the auditory speech signal and visible head/face movements corresponding to produced speech. I will argue that the face is essential to understanding semantic integration between speech and gesture, presenting behavioural experiments and an fMRI study, all using digitally manipulated video clips in which the speaker's face is visible and corresponds to the heard speech. 
While gesture is still relevant to speech comprehension, the balance of the cues is shifted; and the neural systems engaged in semantic integration appear to be limited to bilateral temporal regions under these conditions.
June 11, 2015
Speaker: Mireille Trautmann (invited by Prof. Lachmann)
Topic: Researching the causes of dyslexia
Abstract: Instead of following one line of argumentation, such as the influence of auditory perception on the development of phonological awareness, a systematic facet model is necessary to identify the development of singular, bi-, and cross-modal perception, attention, and higher cognition in dyslexia over time. In addition, such a model should also include etiological measures such as genetic and family risk factors. In my talk I will present data from a study that focused on distinctive processing types that fit into a complex model of how dyslexia emerges.
June 25, 2015
Speaker: Prof. Dr. Julia Fischer (Cognitive Ethology Lab, German Primate Center, Leibniz Institute for Primate Research, Göttingen) (invited by Prof. Friauf)
Topic: Comparative approaches to understanding the origins of the language faculty
Abstract: Elucidating the origins of speech has been a major driver in studies of animal vocal communication. I will here discuss three aspects that have been deemed central in this regard, namely semanticity, vocal learning, and intentionality. The classic example of semantic communication is the alarm calls of vervet monkeys (Chlorocebus pygerythrus), which give three distinct alarm calls in response to their three main predator classes, and the calls alone are sufficient to elicit different escape strategies. The calls were thus deemed “functionally referential” and assumed to provide important insights into the evolution of speech. We used a comprehensive quantitative analysis to re-assess the structural variation within their vocalizations. Although the three different alarm calls indeed differ significantly, we found substantial overlap in acoustic structure with calls produced in non-alarm contexts, indicating that these alarm calls are indexical rather than symbolic. Playback experiments on green monkeys, the West African congeners of vervets, further show that responses are guided by both the acoustic information available and contextual cues. Analyses of the variation of alarm calls in different Chlorocebus populations underscore the view that the structure of nonhuman primate vocalizations is largely innate. In the second part of the talk, I will review a number of studies that have addressed the genetic basis of vocal learning in mice, including mice carrying the human variant of the FOXP2 gene. These studies strongly suggest that mice are not vocal learners either, questioning their utility as models for studying the genetic basis of vocal learning. I will conclude with a brief discussion of differences in the intentional structure of human and nonhuman primate communication and provide an outlook on promising future questions.
July 2, 2015
Honorary colloquium (Ehrenkolloquium) for Hans-Georg Geissler
July 9, 2015
Speaker: Dr. Ilja Sligte (Universität Amsterdam) (invited by Prof. Schmidt / Dr. Panis)
Topic: Visual sensory memory = perceptual consciousness
Abstract: In recent years, we have published several papers showing the existence of a high-capacity (up to 15 objects) and long-lived (up to 4s) form of sensory memory that can be clearly dissociated from iconic memory (Sligte, Scholte, Lamme, 2008) and from working memory (Sligte et al., 2011; Vandenbroucke, Sligte, & Lamme, 2011). We have argued that this new form of sensory memory, putatively termed fragile memory, represents a low-level and parallel form of phenomenological awareness. Indeed, signatures of consciousness such as perceptual inference (Vandenbroucke et al., 2012), metacognition (Vandenbroucke et al., 2014) and feature binding (Elport & Sligte, forthcoming) all seem to be present in fragile memory. However, all our results were based on partial-report experiments where subjects had to choose between change and no-change responses. This fact has triggered the criticism that subjects were just guessing (Phillips, 2011) on the basis of unconscious representations, as in blindsight. To explore this alternative explanation of our findings, we tested how subjects performed on a partial-report task with continuous response options (see Zhang & Luck, 2008; Bays & Husain, 2008 for examples of the task; we added retro-cues to this paradigm similar to Sligte, Scholte, & Lamme, 2008). We observed that subjects could report 7 objects (out of 8) with high precision on pure iconic memory conditions, about 6 on retro-cue (long-lasting and fragile form of memory) conditions, and only 4 on post-cue, working memory conditions. This suggests that all our previous studies validly made perceptual consciousness available for cognitive access.
July 23, 2015
Speaker: Prof. Dr. Wieske van Zoest (University of Trento) (invited by Prof. Schmidt / Dr. Panis)
Topic: Developing representations in visual selection and time
Abstract: Oculomotor selection has an idiosyncratic time-course: early selection is driven primarily by raw saliency, whereas late selection is guided by search templates. In the last couple of years my colleagues and I have started to test the idea that this time-course can change as a function of experience and early developmental plasticity.
In particular, we investigated whether selection dynamics are altered 1) in observers who have a background of increased visual training as a result of extensive action video-game (AVG) playing, and 2) in deaf observers who rely heavily on information from the visual domain because of the absence of auditory input. Our results show that the time course of oculomotor performance is speeded in AVG players but slowed in deaf observers, in both cases relative to control groups. Moreover, individuals who respond relatively slowly in both groups are less influenced by stimulus saliency than individuals who respond relatively fast. However, the time course of visual processing remains fundamentally consistent across these groups: the function shifts in time, but the representation does not qualitatively change. In addition, using concurrent eye-tracking and EEG to study the neurophysiological correlates, we show that the development of distractor inhibition, as indicated by a contralateral positivity (Pd), looks similar regardless of how much time observers took before making the eye movement. Thus, with the ERP component being equal, saccadic performance depended critically on when in time the eye movement was made: only slow oculomotor responses benefited from inhibition, while fast responses did not, because they occurred before inhibition was established. To conclude, the development of visual representations in processing seems relatively determined across experience and in the brain. Critically, when the representation is accessed defines the kind of information that is prioritized and the consequent control and performance.