An interview with Alain de Cheveigné
Alain de Cheveigné (Laboratoire des Systèmes Perceptifs) has obtained a grant from the H2020 programme for the COCOHA project (Cognitive Control of a Hearing Aid).
Can you tell us what you did before joining the LSP's Audition Team?
My training and formative pathway are somewhat eclectic. I received undergraduate training in Physics, Mathematics and Electronics in Paris. My PhD work was in experimental phonetics, after which I spent two years at the Computer Science department of Kyoto University in Japan, followed by many subsequent visits to Japan over the years, mainly at the ATR institute. Before establishing myself as a professional scientist, I worked variously as a theatre lighting designer and as a computer engineer. I now hold a position with the CNRS, at ENS in Paris, in addition to an honorary position at University College London. Before arriving at the ENS I worked at the Ircam institute, which deals with contemporary music, and before that at the Linguistics department of Paris Diderot University. Scientifically I have worked on problems in speech processing and auditory perception, in particular the perception of pitch and timbre and auditory scene analysis. More recently I have been developing new methods to process signals from the brain, which I hope will be useful for our new project. I am interested in the science, in figuring out how things work, but also in transforming this knowledge into something useful. I believe the two cannot be dissociated: putting knowledge to use is a touchstone that proves its validity. This does not mean that I favor application-directed research: the pathways by which knowledge comes to bear on the real world are often indirect, and fundamental research is the best place to invest energy to come up with new solutions. This makes me happy about the project that just got funded, as it illustrates well the pathway from fundamental research to high-impact applications.
The LSP is made up of two teams: the Audition Team and the Vision Team. Could you describe the LSP and its Audition Team?
Our lab was born from the convergence of two teams, Vision and Audition. Both are grounded in sensory psychophysics, with a strong emphasis on theory and models of perception, and a keen interest in engineering counterparts of these models. The Audition team is interested in the perception of low-level features such as pitch, more complex skills such as speech perception, and more general questions such as sensory memory, attention, plasticity, scene analysis, predictive coding, and so on. The arrival of Shihab Shamma from the University of Maryland has opened a new dimension of investigation involving human and animal behavior and electrophysiology. We have cultivated strong links with the clinical world and with the cochlear implant and hearing aid industries, which give a firm basis for the COCOHA project.
You are in charge of coordination of the European COCOHA project. It's an ambitious project that involves the creation of a new, intelligent hearing aid. Could you tell us more about the project?
The COCOHA project revolves around a need, an opportunity, and a challenge. Millions of people struggle to communicate in noisy environments, particularly the elderly: 7% of the European population are classified as hearing impaired. Hearing aids can effectively deal with a simple loss in sensitivity, but they do not restore the ability of a healthy pair of young ears to pick out a weak voice among many, an ability that is needed for effective social communication. That is the need. The opportunity is that decisive technological progress has been made in the area of acoustic scene analysis: arrays of microphones and beamforming algorithms, or distributed networks of handheld devices such as smartphones, can be recruited to vastly improve the signal-to-noise ratio of weak sound sources. Some of these techniques have been around for a while, and are even integrated within commercially available hearing aids. However, their uptake is limited for one very simple reason: there is no easy way to steer the device, no way for the user to tell it which one source among many he or she wishes to attend to. The COCOHA project proposes to use brain signals (EEG) to help steer the acoustic scene analysis hardware, in effect extending the efferent neural pathways that control all stages of processing from the cortex down to the cochlea, to govern also the external device. To succeed we must overcome major technical hurdles, drawing on methods from acoustic signal processing and machine learning borrowed from the field of Brain Computer Interfaces. On the way we will probe interesting scientific problems related to attention, to electrophysiological correlates of sensory input and brain state, and to the structure of sound and brain signals. This is the challenge.
LSP website: www.iec-lsp.ens.fr
COCOHA project website: http://audition.ens.fr/cocoha/
Nicolas Baumard, Alexandre Hyafil, Ian Morris, Pascal Boyer. Increased Affluence Explains the Emergence of Ascetic Wisdoms and Moralizing Religions. Current Biology, December 11, 2014.
Between roughly 500 BCE and 300 BCE, three distinct regions, the Yangtze and Yellow River Valleys, the Eastern Mediterranean, and the Ganges Valley, saw the emergence of highly similar religious traditions with an unprecedented emphasis on self-discipline and asceticism and with “otherworldly,” often moralizing, doctrines, including Buddhism, Jainism, Brahmanism, Daoism, Second Temple Judaism, and Stoicism, with later offshoots, such as Christianity, Manichaeism, and Islam. This cultural convergence, often called the “Axial Age,” presents a puzzle: why did distinct moralizing religions, with highly similar features, emerge at the same time in different civilizations? The puzzle may be solved by quantitative historical evidence demonstrating an exceptional increase in energy capture (a proxy for general prosperity) just before the Axial Age in these three regions.
Statistical modeling confirms that economic development, not political complexity or population size, accounts for the timing of the Axial Age.
We discussed several possible causal pathways, including the development of literacy and urban life, and put forward the idea, inspired by life history theory, that absolute affluence would have impacted human motivation and reward systems, nudging people away from short-term strategies (resource acquisition and coercive interactions) and promoting long-term strategies (self-control techniques and cooperative interactions).
Stéphane Buffat, Véronique Chastres, Alain Bichot, Delphine Rider, Frédéric Benmussa, and Jean Lorenceau
OB3D, a new set of 3D objects available for research: a web-based study. Front. Psychol., 06 October 2014
Studying object recognition is central to fundamental and clinical research on cognitive functions but suffers from the limitations of the available sets, which cannot always be modified and adapted to meet the specific goals of each study. We here present a new set of 3D scans of real objects available online as ASCII files, OB3D. These files are lists of dots, each defined by a triplet of spatial coordinates and its normal, which allow simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the denomination and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lateral Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and patients with cognitive impairments, including visual perception, language, memory, etc.
Andrew Martin, Thomas Schatz, Maarten Versteegh, Kouki Miyazawa, Reiko Mazuka, Emmanuel Dupoux, and Alejandrina Cristia. Mothers Speak Less Clearly to Infants Than to Adults: A Comprehensive Test of the Hyperarticulation Hypothesis. Psychological Science, 2015.
Infants learn language at an incredible speed, and one of the first steps in this voyage is learning the basic sound units of their native languages. It is widely thought that caregivers facilitate this task by hyperarticulating when speaking to their infants. Using state-of-the-art speech technology, we addressed this key theoretical question: Are sound categories clearer in infant-directed speech than in adult-directed speech? A comprehensive examination of sound contrasts in a large corpus of recorded, spontaneous Japanese speech demonstrates that there is a small but significant tendency for contrasts in infant-directed speech to be less clear than those in adult-directed speech. This finding runs contrary to the idea that caregivers actively enhance phonetic categories in infant-directed speech. These results suggest that to be plausible, theories of infants’ language acquisition must posit an ability to learn from noisy data.
Pieron M, Seassau M, Leboyer M, Zalla T. Accelerated time course of saccadic inhibition of return in individuals with autism spectrum disorders. Exp Brain Res. 2014 Nov 30. PMID: 25432625.
Inhibition of return (IOR) refers to the observer's slower response time when the target stimulus appears at a previously attended location. In the present study, we examined the time course of saccadic IOR by using five stimulus onset asynchronies (SOAs) in a group of adults with autism spectrum disorders (ASDs) and a comparison group. The results showed that the IOR effect occurred earlier (300 ms SOA) in participants with ASDs, relative to the comparison participants (500 and 700 ms SOAs). The ASD group also committed a greater number of anticipatory saccades, which positively correlated with scores on restricted and repetitive behaviors, as assessed by the Autism Diagnostic Interview-Revised (ADI-R; Lord et al. in J Autism Dev Disord 24:659-685, 1994). These findings reveal an accelerated time course for saccadic IOR along with diminished volitional oculomotor control in participants with ASDs. We discuss these results with reference to the atypical and the superior visual search abilities often reported in this population.
Zalla T, Miele D, Leboyer M, Metcalfe J. Metacognition of agency and theory of mind in adults with high functioning autism. Conscious Cogn. 2014 Dec 4; 31C:126-138.
We investigated metacognition of agency in adults with high functioning autism or Asperger Syndrome (HFA/AS) using a computer task in which participants moved the mouse to get the cursor to touch the downward-moving X's and avoid the O's. They were then asked to make judgments of performance and judgments of agency. Objective control was either undistorted, or distorted by adding turbulence (i.e., random noise) or a time lag between the mouse and cursor movements. Participants with HFA/AS used sensorimotor cues available in the turbulence and lag conditions to a lesser extent than control participants in making their judgments of agency. Furthermore, the failure to use these internal diagnostic cues to their own agency was correlated with decrements in a theory of mind task. These findings suggest that a reduced sensitivity to veridical internal cues about the sense of agency is related to mentalizing impairments in autism.