[{"authors":null,"categories":null,"content":"About I\u0026rsquo;m a cognitive neuroscientist currently using neuroimaging techniques in farm animals to investigate: 1) how early environment influences behavioural and neurobiological development and 2) social perception and its underlying neural mechanisms.\nSince November 2017, I have been a researcher (Chargé de Recherche) at the PRC (Physiologie de la Reproduction et des Comportements) research unit in Nouzilly, France.\nAlso check out the INRAE Farm Animal Cognition and Welfare network that I help to animate.\n","date":-62135596800,"expirydate":-62135596800,"kind":"term","lang":"en","lastmod":-62135596800,"objectID":"00ce5bf736bf5b73096e222f166bef2e","permalink":"https://scott-love.github.io/author/","publishdate":"0001-01-01T00:00:00Z","relpermalink":"/author/","section":"authors","summary":"About I\u0026rsquo;m a cognitive neuroscientist currently using neuroimaging techniques in farm animals to investigate: 1) how early environment influences behavioural and neurobiological development and 2) social perception and its underlying neural mechanisms.","tags":null,"title":"","type":"authors"},{"authors":["Cécile Arnould","Scott A. Love","Benoit Piègu","Gaëlle Lefort","Marie-Claire Blache","Céline Parias","Delphine Soulet","Frédéric Lévy","Raymond Nowak","Léa Lansade","Aline Bertin"],"categories":null,"content":"","date":1718983401,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1718983401,"objectID":"587c891a5ff37f82f0f4f5814672f41a","permalink":"https://scott-love.github.io/publication/arnould2024/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/arnould2024/","section":"publication","summary":"The study offacial expressions inmammals provided great advances inthe identification of their emotions and then inthe comprehension oftheir sentience. So far, this area of research has excluded birds. With anaturalist approach, we analysed facial blushing and feather displays indomestic fowl. Hens were filmed insituations contrasting inemotional valence and arousal level: situations known toindicate calm states (positive valence /low arousal), have rewarding effects (positive valence /high arousal) orinduce fear-related behaviour (negative valence /high arousal). Head feather position as well as skin redness ofcomb, wattles, ear lobes and cheeks varied across these situations. Skin ofall four areas was less red insituations with low arousal compared tosituations with higher arousal. Furthermore, skin redness ofthe cheeks and ear lobes also varied depending on the valence of the situation: redness was higher insituations with negative valence compared tosituations with positive valence. Feather position also varied with the situations. Feather fluffing was mostly observed inpositively valenced situations, except when hens were eating. We conclude that hens have facial displays that reveal their emotions and that blushing isnot exclusive tohumans. This opens apromising way toexplore the emotional lives ofbirds, which is acritical step when trying toimprove poultry welfare.","tags":["emotion","chicken"],"title":"Facial blushing and feather fluffing are indicators of emotions in domestic fowl (Gallus gallus domesticus)","type":"publication"},{"authors":["Camille Pluchot","Hans Adriaensen","Céline Parias","Didier Dubreuil","Cécile Arnould","Elodie Chaillou","Scott A. 
Love"],"categories":null,"content":"","date":1716895130,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1716895130,"objectID":"65c877cc5d14b80c075caa3582b0fd57","permalink":"https://scott-love.github.io/publication/pluchot2024/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/pluchot2024/","section":"publication","summary":"Magnetic resonance imaging (MRI) is a non-invasive technique that requires the participant to be completely motionless. To date, MRI in awake and unrestrained animals has only been achieved with humans and dogs. For other species, alternative techniques such as anesthesia, restraint and/or sedation have been necessary. Anatomical and functional MRI studies with sheep have only been conducted under general anesthesia. This ensures the absence of movement and allows relatively long MRI experiments but it removes the non-invasive nature of the MRI technique (i.e., IV injections, intubation). Anesthesia can also be detrimental to health, disrupt neurovascular coupling, and does not permit the study of higher-level cognition. Here, we present a proof-of-concept that sheep can be trained to perform a series of tasks, enabling them to voluntarily participate in MRI sessions without anesthesia or restraint. We describe a step-by-step training protocol based on positive reinforcement (food and praise) that could be used as a basis for future neuroimaging research in sheep. This protocol details the two successive phases required for sheep to successfully achieve MRI acquisitions of their brain. By providing structural brain MRI images from six out of ten sheep, we demonstrate the feasibility of our training protocol. This innovative training protocol paves the way for the possibility of conducting animal welfare-friendly functional MRI studies with sheep to investigate ovine cognition.","tags":["sheep","mri"],"title":"Sheep (Ovis aries) training protocol for voluntary awake and unrestrained structural brain MRI acquisitions","type":"publication"},{"authors":["Delphine Soulet","Anissa Jahoul","Rodrigo Guabiraba","Léa Lansade","Marie-Claire Blache","Benoit Piègu","Gaëlle Lefort","Vanaïque Guillory","Pascale Quéré","K. Germain","Frédéric Lévy","Scott A. Love","Aline Bertin","Cécile Arnould"],"categories":null,"content":"","date":1716305001,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1716305001,"objectID":"9a563d312aba8425a78caea44797588b","permalink":"https://scott-love.github.io/publication/soulet2024/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/soulet2024/","section":"publication","summary":"Non-invasive markers of affective states can help understanding animals' perception of situations and improving their welfare. These markers are scarce in avian species. In this study, we investigate the potential relation between alterations in facial skin redness in hens and their corresponding affective states. Six hens were filmed in both naturally unfolding scenarios and controlled tests designed to elicit various affective states. The facial skin redness was measured from images extracted from the videos. Our observations revealed that hens exhibited the highest degree of facial skin redness in negative situations of high arousal, a high redness in positive situations of high arousal, and the lowest in positive situations of low arousal. In a second study, we further examined whether facial skin redness and secretory immunoglobulin A (S-IgA) can serve as markers for the quality of the human-animal relationship. 
Two groups of hens, one habituated to humans (n=13) and one non-habituated (n=12), were compared for general fearfulness in an open field test and for fear of humans in a reactivity to human test. In the open-field test, there were no statistical differences in general fearfulness, facial skin redness or S-IgA concentrations between both groups. However, habituated hens exhibited significantly lower fearfulness and facial skin redness in the presence of humans compared to non-habituated hens in the reactivity to human test. Additionally, habituated hens showed significantly lower S-IgA concentration in lachrymal fluid in the presence of humans, with no significant differences in saliva or cloacal samples. We propose that changes in facial skin redness reflect variations in affective states and can be used as a marker for assessing the quality of the human-hen relationship. The relationship between S-IgA concentrations and affective states requires further investigation.","tags":["emotion"],"title":"Exploration of skin redness and immunoglobulin A as markers of the affective states of hens","type":"publication"},{"authors":["Paula Regener","Naomi Heffer","Scott A. Love","Karin Petrini","Frank Pollick"],"categories":null,"content":"","date":1713972201,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1713972201,"objectID":"1be9aede4622363e2e21424f12d18540","permalink":"https://scott-love.github.io/publication/regener2024/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/regener2024/","section":"publication","summary":"Research has shown that children on the autism spectrum and adults with high levels of autistic traits are less sensitive to audiovisual asynchrony compared to their neurotypical peers. However, this evidence has been limited to simultaneity judgments (SJ) which require participants to consider the timing of two cues together. Given evidence of partly divergent perceptual and neural mechanisms involved in making temporal order judgments (TOJ) and SJ, and given that SJ require a more global type of processing which may be impaired in autistic individuals, here we ask whether the observed differences in audiovisual temporal processing are task and stimulus specific. We examined the ability to detect audiovisual asynchrony in a group of 26 autistic adult males and a group of age and IQ-matched neurotypical males. Participants were presented with beep-flash, point-light drumming, and face-voice displays with varying degrees of asynchrony and asked to make SJ and TOJ. The results indicated that autistic participants were less able to detect audiovisual asynchrony compared to the control group, but this effect was specific to SJ and more complex social stimuli (e.g., face-voice) with stronger semantic correspondence between the cues, requiring a more global type of processing. 
This indicates that audiovisual temporal processing is not generally different in autistic individuals and that a similar level of performance could be achieved by using a more local type of processing, thus informing multisensory integration theory as well as multisensory training aimed to aid perceptual abilities in this population.","tags":["synchrony","autism spectrum disorders","audiovisual","multisensory"],"title":"Differences in audiovisual temporal processing in autistic adults are specific to simultaneity judgments","type":"publication"},{"authors":["Aline Bertin","Baptiste Mulot","Raymond Nowak","Marie-Claire Blache","Scott Love","Mathilde Arnold","Annabelle Pinateau","Cécile Arnould","Léa Lansade"],"categories":null,"content":"","date":1674314601,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1674314601,"objectID":"04eafad171cc4dbd6cf0f32671296409","permalink":"https://scott-love.github.io/publication/bertin2023/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/bertin2023/","section":"publication","summary":"In mammals, human-animal bonding is recognized as a source of positive affect for companion or farm animals. Because this remains unexplored in birds, we investigated captive parrots’ perspective of the human-animal relationship. We used a classical separation-reunion paradigm and predicted that variations in parrots’ facial displays and behaviours would indicate their appraisal of the relationship. The test was divided into three phases of two minutes each: the bird was placed in an unfamiliar environment with a familiar caregiver (union), then the bird was left alone (separation) and finally, the caregiver returned (reunion). The test was repeated 10 times for each bird and video recorded in order to analyze their behaviour. The data show significantly higher crown and nape feather heights, higher redness of the skin and higher frequency of contact-seeking behaviours during the union and reunion phases than during the separation phase during which they expressed long distance contact calls. We observed the expression of eye pinning during the union and reunion phases in one out of five macaws. We argue that variation in facial displays provides indicators of parrot’s positive appraisal of the caretaker presence. Our results broaden the scope for further studies on parrots’ expression of their subjective feelings.","tags":["emotion"],"title":"Captive Blue-and-yellow macaws (Ara ararauna) show facial indicators of positive affect when reunited with their caregiver","type":"publication"},{"authors":["Guillaume Ubiema","Marine Siwiaszczyk","Celine Parias","Roman Bresso","Christophe Hay","Baptiste Mulot","Scott A. Love","Elodie Chaillou"],"categories":null,"content":"","date":1664551401,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1664551401,"objectID":"3d94a0402ddce1ccfb759e30ee2a5c36","permalink":"https://scott-love.github.io/publication/ubiema2022/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/ubiema2022/","section":"publication","summary":"Music can cause pleasant sensations in humans whereas some noises can cause discomfort. The effects of music and noise have also been somewhat studied in animals, showing different impacts. In this review we aim to illustrate the differences and similarities between animals, in terms of their sensitivity to auditory stimuli (noise or music), by first recalling some generalities about the physical characteristics of sound and the biological bases of hearing. 
Second, based on the studies reported in this review, we conclude that ambient noise is harmful and/or stressful, and that musical sounds can take many forms with a large range of impacts in animals. Finally, we present two practical examples of the use of music with animals (one in the context of a zoo and the other in cattle breeding) and an example of an experiment designed to understand the impact of music on neonate lambs. These three examples highlight how music can help to improve animal welfare.","tags":["music","vocalization"],"title":"The use and impact of auditory stimulation in animals","type":"publication"},{"authors":["Marine Siwiaszczyk","Scott A. Love","Elodie Chaillou"],"categories":null,"content":"","date":1653665001,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1653665001,"objectID":"f8fae94cd6d77bf961854ff60f1706fa","permalink":"https://scott-love.github.io/publication/siwiaszczyk2022/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/siwiaszczyk2022/","section":"publication","summary":"If you have ever been out to the countryside in the spring, you might have heard sheep bleating to their lambs. Sheep also bleat when they are separated from the flock or stressed in some other way. To us, all these bleats sound very similar. But do you think they also sound similar to the lambs? Or do you think the lambs know whose mother is calling and what they are saying? Scientists try to interpret the bleats of sheep by observing their behavior when they hear these sounds. They study the sound waves of recorded bleats to identify each sheep’s unique voice and even determine which emotions the sheep are feeling. They also investigate the brain to find out what is going on inside the heads of sheep when they hear and understand the sounds of other sheep. Studies show that sheep really can recognize each other’s voices and communicate vocally.","tags":["sheep","vocalization"],"title":"“BAA, BAA”: Can Sheep Talk to Each Other?","type":"publication"},{"authors":["Raïssa Yebga Hot","Marine Siwiaszczyk","Scott A. Love","Frédéric Andersson","Ludovic Calandreau","Fabrice Poupon","Justine Beaujoin","Bastien Herlin","Fawzi Boumezbeur","Baptiste Mulot","Elodie Chaillou","Ivy Uszynski","Cyril Poupon"],"categories":null,"content":"","date":1648725530,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1648725530,"objectID":"0d4fc5bac7e7c960f0546dee5d652fc9","permalink":"https://scott-love.github.io/publication/yebgahot2022/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/yebgahot2022/","section":"publication","summary":"The structural connectivity of animal brains can be revealed using post-mortem diffusion-weighted magnetic resonance imaging (MRI). Despite the existence of several structural atlases of avian brains, few of them address the bird’s structural connectivity. In this study, a novel atlas of the structural connectivity is proposed for the male Japanese quail (Coturnix japonica), aiming at investigating two lines divergent on their emotionality trait: the short tonic immobility (STI) and the long tonic immobility (LTI) lines. The STI line presents a low emotionality trait, while the LTI line expresses a high emotionality trait. 21 male Japanese quail brains from both lines were scanned post-mortem for this study, using a preclinical Bruker 11.7 T MRI scanner. 
Diffusion-weighted MRI was performed using a 3D segmented echo planar imaging (EPI) pulsed gradient spin-echo (PGSE) sequence with a 200 μm isotropic resolution, 75 diffusion-encoding directions and a b-value fixed at 4500 s/mm2. Anatomical MRI was likewise performed using a 2D anatomical T2-weighted spin-echo (SE) sequence with a 150 μm isotropic resolution. This very first anatomical connectivity atlas of the male Japanese quail reveals 34 labeled fiber tracts and the existence of structural differences between the connectivity patterns characterizing the two lines. Thus, a better understanding has been reached of the link between the male Japanese quail’s connectivity and its underlying anatomical structures.","tags":["quail","mri"],"title":"A novel male Japanese quail structural connectivity atlas using ultra-high field diffusion MRI at 11.7 T","type":"publication"},{"authors":["Scott A. Love","Emmanuelle Haslin","Manon Bellardie","Frédéric Andersson","Laurent Barantin","Isabelle Filipiak","Hans Adriaensen","Csilla L. Fazekas","Laurène Leroy","Dóra Zelena","Mélody Morisse","Frédéric Elleboudt","Christian Moussu","Frédéric Lévy","Raymond Nowak","Elodie Chaillou"],"categories":null,"content":"","date":1645874330,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1645874330,"objectID":"6345120ed49dbac7c6dc9c6ecbbd6c5f","permalink":"https://scott-love.github.io/publication/love2022/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/love2022/","section":"publication","summary":"The psychoendocrine evaluation of lamb development has demonstrated that maternal deprivation and milk replacement alter health, behavior, and endocrine profiles. While lambs are able to discriminate familiar and non-familiar conspecifics (mother or lamb), only lambs reared with their mother develop such clear social discrimination or preference. Lambs reared without their mother display no preference for a specific lamb from their own group. Differences in exploratory and emotional behaviors between mother-reared and mother-deprived lambs have also been reported. As these behavioural abilities are supported by the brain, we hypothesize that rearing with maternal deprivation and milk replacement leads to altered brain development and maturation. To test this hypothesis, we examined brain morphometric and microstructural variables extracted from in vivo T1-weighted and diffusion-weighted magnetic resonance images acquired longitudinally (1 week, 1.5 months, and 4.5 months of age) in mother-reared and mother-deprived lambs. From the morphometric variables the caudate nuclei volume was found to be smaller for mother-deprived than for mother-reared lambs. T1-weighted signal intensity and radial diffusivity were higher for mother-deprived than for mother-reared lambs in both the white and gray matters. The fractional anisotropy of the white matter was lower for mother-deprived than for mother-reared lambs. Based on these morphometric and microstructural characteristics we conclude that maternal deprivation delays and affects lamb brain growth and maturation.","tags":["sheep","mri"],"title":"Maternal deprivation and milk replacement affect the integrity of gray and white matter in the developing lamb brain","type":"publication"},{"authors":["Artur Agaronyan","Raeyan Syed","Ryan Kim","Chao-Hsiung Hsu","Scott A. Love","Jacob M. Hooker","Alicia E. Reid","Paul C. 
Wang","Nobuyuki Ishibashi","Yeona Kang","Tsang-Wei Tu"],"categories":null,"content":"","date":1642173801,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1642173801,"objectID":"7df7423bdcec79c0daab739a7a5e90ba","permalink":"https://scott-love.github.io/publication/agaronyan2021/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/agaronyan2021/","section":"publication","summary":"The olive baboon (Papio anubis) is phylogenetically proximal to humans. Investigation into the baboon brain has shed light on the function and organization of the human brain, as well as on the mechanistic insights of neurological disorders such as Alzheimer’s and Parkinson’s. Non-invasive brain imaging, including positron emission tomography (PET) and magnetic resonance imaging (MRI), are the primary outcome measures frequently used in baboon studies. PET functional imaging has long been used to study cerebral metabolic processes, though it lacks clear and reliable anatomical information. In contrast, MRI provides a clear definition of soft tissue with high resolution and contrast to distinguish brain pathology and anatomy, but lacks specific markers of neuroreceptors and/or neurometabolites. There is a need to create a brain atlas that combines the anatomical and functional/neurochemical data independently available from MRI and PET. For this purpose, a three-dimensional atlas of the olive baboon brain was developed to enable multimodal imaging analysis. The atlas was created on a population-representative template encompassing 89 baboon brains. The atlas defines 24 brain regions, including the thalamus, cerebral cortex, putamen, corpus callosum, and insula. The atlas was evaluated with four MRI images and 20 PET images employing the radiotracers for [11C]benzamide, [11C]metergoline, [18F]FAHA, and [11C]rolipram, with and without structural aids like [18F]flurodeoxyglycose images. The atlas-based analysis pipeline includes automated segmentation, registration, quantification of region volume, the volume of distribution, and standardized uptake value. Results showed that, in comparison to PET analysis utilizing the “gold standard” manual quantification by neuroscientists, the performance of the atlas-based analysis was at \u003e80 and \u003e70% agreement for MRI and PET, respectively. The atlas can serve as a foundation for further refinement, and incorporation into a high-throughput workflow of baboon PET and MRI data. The new atlas is freely available on the Figshare online repository (https://doi.org/10.6084/m9.figshare.16663339), and the template images are available from neuroImaging tools \u0026 resources collaboratory (NITRC) (https://www.nitrc.org/projects/haiko89/).","tags":["brain","anatomy","baboon","template","segmentation"],"title":"A Baboon Brain Atlas for Magnetic Resonance Imaging and Positron Emission Tomography Image Analysis","type":"publication"},{"authors":["Karin Petrini","Georgina Denis","Scott A. Love","Marko Nardini"],"categories":null,"content":"","date":1602170601,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1602170601,"objectID":"ffc53208b0ec7d1108773b137cf7733d","permalink":"https://scott-love.github.io/publication/petrini2020/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/petrini2020/","section":"publication","summary":"The brain's ability to integrate information from the different senses is essential for decreasing sensory uncertainty and ultimately limiting errors. 
Temporal correspondence is one of the key processes that determines whether information from different senses will be integrated and is influenced by both experience- and task-dependent mechanisms in adults. Here we investigated the development of both task- and experience-dependent temporal mechanisms by testing 7-8-year-old children, 10-11-year-old children, and adults in two tasks (simultaneity judgment, temporal order judgment) using audiovisual stimuli with differing degrees of association based on prior experience (low for beep-flash vs. high for face-voice). By fitting an independent channels model to the data, we found that while the experience-dependent mechanism of audiovisual simultaneity perception is already adult-like in 10-11-year-old children, the task-dependent mechanism is still not. These results indicate that differing maturation rates of experience-dependent and task-dependent mechanisms underlie the development of multisensory integration. Understanding this development has important implications for clinical and educational interventions.","tags":["synchrony","development","audiovisual","face","multisensory"],"title":"Combining the senses: the role of experience- and task-dependent mechanisms in the development of audiovisual simultaneity perception","type":"publication"},{"authors":["Emilie C. Perez","Maryse Meurisse","Lucile Hervé","Marion Georgelin","Paul Constantin","Fabien Cornilleau","Scott A. Love","Frédéric Lévy","Ludovic Calandreau","Aline Bertin"],"categories":null,"content":"","date":1575300201,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1575300201,"objectID":"9833040eeacc547f2df676a7b1734d62","permalink":"https://scott-love.github.io/publication/perez2020/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/perez2020/","section":"publication","summary":"Avoidance of novelty, termed neophobia, protects animals from potential dangers but can also impair their adaptation to novel environments or food resources. This behaviour is particularly well described in birds but the neurobiological correlates remain unexplored. Here, we measured neuronal activity in the amygdala and the striatum, two brain regions believed to be involved in novelty detection, by labelling the early gene c-fos following chicks' exposure to a novel food (NF), a novel object (NO) or a familiar food (FF). NF and NO chicks showed significantly longer latencies to touch the food, less time eating and emitted more fear-vocalizations than control chicks. Latency to touch the food was also longer for NO than for NF chicks. Significantly higher densities of c-fos positive cells were present in all the nuclei of the arcopallium/amygdala of NF and NO chicks compared to FF chicks. Also, NO chicks showed higher positive cell densities than NF chicks in the posterior amygdaloid, the intermediate and the medial arcopallium. Exposure to novel food or object induced a similar increase in c-fos expression in the nucleus accumbens and the medial striatum. Our data provide evidence that activation of the arcopallium/amygdala is specific to the type of novelty. The activation of the striatum may be more related to novelty seeking.","tags":["neophobia","emotion"],"title":"Object and food novelty induce distinct patterns of c-fos immunoreactivity in amygdala and striatum in domestic male chicks (Gallus gallus domesticus)","type":"publication"},{"authors":["Scott A. Love","Karin Petrini","Cyril R. Pernet","Marianne Latinus","Frank E. 
Pollick"],"categories":null,"content":"","date":1531221530,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1531221530,"objectID":"65b40df3d2a992b5c5420f4371cd4b10","permalink":"https://scott-love.github.io/publication/love2018/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/love2018/","section":"publication","summary":"Multisensory processing is a core perceptual capability, and the need to understand its neural bases provides a fundamental problem in the study of brain function. Both synchrony and temporal order judgments are commonly used to investigate synchrony perception between different sensory cues and multisensory perception in general. However, extensive behavioral evidence indicates that these tasks do not measure identical perceptual processes. Here we used functional magnetic resonance imaging to investigate how behavioral differences between the tasks are instantiated as neural differences. As these neural differences could manifest at either the sustained (task/state-related) and/or transient (event-related) levels of processing, a mixed block/event-related design was used to investigate the neural response of both time-scales. Clear differences in both sustained and transient BOLD responses were observed between the two tasks, consistent with behavioral differences indeed arising from overlapping but divergent neural mechanisms. Temporalorder judgments, butnot synchrony judgments, required transient activation in several left hemisphere regions, which may reflect increased task demands caused by an extra stage of processing. Our results highlight that multisensory integrationmechanisms can be task dependent, which, in particular, has implications for the study of atypical temporal processing in clinical populations.","tags":["fmri","audiovisual","multisensory"],"title":"Overlapping but Divergent Neural Correlates Underpinning Audiovisual Synchrony and Temporal Order Judgments","type":"publication"},{"authors":null,"categories":null,"content":"The project, Functional Neuroimaging of the Vocalisation Perception Mechanisms of Sheep (SheepVoicefMRI), is funded by the French National Research Agency - ANR-20-CE20-0001.\n","date":1518109580,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1518109580,"objectID":"94e2540f0149bc62d7b399103fabab32","permalink":"https://scott-love.github.io/project/sheepvoice/","publishdate":"2018-02-08T18:06:20+01:00","relpermalink":"/project/sheepvoice/","section":"project","summary":"Investigating the sheep auditory cortex with fMRI","tags":["sheep","mri","fmri","brain","voice","social"],"title":"Sheep Voice fMRI","type":"project"},{"authors":null,"categories":null,"content":"A quick(ish) tutorial on how I setup my personal but work related website using Git(Hub), Hugo and Academic on a Mac. This is a fairly simple and completely free way to host an academic personal website. The process came from two main sources: 1. The documentation for Academic and; 2. A blog post by George Cushen.\nSo what did I do: I already had Git installed and you will need it too.\nThen install Hugo. There are several options but I used the Tarball method. Full details can be found here but here is a brief description.\nChose install location (e.g., /usr/local/bin), in your executable PATH. Download the latest Tarball for your system to the Downloads folder. Install Hugo in your chosen location. 
cd /usr/local/bin # CHOSEN_INSTALL_LOCATION # extract the tarball tar -xvzf ~/Downloads/hugo_X.Y_osx-64bit.tgz # verify that it runs ./hugo version Fork the Academic Kickstart repository to your GitHub account.\nLog in to your GitHub account. Click here or search for sourcethemes/academic-kickstart inside GitHub. Click on the fork icon towards the top right of the screen. Clone the repository onto your local system. Note that you need to change \u0026lt;USERNAME\u0026gt; to your GitHub username and that My_Website can be changed to any name you want. git clone https://github.com/\u0026lt;USERNAME\u0026gt;/academic-kickstart.git My_Website Initialise the theme. cd My_Website git submodule update --init --recursive Create your GitHub website repository. On your GitHub page click the “+” icon in the top right corner and choose “New Repository”. Repository name = \u0026lt;USERNAME\u0026gt;.github.io. Add your \u0026lt;USERNAME\u0026gt;.github.io repository into a submodule folder named public. git submodule add https://github.com/\u0026lt;USERNAME\u0026gt;/\u0026lt;USERNAME\u0026gt;.github.io.git public Add to the local git repository, then push to the remote repository. git add . git commit -m \u0026quot;initial commit\u0026quot; git push -u origin master Run Hugo to create the HTML for the site. hugo server View the built site in your browser (http://localhost:1313/) but note this is only a local copy and not visible to others.\nUpload the built site to GitHub for everyone to see @ \u0026lt;USERNAME\u0026gt;.github.io. Note it will take 2 or 3 minutes to be viewable.\ngit add . git commit -m \u0026quot;build website\u0026quot; git push -u origin master Adding content: OK, the website is built, so it\u0026rsquo;s time to add content. You can make additions/changes to your website and check out the results before deploying to your actual site.\nA good place to start is the config.toml file in the main directory of your site. It contains a bunch of key-value pairs. Change the title value to the title of your website, e.g., your name. Then run Hugo server to create the HTML for the site.\nhugo server View the local copy of the built site in your browser (http://localhost:1313/). Leave the server running and any other changes you make will be automatically visible in the local site just created. Go through all the key-value pairs and change, edit or remove any that you want.\nGitHub version control: The Host on GitHub page from Hugo outlines a nice way to store all the files of your site on GitHub.\nUpdating the website content: Using the GitHub method above I do not keep a local copy of any of the files. Here is the process I follow to update the website content.\ngit clone https://github.com/\u0026lt;USERNAME\u0026gt;/\u0026lt;YOUR-PROJECT\u0026gt; cd \u0026lt;YOUR-PROJECT\u0026gt;/ git rm -r public At this point you can make changes to the content of your website and push those to the remote repository.\ngit add . git commit -m \u0026quot;updating content\u0026quot; git push origin master However, this only keeps the content in synchrony with the remote repository; it does not update the website.\ngit submodule add https://github.com/\u0026lt;USERNAME\u0026gt;/\u0026lt;USERNAME\u0026gt;.github.io.git public ./deploy.sh \u0026quot;comment\u0026quot; The contents of deploy.sh can be seen here (an illustrative example is also sketched at the end of this file).\nUpdate academic-kickstart version: I have found updating the academic-kickstart version and subsequently the website to be tricky! Follow the instructions here. 
However, I was only ever able to update using the ZIP there. Uninstall the current version by deleting the contents of the \u0026ldquo;themes/academic/\u0026rdquo; folder inside \u0026lt;YOUR-PROJECT\u0026gt; and replacing it with the downloaded files.\nAt this point you still need to follow the release notes and update your content and config files to take into account the \u0026ldquo;Breaking changes\u0026rdquo;. If you have jumped a few versions you will need to do this in sequence for each version change (e.g. v3.1 to v3.2 before doing v3.2 to v3.3)\n","date":1518024697,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1518024697,"objectID":"33d571ee6822f89daf3f4badd846a8de","permalink":"https://scott-love.github.io/post/create-this-site/","publishdate":"2018-02-07T18:31:37+01:00","relpermalink":"/post/create-this-site/","section":"post","summary":"A quick(ish) tutorial on how I setup my personal but work related website using Git(Hub), Hugo and Academic on a Mac. This is a fairly simple and completely free way to host an academic personal website.","tags":null,"title":"Git(Hub), Hugo and Academic","type":"post"},{"authors":["Maureen Fontaine","Scott A. Love","Marianne Latinus"],"categories":null,"content":"","date":1486484582,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1486484582,"objectID":"f7fc95c9ad416177963c6d269ce23a75","permalink":"https://scott-love.github.io/publication/fontaine2017/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/fontaine2017/","section":"publication","summary":"The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices (trained-to-familiar (Experiment 1), and famous voices (Experiment 2)) are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several “speaker averages,” created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In experiment 2, the same participants performed the same task on voice samples produced by familiar speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of average in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.","tags":["voice","identity","experience"],"title":"Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages","type":"publication"},{"authors":["Damian Marie","Muriel Roth","Romain Lacoste","Alice Bertello","Jean-Luc Anton","William D. Hopkins","Konstantina Margiotoudi","Scott A. 
Love","Adrien Meguerditchian"],"categories":null,"content":"","date":1486484582,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1486484582,"objectID":"c7d7e8d524d25885957f1ac1f4a05101","permalink":"https://scott-love.github.io/publication/marie2017/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/marie2017/","section":"publication","summary":"The planum temporale (PT) is a critical region of the language functional network in the human brain showing a striking size asymmetry toward the left hemisphere. Historically considered as a structural landmark of the left-brain specialization for language, a similar anatomical bias has been described in great apes but never in monkeys—indicating that this brain landmark might be unique to Hominidae evolution. In the present in vivo magnetic resonance imaging study, we show clearly for the first time in a nonhominid primate species, an Old World monkey, a left size predominance of the PT among 96 olive baboons (Papio anubis), using manual delineation of this region in each individual hemisphere. This asymmetric distribution was quasi-identical to that found originally in humans. Such a finding questions the relationship between PT asymmetry and the emergence of language, indicating that the origin of this cerebral specialization could be much older than previously thought, dating back, not to the Hominidae, but rather to the Catarrhini evolution at the common ancestor of humans, great apes and Old World monkeys, 30–40 million years ago.","tags":["brain","mri","baboon","lateralization","evolution","comparative"],"title":"Left Brain Asymmetry of the Planum Temporale in a Nonhominid Primate: Redefining the Origin of Brain Specialization for Language","type":"publication"},{"authors":null,"categories":null,"content":"See this poster on Figshare for some more details.\n","date":1461715200,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1461715200,"objectID":"c97427401757157a2f237781e1a58190","permalink":"https://scott-love.github.io/project/sheep-cortex/","publishdate":"2016-04-27T00:00:00Z","relpermalink":"/project/sheep-cortex/","section":"project","summary":"See this poster on Figshare for some more details.","tags":["sheep","anatomy","cortex","mri"],"title":"Sheep Cortex","type":"project"},{"authors":["Scott A. Love","Damien Marie","Muriel Roth","Romain Lacoste","Bruno Nazarian","Alice Bertello","Olivier Coulon","Jean-Luc Anton","Adrien Meguerditchian"],"categories":null,"content":"","date":1457701968,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1457701968,"objectID":"641c841913f794d7cb41851ee2c93f94","permalink":"https://scott-love.github.io/publication/love2016/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/love2016/","section":"publication","summary":"The baboon (Papio) brain is a remarkable model for investigating the brain. The current work aimed at creating a population-average baboon (Papio anubis) brain template and its left/right hemisphere symmetric version from a large sample of T1-weighted magnetic resonance images collected from 89 individuals. Averaging the prior probability maps output during the segmentation of each individual also produced the first baboon brain tissue probability maps for grey matter, white matter and cerebrospinal fluid. The templates and the tissue probability maps were created using state-of-the-art, freely available software tools and are being made freely and publicly available: http://www.nitrc.org/projects/haiko89/. 
It is hoped that these images will aid neuroimaging research of the baboon by, for example, providing a modern, high quality normalization target and accompanying standardized coordinate system as well as probabilistic priors that can be used during tissue segmentation.","tags":["brain","anatomy","baboon","template","segmentation"],"title":"The average baboon brain: MRI templates and tissue probability maps from 89 individuals","type":"publication"},{"authors":["Christophe Destrieux","Louis Marie Terrier","Frédéric Andersson","Scott A. Love","Jean-Philippe Cottier","Henri Duvernoy","Stéphane Velut","Kevin Janot","Ilyess Zemmoura"],"categories":null,"content":"","date":1454936810,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1454936810,"objectID":"5fb3425c0b87148996d4c81795fcf733","permalink":"https://scott-love.github.io/publication/destrieux2016/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/destrieux2016/","section":"publication","summary":"The precise sulcogyral localization of cortical lesions is mandatory to improve communication between practitioners and to predict and prevent post-operative deficits. This process, which assumes a good knowledge of the cortex anatomy and a systematic analysis of images, is, nevertheless, sometimes neglected in the neurological and neurosurgical training. This didactic paper proposes a brief overview of the sulcogyral anatomy, using conventional MR-slices, and also reconstructions of the cortical surface after a more or less extended inflation process. This method simplifies the cortical anatomy by removing part of the cortical complexity induced by the folding process, and makes it more understandable. We then reviewed several methods for localizing cortical structures, and proposed a three-step identification: after localizing the lateral, medial or ventro-basal aspect of the hemisphere (step 1), the main interlobar sulci were located to limit the lobes (step 2). Finally, intralobar sulci and gyri were identified (step 3) thanks to the same set of rules. This paper does not propose any new identification method but should be regarded as a set of practical guidelines, useful in daily clinical practice, for detecting the main sulci and gyri of the human cortex.","tags":["anatomy","brain","mri","cortex"],"title":"A practical guide for the identification of major sulcogyral structures of the human cortex","type":"publication"},{"authors":["Marianne Latinus","Scott A. Love","Alejandra Rossi","Francisco J. Parada","Lisa Huang","Laurence Conty","Nathalie George","Karin James","Aina Puce"],"categories":null,"content":"","date":1423409366,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1423409366,"objectID":"ff269b3417d9a81027d2534165060214","permalink":"https://scott-love.github.io/publication/latinus2015/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/latinus2015/","section":"publication","summary":"Gaze direction, a cue of both social and spatial attention, is known to modulate early neural responses to faces e.g. N170. However, findings in the literature have been inconsistent, likely reflecting differences in stimulus characteristics and task requirements. Here, we investigated the effect of task on neural responses to dynamic gaze changes: away and toward transitions (resulting or not in eye contact). Subjects performed, in random order, social (away/toward them) and non-social (left/right) judgment tasks on these stimuli. 
Overall, in the non-social task, results showed a larger N170 to gaze aversion than gaze motion toward the observer. In the social task, however, this difference was no longer present in the right hemisphere, likely reflecting an enhanced N170 to gaze motion toward the observer. Our behavioral and event-related potential data indicate that performing social judgments enhances saliency of gaze motion toward the observer, even those that did not result in gaze contact. These data and that of previous studies suggest two modes of processing visual information: a 'default mode' that may focus on spatial information; a 'socially aware mode' that might be activated when subjects are required to make social judgments. The exact mechanism that allows switching from one mode to the other remains to be clarified.","tags":["gaze","eeg","face","social"],"title":"Social decisions affect neural activity to perceived dynamic gaze","type":"publication"},{"authors":["Aina Puce","Marianne Latinus","Alejandra Rossi","Elizabeth DaSilva","Francisco Parada","Scott A. Love","Arian Ashourvan","Swapnaa Jayaraman"],"categories":null,"content":"","date":1423409001,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1423409001,"objectID":"c0592b5d970bfca8f65d79b7d3a424b0","permalink":"https://scott-love.github.io/publication/puce2015/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/puce2015/","section":"publication","summary":"In this chapter we focus on the neural processes that occur in the mature healthy human brain in response to evaluating another’s social attention. We first examine the brain’s sensitivity to gaze direction of others, social attention (as typically indicated by gaze contact), and joint attention. Brain regions such as the superior temporal sulcus (STS), the amygdala, and the fusiform gyrus have been previously demonstrated to be sensitive to gaze changes, most frequently with functional magnetic resonance imaging (fMRI). Neurophysiological investigations, using electroencephalography (EEG) and magnetoencephalography (MEG), have identified event-related potentials (ERPs) such as the N170 that are sensitive to changes in gaze direction and head direction. We advance a putative model that explains findings relating to the neurophysiology of social attention , based mainly on our studies. This model proposes two brain modes of social information processing—a nonsocial “Default” mode and a social mode that we have named “Socially Aware”. In Default mode, there is an internal focus on executing actions to achieve our goals, as evident in studies in which passive viewing or tasks involving nonsocial judgments have been used. In contrast, Socially Aware mode is active when making explicit social judgments. Switching between these two modes is rapid and can occur via either top-down or bottom-up routes. From a different perspective, most of the literature, including our own studies, has focused on social attention phenomena as experienced from the first-person perspective, i.e., gaze changes or social attention directed at, or away from, the observer. However, in daily life we are actively involved in observing social interactions between others, where their social attention focus may not include us, or their gaze may not meet ours. Hence, changes in eye gaze and social attention are experienced from the third-person perspective. 
This area of research is still fairly small, but nevertheless important in the study of social and joint attention, and we discuss this very small literature briefly at the end of the chapter. We conclude the chapter with some outstanding questions, which are aimed at the main knowledge gaps in the literature.","tags":["social","gaze","face","eeg","attention"],"title":"Neural Bases for Social Attention in Healthy Humans","type":"publication"},{"authors":["Phil McAleer","Frank E. Pollick","Scott A. Love","Frances Crabbe","Jeffrey M. Zacks"],"categories":null,"content":"","date":1391873629,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1391873629,"objectID":"46d7c9a95a723bfb11f947d6c3f63071","permalink":"https://scott-love.github.io/publication/mcaleer2014/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/mcaleer2014/","section":"publication","summary":"It has been proposed that we make sense of the movements of others by observing fluctuations in the kinematic properties of their actions. At the neural level, activity in the human motion complex (hMT+) and posterior superior temporal sulcus (pSTS) has been implicated in this relationship. However, previous neuroimaging studies have largely utilized brief, diminished stimuli, and the role of relevant kinematic parameters for the processing of human action remains unclear. We addressed this issue by showing extended-duration natural displays of an actor engaged in two common activities, to 12 participants in an fMRI study under passive viewing conditions. Our region-of-interest analysis focused on three neural areas (hMT+, pSTS, and fusiform face area) and was accompanied by a whole-brain analysis. The kinematic properties of the actor, particularly the speed of body part motion and the distance between body parts, were related to activity in hMT+ and pSTS. Whole-brain exploratory analyses revealed additional areas in posterior cortex, frontal cortex, and the cerebellum whose activity was related to these features. These results indicate that the kinematic properties of people's movements are continually monitored during everyday activity as a step to determining actions and intent.","tags":["biological motion","fmri"],"title":"The role of kinematics in cortical regions for continuous human motion perception","type":"publication"},{"authors":["Scott A. Love","Karin Petrini","Adam Cheng","Frank E. Pollick"],"categories":null,"content":"","date":1360338864,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1360338864,"objectID":"a44453e4748f7c4ad63ddfcca3963e90","permalink":"https://scott-love.github.io/publication/love2013/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/love2013/","section":"publication","summary":"Synchrony judgments involve deciding whether cues to an event are in synch or out of synch, while temporal order judgments involve deciding which of the cues came first. When the cues come from different sensory modalities these judgments can be used to investigate multisensory integration in the temporal domain. However, evidence indicates that these two tasks should not be used interchangeably as it is unlikely that they measure the same perceptual mechanism. The current experiment further explores this issue across a variety of different audiovisual stimulus types. Participants were presented with 5 audiovisual stimulus types, each at 11 parametrically manipulated levels of cue asynchrony. 
During separate blocks, participants had to make synchrony judgments or temporal order judgments. For some stimulus types many participants were unable to successfully make temporal order judgments, but they were able to make synchrony judgments. The mean points of subjective simultaneity for synchrony judgments were all video-leading, while those for temporal order judgments were all audio-leading. In the within participants analyses no correlation was found across the two tasks for either the point of subjective simultaneity or the temporal integration window. Stimulus type influenced how the two tasks differed; nevertheless, consistent differences were found between the two tasks regardless of stimulus type. Therefore, in line with previous work, we conclude that synchrony and temporal order judgments are supported by different perceptual mechanisms and should not be interpreted as being representative of the same perceptual process.","tags":["audiovisual","multisensory","synchrony"],"title":"A Psychophysical Investigation of Differences Between Synchrony and Temporal Order Judgments","type":"publication"},{"authors":["Corinne Jola","Phil McAleer","Marie-Hélène Grosbras","Scott A. Love","Gordon Morison","Frank E. Pollick"],"categories":null,"content":"","date":1360338582,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1360338582,"objectID":"023dc78a52c5983c97646d75b6212c88","permalink":"https://scott-love.github.io/publication/jola2013/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/jola2013/","section":"publication","summary":"The superior temporal sulcus (STS) and gyrus (STG) are commonly identified to be functionally relevant for multisensory integration of audiovisual (AV) stimuli. However, most neuroimaging studies on AV integration used stimuli of short duration in explicit evaluative tasks. Importantly though, many of our AV experiences are of a long duration and ambiguous. It is unclear if the enhanced activity in audio, visual, and AV brain areas would also be synchronised over time across subjects when they are exposed to such multisensory stimuli. We used intersubject correlation to investigate which brain areas are synchronised across novices for uni- and multisensory versions of a 6-min 26-s recording of an unfamiliar, unedited Indian dance recording (Bharatanatyam). In Bharatanatyam, music and dance are choreographed together in a highly intermodal-dependent manner. Activity in the middle and posterior STG was significantly correlated between subjects and showed also significant enhancement for AV integration when the functional magnetic resonance signals were contrasted against each other using a general linear model conjunction analysis. These results extend previous studies by showing an intermediate step of synchronisation for novices: while there was a consensus across subjects' brain activity in areas relevant for unisensory processing and AV integration of related audio and visual stimuli, we found no evidence for synchronisation of higher level cognitive processes, suggesting these were idiosyncratic.","tags":["dance","fmri","multisensory"],"title":"Uni- and multisensory brain areas are synchronised across spectators when watching unedited dance recordings","type":"publication"},{"authors":["Phil McAleer","Scott A. 
Love"],"categories":null,"content":"","date":1360338114,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1360338114,"objectID":"cc797fdf7949ec959090938bf75365a6","permalink":"https://scott-love.github.io/publication/mcaleer2013/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/mcaleer2013/","section":"publication","summary":"Typically, the actions of agents in classical animacy displays are synthetically created, thus forming artificial displays of biological movement. Therefore, the link between the motion in animacy displays and that of actual biological motion is unclear. In this chapter we will look at work being done to clarify this relationship. We will first discuss a modern approach to the creation of animacy displays whereby fullvideo displays of human interactions are reduced into simple animacy displays; this results in animate shapes whose motions are directly derived from human actions. Second, we will review what is known about the ability of typically developed adults and people with autism spectrum disorders to perceive the intentionality within these displays. Finally, we will explore the effects that motion parameters such as speed and acceleration, measured directly from original human actions, have on the perception of intent; fMRI studies that connect neural networks to motion parameters, and the resultant perception of animacy and intention, will also be examined.","tags":["animacy","biological motion","behaviour","fmri"],"title":"Perceiving intention in animacy displays created from human motion","type":"publication"},{"authors":["Pierre Maurage","Scott A. Love","Fabien D'Hondt"],"categories":null,"content":"","date":1357661146,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1357661146,"objectID":"0ddbc3c71fd9367504ba76d5f0223363","permalink":"https://scott-love.github.io/publication/maurage2013/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/maurage2013/","section":"publication","summary":"Face–voice integration has been extensively explored among healthy participants during the last decades. Nevertheless, while binding alterations constitute a core feature of many psychiatric diseases, these crossmodal processing have been very little explored in these populations. This chapter presents three studies offering an integrative use of behavioural, electrophysiological and neuroimaging techniques to explore the audio–visual integration of emotional stimuli in alcohol dependence. These results constitute a preliminary step towards a multidisciplinary exploration of crossmodal processing in psychiatry, extending to other stimulations, sensorial modalities and populations. The exploration of impaired crossmodal abilities could renew the knowledge on “normal” audio–visual integration and could lead to innovative therapeutic programs.","tags":["audiovisual","face","voice","emotion"],"title":"Crossmodal Integration of Emotional Stimuli in Alcohol Dependence","type":"publication"},{"authors":["Scott A. Love","Frank E. 
Pollick","Marianne Latinus"],"categories":null,"content":"","date":1297181595,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1297181595,"objectID":"c8ec2d27c6f96f46afe30dbb98b9bb74","permalink":"https://scott-love.github.io/publication/love2011/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/love2011/","section":"publication","summary":"Perception of faces and voices plays a prominent role in human social interaction, making multisensory integration of cross-modal speech a topic of great interest in cognitive neuroscience. How to define potential sites of multisensory integration using functional magnetic resonance imaging (fMRI) is currently under debate, with three statistical criteria frequently used (e.g., super-additive, max and mean criteria). In the present fMRI study, 20 participants were scanned in a block design under three stimulus conditions: dynamic unimodal face, unimodal voice and bimodal face–voice. Using this single dataset, we examine all these statistical criteria in an attempt to define loci of face–voice integration. While the super-additive and mean criteria essentially revealed regions in which one of the unimodal responses was a deactivation, the max criterion appeared stringent and only highlighted the left hippocampus as a potential site of face– voice integration. Psychophysiological interaction analysis showed that connectivity between occipital and temporal cortices increased during bimodal compared to unimodal conditions. We concluded that, when investigating multisensory integration with fMRI, all these criteria should be used in conjunction with manipulation of stimulus signal-to-noise ratio and/or cross-modal congruency.","tags":["fmri","multisensory","face","voice","social","connectivity"],"title":"Cerebral Correlates and Statistical Criteria of Cross-Modal Face and Voice Integration","type":"publication"},{"authors":["Scott A. Love","Frank E. Pollick","Karin Petrini"],"categories":null,"content":"","date":1297181595,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1297181595,"objectID":"c4abbc45a7a946446db47c01434d253e","permalink":"https://scott-love.github.io/publication/love2012/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/love2012/","section":"publication","summary":"The ability to successfully integrate information from different senses is of paramount importance for perceiving the world and has been shown to change with experience. We first review how experience, in particular musical experience, brings about changes in our ability to fuse together sensory information about the world. We next discuss evidence from drumming studies that demonstrate how the perception of audiovisual synchrony depends on experience. These studies show that drummers are more robust than novices to perturbations of the audiovisual signals and appear to use different neural mechanisms in fusing sight and sound. Finally, we examine how experience influences audiovisual speech perception. We present an experiment investigating how perceiving an unfamiliar language influences judgments of temporal synchrony of the audiovisual speech signal. 
These results highlight the influence of both the listener’s experience with hearing an unfamiliar language as well as the speaker’s experience with producing non-native words.","tags":["brain","multisensory","behaviour","experience"],"title":"Effects of Experience, Training and Expertise on Multisensory Perception: Investigating the Link between Brain and Behavior","type":"publication"},{"authors":["Karin Petrini","Frank E. Pollick","Sofia Dahl","Phil McAleer","Lawrie S McKay","Davide Rocchesso","Carl Haakon Waadeland","Scott A. Love","Federico Avanzini","Aina Puce"],"categories":null,"content":"","date":1294503335,"expirydate":-62135596800,"kind":"page","lang":"en","lastmod":1294503335,"objectID":"4339a7d93dd4bae47fb3637ef9f6131b","permalink":"https://scott-love.github.io/publication/petrini2011/","publishdate":"2017-01-01T00:00:00Z","relpermalink":"/publication/petrini2011/","section":"publication","summary":"When we observe someone perform a familiar action, we can usually predict what kind of sound that action will produce. Musical actions are over-experienced by musicians and not by non-musicians, and thus offer a unique way to examine how action expertise affects brain processes when the predictability of the produced sound is manipulated. We used functional magnetic resonance imaging to scan 11 drummers and 11 age- and gender-matched novices who made judgments on point-light drumming movements presented with sound. In Experiment 1, sound was synchronized or desynchronized with drumming strikes, while in Experiment 2 sound was always synchronized, but the natural covariation between sound intensity and velocity of the drumming strike was maintained or eliminated. Prior to MRI scanning, each participant completed psychophysical testing to identify personal levels of synchronous and asynchronous timing to be used in the two fMRI activation tasks. In both experiments, the drummers' brain activation was reduced in motor and action representation brain regions when sound matched the observed movements, and was similar to that of novices when sound was mismatched. This reduction in neural activity occurred bilaterally in the cerebellum and left parahippocampal gyrus in Experiment 1, and in the right inferior parietal lobule, inferior temporal gyrus, middle frontal gyrus and precentral gyrus in Experiment 2. Our results indicate that brain functions in action-sound representation areas are modulated by multimodal action expertise.","tags":["synchrony","fmri","music","multisensory","audiovisual"],"title":"Action expertise reduces brain activity for audiovisual matching actions: an fMRI study with expert drummers","type":"publication"}]
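Note: the site-setup post indexed above invokes a deploy.sh script (./deploy.sh "comment") but only links to its contents. Below is a minimal illustrative sketch of such a script for the Hugo + GitHub Pages submodule workflow described in that post; it is an assumption shown for illustration, not the author's actual deploy.sh.

#!/bin/sh
# Illustrative deploy script (assumed; not the author's actual deploy.sh).
# Builds the site with Hugo and pushes the generated public/ folder
# (a submodule tracking the <USERNAME>.github.io repository) to GitHub.

MSG="rebuilding site $(date)"        # default commit message
if [ $# -eq 1 ]; then MSG="$1"; fi   # allow: ./deploy.sh "comment"

hugo                                 # generate the site into public/

cd public
git add .
git commit -m "$MSG"
git push origin master               # publish to <USERNAME>.github.io
cd ..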