WorldCat Identities

Schwartz, Jean-Luc

Overview
Works: 84 works in 151 publications in 2 languages and 3,673 library holdings
Genres: Academic theses 
Roles: Editor, Author, Director, Thesis advisor, Other, Opponent, Contributor, Composer, Lyricist, Organizer of meeting
Publication Timeline
Most widely held works by Jean-Luc Schwartz
Vocalize to localize

9 editions published in 2009 in English and held by 1,542 WorldCat member libraries worldwide

Vocalize-to-Localize? Meerkats do it for specific predators ... And babies point with their index finger toward targets of interest at about nine months, well before using language-specific that-demonstratives. With what-interrogatives they are universal and, as relativizers and complementizers, play an important role in grammar construction. Some alarm calls in nonhumans display more than mere localization: semantics and even syntax. Instead of telling another monomodal story about language origin, in this volume advocates of representational gestures, semantically transparent, but with a proble
Primate communication and human language : vocalisation, gestures, imitation and deixis in humans and non-humans

13 editions published in 2011 in English and held by 1,403 WorldCat member libraries worldwide

After a long period where it has been conceived as iconoclastic and almost forbidden, the question of language origins is now at the centre of a rich debate, confronting acute proposals and original theories. Most importantly, the debate is nourished by a large set of experimental data from disciplines surrounding language. The editors of the present book have gathered researchers from various fields, with the common objective of taking as seriously as possible the search for continuities from non-human primate vocal and gestural communication systems to human speech and language, in a multidi
Origins of human language : continuities and discontinuities with nonhuman primates by Jean-Luc Schwartz

15 editions published between 2017 and 2019 in English and Undetermined and held by 224 WorldCat member libraries worldwide

This book proposes a detailed picture of the continuities and ruptures between communication in primates and language in humans. It explores a diversity of perspectives on the origins of language, including a fine-grained description of vocal communication in animals, mainly in monkeys and apes but also in birds, the study of vocal tract anatomy and of the cortical control of vocal production in monkeys and apes, the description of combinatory structures and their social and communicative value, and the exploration of the cognitive environment in which language may have emerged from nonhuman primate vocal or gestural communication
Multisensory and sensorimotor interactions in speech perception by Riikka Mottonen

4 editions published in 2015 in English and held by 179 WorldCat member libraries worldwide

Speech is multisensory since it is perceived through several senses. Audition is the most important one as speech is mostly heard. The role of vision has long been acknowledged since many articulatory gestures can be seen on the talker's face. Sometimes speech can even be felt by touching the face. The best-known multisensory illusion is the McGurk effect, where incongruent visual articulation changes the auditory percept. The interest in the McGurk effect arises from a major general question in multisensory research: How is information from different senses combined? Despite decades of research, a conclusive explanation for the illusion remains elusive. This is a good demonstration of the challenges in the study of multisensory integration.

Speech is special in many ways. It is the main means of human communication, and a manifestation of a unique language system. It is a signal with which all humans have a lot of experience. We are exposed to it from birth, and learn it through development in face-to-face contact with others. It is a signal that we can both perceive and produce.

The role of the motor system in speech perception has been debated for a long time. Despite very active current research, it is still unclear to what extent, and in what role, the motor system is involved in speech perception. Recent evidence shows that brain areas involved in speech production are activated during listening to speech and watching a talker's articulatory gestures. Speaking involves coordination of articulatory movements and monitoring their auditory and somatosensory consequences. How do auditory, visual, somatosensory, and motor brain areas interact during speech perception? How do these sensorimotor interactions contribute to speech perception? It is surprising that despite a vast amount of research, the secrets of speech perception have not yet been solved. The multisensory and sensorimotor approaches provide new opportunities in solving them.

Contributions to the research topic are encouraged for a wide spectrum of research on speech perception in multisensory and sensorimotor contexts, including novel experimental findings ranging from psychophysics to brain imaging, theories and models, reviews and opinions
La parole : des modèles cognitifs aux machines communicantes( Book )

2 editions published in 2000 in French and held by 73 WorldCat member libraries worldwide

La musique est-elle une science? by Alain Schuhl( Book )

2 editions published in 2005 in French and held by 45 WorldCat member libraries worldwide

Before becoming a melody, music is a succession of notes and sounds studied by a science called acoustics. The book focuses on this science, which makes it possible to describe how sound is created at the heart of the instrument and to explain why one sound is audible while another is not. This science itself rests on physics, physiology and mathematics
D'où nous vient la parole? by Jean-Luc Schwartz( Book )

2 editions published in 2008 in French and held by 40 WorldCat member libraries worldwide

Where does speech come from? From babbling to long speeches, how do we learn to talk? Why is every language different ... and yet similar to the others? What do our body and brain do when we speak? In what way is language a unique capacity, specific to our species? Will we one day manage to pierce the "mystery of evolution"? [back cover]
La parole : origines, développement, structures( Book )

3 editions published in 2011 in French and held by 17 WorldCat member libraries worldwide

Apport de la psychoacoustique à la modélisation du système auditif chez l'homme : étude de phénomènes de propagation des ondes cochléaires by Jean-Luc Schwartz

4 editions published in 1981 in French and held by 9 WorldCat member libraries worldwide

Theoretical and experimental background of the research. Demonstration of the importance, for hearing, of the mechanisms by which information propagates in the structures of the inner ear. Study of a particular auditory stimulation set-up: fixed low-frequency trapezoid stimulation applied at the entry to the inner ear. This study provides the first confirmation in humans of an important result suggested by the model
Vocalize to localize by Christian Abry( Book )

4 editions published between 2004 and 2009 in English and held by 5 WorldCat member libraries worldwide

Vocalize-to-Localize? Meerkats do it for specific predators ... And babies point with their index finger toward targets of interest at about nine months, well before using language-specific that-demonstratives. With what-interrogatives they are universal and
Représentations auditives de spectres vocaliques by Jean-Luc Schwartz( Book )

4 editions published in 1987 in French and held by 4 WorldCat member libraries worldwide

This work consists of three parts. The first part is devoted to masking techniques for estimating the quantitative characteristics of "peripheral" auditory representations. The second part deals with the study of the auditory processing of vowel spectra. In the third part, the authors take up the study of vowel systems, proposing what a "theory of shapes and stability" might look like in the light of the theory of broadband spectral integration, with the concepts of intrinsic and extrinsic stability, and attempting to place this theory within a general account of vowel systems
Special issue on audio visual speech processing( Book )

2 editions published in 2004 in English and held by 4 WorldCat member libraries worldwide

250 ans après Vaucanson, les robots de l'an 2000 : en hommage à Christian Benoît : s'inspirer du vivant pour créer de la technologie by Pascal Perrier

2 editions published between 2000 and 2002 in French and held by 4 WorldCat member libraries worldwide

Analyse de scènes auditives computationnelle (casa) : un nouvel outil de marquage du plan temps-fréquence par détection d'harmonicité exploitant une statistique de passages par zéro by François Gaillard

2 editions published in 1999 in French and held by 4 WorldCat member libraries worldwide

Computational auditory scene analysis (CASA) aims to model our ability to structure our sound environment. One of the approaches considered is to regard this capacity of our auditory system as resulting from the cooperative use of several images of the time-frequency plane, built from the extraction of primitive cues from the signals. In this framework, at the crossroads of signal processing, physiology and speech recognition, this thesis presents a method for marking the time-frequency plane based on the harmonic properties of voiced sounds. The method relies on the principle of an old pitch-extraction technique, the zero-crossing (PPZ) method, known for its sensitivity to the presence of interference. This thesis shows that this sensitivity can be turned into an advantage for detecting harmonicity under interfering conditions. Indeed, the statistics of the zero crossings provide a reliability index that classifies each region of the time-frequency plane into one of two categories, depending on whether or not it contains a dominant harmonic source. Starting from theoretical formalizations and simulations, a complete model for marking the time-frequency plane is then developed; the model is evaluated on different interference paradigms, including double-vowel and noisy-signal paradigms, and then on signals with strong prosodic variations. Finally, the physiological plausibility of the model is discussed
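The abstract above only states the principle; the minimal Python sketch below (hypothetical function names, not the thesis's code) illustrates how the regularity of zero-crossing intervals in a band-passed frame can serve as a reliability index for marking harmonic-dominant regions of the time-frequency plane.

```python
# Illustrative sketch only: zero-crossing regularity as a harmonicity cue.
import numpy as np

def zero_crossing_reliability(frame, fs):
    """Return a reliability index in [0, 1] based on zero-crossing regularity.

    frame : 1-D array, one band-passed analysis frame
    fs    : sampling rate in Hz
    """
    signs = np.sign(frame)
    # sample indices where the signal changes sign
    crossings = np.where(np.diff(signs) != 0)[0]
    if len(crossings) < 3:
        return 0.0                       # too few crossings to judge periodicity
    intervals = np.diff(crossings) / fs  # inter-crossing intervals in seconds
    # A dominant harmonic source yields near-constant intervals (low coefficient
    # of variation); interference makes the intervals irregular.
    cv = np.std(intervals) / np.mean(intervals)
    return float(np.exp(-cv))            # close to 1 = regular, near 0 = irregular

def mark_tf_plane(band_frames, fs, threshold=0.6):
    """Label each (band, frame) cell as harmonic-dominant (True) or not."""
    reliability = np.array([[zero_crossing_reliability(f, fs) for f in band]
                            for band in band_frames])
    return reliability > threshold
```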
De la substance à la forme : rôle des contraintes motrices orofaciales et brachiomanuelles de la parole dans l'émergence du langage by Amélie Rochet-Capellan

2 editions published in 2007 in French and held by 3 WorldCat member libraries worldwide

What if the sensorimotor properties of speech shaped language? This hypothesis has propelled language into the world of complexity and embodied cognition. We introduce here several types of argument showing the role of speech motor control in the genesis of language. Orofacial motor control, first, with the idea that the properties of inter-articulator coordination constrain the morphogenesis of language. Orofacial and brachiomanual motor control, next, with the hypothesis that language emerged from the hand-mouth coordination that carries the act of pointing with the voice and with the hand. In this framework, our experiments analyse movements recorded from French speakers in various tasks, in order to establish the properties of jaw-tongue-lip coordination in speech, and then of jaw-hand coordination in pointing. This research fits into the broad and recent framework that proposes studying language as a complex system
Influence du son lors de l'exploration de scènes naturelles dynamiques : prise en compte de l'information sonore dans un modèle d'attention visuelle by Antoine Coutrot

2 editions published in 2014 in French and held by 3 WorldCat member libraries worldwide

We study the influence of different audiovisual features on the visual exploration of dynamic natural scenes. We show that, whilst the way a person explores a scene primarily relies on its visual content, sound sometimes significantly influences eye movements. Sound ensures better coherence between the eye positions of different observers, attracting their attention and thus their gaze toward the same regions. The effect of sound is particularly strong in conversation scenes, where the related speech signal boosts the number of fixations on speakers' faces, and thus increases the consistency between scanpaths. We propose an audiovisual saliency model able to automatically locate speakers' faces so as to enhance their saliency. These results are based on the eye movements of 148 participants recorded on more than 75,400 frames (125 videos) in 5 different experimental conditions.
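As a rough illustration of the kind of weighting such a model may apply (an assumed form, not the implementation described in the thesis), the sketch below boosts the saliency of detected face regions in proportion to the probability that the soundtrack contains speech.

```python
# Illustrative sketch only: speech-driven boosting of face saliency.
import numpy as np

def audiovisual_saliency(visual_saliency, face_mask, speech_probability,
                         face_gain=2.0):
    """Boost face regions of a saliency map according to speech presence.

    visual_saliency    : 2-D array, bottom-up saliency of the current frame
    face_mask          : 2-D binary array marking detected speaker faces
    speech_probability : scalar in [0, 1], e.g. from a voice activity detector
    """
    boost = 1.0 + face_gain * speech_probability * face_mask
    boosted = visual_saliency * boost
    return boosted / boosted.sum()       # renormalize to a probability map
```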
La communication parlée dossier( Book )

1 edition published in 1996 in French and held by 3 WorldCat member libraries worldwide

Émergence des représentations perceptives de la parole : des transformations verbales sensorielles à des éléments de modélisation computationnelle by Anahita Basirat

2 editions published in 2010 in French and held by 3 WorldCat member libraries worldwide

The aim of this thesis is to study the principles of speech scene analysis (by analogy with auditory scene analysis). The literature on speech perception suggests that these principles are partly different from those underlying auditory scene analysis. We use the Verbal Transformation Effect to investigate these "speech-specific" mechanisms. The behavioural and neuroimaging results obtained in this work suggest that perceptuo (multisensory)-motor processes are involved in the perceptual organization of speech. We implement some of these mechanisms in the TRACE model of speech perception. Our results can be understood within the framework of PACT (Perception for Action Control Theory), suggesting a link between the speech perception and production systems in the perceptual organization of speech
La séparation de sources audiovisuelles by David Sodoyer

2 editions published in 2004 in French and held by 3 WorldCat member libraries worldwide

In a time when multimedia technologies pervade our day-to-day existence, this speech processing work focuses on the association of two research areas: blind source separation and audio-visual interactions in speech communication. We present and develop a speech separation system exploiting the visual information provided by the speaker's lips. After a short review of the blind source separation techniques presented over the last twenty years, we recall some of the literature on audio-visual speech, its perception and its processing. A first theoretical step consists in studying a source separation algorithm using spectral information, which lays the foundations for our work. Next, thanks to audiovisual properties (coherence and complementarity), we replace the spectral information by audio-visual information described by a joint probability between an audio spectrum and a lip shape. A study of this audio-visual model allows us to implement and assess this audio-visual speech source separation system. The results show the interest of the system, display and discuss the gains provided by visual information in comparison with classical blind source separation algorithms, and present perspectives for more complex situations
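The following toy sketch (Python; the names, the `stft` feature extractor and the set of candidate demixing matrices are all hypothetical, not taken from the thesis) illustrates the general principle stated above: a joint audio-visual model scores the separated outputs, and the demixing matrix whose output best matches the lip information is retained.

```python
# Illustrative sketch only: audio-visual scoring of candidate demixing matrices.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_av_model(spectra, lip_params, n_components=8):
    """Fit a GMM on concatenated (audio spectrum, lip shape) vectors.

    spectra and lip_params must have one row per analysis frame.
    """
    av = np.hstack([spectra, lip_params])
    return GaussianMixture(n_components=n_components).fit(av)

def av_score(av_model, mixtures, demix, lip_params, stft):
    """Average joint log-likelihood of one separated channel given lip shapes."""
    separated = demix @ mixtures          # candidate separation (n_sources x T)
    spectra = stft(separated[0])          # spectra of the assumed speech channel,
                                          # one row per frame, aligned with lip_params
    av = np.hstack([spectra, lip_params])
    return av_model.score_samples(av).mean()

def separate(av_model, mixtures, lip_params, stft, candidates):
    """Pick the candidate demixing matrix with the best audio-visual score."""
    scores = [av_score(av_model, mixtures, W, lip_params, stft)
              for W in candidates]
    return candidates[int(np.argmax(scores))]
```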
Modélisation bayésienne du développement conjoint de la perception, l'action et la phonologie by Marie-Lou Barnaud

1 edition published in 2018 in French and held by 3 WorldCat member libraries worldwide

Through perception and production tasks, humans are able to manipulate not only high-level units like words or sentences but also low-level units like syllables and phonemes. Studies in phonetics mainly focus on the second type of units. One of the main goals in this field is to understand how humans acquire and manipulate these units and how they are stored in the brain. In this PhD thesis, we address this set of issues through computer modeling, performing simulations with a Bayesian model of communication named COSMO ("Communicating Objects using Sensory-Motor Operations"). Our studies extend in three directions.

In a first part, we investigate the cognitive content of phonetic units. It is well established that phonetic units are characterized by both auditory and motor representations, and it seems that both representations are used during speech processing. We question the functional role of this double representation of phonetic units in the human brain, specifically in a perception task. By examining their respective development, we show that the two representations play complementary roles during perception: the auditory representation is tuned to recognize nominal stimuli, whereas the motor representation has generalization properties and can deal with stimuli typical of adverse conditions. We call this the "auditory-narrow / motor-wide" property.

In a second part, we investigate the variability of phonetic units. Despite the universality of phonetic units, their characterization varies from one person to another, both in articulatory/motor and in acoustic content; these variations are called idiosyncrasies. In our study, we aim at understanding how they appear during speech development. We specifically compare two learning algorithms, both based on an imitation process: the first version consists in sound imitation, while the second exploits phoneme imitation. We show that idiosyncrasies appear only in the course of a phoneme imitation process. We conclude that motor learning seems to be driven by a linguistic/communicative goal rather than by the reproduction of the acoustic properties of the stimulus.

In a third part, we investigate the nature of phonetic units. In phonetics, there is a debate about the specific status of the syllable vs the phoneme in speech communication. In adult studies, a consensus has now emerged: both units would be stored in the brain. In infant studies, however, syllabic units seem to be primary. In our simulation study, we investigate the acquisition of both units and try to understand how our model could "discover" phonemes starting from purely syllabic representations. We show that, contrary to syllables and vowels, consonants are poorly characterized in the auditory representation, because the categories overlap. This is due to the influence of one phoneme on its neighbours, the well-known "coarticulation". However, we also show that the representation of consonants in the motor space is much more efficient, with a very low level of overlap between categories. This is in line with classical theories about motor/articulatory invariance for plosives. In consequence, phonemes, i.e. vowels and consonants, appear clearly and are likely to emerge in a sensory-motor developmental approach such as ours.

Through these three axes, we implemented different versions of our model. Based on data from the literature, we specifically cared about the cognitive viability of its variables, its distributions and its learning phases. In this work, computational modeling has been used in two kinds of studies: comparative and explanatory. In the former, we compared the results of two models differing in one aspect and selected the one in accordance with the experimental results. In the latter, we interpreted a phenomenon reported in the literature with our model. In both cases, our simulations aim at a better understanding of data from the literature and provide new predictions for future studies
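As a purely illustrative toy (not the COSMO model itself), the sketch below shows how an "auditory-narrow / motor-wide" fusion can be written in Bayesian terms: a sharply tuned auditory likelihood is combined with a broader motor-based likelihood before selecting the most probable phoneme. All names and prototype parameters are hypothetical.

```python
# Illustrative sketch only: Bayesian fusion of auditory and motor likelihoods.
import numpy as np

def gaussian_loglik(x, mean, std):
    """Log-likelihood of x under a Gaussian (up to an additive constant)."""
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std)

def recognize(stimulus, auditory_protos, motor_protos,
              auditory_std=0.5, motor_std=1.5):
    """Return the index of the phoneme maximizing the fused posterior.

    auditory_protos, motor_protos : arrays of prototype values, one per phoneme
    auditory_std < motor_std encodes the 'auditory-narrow / motor-wide' idea:
    the auditory branch is precise around nominal stimuli, the motor branch
    is broader and generalizes to degraded stimuli.
    """
    log_post = (gaussian_loglik(stimulus, auditory_protos, auditory_std)
                + gaussian_loglik(stimulus, motor_protos, motor_std))
    return int(np.argmax(log_post))
```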
 
Audience Level
Audience level: 0.14 (from 0.01 for Vocalize t ... to 0.96 for Apport de ...)

Covers
Vocalize to localize
Primate communication and human language : vocalisation, gestures, imitation and deixis in humans and non-humans
Origins of human language : continuities and discontinuities with nonhuman primates
Alternative Names
Jean-Luc Schwartz researcher

Jean-Luc Schwartz wetenschapper

Schwartz, J.-L.

Schwartz, Jean-Luc 1958-...

슈와르츠, 장뤽

Languages
English (45)

French (31)