fred voisin’s website

computer music producer since 1989

Publications


Scientific publications

See also:
My ORCID page

Book chapters

2021
2015
2006
1998
1997
1995

  • Voisin, Frédéric & Cloarec-Heiss, France. 1995. Échelles musicales et données linguistiques : vers une histoire des sociétés oubanguiennes. In Dehoux, Vincent & Fürniss, Suzanne & Lebomin, Sylvie & Olivier, Emmanuelle & Rivière, Hervé & Voisin, Frédéric (eds.), Ndroje Balendro : Musiques, terrains et disciplines - Textes offerts à Simha Arom, pp. 119-140. Peeters - Société d'Études Linguistiques et Anthropologiques de France (SELAF). (https://hal.archives-ouvertes.fr/hal-01612187) (Accessed 6 February 2019.)

Conference papers

2021


  • Voisin, Frédéric & Bidotti, Arnaud & Mourey, France. 2021. Designing Soundscapes for Alzheimer’s Disease Care, with Preliminary Clinical Observations. In Kronland-Martinet, Richard & Ystad, Sølvi & Aramaki, Mitsuko (eds.), Perception, Representations, Image, Sound, Music (Lecture Notes in Computer Science), pp. 533-553. Cham: Springer International Publishing. (doi:10.1007/978-3-030-70210-6_34)
    Abstract: The acoustic environment is a prime source of conscious and unconscious information that allows listeners to situate themselves, to communicate, to feel, to remember. Recently, there has been growing interest in the acoustic environment of care facilities and its perceptual counterparts. In this contribution, the authors describe the process of designing a new interactive audio apparatus for Alzheimer’s disease care, in the context of an active multidisciplinary research project led by a sound designer since 2018, in collaboration with a residential long-term care facility (EHPAD) in France, a geriatrician, a gerontologist, psychologists and caregivers. The apparatus, named «Madeleines Sonores» in reference to Proust’s madeleine, has been providing virtual soundscapes 24/7 for two years to elderly people suffering from Alzheimer’s disease. The configuration and sound processes of the apparatus are presented in relation to Alzheimer’s disease care. Preliminary psychological and clinical observations are discussed in relation to dementia and to the activity of caring, in order to evaluate the benefits of such a device in Alzheimer’s disease therapy and in dementia care.
    Keywords: Alzheimer’s Disease, Mental Health Care, Sound design, Soundscapes.
2019

  • Voisin, Frédéric. 2019. Designing Virtual Soundscapes for Alzheimer's Disease Care. In Proceedings of the 14th International Symposium on Computer Music Multidisciplinary Research. Marseille, France: CNRS. (https://hal.archives-ouvertes.fr/hal-02370448) (Accessed 28 August 2021.)
    Abstract: The sound environment is a prime source of conscious and unconscious information that allows listeners to situate themselves, to communicate, to feel, to remember. The author describes the process of designing a new interactive audio apparatus for Alzheimer's care, in the context of an active multidisciplinary research project led by the author in collaboration with a long-term care centre (EHPAD) in Burgundy (France), a geriatrician, a gerontologist, psychologists and caregivers. The apparatus, named Madeleines Sonores in reference to Proust's madeleine, has been providing virtual soundscapes, 24/7 for a year, to 14 elderly people hosted in the centre's dedicated Alzheimer's unit. Empirical aspects of sonic interactivity are discussed in relation to dementia and to the activity of caring. Scientific studies have been initiated to evaluate the benefits of such a device in Alzheimer's disease therapy and in dementia care.
    Keywords: Alzheimer's Disease, Caring, Cognitive Rehabilitation, Dementia, Quality of Life, Sonic Interaction, Sound design, Soundscapes, Virtual Reality.
2016


  • Lemaitre, Guillaume & Houix, Olivier & Voisin, Frédéric & Misdariis, Nicolas & Susini, Patrick. 2016. Comparing identification of vocal imitations and computational sketches of everyday sounds. In Meeting of the Acoustical Society of America (The Journal of the Acoustical Society of America), vol. 140, p. 3390. Honolulu, United States. (doi:10.1121/1.4970854) (https://hal.archives-ouvertes.fr/hal-01448968) (Accessed 6 February 2019.)
    Abstract: Sounds are notably difficult to describe. It is thus not surprising that human speakers often use imitative vocalizations to communicate about sounds. In practice, vocal imitations of non-speech everyday sounds (e.g. the sound of a car passing by) are very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are often inaccurate, constrained by the human vocal apparatus. The present study investigated the semantic representations evoked by vocal imitations by experimentally quantifying how well listeners could match sounds to category labels. It compared two different types of sounds: human vocal imitations, and computational auditory sketches (created by algorithmic computations), both based on easily identifiable sounds (sounds of human actions and manufactured products). The results show that performance with the best vocal imitations was similar to that with the best auditory sketches for most categories of sounds. More detailed analyses showed that the acoustic distance between vocal imitations and referent sounds is not sufficient to account for such performance. They suggested that instead of reproducing the acoustic properties of the referent sound as accurately as vocally possible, vocal imitations focus on a few important features that depend on each particular sound category.


  • Scurto, Hugo & Lemaitre, Guillaume M. & Françoise, Jules & Voisin, Frédéric & Bevilacqua, Frédéric & Susini, Patrick. 2016. Combining gestures and vocalizations to imitate sounds. In International Symposium on Gesture Studies (ISGS) (Proceedings of the International Symposium on Gesture Studies). Paris, France. (doi:10.1121/1.4933639) (https://hal.archives-ouvertes.fr/hal-01466192) (Accessed 6 February 2019.)
    Abstract: Communicating about sounds is a difficult task without a technical language, and naïve speakers often rely on different kinds of non-linguistic vocalizations and body gestures (Lemaitre et al. 2014). Previous work has independently studied how effectively people describe sounds with gestures or vocalizations (Caramiaux, 2014; Lemaitre and Rocchesso, 2014). However, speech communication studies suggest a more intimate link between the two processes (Kendon, 2004). Our study thus focused on the combination of manual gestures and non-speech vocalizations in the communication of sounds. We first collected a large database of vocal and gestural imitations of a variety of sounds (audio, video, and motion sensor data). Qualitative analysis of gestural strategies resulted in three hypotheses: (1) voice is more effective than gesture for communicating rhythmic information, (2) textural aspects are communicated with shaky gestures, and (3) concurrent streams of sound events can be split between gestures and voice. These hypotheses were validated in a second experiment in which 20 participants imitated 25 specifically synthesized sounds: rhythmic noise bursts, granular textures, and layered streams. Statistical analyses compared acoustic features of the synthesized sounds, vocal features, and a set of novel gestural features based on a wavelet representation of the acceleration data.
2011
2004

  • Voisin, Frédéric & Meier, Robin. 2004. Playing Integrated Music Knowledges with Artificial Neural Networks. In Sound and Music Computing '04 (SMC04). Paris, France: IRCAM and Centre Pompidou. (https://hal.archives-ouvertes.fr/hal-01612524) (Accessed 6 February 2019.)
    Abstract: This text presents some aspects of the Neuromuse project developed at CIRM. We present some experiments in using self-organizing maps (SOM) to generate meaningful musical sequences in real-time interfaces for music production.
    Keywords: Artificial Neural Networks, Computer Music.
1994

  • Voisin, Frédéric & Arom, Simha. 1994. De l'Afrique à l'Indonésie : expérimentations interactives sur les échelles musicales. In Deliège, Irène (ed.), 3rd International Conference for Music Perception and Cognition (ICMPC) (Proceedings of the 3rd International Conference for Music Perception and Cognition (ICMPC)), pp. 99-100. Liège, Belgium: European Society for the Cognitive Sciences of Music (ESCOM). (https://hal.archives-ouvertes.fr/hal-01612534) (Accessed 6 February 2019.)
    Keywords: Cognitive Sciences, Ethnomusicology.
1990

Journal articles

2016


  • Lemaitre, Guillaume & Houix, Olivier & Voisin, Frédéric & Misdariis, Nicolas & Susini, Patrick. 2016. Vocal Imitations of Non-Vocal Sounds. PLoS ONE 11(12). e0168167. (doi:10.1371/journal.pone.0168167) (https://hal.archives-ouvertes.fr/hal-01429918) (Accessed 6 February 2019.)
    Abstract: Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use many imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational "auditory sketches" (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to that with the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long-term sound representations, and set the stage for the development of human-computer interfaces based on vocalizations.
2009
1994
1993
1991

Reports

1993


Other publications



Interviews

2009


  • Toeplitz, Kasper T. & Voisin, Frédéric & Aschour, Didier. 2009. MAX/MSP 1.1.2. Revue & Corrigée 81. 16-19. (http://www.revue-et-corrigee.net/?v=parutions&parution_id=81)

2008


  • Delormas, Jérôme & Voisin, Frédéric. 2008. Entretien avec Gérome Delormas. Revue de lux, Scène Nationale de Valence 2. (http://www.lux-valence.com/info/brochures-pdf/)

2002


  • Gervasoni, Pierre. 2002. Frédéric Voisin et le chant des disques durs. Le Monde. Paris, France, sect. Portrait. (https://www.lemonde.fr/archives/article/2002/04/03/frederic-voisin-et-le-chant-des-disques-durs_4242887_1819218.html?xtmc=frederic_voisin&xtcr=1)

1993


  • Voisin, Frédéric. 1993. Tribulations centrafricaines. Keyboards Magazine. (http://www.fredvoisin.com/spip.php?article31)


See online: on HAL, open archives (scientific publications only)