LIG-Aikuma – A Mobile App to Collect Parallel Speech for Under-Resourced Language Studies
https://lig-aikuma.imag.fr

LIG-AIKUMA used to enrich a child language acquisition corpus collected in Bolivia
https://lig-aikuma.imag.fr/lig-aikuma-used-to-enrich-a-child-language-acquisition-corpus-collected-in-bolivia/
Tue, 11 Dec 2018

We have been using daylong recordings (i.e., recordings gathered with a device worn by the child as they go about their normal day and night) for several years, including with children learning languages as diverse as Tsimane’ and Ju|’hoan. One of the most challenging aspects of annotating these data is figuring out the “cast of characters”: deciding which voice is the mother’s, the siblings’, and sometimes even the child’s can be difficult because the annotator doesn’t know the family (and frequently doesn’t speak the language). (We rely on foreigners because locals who do know the family would have access to the family’s private conversations, which seems problematic.)
This summer, we figured out how to surmount this obstacle, thanks to Lig-Aikuma. When we picked up the device with the recording, we processed it with DiViMe (divime.readthedocs.io) to perform basic speech detection over samples extracted throughout the recording (15 seconds every 10 minutes). Using the respeak function of Lig-Aikuma, we then played back the sections that contained speech and asked the participating family to identify the voices. We are confident that this is the best solution because it gives us naturalistic samples of how the most talkative people sound, including young children (siblings and friends), who may be too shy to speak when we are around.
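For readers who want to reproduce the sampling step, below is a minimal sketch (an illustration under stated assumptions, not the team’s actual script) of cutting a 15-second clip every 10 minutes out of a daylong recording. It assumes the pydub Python library (which requires ffmpeg) and uses hypothetical file names; the speech detection itself would still be run separately, e.g. with DiViMe.

    # Minimal sketch (assumed workflow, not the team's actual script):
    # cut a 15-second clip every 10 minutes from a daylong recording,
    # so each clip can be screened for speech and, if speech is found,
    # played back to the family in Lig-Aikuma's respeak mode.
    # Assumes pydub + ffmpeg; "daylong.wav" is a hypothetical file name.
    from pydub import AudioSegment

    CLIP_MS = 15 * 1000        # 15-second sample
    STEP_MS = 10 * 60 * 1000   # one sample every 10 minutes

    recording = AudioSegment.from_file("daylong.wav")
    for i, start in enumerate(range(0, len(recording), STEP_MS)):
        clip = recording[start:start + CLIP_MS]           # pydub slices by milliseconds
        clip.export(f"sample_{i:03d}.wav", format="wav")  # one short file per sample

The speech-bearing clips can then be loaded onto the phone and played back through the respeak function, as described above.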
Thanks, Lig-Aikuma team, for this terrific tool!
LIG-AIKUMA presented at the R&T Days of Institut Cognition in Paris La Villette
https://lig-aikuma.imag.fr/lig-aikuma-presented-at-the-rt-days-of-institut-cognition-in-paris-la-vilette/
Fri, 05 Oct 2018

Lig-Aikuma: a mobile application for speech data collection in the field

ICPhS 2019 Special Session
https://lig-aikuma.imag.fr/icphs-2019-special-session/
Fri, 20 Jul 2018

Welcome

This is the web page for the Computational Approaches for Documenting and Analyzing Oral Languages Special Session at ICPhS 2019, the International Congress of Phonetic Sciences, 5–9 August 2019, Melbourne, Australia.

Summary

The special session Computational Approaches for Documenting and Analyzing Oral Languages welcomes submissions presenting innovative speech data collection methods and/or assistance for linguists and communities of speakers: methods and tools that facilitate the collection, transcription and translation of primary language data. “Oral languages” is understood here as referring to spoken vernacular languages that depend on oral transmission, including endangered languages and (typically low-prestige) regional varieties of major languages.

The special session intends to provide up-to-date information to an audience of phoneticians about developments in machine learning that make it increasingly feasible to automate segmentation, alignment or labelling of audio recordings, even in less-documented languages. A methodological goal is to help establish the field of Computational Language Documentation and contribute to its close association with the phonetic sciences. Computational Language Documentation needs to build on the insights gained through phonetic research; conversely, research in phonetics stands to gain much from the availability of abundant and reliable data on a wider range of languages.

Our special session is listed on the ICPhS website.

Main goals

The special session aims to bring together phoneticians, computer scientists and developers interested in the following goals:

  • Rethinking documentary processes: recording, transcription and annotation;
  • Responding to the urgent need to document endangered languages and varieties;
  • Elaborating an agenda and establishing a roadmap for computational language documentation;
  • Ensuring that the requirements of phonetics research are duly taken into consideration in the agenda of Computational Language Documentation;
  • Attracting computer scientists to ICPhS and engaging them in discussions with phoneticians (and linguists generally).

Main topics

This special session will focus on documenting and analyzing oral languages, including topics such as the following:

  • large-scale phonetics of oral languages,
  • automatic phonetic transcription (and phonemic transcription),
  • mobile platforms for speech data collection,
  • creating multilingual collections of text, speech and images,
  • machine learning over these collections,
  • open source tools for computational language documentation,
  • position papers on computational language documentation.

Session format

Special sessions at ICPhS normally run for 1.5 hours. For our accepted special session, we chose the “workshop” type, which has a more open format suitable for discussing methods and tools. The exact format is still to be determined; more details will be provided later.

How does the submission process work?

Papers should be submitted directly to the conference by December 4th and will then be evaluated according to the standard ICPhS review process. Accepted papers will be allocated either to this special session or to a general session. When submitting, you can specify whether you want your paper to be considered for this special session.

Organizers

Laurent Besacier – LIG UGA (France)
Alexis Michaud – LACITO CNRS (France)
Martine Adda-Decker – LPP CNRS (France)
Gilles Adda – LIMSI CNRS (France)
Steven Bird – CDU (Australia)
Graham Neubig – CMU (USA)
François Pellegrino – DDL CNRS (France)
Sakriani Sakti – NAIST (Japan)
Mark Van de Velde – LLACAN CNRS (France)

LigAikuma presented at BigDataSpeech summer school 2018
https://lig-aikuma.imag.fr/ligaikuma-presented-at-bigdataspeech-summer-school-2018/
Fri, 06 Jul 2018

The first lab on how to use Lig-Aikuma will be given on July 9th, 2018, at the BigDataSpeech 2018 summer school!
Laurent Besacier will give a preliminary talk in the morning and a lab in the afternoon!
