LIG-AIKUMA used to enrich a child language acquisition corpus collected in Bolivia
Tue, 11 Dec 2018

We have been using daylong recordings (i.e., recordings gathered with a device worn by the child as he or she goes about a normal day and night) for several years, including with children learning languages as diverse as Tsimane’ and Ju|’hoan. One of the most challenging aspects of annotating these data is figuring out the “cast of characters”: deciding which voice is the mother’s, the siblings’, and sometimes even the child’s can be difficult because the annotator does not know the family (and frequently does not speak the language). (We rely on foreigners because locals who do know the family would have access to the family’s private conversations, which seems problematic.)
This summer, we figured out how to surmount this obstacle, thanks to Lig-Aikuma. When we picked up the device with the recording, we processed it with DiViMe (divime.readthedocs.io) to perform basic speech detection over samples extracted throughout the recording (15 seconds every 10 minutes). Using the respeak function of Lig-Aikuma, we then played back the sections that contained speech and asked the participating family to identify the voices. We are confident that this is the best solution because we can get naturalistic samples of how the most talkative people sound, including young children (siblings and friends), who may be too shy to speak when we are around.
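
For readers who want to reproduce the sampling scheme, here is a minimal sketch in Python (one possible implementation, not the authors’ actual pipeline): it cuts a 15-second excerpt every 10 minutes from a daylong WAV file, producing short clips that a speech detection tool such as DiViMe can then process. The file names are hypothetical.

    # Minimal sketch: extract a 15-second clip every 10 minutes from a
    # daylong recording. Input/output file names are illustrative.
    import soundfile as sf

    CLIP_S = 15        # excerpt length, in seconds
    STEP_S = 10 * 60   # interval between excerpt onsets, in seconds

    def extract_excerpts(path, out_prefix="excerpt"):
        with sf.SoundFile(path) as f:
            sr = f.samplerate
            onset = 0
            idx = 0
            while onset + CLIP_S * sr <= f.frames:
                f.seek(onset)                # jump to the excerpt onset
                clip = f.read(CLIP_S * sr)   # read one 15-second window
                sf.write(f"{out_prefix}_{idx:03d}.wav", clip, sr)
                onset += STEP_S * sr
                idx += 1

    extract_excerpts("daylong_recording.wav")
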
Thanks, Lig-Aikuma team, for this terrific tool!
LIG-AIKUMA presented at the R&T Days of Institut Cognition in Paris La Villette
Fri, 05 Oct 2018

Talk presented: “Lig-Aikuma: a mobile application for collecting speech in the field” (original French title: “Lig-Aikuma : une application mobile pour la collecte de parole sur le terrain”).

ICPhS 2019 Special Session
Fri, 20 Jul 2018

Welcome

This is the web page for the Computational Approaches for Documenting and Analyzing Oral Languages special session at ICPhS 2019, the International Congress of Phonetic Sciences, 5-9 August 2019, Melbourne, Australia.

Summary

The special session Computational Approaches for Documenting and Analyzing Oral Languages welcomes submissions presenting innovative speech data collection methods and/or assistance for linguists and communities of speakers: methods and tools that facilitate the collection, transcription and translation of primary language data. “Oral languages” is understood here as referring to spoken vernacular languages that depend on oral transmission, including endangered languages and (typically low-prestige) regional varieties of major languages.

The special session intends to provide up-to-date information to an audience of phoneticians about developments in machine learning that make it increasingly feasible to automate segmentation, alignment or labelling of audio recordings, even in less-documented languages. A methodological goal is to help establish the field of Computational Language Documentation and contribute to its close association with the phonetic sciences. Computational Language Documentation needs to build on the insights gained through phonetic research; conversely, research in phonetics stands to gain much from the availability of abundant and reliable data on a wider range of languages.

Our special session is mentioned on the ICPhS website.

Main goals

The special session aims to bring together phoneticians, computer scientists and developers interested in the following goals:

  • Rethinking documentary processes: recording, transcription and annotation;
  • Responding to the urgent need to document endangered languages and varieties;
  • Elaborating an agenda and establishing a roadmap for computational language documentation;
  • Ensuring that the requirements of phonetics research are duly taken into consideration in the agenda of Computational Language Documentation;
  • Attracting computer scientists to ICPhS and engaging them in discussions with phoneticians (and linguists generally).

Main topics

This special session will focus on documenting and analyzing oral languages, including topics such as the following:

  • large-scale phonetics of oral languages,
  • automatic phonetic transcription (and phonemic transcription),
  • mobile platforms for speech data collection,
  • creating multilingual collections of text, speech and images,
  • machine learning over these collections,
  • open source tools for computational language documentation,
  • position papers on computational language documentation.

Session format

Special sessions at ICPhS normally last 1.5 hours. For our accepted special session, we chose the “workshop” type, with a more open format suitable for discussion of methods and tools. The exact format is still to be determined; more details will be provided later.

How does the submission process work?

Papers should be submitted directly to the conference by December 4th and will then be evaluated according to the standard ICPhS review process. Accepted papers will be allocated either to this special session or to a general session. When submitting, you can specify whether you want your paper to be considered for this special session.

Organizers

Laurent Besacier – LIG UGA (France)
Alexis Michaud – LACITO CNRS (France)
Martine Adda-Decker – LPP CNRS (France)
Gilles Adda – LIMSI CNRS (France)
Steven Bird – CDU (Australia)
Graham Neubig – CMU (USA)
François Pellegrino – DDL CNRS (France)
Sakriani Sakti – NAIST (Japan)
Mark Van de Velde – LLACAN CNRS (France)

LigAikuma presented at BigDataSpeech summer school 2018
Fri, 06 Jul 2018

The first lab on how to use LigAikuma will be given on July 9th, 2018, at the BigDataSpeech 2018 summer school!
Laurent Besacier will give a preliminary talk in the morning and a lab in the afternoon!

4th release of LIG-Aikuma
Fri, 23 Feb 2018

A new version of LIG-Aikuma is available (see the Download section).
This version (v3.1.0) is dedicated to phones running Android 6.0 and higher.
It automatically triggers the required permission requests.
LIG-Aikuma needs four main permissions to work properly: Microphone, Storage, Phone and Location.

    • Microphone is used to record speech.
    • Storage is used to select files from the phone and to store recordings.
    • Phone is used to share files (via Bluetooth, Wi-Fi or the mobile network).
    • Location is used to geotag the recordings.

These permissions must be granted by the user for the app to work. They are enabled automatically during installation to make setup easier for inexperienced users, but they can always be disabled through the app settings (this is not recommended, however, since the app will then crash and become unusable).

New release of LIG-Aikuma
Tue, 18 Jul 2017

LIG is pleased to inform you that an update of Lig-Aikuma (V3) is available (see the Download section).
The app can also be downloaded from your phone/tablet on the Google Play Store (coming soon).
This new version and the older ones are still available on the Forge.

Here are the improvements made:

  • Visual upgrade:
    + Waveform visualizer in the Respeaking and Translation modes (with the possibility to zoom in/out on the audio signal)
    + File explorer included in all modes, to facilitate navigation between files
    + New Share mode to share recordings between devices (by Bluetooth, email, or NFC if available)
    + French and German languages available: in addition to English, the application now supports French and German. By default, Lig-Aikuma uses the language of the phone/tablet.
    + New, more consistent icons to distinguish all types of files (audio, text, image, video)
  • Conceptual upgrade:
    + New name for the root directory: ligaikuma –> /!\ Henceforth, all data will be stored in this directory instead of “aikuma” (used by previous versions of the app). This change does not cause compatibility issues: in the file explorer of each mode, the default position is this new root directory; just go back once with the grey left arrow (on the lower left of the screen) and select the “aikuma” directory to access your old recordings
    + Generation of a PDF consent form (from the information filled in the metadata form) that can be signed by the linguist and the speaker using a PDF annotation tool (such as the Adobe Fill & Sign mobile app)
    + Generation of a CSV file that can be imported into the ELAN software: it automatically creates a segmented tier matching the segmentation made during a respeaking or translation session, and marks segments containing no speech with a “non-speech” label (a minimal sketch of such a file follows the improvements list below)
    + Geolocation of the recordings
    + Respeak an elicited file: it is now possible to use, in Respeaking or Translation mode, an audio file initially recorded in Elicitation mode
  • Structural upgrade:
    + Undo button on Elicitation to erase/redo the current recording
    + Improved session backup in Elicitation
    + Non-speech button in Respeaking and Translation modes to indicate by a comment that a segment does not contain speech (but noise or silence, for instance)
    + Automatic speaker profile creation to quickly fill in the metadata when recording several sessions with the same speaker
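
As an illustration of the ELAN import feature mentioned above, here is a hedged sketch of the kind of CSV file involved: one row per segment, with a tier name, start/end times and a label, which ELAN can turn into a segmented tier through its CSV / tab-delimited text import. The column layout and the example values are assumptions for illustration, not Lig-Aikuma’s exact output format.

    # Hedged sketch: one CSV row per respeaking segment; segments without
    # speech get a "non-speech" label. All values are made-up examples.
    import csv

    segments = [
        (0.00, 2.35, "speech"),
        (2.35, 4.10, "non-speech"),  # noise or silence
        (4.10, 7.80, "speech"),
    ]

    with open("session_segments.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Tier", "Begin Time", "End Time", "Annotation"])
        for start, end, label in segments:
            writer.writerow(["respeaking", f"{start:.2f}", f"{end:.2f}", label])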

Contributing coders to this version: Guillaume Baudson and Elodie Gauthier
We thank Henry Salfner for the German translation of the app.

LIG-Aikuma Release
Sat, 25 Jun 2016

LIG is pleased to inform you that an update of Lig-Aikuma (V2) is available on the Forge.
This version will be demonstrated at Interspeech 2016 in September…

Here are the improvements made:

  • Visual upgrade:
    + New filename rules (the format is now YYMMDD-hhmmss_lang_idDevice; see the sketch after this list)
    + Progress bar in all modes
    + Background color and relevant icons in the “import file” popup
  • Conceptual upgrade:
    + Birth year field (instead of an age field) –> /!\ Because of this change, as discussed in a previous BULB meeting, your current recordings made with V1 will not work directly with this new version of the app – Gilles is working on a script to transform V1 metadata into V2 metadata for compatibility
    + No more generation of the _preview file
    + New Elicitation modes for Images and Videos
  • Structural upgrade:
    + Undo button in Respeaking/Translation modes to erase/redo the last respeaking segment
    + Session backup in all modes
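
A minimal sketch of the filename convention above, assuming a language code and a device ID as inputs (an illustration, not the app’s actual code):

    # Build a filename following the YYMMDD-hhmmss_lang_idDevice pattern.
    from datetime import datetime

    def recording_filename(lang: str, device_id: str) -> str:
        stamp = datetime.now().strftime("%y%m%d-%H%M%S")  # YYMMDD-hhmmss
        return f"{stamp}_{lang}_{device_id}"

    print(recording_filename("fra", "tablet01"))  # e.g. 160625-152733_fra_tablet01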

Contributing coders to this version: Calliste Hanriat, David Blachon and Elodie Gauthier

LIG-Aikuma
Fri, 22 May 2015

Based on the Aikuma app by S. Bird et al., LIG-Aikuma is a tool to record, respeak and translate speech through a clean and easy-to-use interface. It also enables elicitation of speech from text, images or videos. It offers clear file naming conventions and extended metadata information, and the whole application is optimized for 10-inch screens for tablet use. More features are being developed, and the app is already used on field trips to collect data in Africa.

Contributing coders to this version: David Blachon and Elodie Gauthier
