Follow this tutorial to learn Lig-Aikuma in 90 minutes.
A) Before doing this tutorial you must have:
(1) installed Lig-Aikuma on your tablet or smartphone: https://lig-aikuma.imag.fr/download/
(2) watched the online video tutorial at: https://lig-aikuma.imag.fr/tutorial/
(3) (optional) read the documentation available at: https://lig-aikuma.imag.fr/wp-content/uploads/2017/06/LIG-Aikuma_tutorial-fr.pdf
B) Download elicitation data
We will use Mercer Mayer's 1969 wordless picture book Frog, Where Are You?, which has been widely used for the collection of language acquisition data (see e.g. Berman & Slobin 1994).
You can download an image-by-image version of the book here: FrogStory.
Then, unzip the archive and copy it to your smartphone or tablet (or download this resource directly from your smartphone or tablet).
C) Speech elicitation from images
Launch Lig-Aikuma on your mobile device, choose the ‘Elicitation’ mode in the menu, and then choose ‘Elicitation by Image’ from the sub-menu.
After entering the speaker metadata and selecting the directory containing the FrogStory images, you can start recording your narrative for each image in your native language.
After you have recorded yourself describing the 26 images, have a look at the wav files, which should be available in the directory ligaikuma/recordings. Observe the naming conventions for the recorded wav files. Also observe the text file named ‘linker.txt’ (in the same directory as the recordings), which links the absolute path of each wav file to the absolute path of the associated image; this file lets you know which recording matches which image or video. Finally, observe the .json file that contains the metadata.
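As a quick sanity check, you can print out which recording goes with which image. The sketch below assumes each line of linker.txt pairs a wav path with an image path separated by whitespace (the exact separator and the example paths are assumptions; inspect your own linker.txt to confirm the format).

```python
# Hypothetical sketch: list recording/image pairs from a linker.txt file.
# Assumption: one wav path and one image path per line, whitespace-separated.

def parse_linker(text):
    """Return a list of (wav_path, image_path) tuples from linker.txt content."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        parts = line.split()
        if len(parts) >= 2:
            pairs.append((parts[0], parts[-1]))
    return pairs

# Invented example content, for illustration only:
sample = """\
/storage/ligaikuma/recordings/2018-06-12_eng_01.wav /storage/FrogStory/img01.png
/storage/ligaikuma/recordings/2018-06-12_eng_02.wav /storage/FrogStory/img02.png
"""
for wav, img in parse_linker(sample):
    print(wav, "->", img)
```

In practice you would read the real file with `open(".../linker.txt").read()` instead of the invented `sample` string.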
D) Re-speaking (or Translating) your own recordings
Now you can choose to either respeak (in the same language) or translate (into a different language) the recordings you just made. First, however, we will concatenate all 26 wav files into a single long wav file corresponding to your narrative of the FrogStory, which you will then respeak or translate (this is a more realistic respeaking/translation scenario). To concatenate the wav files of a single directory, you can use this tool (audio-joiner), which generates an mp3 file from all your wavs, and then convert the mp3 back to wav using this other tool (media-io). If these two tools do not work directly on your smartphone, transfer the 26 wav files to your computer, concatenate them there, and transfer the resulting wav file back to your mobile device.
Alternatively, you can do this with a single sox command from a terminal:
sox $(ls -1v | grep -E '\.wav$') FinalFileName.wav
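If sox is not available, the same concatenation can be sketched with Python's standard-library wave module. This assumes all input files share the same sample rate, channel count, and sample width, which should hold for recordings made on a single device; the output file name is just an example.

```python
import glob
import wave

def concat_wavs(in_paths, out_path):
    """Append the audio frames of each input wav, in order, to one output wav.

    Assumes all inputs have identical parameters (rate, channels, width)."""
    params = None
    with wave.open(out_path, "wb") as out:
        for path in in_paths:
            with wave.open(path, "rb") as w:
                if params is None:
                    params = w.getparams()
                    out.setparams(params)
                out.writeframes(w.readframes(w.getnframes()))

# Example usage: join all wav files in the current directory in name order.
# concat_wavs(sorted(glob.glob("*.wav")), "FrogStory_full.wav")
```

Note that `sorted()` sorts names lexicographically, so zero-padded numbering (01, 02, …) is needed to keep the images in order, much like `ls -1v` above.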
Choose the ‘Respeaking’ or ‘Translating’ mode in the menu and fill in the metadata again (note that you can reuse your speaker profile to avoid re-entering all the speaker information; use the ‘Import Speaker Profile’ button). Then start respeaking (or translating) the long file.
Note that you need to hold the play button to play the recording, and hold the record button to record. You can play back the latest recorded segment from the top right corner. If you are not satisfied with a recording, you can erase it by pressing the back arrow and record it again. You can mark non-speech segments for later processing by checking the box at the bottom right before or after recording a new segment. It is also possible to zoom in/out on the waveform and to slide the signal left/right. Finally, validate once you are done by pressing the green check mark. After validating, you get a ‘validation’ menu where you will see the pairs of original and respoken segments; check that everything is correct, and if not, re-record the individual segments.
The respoken (or translated) file is saved in the same folder as the original recording and follows the same naming format: Date-Time_Language_rspk (or Date-Time_Language_trad). The last part of the name (“rspk” or “trad”) tells you whether the file is a respeaking or a translation.
Observe the _rspk.map (or _trad.map) files, which give the alignment between the initial and the respoken (translated) wav files.
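The naming convention above can be summarised in a small helper. This is a hypothetical illustration (the example filename is invented), not part of Lig-Aikuma itself:

```python
# Hypothetical helper illustrating the naming convention described above:
# files whose name ends in "_rspk" are respeakings, "_trad" are translations.

def recording_kind(filename):
    """Classify a Lig-Aikuma wav file by its name suffix."""
    stem = filename.rsplit(".", 1)[0]   # drop the extension
    if stem.endswith("_rspk"):
        return "respeaking"
    if stem.endswith("_trad"):
        return "translation"
    return "original"

print(recording_kind("2018-06-12_eng_rspk.wav"))   # respeaking
```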
E) Saving your field data and sharing it with others
Method 1: use the share mode with another terminal
The Share mode allows you to share audio files with other devices using any of the communication options offered by the tablet or smartphone. In the main Lig-Aikuma menu, choose the ‘Share’ mode. First, select the files to share in the file explorer by checking their boxes. To select all audio files in the folder, simply check the box at the bottom right of the screen; to share only a selection of files, check the corresponding box for each file you want. Finally, press ‘Share’ to send the files. Lig-Aikuma then proposes different transfer modes; choose a convenient one and validate to share the files.
Method 2: use git (better in real scenarios where several devices are used for the same data collection and synchronization is needed)
Copy data to the following git repository: https://github.com/besacier/BigDataSpeech2018
pass: will be given during the summer school
-clone the repo: git clone https://github.com/besacier/BigDataSpeech2018
-copy your recorded data in the repo: cp -R myrecordeddata ./BigDataSpeech2018
-add your folder to the repo: git add myrecordeddata
-commit your data to the repo: git commit -a -m "data from xxxx added"
-push that to the git repo: git push