[visionlist] MAGiC- A multimodal framework for analysing gaze in communication

Ülkü Arslan Aydın ulku.arslan at gmail.com
Thu Mar 19 06:09:00 -04 2020

Dear Colleagues,

I would like to announce MAGiC, a multimodal framework for analysing gaze
in communication.

It integrates video recordings for face tracking, gaze data from eye
trackers, and audio data for speech annotation.

It is an open-source project, hosted on GitHub:
<https://github.com/ulkursln/MAGiC>

*What it does*

Speech and gaze are closely connected modalities in social interaction.
MAGiC is a tool for the analysis of social behavior. It enables researchers
to overlay gaze data on dynamic or static scenes and to associate it with
the concurrent speech. MAGiC extends current eye tracking technology by
integrating automated face tracking to detect whether a participant is
looking at the interlocutor's face and, if so, which part of the face is
being looked at. It then combines these areas of interest, defined relative
to the position of the interlocutor's face, with automatically segmented
and semi-automatically annotated speech data.
Specifically, MAGiC provides functionalities for:

   - speech and gaze analysis
   - synchronizing multiple recordings
   - visualizing and reviewing outcomes
   - generating standard output files (e.g., .wav and .txt files)
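
To illustrate the face-relative area-of-interest idea described above, here
is a minimal sketch that classifies a gaze sample against face regions. The
region names, coordinates, and the function itself are assumptions for
illustration only, not MAGiC's actual API:

```python
def classify_gaze(gaze_point, regions):
    """Return the label of the first region containing the gaze point.

    regions maps a label to an axis-aligned box (x0, y0, x1, y1) in the
    same scene coordinates as the gaze sample; listing specific regions
    (eyes, mouth) before the whole face gives them priority.
    """
    x, y = gaze_point
    for label, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return "off-face"

# Hypothetical regions, as if derived from a face tracker's landmarks.
face_regions = {
    "eyes":  (120, 80, 200, 110),
    "mouth": (140, 150, 180, 175),
    "face":  (100, 60, 220, 200),
}

print(classify_gaze((160, 95), face_regions))   # inside the eye region
print(classify_gaze((160, 130), face_regions))  # on the face, between regions
print(classify_gaze((50, 50), face_regions))    # not on the face at all
```

In a real analysis, the boxes would be recomputed for every video frame
from the tracked face position, so the areas of interest move with the
interlocutor's face.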



For instructions on how to install, compile, and use the project, please
see the Wiki <https://github.com/ulkursln/MAGiC/wiki>

*Developing and Contributing*

We welcome and appreciate contributions from the community. There are many
ways to become involved with MAGiC, including filing issues, writing and
improving documentation, and contributing code. Please keep the following
in mind before sending a pull request:

   - make sure your branch is rebased on the master branch of this
   repository
   - ensure that the code is stable and compiles
   - explain clearly what the purpose of the patch is, and how you achieved
   it



MAGiC is licensed under the GNU General Public License (GPL). Please also
respect the licenses of OpenFace
<https://github.com/TadasBaltrusaitis/OpenFace>, Sphinx4
<https://github.com/cmusphinx/sphinx4>, Boost, TBB, dlib, and OpenCV.
Thank you!


@article{ArslanAydin_Kalkan_Acarturk_2018,
                title={MAGiC: A multimodal framework for analysing gaze in
dyadic communication},
                journal={Journal of Eye Movement Research},
                author={Arslan Aydin, Ülkü and Kalkan, Sinan and Acarturk,
Cengiz},
                year={2018}
}

Find out more in our Wiki on GitHub <https://github.com/ulkursln/MAGiC/wiki>
and our YouTube channel.

Kind regards,

