[visionlist] ASVspoof 2019 CHALLENGE: Future horizons in spoofed/fake audio detection

Md Sahidullah sahidullahmd at gmail.com
Fri Dec 21 08:11:37 -05 2018


[Apologies for possible cross-posting]


=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

*ASVspoof 2019 CHALLENGE: Future horizons in spoofed/fake audio detection*
http://www.asvspoof.org/

=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

Can you distinguish computer-generated or replayed speech from
authentic/bona fide speech? Are you able to design algorithms to detect
spoofs/fakes automatically?

Are you concerned with the security of voice-driven interfaces?

Are you searching for new challenges in machine learning and signal
processing?



*Join ASVspoof 2019* – the effort to develop next-generation
countermeasures for the automatic detection of spoofed/fake audio.
Combining the forces of leading research institutes and industry, ASVspoof
2019 encompasses two separate sub-challenges in logical and physical access
control, and provides a common database of the most advanced spoofing
attacks to date. The aim is to study both the limits and opportunities of
spoofing countermeasures in the context of automatic speaker verification
and fake audio detection.


*CHALLENGE TASK*

Given a short audio clip, determine whether it represents authentic/bona
fide human speech, or a spoof/fake (replay, synthesized speech or converted
voice). You will be provided with a large database of labelled training and
development data and will develop machine learning and signal processing
countermeasures to distinguish automatically between the two.
Countermeasure performance will be evaluated jointly with an automatic
speaker verification (ASV) system provided by the organisers.
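
For concreteness, the sketch below shows one possible countermeasure
pipeline in Python. It is an illustrative assumption only, not the official
baseline: MFCC features stand in for typical countermeasure features (e.g.
CQCC/LFCC), the GMM back-end size is arbitrary, and the file lists
'bonafide_files' / 'spoof_files' are hypothetical placeholders for the
labelled training data.

    # Illustrative sketch only (not the official ASVspoof baseline):
    # a two-class GMM countermeasure over MFCC features.
    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def extract_features(path, sr=16000, n_mfcc=20):
        # Frame-level MFCCs as a stand-in for typical countermeasure
        # features; returns an array of shape (frames, coefficients).
        y, _ = librosa.load(path, sr=sr)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

    def train_gmm(file_list, n_components=64):
        # Pool frames from all labelled training files and fit one GMM.
        feats = np.vstack([extract_features(f) for f in file_list])
        return GaussianMixture(n_components=n_components,
                               covariance_type='diag').fit(feats)

    def cm_score(path, gmm_bonafide, gmm_spoof):
        # Average per-frame log-likelihood ratio: higher = more bona fide.
        x = extract_features(path)
        return gmm_bonafide.score(x) - gmm_spoof.score(x)

    # Hypothetical usage:
    # gmm_bf = train_gmm(bonafide_files); gmm_sp = train_gmm(spoof_files)
    # score = cm_score('trial.flac', gmm_bf, gmm_sp)

In practice, such scores would be produced for every evaluation trial and
submitted to the organisers for joint scoring with the provided ASV system.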




*BACKGROUND:*

The ASVspoof 2019 challenge follows on from two previous ASVspoof
challenges, held in 2015 and 2017. The 2015 edition focused on spoofed
speech generated with text-to-speech (TTS) and voice conversion (VC)
technologies. The 2017 edition focused on replay spoofing. The 2019 edition
is the first to address all three forms of attack and the latest,
cutting-edge spoofing attack technology.



*ADVANCES:*

Today’s state-of-the-art TTS and VC technologies produce speech signals
that are perceptually almost indistinguishable from bona fide speech. The
LOGICAL ACCESS sub-challenge aims to determine whether these advances pose
a greater threat to the reliability of automatic speaker verification and
spoofing countermeasure technologies. The PHYSICAL ACCESS sub-challenge
builds upon the 2017 edition with a far more controlled evaluation setup,
which extends the focus of ASVspoof to fake audio detection in, e.g., the
manipulation of voice-driven interfaces such as smart speakers.



*METRICS:*

The 2019 edition also adopts a new metric, the tandem detection cost
function (t-DCF). Adoption of the t-DCF metric aligns ASVspoof more closely
with the field of ASV. The challenge nonetheless focuses on the development
of standalone spoofing countermeasures; participation in ASVspoof 2019 does
NOT require any expertise in ASV. The equal error rate (EER) used in
previous editions remains as a secondary metric, reflecting the wider
relevance of ASVspoof to fake audio detection beyond ASV.
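
For the secondary metric, the short Python sketch below illustrates how an
EER could be computed from two arrays of countermeasure scores; the array
names and the higher-score-means-bona-fide convention are assumptions for
the example. The primary t-DCF metric additionally weights countermeasure
miss and false-alarm rates by the operating point of the provided ASV
system, with constants specified in the challenge evaluation plan, so it is
not reproduced here.

    # A minimal EER sketch: equal error rate from countermeasure scores,
    # assuming higher scores indicate bona fide speech.
    import numpy as np

    def compute_eer(bonafide_scores, spoof_scores):
        scores = np.concatenate([bonafide_scores, spoof_scores])
        labels = np.concatenate([np.ones(len(bonafide_scores)),
                                 np.zeros(len(spoof_scores))])
        labels = labels[np.argsort(scores)]
        # Sweep a threshold over the sorted scores: miss rate (bona fide
        # rejected) vs. false-alarm rate (spoof accepted).
        fnr = np.cumsum(labels) / len(bonafide_scores)
        fpr = 1.0 - np.cumsum(1.0 - labels) / len(spoof_scores)
        idx = np.argmin(np.abs(fnr - fpr))
        return 0.5 * (fnr[idx] + fpr[idx])

    # e.g. compute_eer(np.array([2.1, 1.7, 0.9]), np.array([-1.0, 0.2, 0.5]))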

*SCHEDULE:*

Training and development data release: 19th December 2018

Evaluation data release: 15th February 2019

Deadline to submit evaluation scores: 22nd February 2019

Organisers return results to participants: 15th March 2019

INTERSPEECH paper submission deadline: 29th March 2019



*REGISTRATION:*

Registration should be performed once only for each participating entity,
by sending an email to registration at asvspoof.org with ‘ASVspoof 2019
registration’ as the subject line. The mail body should include: (i) the
name of the team; (ii) the name of the contact person; (iii) their country;
(iv) their status (academic/non-academic), and (v) the challenge
scenario(s) in which they wish to participate (indicative only). Data
download links will be communicated to registered contact persons only.



*MAILING LIST:*

Subscribe to the general mailing list by sending an e-mail with the subject
line ‘subscribe asvspoof2019’ to *sympa at asvspoof.org*. To post messages
to the mailing list itself, send e-mails to *asvspoof2019 at asvspoof.org*.



*ORGANIZERS:*

Junichi Yamagishi, NII, Japan & Univ. of Edinburgh, UK

Massimiliano Todisco, EURECOM, France

Md Sahidullah, Inria, France

Héctor Delgado, EURECOM, France

Xin Wang, National Institute of Informatics, Japan

Nicholas Evans, EURECOM, France

Tomi Kinnunen, University of Eastern Finland, Finland

Kong Aik Lee, NEC, Japan

Ville Vestman, University of Eastern Finland, Finland

(*) Equal contribution

*CONTRIBUTORS:*

University of Edinburgh, UK; Nara Institute of Science and Technology,
Japan; University of Science and Technology of China, China; iFlytek
Research, China; Saarland University / DFKI GmbH, Germany; Trinity College
Dublin, Ireland; NTT Communication Science Laboratories, Japan; HOYA,
Japan; Google LLC (Text-to-Speech team, Google Brain team, Deepmind);
University of Avignon, France; Aalto University, Finland; University of
Eastern Finland, Finland; EURECOM, France.


*FURTHER INFORMATION:*

*info at asvspoof.org*

-- 
Md Sahidullah
website: https://sites.google.com/site/iitkgpsahi/