Our paper on AudioSet classification from the perspective of label noise has been accepted to IEEE Signal Processing Letters! This work is the result of a collaboration during an internship at Google Research, NYC, in fall 2019.

In the paper, we address missing labels in AudioSet using a teacher-student framework with loss masking.

Check it out:
Addressing Missing Labels in Large-scale Sound Event Recognition using a Teacher-student Framework with Loss Masking
E. Fonseca, S. Hershey, M. Plakal, D. P. W. Ellis, A. Jansen, and R. C. Moore.
In IEEE Signal Processing Letters, Vol. 27, pp. 1235–1239, 2020.
[ArXiv][IEEEXplore]

Or read a blog post with the main takeaways!

Do you use large audio datasets? Do they suffer from missing labels? We all like large datasets for training our models, but they inevitably bring label noise issues, since exhaustively annotating massive amounts of audio is intractable. This is especially true of sound event datasets, where several events often co-occur in the same clip but only some of them get annotated. But does it really matter? And if so, what can we do about it? We set out to answer these questions using AudioSet…
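To give a flavor of the loss-masking idea: a teacher model's predictions can flag labels that are probably missing (a clip annotated as negative for a class that the teacher scores highly), and those terms can then be dropped from the student's training loss. Here is a minimal NumPy sketch of this mechanism; `thresh` and the function names are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def masked_bce(y_true, y_pred, teacher_scores, thresh=0.9):
    """Binary cross-entropy where likely-missing labels are masked out.

    A label is treated as potentially missing when it is annotated as
    negative (y_true == 0) but the teacher scores it highly
    (teacher_scores >= thresh). Those loss terms are zeroed, so the
    student is not penalized for predicting a sound that is probably
    present but unlabeled.
    """
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    bce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    # mask is False exactly on the suspected missing labels
    mask = ~((y_true == 0) & (teacher_scores >= thresh))
    return (bce * mask).sum() / mask.sum()  # mean over the kept terms
```

For example, a clip labeled negative for a class that the teacher scores at 0.95 would contribute nothing to the loss, while confidently labeled positives and low-teacher-score negatives are kept as usual.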

Continue reading here: http://www.eduardofonseca.net/papers/2020/05/08/teacher-student-missing-labels.html.