Yannis Stylianou is a Professor of Speech Processing at the University of Crete, Greece, and a Research Manager at Apple, UK. From 1996 until 2001 he was with AT&T Labs Research (Murray Hill and Florham Park, NJ, USA), and until 2002 he was with Bell Labs, Lucent Technologies, in Murray Hill, NJ, USA. He has been with the University of Crete since 2002. From 2013 until July 2018 he was Group Leader of the Speech Technology Group at the Toshiba Cambridge Research Lab in Cambridge, UK. He holds MSc and PhD degrees in Signal Processing from ENST-Paris and studied Electrical Engineering at NTUA, Athens, Greece (1991). He is an IEEE and ISCA Fellow.
Malcolm Slaney is a research scientist in the Machine Hearing Group at Google Research, where he leads a project on saliency and attention. He received his PhD from Purdue University. He is also a Consulting Professor at Stanford CCRMA, where he has led the Hearing Seminar for more than 20 years, and an Affiliate Faculty member in the Electrical Engineering Department at the University of Washington. He has served as an Associate Editor of IEEE Transactions on Audio, Speech and Signal Processing and IEEE Multimedia Magazine, and as a guest editor for the Proceedings of the IEEE and ACM Transactions on Multimedia Computing. Before joining Google, Dr. Slaney worked at Bell Laboratories, Schlumberger Palo Alto Research, Apple Computer, Interval Research, IBM's Almaden Research Center, Yahoo! Research, and Microsoft Research. For many years he has led the auditory group at the Telluride Neuromorphic (Cognition) Workshop. Dr. Slaney's recent work is on understanding auditory perception and decoding auditory attention from brain signals. He is a Senior Member of the ACM and a Fellow of the IEEE.
Alistair Conkie works at Apple in California on Text-to-Speech Synthesis (TTS). He started his career in speech synthesis at CSTR Edinburgh, Scotland, before joining the TTS Research Group at CNET, France Telecom. He worked for a number of years at AT&T Shannon Laboratories in NJ, USA, in the team that developed the AT&T Natural Voices TTS system, before joining Apple in 2014.
Vassilis Tsiaras received his degree in Mathematics from the University of Thessaloniki in 1990, his M.Sc. in Mathematics from QMW College, University of London, in 1992, and his Ph.D. in Computer Science from the University of Crete in 2009. He is currently a member of the teaching staff at the Technical University of Crete, Greece. His research interests include graph algorithms, biomedical signal processing, statistical speech synthesis, and machine learning.
Junichi Yamagishi is a Professor at NII, Japan. His research topics include speech processing, machine learning, signal processing, biometrics, digital media cloning, and media forensics. He previously served as a co-organizer of the biennial ASVspoof special sessions at INTERSPEECH 2013-2019 and the biennial Voice Conversion Challenge at INTERSPEECH 2016 and Odyssey 2018, as an organizing committee member for the 10th ISCA Speech Synthesis Workshop 2019, and as a technical program committee member for IEEE ASRU 2019. He has also served as a member of the IEEE Speech and Language Technical Committee, as an Associate Editor of the IEEE/ACM TASLP, and as a Lead Guest Editor for the IEEE JSTSP special issue on Spoofing and Countermeasures for Automatic Speaker Verification. He is currently a PI of the JST-CREST and ANR-supported VoicePersona project. He also serves as chairperson of ISCA SynSIG and as a Senior Area Editor of the IEEE/ACM TASLP.
Xin Wang is a project researcher at the National Institute of Informatics, Japan. He received his Ph.D. degree from SOKENDAI, Japan, in 2018 for his work on neural F0 modeling for text-to-speech synthesis. Before that, he received M.S. and B.E. degrees from the University of Science and Technology of China and the University of Electronic Science and Technology of China in 2015 and 2012, respectively. He is one of the organizers of the ASVspoof 2019 and 2021 challenges.
Yutian Chen is a staff research scientist at DeepMind. He obtained his PhD in machine learning at the University of California, Irvine, and later worked at the University of Cambridge as a research associate (postdoc) before joining DeepMind. Yutian took part in the AlphaGo and AlphaGo Zero projects, which developed Go-playing AI programs that defeated world champions. Yutian has conducted research in multiple machine learning areas, including Bayesian methods, deep learning, reinforcement learning, generative models, and meta-learning, with applications in gaming AI, computer vision, and text-to-speech. Yutian also serves as a reviewer and area chair for multiple academic conferences and journals.
Soumi Maiti is with Apple, USA. She holds a PhD from the Graduate Center, CUNY, where she worked on audio processing, speech enhancement, and speech synthesis. Her research interests include machine learning, audio processing, and natural language processing.
Petko Petkov holds a BSc in Telecommunications from the Technical University of Sofia, Bulgaria, and MSc and PhD degrees in Electrical Engineering from KTH, the Royal Institute of Technology, Stockholm, Sweden. Petko has held positions with Global IP Solutions AB, Toshiba Research Europe, and presently Apple, UK, working on various aspects of speech enhancement, ASR, and TTS.
George Kafentzis is a Visiting Professor at the University of Crete, Greece. He holds a PhD from the University of Crete on speech modifications. His research interests include speech modeling and synthesis, speech transformations, voice quality and glottal source analysis, and emotional speech processing.
Muhammed Shifas PV is a Ph.D. research scholar at the Speech Signal Processing Laboratory (SSPL) of the University of Crete (UoC), Greece. He completed his B.Tech degree in electronics and communication engineering at the University of Calicut, India, and his M.Tech in signal processing at the National Institute of Technology, Calicut, India. He is a Marie Skłodowska-Curie Early Stage Researcher (MSCA-ESR) in the European Innovative Training Network ENRICH. His Ph.D. research focuses on neural-based speech enhancement and intelligibility modifications. He was a visiting researcher at Sonova (Phonak and Advanced Bionics), Stäfa, Switzerland, in 2018, and at the Fraunhofer Institute for Digital Media Technology (Fraunhofer IDMT), Oldenburg, Germany, in 2019.