Call for Papers and Participation
Creative Digital Dynamics II: Symposium on AI & Digital Innovations for Voice and Vocal Music
Event date: Friday 7 March 2025 (10.00 – 15.30 Conference; 16.30 – 18.00 Concert)
Location: Edinburgh Futures Institute (Conference); Reid Concert Hall (Concert)
Context: The Institute for Advanced Studies in the Humanities (IASH) at the University of Edinburgh is thrilled to announce an international and interdisciplinary symposium on the intersection of music and AI, with a particular focus on voice and song performance, and on the composition of vocal music using AI-based singing and voice-processing tools. Building on the AI & chamber music conference organised by IASH Fellow Dr Alexandra Huang-Kokina with the University of London earlier this year, this one-day symposium aims to extend the initial critical conversation. It will provide a forum for peer feedback, networking, and community building for scholars, creative practitioners, and postgraduate research students working at the fascinating crossroads of music, creativity, and AI studies.
At the heart of this symposium on vocal music is the promotion of diversity in vocal representations via AI, featuring critical discussions centred around how AI’s creative intelligence challenges and redefines conventional vocal types defined by gender, ethnic-cultural, linguistic, and neural norms. The symposium celebrates underrepresented voices in two notable ways: by using AI to amplify a diverse range of ‘singing voices’ during the symposium’s signature ‘science-fiction opera’ concert at the Reid Concert Hall, and by inviting presenters from diverse backgrounds to contribute ‘critical voices’ to the debates surrounding AI vocal songs. To achieve this, the symposium will feature a diverse programme that includes a keynote speech, expert talks by invited speakers, selected paper presentations, and the premiere of a science-fiction opera prototype. At the forefront of critical AI & music scholarship, this symposium aims to shape both the conceptual & practical development of future AI voice / song processing models, ensuring that new AI tools reflect diverse and globally attuned values.
Confirmed speakers:
Keynote speaker
Professor Ricardo Climent, Professor of Interactive Music and Director of the NOVARS Research Centre at the University of Manchester
Invited speakers
Dr Emmanouil Benetos, Reader in Machine Listening at Queen Mary University of London
Dr Francesco Bentivegna, Lecturer in Digital Theatre at the University of Bristol
Dr Hedvig Jalhed, Senior Lecturer in Chamber Music and Chamber Opera at Mälardalen University and Senior Lecturer in Artistic Research in Music (with specialization in Music Drama) at the Malmö Academy of Music, Lund University
Dr Robert Laidlow, Career Development Fellow in Music at the University of Oxford
Organised by Dr Alexandra Huang-Kokina, IASH Digital Research Postdoctoral Fellow, and Dr Caterina Moruzzi, Chancellor’s Fellow in Design Informatics at the University of Edinburgh. Generously supported by the Susan Manning Workshop Fund from IASH at the University of Edinburgh, the Edinburgh Centre for Data, Culture, and Society (CDCS), and the Royal Musical Association (RMA).
The event is also co-hosted and supported by the new research cluster ‘Creativity, AI, and the Human’ at the Edinburgh Futures Institute (EFI), led by Dr Moruzzi. The cluster brings together around 50 members, all working on topics related to creativity and AI, across disciplines from the three Colleges of the University of Edinburgh (Arts, Humanities & Social Sciences; Medicine & Veterinary Medicine; Science & Engineering).
CFP & Abstract Submission:
What remains of singing when artificial intelligence remediates and transforms the human voice? The ethical implications of using AI in composing or performing vocal music are complex, raising questions about deepfakes and the commodification or propertisation of vocal attributes integral to our identity politics regarding gender, language, culture, and subjectivity. Artistically, AI voice processing introduces a range of technical media effects that are unfamiliar to traditional vocal music. For example, it generates disembodied voices unlinked to any human body, creating an asynchronous effect akin to dubbing—singing or speaking through a surrogate. Ethically, this lack of bodily presence in AI-generated voices in a musical setting influences our perception of their creativity—are they merely digital replicas of someone else’s voice, or do they represent original expressions with their own context and traditions? Socially, AI-generated vocal music exposes the processes of AI mediation in the creative arts, revealing how the proliferation of new media practices modulates our structures of feeling and understanding, and reshapes our experienced relations with other human & non-human agents in a multimedia environment.
These aspects of AI and vocal song problematise the notion of ‘musical & vocal creativity’ as inherently human, portraying vocal songs as embedded in digital generative processes and feedback loops of ‘creative data’ derived from the composer, performer, audience, environment, and AI itself. Crucially, this shift points to a new paradigm of vocal music, where the focus on the bodily production of the ‘vocal source’ is replaced by the complex interconnectedness of quasi-vocal capabilities underlying new media technology. The central figure of the ‘singer’ dissolves into an entwined assembly of human, non-human, or transhuman agencies, both tangible and invisible. Similarly, binary modes of vocal production (female vs. male, human vs. robot, recitative vs. aria, etc.) expand into a wider spectrum of vocalisations that can be integrated into the fabric of contemporary world music.
Notably, ‘science-fiction opera’ – a distinct literary and musical sub-genre integrating sci-fi narratives with speculative technologies – epitomises the manifold virtues of musicalising the voice through today’s most powerful disruptive technology. From recent proliferations at the MIT Media Lab (e.g., VALIS) and the French IRCAM (e.g., La Fabrique des monstres) to the lineages of Swedish sci-fi operas (e.g., The Tale of the Great Computing Machine, Chronos’ Bank of Memories, The Oracle) and the Japanese new music theatre (e.g., Solaris, A Dream of Armageddon, Android Opera Mirror), the range of sci-fi opera testifies to a new spectrum of possibilities in AI-inspired or AI-generated vocal songs, such as: voice as mediator of robotic choreography, AI as artificial lyricist, or AI as Large Music Model pretrained on the aesthetics and styles of individual composers.
Building on the heuristic of these works (and many more not mentioned here), this symposium invites performances and presentations that explore artistic applications of AI voice-processing tools across a range of purposes, including (but not limited to) multilingual alignment, voice synthesis/hybridisation/transformation, or voice identity conversion in music, theatre, or other live arts contexts. We also welcome talks on the creative and critical dimensions of AI vocal music, its ethics, and the legal or regulatory aspects of future vocal music.
If you are interested in presenting at the symposium, please submit a 200-word abstract of your talk or creative presentation (in any format), along with a 100-word bio, to alexandra.huang@ed.ac.uk by 10 January 2025. Notifications of acceptance will be sent shortly afterwards, by 17 January.
Attendance & Expected Outcomes
Attendance is expected to be 50–70 guests. The event is expected to lead to collaborative outputs, such as a special issue proposal for the Cambridge Opera Journal, focusing on innovative approaches to integrating opera or vocal work with new media technology.
Abstract Submission & Contact
For any questions, please contact the main event organiser at alexandra.huang@ed.ac.uk.