Abstract
This paper presents an automatic music transcription model based on Convolutional Neural Networks (CNNs) that mimics the 'trained ear' of a musician. The approach advances signal processing and music technology, focusing on multi-instrument transcription of traditional Turkish instruments such as the Qanun and Oud, which are known for their distinct timbral qualities and early decay characteristics. The study involves creating multipitch datasets from basic note combinations, training the CNN on these data, and achieving high transcription accuracy, as measured by F1 scores, for two-part compositions. Training equips the model to learn the fundamental traits of individual instruments, enabling it to identify and separate complex patterns in mixed audio. The aim is to enhance the model's ability to distinguish and analyze specific musical elements, supporting applications in music production, audio engineering, and music education.
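The paper's keywords indicate that the input representation is the constant-Q transform (CQT), whose geometrically spaced frequency bins align with musical pitch. As a minimal, hedged sketch (pure NumPy, not the authors' implementation; function name, parameters, and window choice are illustrative assumptions), a CQT-like analysis of a single frame might look like:

```python
import numpy as np

def cqt_like(signal, sr, fmin=55.0, bins_per_octave=12, n_bins=48):
    """Sketch of a constant-Q magnitude spectrum for one audio frame.

    Bins are spaced geometrically (one per semitone by default), and the
    analysis window shrinks as frequency rises so that the ratio of center
    frequency to bandwidth (the Q factor) stays constant.
    """
    # Geometrically spaced center frequencies, starting at fmin.
    freqs = fmin * 2.0 ** (np.arange(n_bins) / bins_per_octave)
    # Constant Q factor implied by the bin spacing.
    Q = 1.0 / (2.0 ** (1.0 / bins_per_octave) - 1.0)
    mags = np.empty(n_bins)
    for k, f in enumerate(freqs):
        N = int(np.ceil(Q * sr / f))  # window length shrinks with frequency
        n = np.arange(N)
        # Windowed complex exponential kernel at the bin's center frequency.
        kernel = np.hanning(N) * np.exp(-2j * np.pi * f * n / sr) / N
        mags[k] = np.abs(np.dot(signal[:N], kernel))
    return freqs, mags
```

A CNN for multipitch transcription would then consume a time-frequency image built from such frames; because CQT bins map directly to semitones, interval patterns between simultaneous notes appear as fixed vertical offsets regardless of pitch, which suits convolutional filters.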
| Original language | English |
|---|---|
| Title of host publication | 8th International Symposium on Innovative Approaches in Smart Technologies, ISAS 2024 - Proceedings |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| ISBN (Electronic) | 9798331540104 |
| DOIs | |
| Publication status | Published - 2024 |
| Event | 8th International Symposium on Innovative Approaches in Smart Technologies, ISAS 2024 - Istanbul, Turkey. Duration: 6 Dec 2024 → 7 Dec 2024 |
Publication series

| Name | 8th International Symposium on Innovative Approaches in Smart Technologies, ISAS 2024 - Proceedings |
|---|---|
Conference

| Conference | 8th International Symposium on Innovative Approaches in Smart Technologies, ISAS 2024 |
|---|---|
| Country/Territory | Turkey |
| City | Istanbul |
| Period | 6 Dec 2024 → 7 Dec 2024 |
Bibliographical note
Publisher Copyright: © 2024 IEEE.
Keywords
- constant Q transform
- convolutional neural network
- music transcription
- signal processing