Drexel University
Event Details
Ph.D. Research Proposal: David Grunberg
Start Date: 11/22/2013
Start Time: 1:00 PM
End Date: 11/22/2013
End Time: 3:00 PM

Event Description
Title: Developing Noise-Robust Music Information Retrieval Algorithms
Advisor: Dr. Youngmoo Kim
Date: Friday, November 22, 2013
Time: 1:00 p.m.
Location: Hill Conference Room 240, 2nd Floor, LeBow Engineering Center

Abstract

Within the field of Music Information Retrieval, many algorithms have been developed for the analysis and understanding of musical audio. There now exist systems that can reliably identify high-level features such as beat locations, tempo, and key from acoustic musical sources. The vast majority of these algorithms, however, have been developed on CD-quality pieces that are relatively free of acoustic noise, and most music in the real world is not as clean as that which is used to design, train, and evaluate these systems. People often listen to noisy music, ranging from sets at open-air concerts to songs pumped through speakers in noisy restaurants. It would be ideal if our algorithms for identifying high-level music information could function accurately in these environments, but many such techniques fail when used on pieces that have been contaminated by various sources of noise. My goal, therefore, is to develop Music Information Retrieval systems that are robust to real-world acoustic conditions.
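For readers unfamiliar with these high-level features, the short Python sketch below shows one common way to estimate tempo and beat locations from a recording using the open-source librosa library. The file name is a placeholder and the example is illustrative only; it is not part of the proposed work.

```python
# Illustrative sketch only (not from the proposal): estimating tempo and beat
# locations with the open-source librosa library. The file name below is a
# placeholder assumption.
import librosa

y, sr = librosa.load("example_track.wav")            # audio samples and sample rate
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print("Estimated tempo (BPM):", tempo)
print("First beat times (s):", beat_times[:5])
```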

The difficulties posed to Music Information Retrieval algorithms by the introduction of various types of noise are considerable. Within clubs and other music venues, noise sources from HVAC systems and electronic equipment can obscure various components of the musical audio. The frequency responses of speakers, microphones, and the room itself can add further distortion. Should a performance incorporate robots, noises from those machines may be added to the mix. Finally, if the music is recorded to be listened to later, the recording equipment may distort the sound and cause additional complications. All of these issues can make it very difficult for traditional Music Information Retrieval algorithms to function accurately.

This work aims to present techniques for bridging the gap between the performance of Music Information Retrieval algorithms in clean and real-world environments. First, datasets of musical audio contaminated with different types of noise have been collected and annotated. Next, noise-removal algorithms will be developed and evaluated to determine which work best in conjunction with state-of-the-art Music Information Retrieval algorithms. New features will also be developed, including features learned by deep learning algorithms, to help computers estimate high-level attributes of the music robustly in the presence of noise. In sum, this work seeks to adapt existing Music Information Retrieval algorithms, or develop new ones, that can function accurately on musical audio contaminated with real-world noise from various sources. As an example of the final system's capabilities, it will be used to enable a humanoid robot to respond to acoustic music despite realistically noisy acoustic conditions.
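As a rough illustration of this kind of noise-robustness evaluation (and not the proposal's actual method), the following Python sketch mixes white Gaussian noise into a clean recording at a chosen signal-to-noise ratio and compares beat estimates on the clean and noisy versions. The file path, the white-noise model, and the use of librosa's beat tracker are all assumptions made for the example.

```python
# Illustrative sketch only (assumptions, not the proposed system): contaminate a
# clean recording with white Gaussian noise at a target SNR and compare beat
# estimates on the clean and noisy signals as a simple robustness check.
import numpy as np
import librosa

def add_white_noise(y, snr_db):
    """Mix white Gaussian noise into signal y at the requested SNR (in dB)."""
    signal_power = np.mean(y ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=y.shape)
    return y + noise

y, sr = librosa.load("example_track.wav")     # placeholder file path
y_noisy = add_white_noise(y, snr_db=0)        # 0 dB SNR: noise as loud as the music

_, beats_clean = librosa.beat.beat_track(y=y, sr=sr)
_, beats_noisy = librosa.beat.beat_track(y=y_noisy, sr=sr)

print("Beats found (clean):", len(beats_clean))
print("Beats found (noisy):", len(beats_noisy))
```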
Location:
Hill Conference Room 240, 2nd Floor, LeBow Engineering Center
Audience:
  • Current Students
  • Faculty
  • Staff
