

Music emotion recognition method based on multifeature fusion
source link: https://techxplore.com/news/2022-05-music-emotion-recognition-method-based.html

May 2, 2022
by David Bradley, Inderscience

Software that can correlate musical changes in an audio recording of a song with perceived emotional content would be useful across the music industry, particularly for cataloging music and developing music recommendation systems for streaming services and sales. The same approach might also have utility in musical composition, music teaching, and music-based therapy. Research published in the International Journal of Arts and Technology recognizes that current software has numerous limitations and points the way to how such software might be improved.
Yali Zhang of the School of Music at Henan Polytechnic in Zhengzhou, China, explains that earlier research has focused on training a probabilistic neural network to recognize the nuance of a piece of music and correlate it with the emotional responses likely intended by the composer. However, such work has large error margins, which Zhang hopes to reduce with her new approach to music emotion recognition. Her approach involves processing the music signal to filter out a proportion of the low-frequency information that is not necessarily part of the music's emotional content. It then splits the sound signal into frames and applies a window function to each frame so that it can be processed by the emotion recognition software. In addition, noise is reduced by time-domain endpoint detection, she adds.
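The preprocessing steps described above (framing, windowing, and time-domain endpoint detection) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not Zhang's actual implementation: the frame length, hop size, Hamming window, and energy-ratio threshold are all assumptions chosen for the example.

```python
import numpy as np

def frame_signal(signal, frame_len=512, hop=256):
    """Split a 1-D signal into overlapping frames and apply a Hamming window."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack(
        [signal[i * hop : i * hop + frame_len] for i in range(n_frames)]
    )
    return frames * np.hamming(frame_len)

def detect_endpoints(frames, energy_ratio=0.1):
    """Time-domain endpoint detection: keep frames whose short-time energy
    exceeds a fraction of the peak frame energy (a simple noise gate)."""
    energy = np.sum(frames ** 2, axis=1)
    active = np.flatnonzero(energy > energy_ratio * energy.max())
    return (active[0], active[-1]) if active.size else (0, 0)

# Toy example: half a second of silence, a 440 Hz tone, then silence again.
sr = 8000
sig = np.concatenate([
    np.zeros(sr // 2),
    np.sin(2 * np.pi * 440 * np.arange(sr) / sr),
    np.zeros(sr // 2),
])
frames = frame_signal(sig)
start, end = detect_endpoints(frames)
```

The endpoint detector trims the leading and trailing silence, so only frames containing the tone are passed on to feature extraction.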
With the sound file thus pre-processed, recognition can begin. This involves analyzing pitch changes, the rise and fall of tone, and the rate at which those changes occur. Zhang explains that a "weight coefficient" of musical emotion can thus be extracted from a sound file. The characteristics extracted from known sound files with human-described emotive content can then be used to train the system, so that it can automatically recognize the emotive content of a previously uncategorized piece of music. The approach considerably reduces the error margins seen in earlier work, making the categorization of musical emotive content much more accurate.
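To make the recognition stage concrete, here is a minimal sketch of the pipeline it describes: estimate a per-frame pitch, summarize pitch level and rate of change as a feature vector, and train on labeled examples. Everything here is an assumption for illustration; the article does not specify the pitch tracker, the feature set, or the classifier, and a nearest-centroid classifier stands in for whatever model Zhang actually trains.

```python
import numpy as np

def pitch_track(frames, sr=8000):
    """Crude per-frame pitch estimate via autocorrelation (illustrative only)."""
    pitches = []
    for f in frames:
        ac = np.correlate(f, f, mode="full")[len(f) - 1:]
        lag_min, lag_max = sr // 500, sr // 50      # search roughly 50-500 Hz
        lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
        pitches.append(sr / lag)
    return np.array(pitches)

def emotion_features(pitches):
    """Summarize pitch level, pitch range, and rate of change as one vector."""
    delta = np.diff(pitches)
    return np.array([pitches.mean(), pitches.std(), np.abs(delta).mean()])

def train_centroids(X, y):
    """'Training': average the feature vectors for each labeled emotion."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign the emotion label whose centroid is nearest to x."""
    return min(centroids, key=lambda lbl: np.linalg.norm(x - centroids[lbl]))

# Toy labeled data: [mean pitch, pitch std, mean |pitch change|] per clip,
# with low flat pitch standing in for "calm" and high varying pitch for "excited".
X = np.array([[100, 5, 1], [110, 6, 2], [300, 60, 20], [320, 55, 25]], float)
y = np.array(["calm", "calm", "excited", "excited"])
cents = train_centroids(X, y)
```

A new clip's feature vector is then classified by its nearest centroid, e.g. `predict(cents, np.array([105.0, 5.0, 1.5]))` returns `"calm"`.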