
bliss music analyzer library

source link: https://lelele.io/bliss.html

An open-source library to make audio playlists by evaluating distance between songs.

Note: this page is about bliss-rs. For the old bliss in C, see here.

What is bliss?

bliss is a library designed to make smart playlists by evaluating the distance between songs. It is mainly useful when integrated into existing audio players, or for research purposes.
You can see it in action for MPD through blissify for instance.

The main algorithm works by first extracting common audio descriptors (tempo, timbre, chroma…) from each song into a set of numeric features per song. Once this is done, the distance between two songs can simply be computed using the existing distance() method (which is just a Euclidean distance, really).
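To make the distance computation concrete, here is a self-contained sketch (not the library's code) of a plain Euclidean distance over two feature vectors, which is what distance() boils down to:

```rust
/// Plain Euclidean distance between two equal-length feature vectors.
/// bliss stores one such numeric vector per analyzed song.
fn euclidean_distance(a: &[f32], b: &[f32]) -> f32 {
    a.iter()
        .zip(b.iter())
        .map(|(x, y)| (x - y).powi(2))
        .sum::<f32>()
        .sqrt()
}

fn main() {
    // Toy 4-dimensional "songs"; real bliss feature vectors are longer.
    let song1 = [0.5, 0.1, 0.9, 0.3];
    let song2 = [0.4, 0.2, 0.8, 0.3];
    println!("distance = {}", euclidean_distance(&song1, &song2));
}
```

The smaller the result, the closer the two songs are considered to be.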

Playlists can then be made by putting together close songs (see "usage" section for more info).

bliss is written in Rust (see the crate) and uses ffmpeg and aubio. Python bindings are also available.

The source code is available here.

It is still in development, so don't hesitate to submit PRs, bug reports, etc.

Download

The simplest way is just to add bliss-audio = "0.2.4" to your Cargo.toml.

If you use MPD and want to make smart playlists right away, install blissify instead: cargo install blissify.

64-bit packages for blissify are available for Arch Linux and Debian/Ubuntu.

Library usage

Song::new() does all the heavy lifting, see below:

Compute distance between two songs:

 
    use bliss_audio::{BlissError, Song};

    fn main() -> Result<(), BlissError> {
        let song1 = Song::new("/path/to/song1")?;
        let song2 = Song::new("/path/to/song2")?;

        println!("Distance between song1 and song2 is {}", song1.distance(&song2));
        Ok(())
    }
   

Analyze several songs and make a playlist from the first song:


    use bliss_audio::{BlissError, Song};
    use noisy_float::prelude::n32;

    fn main() -> Result<(), BlissError> {
        let paths = vec!["/path/to/song1", "/path/to/song2", "/path/to/song3"];
        let mut songs: Vec<Song> = paths
            .iter()
            .map(|path| Song::new(path))
            .collect::<Result<Vec<Song>, BlissError>>()?;

        // Assuming there is a first song
        let first_song = songs.first().unwrap().to_owned();

        songs.sort_by_cached_key(|song| n32(first_song.distance(song)));
        println!(
            "Playlist is: {:?}",
            songs
                .iter()
                .map(|song| &song.path)
                .collect::<Vec<_>>()
        );
        Ok(())
    }
  

For more information, see the documentation.

Technical details

The analysis process works this way:
Each song analyzed with Song::new has an analysis field, which can in turn be transformed into a vector using analysis.to_vec().

Each value represents an aspect of the song, and an Analysis can be indexed with AnalysisIndex to get a specific field (e.g. song.analysis[AnalysisIndex::Tempo] gets the tempo value).
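To illustrate the indexing pattern, here is a simplified, self-contained sketch of how enum-based indexing into a feature vector can work; the variants other than Tempo are hypothetical and this is not bliss' actual implementation:

```rust
use std::ops::Index;

// Hypothetical subset of descriptor indices; the real AnalysisIndex
// enum in bliss has one variant per feature.
#[derive(Clone, Copy)]
enum AnalysisIndex {
    Tempo = 0,
    ZeroCrossingRate = 1,
    MeanSpectralCentroid = 2,
}

struct Analysis {
    features: Vec<f32>,
}

impl Index<AnalysisIndex> for Analysis {
    type Output = f32;

    // Each enum variant maps to a fixed position in the feature vector.
    fn index(&self, idx: AnalysisIndex) -> &f32 {
        &self.features[idx as usize]
    }
}

fn main() {
    let analysis = Analysis {
        features: vec![0.42, -0.8, 0.13],
    };
    println!("tempo feature: {}", analysis[AnalysisIndex::Tempo]);
}
```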
Here's what the different parts represent:

  • Tempo has one associated descriptor, that uses the spectral flux as an onset detection method.
  • Timbre has seven different descriptors: the zero-crossing rate, and the mean / median of the spectral centroid, spectral roll-off, and spectral flatness.
  • Loudness has two descriptors, the mean / median loudness, which is a measurement of how loud the sound is, i.e. the amplitude delta of how much the speaker membrane should move when producing sounds.

    This descriptor is usually not used in research papers, since it very much depends on the way songs are recorded / encoded, but it should be an integral part of a playlist-making algorithm. A very soothing track will still wake you up if its volume is turned up to the maximum, even if it closely resembles other soothing tracks.

  • Chroma features have ten different descriptors, that are interval features based on this paper.

As you might have noticed, the chroma features make up half of the features. While the Euclidean distance (where each numeric feature counts the same as the others) provides very satisfactory results, experimenting with metric learning, or simply adjusting the distance coefficients, could improve your experience, so don't hesitate to do so!
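Adjusting the coefficients amounts to a weighted Euclidean distance. The following is a hypothetical sketch operating on raw feature vectors (such as those returned by analysis.to_vec()); it is not an API that bliss exposes:

```rust
/// Weighted Euclidean distance: each feature's contribution is scaled
/// by its own coefficient before summing. Down-weighting the chroma
/// half of the vector, for instance, would reduce its influence.
fn weighted_distance(a: &[f32], b: &[f32], weights: &[f32]) -> f32 {
    a.iter()
        .zip(b)
        .zip(weights)
        .map(|((x, y), w)| w * (x - y).powi(2))
        .sum::<f32>()
        .sqrt()
}

fn main() {
    let song1 = [0.5, 0.1, 0.9];
    let song2 = [0.4, 0.2, 0.8];
    // Emphasize the first feature, ignore the last one entirely.
    let weights = [2.0, 1.0, 0.0];
    println!("weighted distance = {}", weighted_distance(&song1, &song2, &weights));
}
```

With all weights set to 1.0 this reduces to the plain Euclidean distance the library uses by default.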

For more on these features, and some discussion on metric learning, see this thesis, which was written specifically to lay the groundwork for bliss' internals.

And blissv1?

Some people will have noticed that the previous location of bliss' repository was here. This repo contains the old bliss code, which was written in C. It has since been rewritten in Rust, in order to implement a more scientific approach to music information retrieval from the ground up.

The old C library still receives bug fixes, and its webpage is still accessible, but it is recommended to use the Rust version, as it is faster and more complete.

Note that the features generated by C-bliss and bliss-rs are also incompatible.
