Emotional Music with Face Recognition


The ways we explore, experience, and get creative with music have changed drastically over time. Today we are surrounded by new and evolving technology that would have been unfamiliar to us even 10 years ago.

Why don’t we use more of this technology to our advantage when it comes to music? How can we improve the way we choose music, making the experience more personal and immersive?

EmoMusic aka Emotional Music

With EmoMusic we aim to give the user an experience more finely tuned to what they’re feeling. EmoMusic is not just a visual addition to Spotify; it:

  • Connects to your Webcam
  • Detects your current emotional state
  • Plays music from your personal Spotify account that represents your current emotions
  • Uses features of the selected music to drive a correlating visual, personalizing your experience

In our prototype, premium Spotify users could log in, capture an image of themselves, and submit it to detect their emotion; a matching song and corresponding color would then play. The new and improved EmoMusic adds control over your Spotify playback directly from our application (pause/play and skip functions), and your emotion is now detected automatically when you press Play. The EmoMusic graphics have also been revamped, drawing different signals to match your emotion, along with a coordinating gradient color to set the mood.
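The pause/play and skip controls map directly onto the Spotify Web API. Below is a minimal sketch, assuming the spotipy client library (the post does not say which library EmoMusic actually uses):

# Hedged sketch of the playback controls using spotipy.
# SpotifyOAuth reads the app credentials from the SPOTIPY_* environment variables.
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="user-modify-playback-state user-read-playback-state"))

sp.pause_playback()   # Pause
sp.start_playback()   # Play / resume
sp.next_track()       # Skip to the next track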

How?

  1. The user connects to their premium Spotify account, giving EmoMusic permission to use their personal data (via the Spotify API), and accepts the use of their webcam.
  2. Upon pressing Play, EmoMusic sends your captured image to the Microsoft Azure Facial Recognition Service, which uses a pre-trained model to detect your emotion.
  3. This information is returned and used to drive Spotify in choosing the perfect song (a sketch of this step follows below).
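As an illustration of step 3, the detected emotion could be mapped to a track through Spotify’s audio features (valence is a rough measure of a track’s musical positivity). The sketch below again assumes spotipy; the valence thresholds and the play_for_emotion helper are illustrative assumptions, not EmoMusic’s actual logic.

# Hypothetical mapping from the dominant detected emotion to a saved track,
# using Spotify audio features via spotipy.
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="user-library-read user-modify-playback-state"))

# Illustrative target valence per emotion (0 = negative, 1 = positive)
TARGET_VALENCE = {"happiness": 0.9, "neutral": 0.5, "sadness": 0.2, "anger": 0.3}

def play_for_emotion(emotion):
    saved = sp.current_user_saved_tracks(limit=50)["items"]
    ids = [item["track"]["id"] for item in saved]
    features = [f for f in sp.audio_features(ids) if f]
    target = TARGET_VALENCE.get(emotion, 0.5)
    # Play the saved track whose valence is closest to the target
    best = min(features, key=lambda f: abs(f["valence"] - target))
    sp.start_playback(uris=[best["uri"]])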

Requirements

Currently, EmoMusic is only a local application that runs on your own machine. If you want to test it, you need:

  • Spotify Premium Account
  • Microsoft Azure Facial Recognition Service Plan
  • Python

Microsoft Azure Facial Recognition – Running EmoMusic requires creating an Azure Face resource. Azure Cognitive Services are represented by Azure resources that you subscribe to.

Create a resource for Face using the Azure portal or the Azure CLI on your local machine. Once finished, copy your endpoint URL and key into a .env file:

  • AZURE_APIKEY=yourkey
  • AZURE_URI=yourendpoint

The reference for using the Face client library is available in the Azure documentation.
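As a sketch of how the two values from the .env file are consumed, assuming the azure-cognitiveservices-vision-face package from the Face client library (the file name capture.jpg is a placeholder):

# Load the Azure credentials from .env and create a Face client.
import os
from dotenv import load_dotenv
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

load_dotenv()  # reads AZURE_APIKEY and AZURE_URI from the .env file
face_client = FaceClient(
    os.getenv("AZURE_URI"),
    CognitiveServicesCredentials(os.getenv("AZURE_APIKEY")))

# Detect the emotion attributes of a captured frame
with open("capture.jpg", "rb") as image:
    faces = face_client.face.detect_with_stream(
        image, return_face_attributes=[FaceAttributeType.emotion])
if faces:
    print(faces[0].face_attributes.emotion.as_dict())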

Python – EmoMusic requires Python 3.7 or later. We suggest using Anaconda to manage your packages. If you already have conda working on your machine, create a new environment and activate it:

conda create -n emomusic python=3.7 anaconda
conda activate emomusic

Install the python-dotenv library:

pip install python-dotenv

Now you can start the server by running:

flask run

At this point the server will be running on your local machine at 127.0.0.1:5000.
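For orientation, a minimal Flask entry point might look like the sketch below; the /play route and the two helper functions are hypothetical stand-ins for the Azure and Spotify logic, not EmoMusic’s actual code.

# app.py — minimal sketch of the server started by "flask run".
from flask import Flask, request, jsonify

app = Flask(__name__)

def detect_emotion(stream):
    # Placeholder for the Azure Face call (see the detection sketch above)
    return "neutral"

def play_for_emotion(emotion):
    # Placeholder for the Spotify selection/playback logic
    pass

@app.route("/play", methods=["POST"])
def play():
    image = request.files["frame"]           # webcam snapshot posted by the front end
    emotion = detect_emotion(image.stream)   # detect the dominant emotion
    play_for_emotion(emotion)                # choose and start a matching song
    return jsonify({"emotion": emotion})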

Reminder

Before using EmoMusic, close any application that is using your webcam (Skype, Zoom, etc.), as it can conflict with EmoMusic's access to your camera.


About Nils Lewin

My name is Nicola Bombaci, alias Nils Lewin. I am a computer engineer with a passion for music and audio. Over the course of my professional growth I decided to merge these two seemingly different worlds.