
Mamba Music

Ayush Khanal, Matt Strong, Mackenzie Lobato, Aren Dalloul, Brian Nguyen, Jeff Lucca
Intro

AI-Generated Music Radio


Overall Architecture
Magenta ML

● Music Generation (see the CLI sketch after the model list)
● 7 Models
○ MelodyRNN
○ PerformanceRNN
○ PolyphonyRNN
○ ImprovRNN
○ Pianoroll RNN-NADE
○ MusicVAE
○ Music Transformer (the best)
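For illustration, a minimal generation sketch assuming Magenta's melody_rnn_generate command-line tool and a downloaded pre-trained bundle; the paths, primer, and step count below are placeholders, not the project's actual configuration:

    import subprocess

    # Minimal sketch: generate a handful of MIDI melodies with Magenta's MelodyRNN.
    # Bundle path, output directory, primer, and lengths are placeholder values.
    subprocess.run(
        [
            "melody_rnn_generate",
            "--config=attention_rnn",                   # one of MelodyRNN's configurations
            "--bundle_file=/models/attention_rnn.mag",  # pre-trained bundle (assumed path)
            "--output_dir=/tmp/generated",              # where the MIDI files are written
            "--num_outputs=5",                          # number of pieces to generate
            "--num_steps=256",                          # length of each piece
            "--primer_melody=[60]",                     # primer sequence (middle C)
        ],
        check=True,
    )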
Magenta ML
● (Artist, Genre, Length) -> Music!
○ Based on the genre, we need a “primer sequence” (i.e., part of an existing song)
■ Lakh MIDI Dataset (https://colinraffel.com/projects/lmd/)
■ The dataset only had artist names, so we used the Spotify API to look up genres (see the sketch below)
● Different Models required different configurations
○ Some models require chords; others, melodies
● HORRIFIC Documentation
● Google Cloud Deep Learning VM
○ Cronjobs to automate generation
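A minimal sketch of the genre lookup mentioned above, assuming the spotipy client with credentials supplied through the standard SPOTIPY_* environment variables (the project's actual lookup code may differ):

    import spotipy
    from spotipy.oauth2 import SpotifyClientCredentials

    # Sketch: map an artist name from the Lakh MIDI metadata to Spotify genre tags.
    sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

    def artist_genres(artist_name):
        """Return the list of genres Spotify associates with the artist."""
        results = sp.search(q=artist_name, type="artist", limit=1)
        artists = results["artists"]["items"]
        return artists[0]["genres"] if artists else []

    print(artist_genres("Miles Davis"))  # e.g. ['bebop', 'cool jazz', ...]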
Google Cloud Storage
● Storage of static files
● Straightforward Python API (upload sketch below)
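For reference, a minimal upload sketch with the official google-cloud-storage client; the bucket and object names are placeholders:

    from google.cloud import storage

    # Sketch: push a freshly generated MIDI file to a Cloud Storage bucket.
    client = storage.Client()
    bucket = client.bucket("mamba-music-songs")        # placeholder bucket name
    blob = bucket.blob("generated/song-1234.mid")      # placeholder object key
    blob.upload_from_filename("/tmp/generated/song-1234.mid")
    print(blob.public_url)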
DynamoDB
● songs
○ uuid, title / artist / …
○ GSI on artist for better filtering (one artist per radio)
● users
○ uuid, likes / dislikes
● nouns / adjs
○ ~6500 nouns, ~1500 adjectives
○ Used to generate a random title (see the sketch below)
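A minimal boto3 sketch of the two access patterns above, assuming a GSI named artist-index on the songs table and word lists already loaded from the nouns/adjs tables (these names are assumptions, not confirmed by the slides):

    import random
    import boto3
    from boto3.dynamodb.conditions import Key

    dynamodb = boto3.resource("dynamodb")

    def songs_for_artist(artist):
        """Query the songs table through its artist GSI (index name assumed)."""
        resp = dynamodb.Table("songs").query(
            IndexName="artist-index",
            KeyConditionExpression=Key("artist").eq(artist),
        )
        return resp["Items"]

    def random_title(nouns, adjectives):
        """Build a random title from one adjective and one noun."""
        return f"{random.choice(adjectives).title()} {random.choice(nouns).title()}"

    print(random_title(["river", "comet"], ["velvet", "electric"]))  # e.g. "Velvet Comet"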
Lambda / API Gateway
● REST API in front of DynamoDB, integrated with Lambda
● /Users
○ Manage user metadata
● /Songs
○ Manage song metadata
● /Queue
○ Retrieve songs to play next
○ Takes user and artist as input
○ Recommendation function based on those params (handler sketch below)
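A minimal sketch of the /Queue handler, assuming Lambda proxy integration and the table/index names used above; the ranking shown (liked songs first, disliked songs last) is an illustrative stand-in for the project's actual recommendation function:

    import json
    import boto3
    from boto3.dynamodb.conditions import Key

    dynamodb = boto3.resource("dynamodb")

    def lambda_handler(event, context):
        """GET /Queue?user=<uuid>&artist=<name> -> ordered song ids to play next."""
        params = event.get("queryStringParameters") or {}
        user_id, artist = params.get("user"), params.get("artist")

        # Fetch the artist's songs via the GSI and the user's likes/dislikes.
        songs = dynamodb.Table("songs").query(
            IndexName="artist-index",
            KeyConditionExpression=Key("artist").eq(artist),
        )["Items"]
        user = dynamodb.Table("users").get_item(Key={"uuid": user_id}).get("Item", {})
        likes, dislikes = set(user.get("likes", [])), set(user.get("dislikes", []))

        # Illustrative ranking: liked songs first, disliked songs last.
        queue = sorted(songs, key=lambda s: (s["uuid"] in dislikes, s["uuid"] not in likes))
        return {"statusCode": 200, "body": json.dumps([s["uuid"] for s in queue])}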
Frontend
● Netlify: for hosting our web application
● React: for website design
● Google: for user authentication / login


Neural Style Transfer
Machine Learning with TensorFlow
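The slides don't show the details of this step; one common setup is TensorFlow Hub's pre-trained Magenta stylization model, sketched below with placeholder image paths (an assumption about the approach, not the project's code):

    import tensorflow as tf
    import tensorflow_hub as hub

    def load_image(path, max_dim=512):
        """Read an image file into a batched float32 tensor."""
        img = tf.image.decode_image(tf.io.read_file(path), channels=3)
        img = tf.image.convert_image_dtype(img, tf.float32)
        img = tf.image.resize(img, (max_dim, max_dim), preserve_aspect_ratio=True)
        return img[tf.newaxis, :]

    # Pre-trained arbitrary style transfer model published by the Magenta team.
    hub_model = hub.load(
        "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
    )

    content = load_image("content.jpg")   # placeholder paths
    style = load_image("style.jpg")
    stylized = hub_model(tf.constant(content), tf.constant(style))[0]
    tf.keras.utils.save_img("stylized.png", tf.squeeze(stylized))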
Conclusion
● Future improvements
○ Integrate AWS Elasticsearch for better song recommendations
○ Cycle out old songs with bad ratings for newly generated ones
● Link: https://nervous-shaw-eb2ca8.netlify.app/
● GitHub Link: https://github.com/CUBigDataClass/Mamba-Music
