Hey, so I know James already posted this (and thanks, James, for writing an article on it!), but I'd like to post it here in the "show off your work" section because I had been planning on posting it here anyway haha. If you have any specific questions I'd be happy to answer them, but first I'd like to write a little about how I'm detecting the music. (It's a Teensy 3.6; the audio shield and Audio library handle the FFT.)
I used the Statistics library to do beat analytics: a running average and standard deviation are kept for 3 different FFT bins (there are 15 total, but I've found bins 2, 8, and 12 work well for low, mid, and high frequencies). If a new value exceeds the running average by more than 3.5 standard deviations, it's registered as a beat. When silence is detected, the running stats are cleared. A counter tracks the number of beats in each of the 3 bins, and every 10 seconds it decides which set of designs to put the mirror in. Examples are designs reacting purely to low-frequency beats, reacting to both high and low, or handling sound that isn't silence but has no detectable beats.
I described this mostly to say that I want to make it more complex, and I've searched for libraries that do this without success. I want one that analyzes the intricacies of a song and outputs info about it: how active each frequency band is, which frequency to react to, when the beat drops, the speed of the beat, etc. If anyone knows of something like this or has any advice, I'd love to hear about it. I think I may resort to doing this myself, but I'd like to know if the wheel has already been invented.