1963: Robert Moog, an engineer, builds the first modular synthesizer. The device, which could generate sound electronically and shape its basic components, set music creation on an unknown path; it would go on to revolutionize existing genres and spark new ones, sometimes by accident. What started out as a massive, expensive machine has evolved over the years, becoming smaller and easier to use.
Fast forward to the present day. Tech giant Google has released a touch-screen synthesizer that uses artificial intelligence to learn the characteristics of sounds and generate entirely new sounds from them.
According to Google, the NSynth algorithm was created last year by Magenta, an in-house research project. The NSynth Super, pictured above, was created alongside Google Creative Lab and “gives musicians the ability to make music using completely new sounds generated by the NSynth algorithm from 4 different source sounds.”
So how does it work? Sounds are loaded into the device, and the dials let the player select the “source sounds” they want to explore – for instance, a guitar and a flute. Then, by dragging a finger across the touch screen, the player can find completely new sounds that “combine the acoustic qualities of the four source sounds.”
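Conceptually, the touch screen can be thought of as a square with one source sound’s learned representation at each corner; the finger position determines how heavily each corner contributes to the blend. The sketch below is illustrative only (the function name, corner labels, and use of plain embedding vectors are assumptions, not the actual NSynth implementation, which interpolates in a neural network’s latent space):

```python
# Illustrative sketch: blend four source-sound embeddings by bilinear
# interpolation, weighted by a touch position (x, y) in [0, 1] x [0, 1].
# All names here are hypothetical, not from the NSynth codebase.

def bilinear_blend(corners, x, y):
    """Blend four equal-length embedding vectors placed at the corners
    of a unit square ('tl' = top-left, 'tr' = top-right, 'bl' =
    bottom-left, 'br' = bottom-right)."""
    weights = {
        "tl": (1 - x) * y,
        "tr": x * y,
        "bl": (1 - x) * (1 - y),
        "br": x * (1 - y),
    }
    dim = len(corners["tl"])
    return [
        sum(weights[k] * corners[k][i] for k in corners)
        for i in range(dim)
    ]

# Example: two-dimensional toy "embeddings" for four source sounds.
corners = {
    "tl": [1.0, 0.0],   # e.g. guitar
    "tr": [0.0, 1.0],   # e.g. flute
    "bl": [1.0, 1.0],   # e.g. keyboard
    "br": [0.0, 0.0],   # e.g. drums
}

# Touching the exact center weights all four sources equally.
print(bilinear_blend(corners, 0.5, 0.5))  # -> [0.5, 0.5]
```

In the real device, the blended representation is then decoded back into audio; the point of the sketch is only the weighting scheme, which is why sliding the finger produces a smooth transition between the four source sounds.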
This product is not available for purchase, but it is an open source project. You can head to GitHub and download the source code, schematics and design templates for the NSynth Super.