UConn neuroscientist Ed Large built a model of the brain that can predict the future. And then he taught it to dance.
Synchrony, a musical light show driven by artificial intelligence, will go on sale to the general public this fall. But Large and his colleagues at UConn and Oscilloscape, the company he founded to develop Synchrony, believe that rather than being a toy, the device and the science behind it shed real light on how we hear and understand music and language.
“We wanted to do artificial intelligence for sound. Synchrony is designed to act like your brain. It hears the music like you do,” says Large, a professor in the psychological sciences department.
Stevie Wonder’s “Faith” is playing in the background. Large flicks a switch, and Synchrony lights up.
It’s impressive. Synchrony’s lights flash and shift in patterns that look like they’re dancing, and the colors somehow make sense with the music. When the music switches to Rihanna’s “Pon De Replay,” Synchrony’s style switches, too. This thing is to your typical flashing strip of sound-sensitive LEDs what Prince was to your average guitarist. It’s like a DMX show done by a professional, but on a smaller scale. The colors shift and change, and it picks up sub-rhythms in the beat like a good, intuitive dancer. When Large says Synchrony hears like a person, it’s easy to believe. It certainly looks like it.
But even if it hears the music like a human, Synchrony is still made of silicon. The brain of the device lives in a small, flat black box. Inside is a set of computer chips running a special arrangement of software called a neural network. As the name implies, neural networks are meant to mimic networks of brain cells (neurons). Most neural networks use the connections between their artificial neurons to look for patterns in data, so a network built to analyze sound can notice a repeating beat or musical line in a song and learn to anticipate it.
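For the curious, the pattern-finding idea can be sketched in a few lines of code. The example below is emphatically not Oscilloscape’s design, just a minimal illustration of the principle: given a signal that marks where note onsets land, correlating that signal with delayed copies of itself reveals the period at which the pattern repeats. Every name and number in it is an illustrative assumption.

```python
# A minimal sketch of beat-period finding, not Oscilloscape's network.
# Input: an "onset strength" signal sampled at a fixed frame rate.
import numpy as np

def estimate_beat_period(onsets: np.ndarray, frame_rate: float) -> float:
    """Return the strongest repeating period, in seconds."""
    x = onsets - onsets.mean()
    # Correlate the signal with delayed copies of itself; a strong
    # peak at lag k means the pattern repeats every k frames.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    # Ignore implausibly short lags (< 0.25 s, i.e., tempos > 240 BPM).
    min_lag = int(0.25 * frame_rate)
    return (min_lag + int(np.argmax(ac[min_lag:]))) / frame_rate

# Example: pulses every 0.5 s, i.e., a 120 BPM beat.
frame_rate = 100.0
t = np.arange(0, 10, 1 / frame_rate)
onsets = (np.sin(2 * np.pi * 2.0 * t) > 0.99).astype(float)
print(estimate_beat_period(onsets, frame_rate))  # ≈ 0.5
```

Real systems work on far messier audio, but the core move, comparing the present against the recent past, is the same.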
The prevailing theory of how we hear says the brain does essentially the same thing. When sound enters your ear, it gets picked up by the auditory nerve and sent to the brain stem and then to the thalamus, which collects sensory input of all kinds. The thalamus relays the sound signal to the auditory cortex in the brain above your ear, which then sends messages about the sound throughout the brain. A significant number of those pathways connect directly to the motor system.
The motor system’s primary job is to move your body. But many scientists now believe that it also helps us find the beat and metric structure in a piece of music. That’s why most people “feel” the beat in a piece of music. And it’s also how we can dance, according to this model.
But not everybody can dance or march in time to a beat. There’s always that one freshman at band camp…
“Yes, there does seem to be a subset of the population without rhythm. But we don’t know why,” Large says. Neuroscientists guess that in some people, the coupling between the auditory cortex and the motor processing parts of the brain isn’t strong enough to translate into a feeling of movement or beat.
Synchrony has no such difficulty. What it lacks in legs, it makes up for in colored lights. And improvisation. Because yes, Synchrony looks like it improvises. Sections of “Pon De Replay” are very repetitive, yet Synchrony didn’t always do exactly the same thing when Rihanna repeated a line. Like a human dancer, it changed things up a little here and there.
That spontaneity is by design. Just like a human brain, Synchrony’s neural net oscillates even in the absence of music. When there is music, it synchronizes with the beat, but it may find the beat of the bass drum, or the beat of the vocals. Synchronizing is almost like predicting the future, which is something humans do all the time; based on past patterns, we make a prediction about what will happen next and then act accordingly. And Synchrony’s spontaneous oscillations are somewhat similar to human improvisation, or original thoughts; even though we think we know what will happen next, we don’t always respond the same way. Not when we’re chatting with a friend, not when we’re trying to avoid someone coming the other way on the sidewalk, and definitely not when we’re dancing.
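That entrainment idea can also be made concrete. The toy model below is not Synchrony’s actual network; it is a single Kuramoto-style phase oscillator, a standard textbook model, that cycles at its own rate when left alone and pulls into step with a periodic drive when one appears. The coupling strength and frequencies are illustrative assumptions.

```python
# A toy Kuramoto-style phase oscillator, not Synchrony's real model.
import math

def simulate(own_hz=1.8, beat_hz=2.0, coupling=2.0,
             dt=0.001, seconds=30.0):
    theta = 0.3                    # oscillator phase, in radians
    omega = 2 * math.pi * own_hz   # its spontaneous rate, no music needed
    drive = 2 * math.pi * beat_hz  # the music's beat rate
    for n in range(int(seconds / dt)):
        t = n * dt
        # With coupling=0 the oscillator free-runs at own_hz; the
        # sin(...) term pulls its phase toward the beat's phase.
        theta += (omega + coupling * math.sin(drive * t - theta)) * dt
    # Once locked, the phase lag settles to a constant offset.
    return (drive * seconds - theta) % (2 * math.pi)

print(round(simulate(), 2))  # ≈ 0.68 rad: a steady lock on the beat
```

Once locked, the oscillator’s peaks arrive in time with the beat’s, which is what lets a model like this anticipate the next pulse rather than merely react to the last one.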
Dance, and the music that inspires it, are prime examples of this spontaneity. Modern music recording has stripped much of it away, lessening the music’s appeal in the process, and Large believes that restoring that improvisational, human touch could be another application of Synchrony.
For example, in a group of musicians performing live, the drummer is usually the one who keeps the beat. If the drummer wants to, she can speed up or slow down ever so slightly, and all the other musicians follow that lead.
But in recording studios these days, a robotic metronome, the click track, keeps the beat instead of a human drummer. This makes it easier for the producer to record different parts of a song separately and weave them together, but music recorded to such a rigidly predictable beat lacks the subtleties and emotional color that an ever-so-slightly changeable beat can give.
Large says that Synchrony’s technology could change that. Because Synchrony can follow a beat the way a human does, a real live drummer could lay down her part first, speeding up or slowing down as she feels it. The other musicians would listen to it and play or sing their parts on top. And then Synchrony could help the producer mesh it all together with the same precision that the robotic metronome allows, but with more emotional texture.
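As a rough sketch of what that kind of tempo-following might involve (again, a toy under stated assumptions, not Oscilloscape’s code), the oscillator above can be given a second adaptation rule so that it relearns its rate while a simulated drummer gradually speeds up:

```python
# A hedged sketch of tempo-following: a phase oscillator that adapts
# both its phase and its frequency. Gains and tempos are illustrative.
import math

def follow(dt=0.001, seconds=40.0, k_phase=2.0, k_freq=2.0):
    theta = 0.0
    freq = 2 * math.pi * 2.0       # start expecting 120 BPM
    drum_phase = 0.0
    for n in range(int(seconds / dt)):
        t = n * dt
        # The drummer drifts from 120 BPM up to ~132 BPM over 40 s.
        drum_hz = 2.0 + 0.2 * (t / seconds)
        drum_phase += 2 * math.pi * drum_hz * dt
        err = math.sin(drum_phase - theta)
        theta += (freq + k_phase * err) * dt  # chase the phase...
        freq += k_freq * err * dt             # ...and relearn the tempo
    return freq / (2 * math.pi) * 60          # tracked tempo, in BPM

print(round(follow()))  # ~132: it followed the drummer's drift
```

The second rule is the point: a tracker that adapts only its phase keeps slipping behind an accelerating drummer, while one that relearns its frequency settles onto the new tempo, the kind of humanly flexible precision Large describes.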
So this season of lights, we can enjoy Synchrony as a decoration. But Large and his colleagues hope that eventually Synchrony won’t just make your music look better. They hope it will make it sound better, too.