A mathematical model of a violin developed at the Stanford University Center for Computer Research in Music and Acoustics (CCRMA) may eventually help give researchers a better understanding of robotic manipulation, seismic disturbances and aircraft motion, in addition to music synthesis, reports MicroBytes Daily. Although his immediate goal is to produce better string sounds from synthesis, Chris Chafe, the CCRMA composer who built the violin model using a Xerox Lisp computer, points out that the techniques and algorithms being developed may well turn out to be applicable to other disciplines.

"To simulate a vibrating mechanical system in a computer, you can take a delay line, put an impulse across the line, then start the pulse recirculating around the delay line," Chafe explains. "What you get, if you do it through time, is something that sounds like a plucked string. With a few niceties, you can make that sound really good."

While the model's component parts include the violin body, strings and bow, Chafe's current focus is on bow-string simulation. The bowing algorithms he is using, Chafe says, enable the bow to read back from the string what is going on at any one time. The bow can then decide what sort of frictional reaction it should make, alternately sticking to the string or slipping over it, just as a bow wielded by a human violinist does in real life. Although he admits to taking a few shortcuts ("I'm using a bow that has only one hair, because every hair you add slows the system down"), Chafe says that the system so far is reacting very much like a real violin.

The real beauty

"When the vibrations start up, there's a short period of instability where the transients that are generated are remarkably lifelike," he says. "If a string player hears that, he can tell intuitively how much pressure and velocity are being exerted on the string. That's the real beauty of this technique. Instead of manipulating the spectrum directly, you have an intuitive handle."
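The article does not give Chafe's exact recipe, but the delay-line-with-recirculating-impulse idea he describes is the basis of the well-known Karplus-Strong plucked-string algorithm. Below is a minimal sketch under that assumption: the delay line is filled with a noise burst (the "impulse"), and as each sample recirculates it is averaged with its neighbour, which acts as a loss filter so the tone rings and decays like a plucked string. All names and parameter choices here are illustrative, not taken from Chafe's system.

```python
import random

def pluck(freq_hz, duration_s, sample_rate=44100):
    """Plucked-string sketch via a recirculating delay line
    (Karplus-Strong): the delay-line length sets the pitch, and a
    two-point average applied on each recirculation makes the
    sound decay naturally."""
    delay_len = int(sample_rate / freq_hz)   # samples per period -> pitch
    # the initial "impulse": a burst of noise filling the delay line
    line = [random.uniform(-1.0, 1.0) for _ in range(delay_len)]
    out = []
    for _ in range(int(duration_s * sample_rate)):
        out.append(line[0])
        # average adjacent samples: a simple loss filter, so each
        # trip around the loop softens and quiets the waveform
        new_sample = 0.5 * (line[0] + line[1 % delay_len])
        line = line[1:] + [new_sample]
    return out

# half a second of a 440 Hz "pluck"
samples = pluck(440.0, 0.5)
```

With "a few niceties" (a better loss filter, fractional delay for exact tuning) this simple loop does indeed sound convincingly string-like, which is what makes the approach attractive for synthesis.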
The control parameters on the model are not harmonics and keys, but exactly the parameters of bow velocity, bow pressure, bow-string length, and so on. Just by listening, a string player can guess what adjustments need to be made. Since the system does not operate in real time, Chafe has had to generate synthetic control envelopes, developing a rule-based expert system that represents the trained aspects of a player and creates the instrument control envelopes. Chafe has also analysed and captured player gestures and applied them to the system, and it is here that his work may eventually be useful for robotic control as well.

One of the biggest problems Chafe faces is this inability to operate in real time. "What the technique demands is a very, very high-speed general-purpose computer, and such a box just isn't available yet," he says. The model might some day run on digital signal processors, Chafe says, but that approach "is just beginning right now. We're looking at at least a year or two of lag."
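The article does not describe the expert system's rules, but the kind of control envelope it produces can be sketched: a time-varying curve for one physical parameter, shaped by simple player-like rules (ease into the stroke, sustain, lift off at the end). The function, rules, and units below are hypothetical illustrations, not Chafe's actual system.

```python
def bow_envelope(duration_s, attack_s=0.1, release_s=0.1,
                 peak=0.3, rate=1000):
    """Illustrative bow-velocity control envelope, sampled at
    `rate` points per second, built from three player-like rules:
    ramp up during the attack, hold a steady sustain, then ramp
    down during the release."""
    n = int(duration_s * rate)
    env = []
    for i in range(n):
        t = i / rate
        if t < attack_s:                      # rule: ease into the stroke
            env.append(peak * t / attack_s)
        elif t > duration_s - release_s:      # rule: lift the bow at the end
            env.append(peak * (duration_s - t) / release_s)
        else:                                 # rule: steady sustain
            env.append(peak)
    return env

# one second of bow-velocity control for a single stroke
velocity = bow_envelope(1.0)
```

A full rule base would emit envelopes like this for each parameter (velocity, pressure, contact position) and coordinate them per note, which is also why captured player gestures are a natural substitute input, and why the same envelopes could one day drive a robotic bowing arm.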