


There is a synthiness (is that a word?) to most of their products, but I'm thinking that's the trade-off for them being so playable. Sound-wise, Audio Modeling products are some of the blankest blank canvases you can have in a VI, which is not a bad thing, but it certainly puts a demand on the skills of the operator.

A few suggestions for combating the synthiness:

1. Cycle through the sax models while your project is playing. Some sax models just sound better with certain songs. On my current project in production, some of my songs are using Tenor Sax 0, some are using a Warm Sax, some are using Sax 4, etc.

2. Once I find the best-sounding sax for the project, I then adjust formant, temperament, etc.

3. As good as Audio Modeling products are, you almost have to treat them like they were cheap instruments recorded with a cheap mic to get the most out of them, which means not being afraid to hit 'em hard with EQ. Sometimes the EQ needed is broad, and sometimes it is highly surgical.

The best way to defeat the synthiness, in my opinion, is actually in the performance - by meticulous editing and note-shaping, just like you would do with any sampled instrument. It doesn't change the sound, but it does change the way people perceive it.

Whether I record Audio Modeling saxes with an expression controller or a breath controller, I always go back and edit just about every note. I'm not a sax player, so I can't input the notes on the fly exactly as a sax player would. So after I record the performance as best I can, I go back and listen to it a couple of measures at a time and start thinking about how a sax player would approach each part. That's when I determine which notes to tie together with legato transitions, figure out where breaths would be taken, draw in how notes naturally tail off, estimate how much breath a note would be attacked with, create imperfections, etc.
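Just to illustrate the kind of shape I'm describing, here is a rough sketch only, not a recipe: in practice these curves get drawn by hand in the DAW, and the step counts and levels below are made-up numbers. The idea is a single note's breath/expression contour as CC values - a quick attack push, a settled sustain, a natural tail-off, and a little randomness so it isn't machine-perfect.

```python
import random

def breath_curve(num_steps=64, attack_frac=0.15, release_frac=0.25,
                 peak=110, sustain=90, jitter=3, seed=None):
    """Sketch of a shaped breath/expression contour for one note:
    a quick attack ramp, a settled sustain, a natural tail-off,
    plus small imperfections. Returns CC values (0-127), one per step.
    All parameter names and defaults are illustrative, not a standard."""
    rng = random.Random(seed)
    attack_steps = max(1, int(num_steps * attack_frac))
    release_steps = max(1, int(num_steps * release_frac))
    sustain_steps = max(0, num_steps - attack_steps - release_steps)

    values = []
    # Attack: ramp up to the initial push of air.
    for i in range(attack_steps):
        values.append(peak * (i + 1) / attack_steps)
    # Sustain: settle below the attack peak and drift gently downward.
    for i in range(sustain_steps):
        drift = (peak - sustain) * (1 - i / max(1, sustain_steps))
        values.append(sustain + drift * 0.3)
    # Release: let the note tail off naturally instead of stopping dead.
    last = values[-1] if values else sustain
    for i in range(release_steps):
        values.append(last * (1 - (i + 1) / release_steps))
    # Imperfections: tiny random wobble, clamped to the MIDI 0-127 range.
    return [max(0, min(127, round(v + rng.uniform(-jitter, jitter))))
            for v in values]

if __name__ == "__main__":
    # Example: a short note's worth of values you could paste into a CC lane.
    print(breath_curve(num_steps=32, seed=1))
```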
