Old 03-12-2011, 10:06 AM   #95
chip mcdonald
Human being with feelings
Join Date: May 2006
Location: NA - North Augusta South Carolina
Posts: 3,664

Originally Posted by JohnnyMcFly:
"to truly model an amp. If we could then why has no one made a port of SPICE that models and runs in real time? Until then VST modeling is all just make"
As I said in your forum, I think the SPICE approach is ultimately what needs to happen. It doesn't have to be in real time today - it would be interesting just to hear what the approach is capable of, IMO. I think it will become feasible at some point in the future with cascaded parallel processors.

I believe, though, that another pet theory of mine - that chaotic math is attractive to the human mind because contemplating something that exceeds human cognition presents more possibilities - is at the root of both the "uncanny valley" in robots and of realism in amp/"analog audio" simulation.

You just don't know exactly what a Strat and a Fuzz Face into a cranked Marshall are going to do when you hit a note. Hit an open chord 4 times and it's going to be near impossible to make all four sound *exactly* the same; minute differences are amplified in non-linear ways with a complexity that is not random, but beyond human computation. I think that is what makes staring at the ocean, the sky, fire, etc. so enthralling.
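Not from the post, but the "minute differences, amplified non-linearly" point is easy to demonstrate. A toy numpy sketch, where everything (the tanh waveshaper, the drive value, the 200 Hz test tone, the 0.1% level change) is my own stand-in rather than any real amp model: two nearly identical "picks" come out of the nonlinearity with a measurably different harmonic balance.

```python
import numpy as np

def waveshape(x, drive=8.0):
    """tanh soft clip - a crude stand-in for a fuzz/cranked-amp nonlinearity."""
    return np.tanh(drive * x)

fs = 44100
n = 4410                               # 100 ms of audio
t = np.arange(n) / fs
note = 0.5 * np.sin(2 * np.pi * 200.0 * t)

a = waveshape(note)                    # one pick...
b = waveshape(note * 1.001)            # ...and one picked 0.1% harder

# at fs=44100 and n=4410 the fundamental and odd harmonics land exactly
# on bins 20 (200 Hz), 60 (600 Hz), 100 (1 kHz)
ha = np.abs(np.fft.rfft(a))[[20, 60, 100]]
hb = np.abs(np.fft.rfft(b))[[20, 60, 100]]
print(hb / ha)                         # each harmonic shifts by its own amount
```

A linear gain stage would shift every harmonic by the same 0.1%; here each harmonic moves by a different amount, which is the start of the "never exactly the same twice" behavior.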

I don't know what the clock-cycle overhead would be for a function call to grab a random number to modify the balance of the IR mix, resonance, overtones, etc. in an aperiodic fashion - I don't think it would take much at all - but with something that introduces a tiny bit of chaotic nuance over the sub-5-6 kHz range, you would have something that suddenly feels "alive". In particular, if something like that were applied to the "problem region" of the speech range (say 500 Hz - 3 kHz), I bet an "X factor" would suddenly come into play.
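Again not from the post, but here is one way the speech-band idea could be sketched in numpy. All the specifics are my own assumptions: a crude FFT brick-wall split instead of a real crossover, white noise standing in for a guitar signal, and an arbitrary +/- 0.5 dB of smoothed random drift applied only to the 500 Hz - 3 kHz band.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
n = fs                                  # one second
x = rng.standard_normal(n) * 0.1        # stand-in for the input signal

# split off the 500 Hz - 3 kHz "speech" band (crude zero-phase FFT mask)
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, 1 / fs)
mask = (freqs >= 500.0) & (freqs <= 3000.0)
band = np.fft.irfft(np.where(mask, X, 0), n)
rest = x - band

# slowly varying, aperiodic gain: white noise smoothed by a long moving
# average, normalized, then scaled to at most +/- 0.5 dB of drift
drift = np.convolve(rng.standard_normal(n), np.ones(2048) / 2048, mode="same")
drift = drift / (np.max(np.abs(drift)) + 1e-12)
gain = 10 ** (0.5 * drift / 20)

y = rest + band * gain                  # only the speech band "breathes"
```

The per-sample cost in a real plugin would be one multiply plus however the smoothed noise is generated, which supports the post's hunch that the overhead is small.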

I'm particularly certain this pertains to the pitch of overtones when recording to multitrack tape: that air-band, "ear tickling" high-end smoothness I attribute to imperfections compounded every time the machine's mechanism is applied to the process, be it micro wow and flutter, non-coherence of frequency response across octaves, etc.
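The micro wow-and-flutter part of that can also be sketched. This is my own toy version, not a tape model: a random speed curve wandering around 1.0 by an arbitrary +/- 0.1%, integrated into a read position and resampled with linear interpolation, the way a tape transport would smear pitch slightly over time.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 44100
n = fs // 2                             # half a second
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 440.0 * t)       # stand-in "recorded" tone

# slow random speed variation around 1.0 (micro wow/flutter)
speed = 1.0 + 0.001 * np.convolve(rng.standard_normal(n),
                                  np.ones(4096) / 4096, mode="same")

# integrate speed to get a wandering read position, then resample
pos = np.clip(np.cumsum(speed), 0, n - 1.001)
i = pos.astype(int)
frac = pos - i
y = x[i] * (1 - frac) + x[i + 1] * frac  # linear-interpolated playback
```

Bouncing between machines would apply this (and the frequency-response non-coherence) repeatedly, compounding the imperfections the way the post describes.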

I also attribute that to reverb realism: when you crank up the "diffusion" parameters on algorithmic reverbs you gain accuracy but lose the realism as the complexity goes down - hence the "Lexicon vs. IR" confusion. Again, if a slight random variable were used to "perturb" the response characteristics of an IR, I think the result would sound more natural. You never hear the exact same room response in real life, because your head is never perfectly still in an acoustic space, and whatever is exciting the room probably has its own set of non-linear, near-random behaviors (a human voice, for example, is never perfectly replicable).
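One possible reading of the IR-perturbation idea, sketched in numpy with everything invented for illustration: a synthetic decaying-noise "room" stands in for a real IR, and `perturb` (my own hypothetical helper) applies a small smoothed random ripple across its spectrum, as if the listener or source had shifted slightly, so no two renders use exactly the same response.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 44100
n_ir = 4096

# synthetic stand-in IR: exponentially decaying noise "room"
ir = rng.standard_normal(n_ir) * np.exp(-np.arange(n_ir) / (0.05 * fs))

def perturb(ir, rng, amount=0.02):
    """Return a slightly different copy of the IR: a small smoothed random
    gain ripple across its spectrum (as if the head/source moved a little)."""
    spec = np.fft.rfft(ir)
    ripple = 1 + amount * np.convolve(rng.standard_normal(spec.size),
                                      np.ones(32) / 32, mode="same")
    return np.fft.irfft(spec * ripple, n_ir)

x = rng.standard_normal(fs // 10) * 0.1  # stand-in dry signal
y1 = np.convolve(x, perturb(ir, rng))    # two renders of the same reverb...
y2 = np.convolve(x, perturb(ir, rng))    # ...that are close but never identical
```

A real plugin would re-perturb per note or per block rather than per render, but the principle is the same: the response stays recognizably "the room" while never repeating exactly.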

Blah blah blah. Or it could just be I haven't had enough caffeine yet.
- guitar lessons -
Experiencing Guitar: Essays from Teaching by Chip McDonald