05-24-2021, 03:23 PM | #1 |
Human being with feelings
Join Date: Apr 2021
Posts: 11
|
Ambisonic IR
Hi, I am trying to figure out the best and easiest way to create ambisonic impulse responses for software like AmbiVerb and Wwise Convolution.
I would like to 1. record an IR for a specific place in the sphere where I will pan my monophonic sound source, and 2. record an IR to be used more freely across the whole sphere. For no. 1, should I place the speaker playing the white noise/sine sweep at the position where I want the virtual sound source and record it there? For no. 2, should I record several IRs around the location and combine them? Or e.g. record from a 2 m distance and use that for the entire sphere? Does anyone have experience with this? |
05-25-2021, 09:55 AM | #2 |
Human being with feelings
Join Date: Jan 2009
Location: Montreal, Canada
Posts: 169
|
If a "true stereo" IR reverb is four IRs (a 2*2 matrix), a "true 1st order" IR reverb would be 16 IRs (a 4*4 matrix). It's been on my "to experiment" list for the past ten years...
https://www.avosound.com/en/tutorial...ono-and-stereo |
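The channel arithmetic scales quickly with order; a small sketch (plain Python, function names are mine):

```python
# An order-N ambisonic stream has (N + 1)**2 channels, and a "true"
# (full matrix) convolution reverb needs one IR per input/output
# channel pair -- hence 2*2 = 4 for true stereo, 4*4 = 16 for 1st order.

def ambisonic_channels(order: int) -> int:
    """Number of ambisonic channels at a given order (e.g. 4 at 1st order)."""
    return (order + 1) ** 2

def matrix_ir_count(order: int) -> int:
    """IRs needed for a full channel-to-channel convolution matrix."""
    n = ambisonic_channels(order)
    return n * n

for order in range(4):
    print(order, ambisonic_channels(order), matrix_ir_count(order))
```

At 3rd order that is already a 16*16 matrix, i.e. 256 IRs, which is why the full-matrix approach gets impractical so fast.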
05-25-2021, 11:02 AM | #3 | |
Human being with feelings
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
|
Quote:
Anyways, for number 1 you do want to place the speaker at the location you want the emitter at, and the mic in the spot where you want the listener to be located. In my understanding, if you move that emitter around the listener it will be like rotating the entire room, as you can't virtually move that speaker after the fact. However... Zylia just did a very interesting demo of synthesizing multiple ambisonic recording positions in a room. I assume the same idea could be used to synthesize multiple ambisonic IRs, especially since they used Wwise and Unity to do it. That is my answer to question 2. It is labor- and processor-intensive, so make sure it's going to pay off in the end.
__________________
mymusic http://music.darylpierce.com mywork http://production.darylpierce.com mypodcast https://youtube.com/@ultimatesoundtest |
|
05-27-2021, 03:03 AM | #4 |
Human being with feelings
Join Date: Apr 2021
Posts: 11
|
My main goal is to do a 360 video and audio recording and then add sound sources with a reverb that is coherent with the space.
Knowing that I can make an ambisonic IR with the speaker placed at the position of the panned mono source must be my primary focus, now that I am getting into the complexity of this problem. AmbiVerb and Wwise Convolution let you import a 4-channel ambisonic IR recording. Regarding panning a sound source across the entire sphere, the practical course of action seems to be to use one IR and trust that the localization information from the direct sound will overshadow the mismatch with the spatial reverb. Audio Ease offers a set of ambisonic reverbs in their 360pan suite, but I just thought it would be great to enhance the realism with the correct reverb information. I have also heard of examples where you re-render the IR using a decoder plugin, do a soundfield rotation, and process the next mono sound source at the new position. And of some who blur the localization of the direct sound if it clashes with the reverb. Is it possible at all to create a 16-channel IR file and use it with a 360 panner in any software today? Last edited by Jensus; 05-27-2021 at 06:25 AM. |
05-27-2021, 05:16 AM | #5 |
Human being with feelings
Join Date: Feb 2006
Location: France
Posts: 914
|
I don't know how to create the 16-channel IR file, but to process it you can use the X-MCFX Convolver or MConvolutionEZ from MeldaProduction, both free:
http://www.angelofarina.it/X-MCFX.htm https://www.meldaproduction.com/MFreeFXBundle |
05-27-2021, 08:02 AM | #6 | |
Human being with feelings
Join Date: Jan 2009
Location: Montreal, Canada
Posts: 169
|
Quote:
For the IR capture and processing, I would use Logic Pro's Impulse Response Utility with the "Quadraphonic" preset for an A-Format mic or "Quadraphonic B-Format encoded" preset for a B-Format microphone. Both have 16 IRs. For the convolution with the IRs, I would probably use X-Volver Essential. Signal flow: For A-Format microphone IRs: B-Format -> Decode to position of emitters -> A-Format IRs convolution -> B-Format encoding B-Format microphone IRs: B-Format -> Decode to position of emitters -> B-Format IRs convolution |
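The simpler "mono in, ambisonic IR" routing (one IR per output channel, rather than the full 16-IR matrix) can be sketched like this — a toy NumPy/SciPy example; the function name is mine and the IR is a synthetic placeholder:

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_mono_with_foa_ir(mono, foa_ir):
    """Convolve a mono signal with a 4-channel B-format IR.

    mono:   shape (n,)
    foa_ir: shape (4, m) -- one IR per ambisonic channel
    returns the B-format wet signal, shape (4, n + m - 1)
    """
    return np.stack([fftconvolve(mono, foa_ir[ch]) for ch in range(4)])

# Toy example: a unit impulse "IR" should pass the signal through unchanged.
fs = 48000
mono = np.random.randn(fs)             # 1 s of noise as the dry source
ir = np.zeros((4, 256)); ir[:, 0] = 1  # identity IR on all four channels
wet = convolve_mono_with_foa_ir(mono, ir)
```

The wet output then carries the room's spatial character at the single emitter position where the IR was captured, which is exactly the limitation discussed above.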
|
05-27-2021, 06:43 PM | #7 |
Human being with feelings
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
|
The problem you are going to continually run into with what you describe is that the sound source position of the IR determines the localization of the IR. The relationship between the impulse (speaker, acetylene balloon, clapper) and the ambisonic microphone is fixed when you record that IR. You can rotate it, but you can't really move the speaker around after the fact, so to speak. You could potentially use the information learned from the IR to synthesize the properties of the room and create a reverb model (using something like CATT) that would allow you to freely position sources in that room. It's an interesting question whether there is an IR reverb that models in that fashion. Some of the shoebox options (like the one in IEM) offer something like this. Dear VR Pro allows free positioning within certain hard-coded (not user-created) spaces.
The easiest would be to get a good ambisonic IR of the space and then tweak that to simulate your different positions, or just make several IRs with the source in the likely places you'll want to put your instruments. As an aside, I'm loving the Melda MConvolutionEZ for its simplicity and non-crashiness (for me anyways) in comparison to X-Volver. I use GratisVolver to create my impulses as I am not on a Mac.
__________________
mymusic http://music.darylpierce.com mywork http://production.darylpierce.com mypodcast https://youtube.com/@ultimatesoundtest |
05-28-2021, 02:19 AM | #8 |
Human being with feelings
Join Date: Apr 2021
Posts: 11
|
Yes, it will take some trial and error to find a sufficient solution. Perhaps some situations can be handled with simple tools and others require more precision. Great ideas on where to dive in.
I will primarily record outside for the soundscapes I am working with, so it is also important that it is relatively easy to handle in that situation. Just an idea.
- I have seen people using the possibility of summing the IRs. This, I guess, would make the reverb less directional and cover the IRs of a larger area. https://www.openair.hosted.york.ac.uk/?page_id=483
- In the AmbiVerb tutorial it says you should record one IR 2 m from the emitter, e.g. a speaker on stage and the microphone 2 m from it in the audience. This could be a bit low-resolution imo. The sweep file is only 7 sec, but otherwise I am thinking of sending a white noise signal from a recorder directly into the speaker and then pressing stop, to get a broadband IR signal. https://www.noisemakers.fr/faq/#1524...-004522eb-722b
- If I record 4 IRs (N, S, E, W) in the horizontal plane around the ambisonic microphone at a 2 m distance and sum them, the convolution would then be a panned mono signal with an ambisonic IR. Would that give the reverb a rough directivity during convolution, according to the logic of https://www.avosound.com/en/tutorial...no-and-stereo? |
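A rough sketch of the summing idea, assuming equal-length FuMa-ordered (W, X, Y, Z) IRs; everything here is synthetic and the function name is hypothetical:

```python
import numpy as np

def average_foa_irs(irs):
    """Average several 4-channel B-format IRs of equal length.

    irs: list of arrays, each shape (4, m). Averaging keeps the overall
    W (omni) level while the directional X/Y/Z components of opposing
    capture positions partially cancel, blurring localization.
    """
    stacked = np.stack(irs)  # shape (k, 4, m)
    return stacked.mean(axis=0)

# Toy check with two IRs whose Y (left/right) components are opposite:
m = 128
ir_east = np.zeros((4, m)); ir_east[0, 0] = 1.0; ir_east[2, 0] = 1.0
ir_west = np.zeros((4, m)); ir_west[0, 0] = 1.0; ir_west[2, 0] = -1.0
diffuse = average_foa_irs([ir_east, ir_west])
```

As the toy case shows, opposing directional components cancel in the average while the omni component survives, which is the "less directional, larger area" effect described above.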
05-28-2021, 03:07 AM | #9 |
Human being with feelings
Join Date: Jan 2009
Location: Montreal, Canada
Posts: 169
|
|
05-28-2021, 05:40 AM | #10 |
Human being with feelings
Join Date: Apr 2021
Posts: 11
|
Ok, equally distributed from the front?
I will be looking into your description of Logic Pro's Impulse Response Utility. Kewl. FYI, documentation on the Wwise convolution can be seen here: https://www.audiokinetic.com/library...reverb_plug_in and https://www.audiokinetic.com/learn/videos/H50NRzZnd5k/ |
05-28-2021, 10:37 AM | #11 |
Human being with feelings
Join Date: Jan 2009
Location: Montreal, Canada
Posts: 169
|
|
05-29-2021, 10:29 AM | #12 |
Human being with feelings
Join Date: Apr 2009
Location: Berlin, Germany
Posts: 1,248
|
Some notes I just read in the SPARTA Matrix Convolver source code:
"Example 1, spatial reverberation: if you have a B-Format/Ambisonic room impulse response (RIR), you may convolve it with a monophonic input signal and the output will exhibit (much of) the spatial characteristics of the measured room. Simply load this Ambisonic RIR into the plug-in and set the number of input channels to 1. You may then decode the resulting Ambisonic output to your loudspeaker array (e.g. using sparta_ambiDEC) or to headphones (e.g. using sparta_ambiBIN). However, please note that the limitations of lower-order Ambisonics for signals (namely, colouration and poor spatial accuracy) will also be present with lower-order Ambisonic RIRs; at least, when applied in this manner. Consider referring to Example 3, for a more spatially accurate method of reproducing the spatial characteristics of rooms captured as Ambisonic RIRs.

Example 3, more advanced spatial reverberation: if you have a monophonic recording and you wish to reproduce it as if it were in your favourite concert hall, first measure a B-Format/Ambisonic room impulse response (RIR) of the hall, and then convert this Ambisonic RIR to your loudspeaker set-up using HOSIRR. Then load the resulting rendered loudspeaker array RIR into the plug-in and set the number of input channels to 1. Note it is recommended to use HOSIRR (which is a parametric renderer), to convert your B-Format/Ambisonic IRs into arbitrary loudspeaker array IRs as the resulting convolved output will generally be more spatially accurate when compared to linear (non-parametric) Ambisonic decoding; as described by Example 1." |
08-17-2021, 03:48 AM | #13 | |
Human being with feelings
Join Date: Feb 2012
Location: Long Island & Rochester, NY, USA
Posts: 3
|
Quote:
I have not tried this yet, but Wave Arts offers a completely free and cross-platform application for making IR files using white noise to capture a broadband response. I think its manual said the number of channels it can handle is only limited by the channel count of your interface. They also co-created the excellent freeware true-stereo Convology XT convolution reverb plug-in, which I do use. Pick your platform from the popup. MIs Tool is the name of the IR capture app... https://wavearts.com/downloads/ In addition, you'll find the manual for Apple's Impulse Response tool contains numerous graphical, tried-and-true suggestions for multi-speaker, multi-microphone, and single-mic-and-speaker capture methods. I bet something in there can be adapted to your needs. https://tinyurl.com/AppleIR I hope this is of use. Stay well! Cheers, Glenn in Rochester, NY, USA |
|
11-24-2021, 03:54 AM | #14 | |
Human being with feelings
Join Date: Jul 2008
Location: Athens / Greece
Posts: 625
|
Quote:
The "O3A Reverb - Shaped Convolution" by Blue Ripple Sound takes this road; you can have a look at the documentation found at: https://www.blueripplesound.com/site...ide_v2.2.0.pdf You can use that together with a shoebox early-reflection modeler like the "IEM RoomEncoder" or even Blue Ripple Sound's own "O3A Shoebox" processor. For an early-reflections/late-reflections modeling solution based completely on the IEM suite, you can use the "IEM RoomEncoder" together with the "IEM FdnReverb". I tried it using a mono signal encoded to HOA, then loaded an HOA IR I made for testing, and it gives erroneous output. Whatever the position of the source in the ambisonic field and whatever the behavior of the test IR, the plugin places the reflections in the wrong locations. How did you make it work? I test the convolution algorithms' directivity using a test IR file that I created at the SoundFellas Immersive Audio Labs. It's a rhombicuboctahedron impulse emission, one pop per sector per second, and you can find it here: https://1drv.ms/u/s!AoFZ1MP3ewRggqY3...X33nw?e=QUgMdF. My tests also output the wrong directivity in ReaVerb when I move the mono source below the mid horizontal plane on the ambisonic encoder/panner. If you get correct results, please let me know how you did it, because I think something is going wrong in the engine of this plugin that messes with the directivity. |
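For anyone wanting to sanity-check an encoder/panner below the horizon without a full plugin chain, the first-order encoding gains themselves are easy to verify numerically. A sketch assuming the ACN/SN3D (AmbiX) convention; both helper names are mine:

```python
import numpy as np

def foa_encode_gains(azimuth_deg, elevation_deg):
    """First-order ambisonic panning gains in ACN order (W, Y, Z, X),
    SN3D normalization."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = 1.0
    y = np.sin(az) * np.cos(el)
    z = np.sin(el)
    x = np.cos(az) * np.cos(el)
    return np.array([w, y, z, x])

def estimate_direction(foa_frame):
    """Recover azimuth/elevation (degrees) from the X/Y/Z components."""
    w, y, z, x = foa_frame
    az = np.degrees(np.arctan2(y, x))
    el = np.degrees(np.arctan2(z, np.hypot(x, y)))
    return az, el

gains = foa_encode_gains(30.0, -20.0)  # a source below the horizon
az, el = estimate_direction(gains)
```

If a convolver or panner preserves these components correctly, a source encoded below the mid horizontal plane (negative Z) should decode back to a negative elevation; a sign flip on Z is exactly the kind of error this catches.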
|
11-24-2021, 12:23 PM | #15 | ||
Human being with feelings
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
|
Joystick, my friend, this is a mind-bender, as the PDF manual description states.
Quote:
Regarding MConvolutionEZ... Quote:
I tested using a pink noise generator run into SoundParticles SpaceController, which I set to stereo input and FOA output. I then ran a 4-channel instance of MConvolutionEZ with an FOA AmbiX impulse I made myself from a tone sweep through a Core Sound TetraMic using GratisVolver. Everything perceptually seemed to be coming from the correct locations. I then loaded up a parallel track and fed it the same pink noise through the same panner but with SPARTA MultiConv. The perceptual results were the same and the two sources nulled perfectly (to my great relief). Perhaps the problem exhibits only at higher orders?
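The null test itself is easy to reproduce offline. A minimal sketch, assuming the two renders are sample-aligned; here both paths deliberately use the same convolution call, so the null is perfect by construction — in practice render_b would be captured from the second plugin:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
dry = rng.standard_normal(48000)           # noise stand-in for the test signal
ir = rng.standard_normal((4, 512)) * 0.01  # stand-in 4-channel FOA impulse

# Render the same material through two convolution paths.
render_a = np.stack([fftconvolve(dry, ir[ch]) for ch in range(4)])
render_b = np.stack([fftconvolve(dry, ir[ch]) for ch in range(4)])

# Null test: flip the polarity of one render and sum. A residual far below
# the signal level means the two convolvers are numerically equivalent.
residual = render_a - render_b
null_db = 10 * np.log10(np.mean(residual**2) + 1e-30)
```

Anything much above the noise floor in `null_db` (say, better than -100 dB for a real pair of plugins) would indicate the two convolvers diverge somewhere in the chain.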
__________________
mymusic http://music.darylpierce.com mywork http://production.darylpierce.com mypodcast https://youtube.com/@ultimatesoundtest |
||
11-26-2021, 10:21 PM | #16 |
Human being with feelings
Join Date: May 2019
Posts: 376
|
I feel dumb, now. What I do is test out every possible combination of plugin and fiddle around with the settings until it sounds right.
|
02-03-2022, 02:53 AM | #17 |
Human being with feelings
Join Date: Feb 2007
Posts: 41
|
Just seen this thread. This is how I did it. As mentioned, a 4 x 4 matrix is needed for 1st order, and the channel count gets very high, very quickly, for higher orders. It sounded amazing on our large 3D rig, set up for Sounds in Space.
https://youtu.be/KhuW6xQhf6M?t=46m12s Slides also available on this page: https://www.brucewiggins.co.uk/?page_id=881 cheers Bruce |
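For reference, the 4 x 4 matrix convolution for 1st order mentioned above can be sketched in a few lines (NumPy/SciPy; the function name is mine):

```python
import numpy as np
from scipy.signal import fftconvolve

def matrix_convolve(foa_in, ir_matrix):
    """'True ambisonic' convolution: a 4x4 matrix of IRs at 1st order.

    foa_in:    shape (4, n)    -- B-format input
    ir_matrix: shape (4, 4, m) -- ir_matrix[o][i] maps input ch i to output ch o
    returns    shape (4, n + m - 1)
    """
    n_out, n_in, m = ir_matrix.shape
    out = np.zeros((n_out, foa_in.shape[1] + m - 1))
    for o in range(n_out):
        for i in range(n_in):
            out[o] += fftconvolve(foa_in[i], ir_matrix[o, i])
    return out

# Sanity check: a diagonal matrix of unit impulses passes input through.
foa = np.random.randn(4, 1000)
irs = np.zeros((4, 4, 64))
for ch in range(4):
    irs[ch, ch, 0] = 1.0
wet = matrix_convolve(foa, irs)
```

At order N this is (N+1)**2 squared convolutions per block, which is why the channel count (and CPU cost) explodes at higher orders.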
02-14-2022, 05:00 AM | #18 | |
Human being with feelings
Join Date: Jul 2008
Location: Athens / Greece
Posts: 625
|
Quote:
If you want to apply some extra engineering testing or a scientific experiment, you just take a break from that and do the other. ;-) Here is my process... When I discover something new that I want to add to my production process, I usually take one week off to really dig into it. I start by searching online, usually Wikipedia and any other info that is readily accessible. Then I have the keywords and phrases I need to search the journals and libraries of the Audio Engineering Society or the Acoustical Society of America, or even search for open text on PubMed, Google Scholar, and other portals. In the beginning, I was scared by the vast amount of mathematics, physics, life sciences, and computer science material that you find in those papers. But later on, I learned how to read them "diagonally" to get what I need. From all this effort I finally get, if I need to, one or two books, which I then schedule to read within the month, while the info is vivid in my mind. Then I know how to select technologies, plugins, tools, etc. So my questions to any plugin sales department are usually directed to the engineers, hehehe! :-) That's how I do it, and it has served me well for the last 20 years or so :-P Hope I gave you some insight. |
|
02-14-2022, 06:28 AM | #19 | |
Human being with feelings
Join Date: Jul 2008
Location: Athens / Greece
Posts: 625
|
Quote:
What a nice presentation, thanks for sharing this, I really enjoyed watching it. I approach IR production using a similar philosophy and it sounds great. This is something that we are making for our Echotopia soundscape designer application and it sounds very realistic 3D-wise. |
|
06-14-2022, 06:35 AM | #20 | |
Human being with feelings
Join Date: Jul 2008
Location: Athens / Greece
Posts: 625
|
Quote:
The only way to check this is to use the test files I posted here: https://forum.soundfellas.com/viewtopic.php?t=51, and to be sure, you have to check it for every different routing configuration you use with the convolvers. The free convolver seems to produce wrong results, and the paid convolver seems to produce correct results when the input/plugin channel/output numbers match, there is a signal present in all channels declared throughout the chain, and the kernel topology is "Mono to Stereo", which is strange, but it works. I reported all my findings to MeldaProduction and am waiting for their reply, or hopefully a fix for both the EZ and MB versions of their convolver. Btw, their convolvers seem to be the best regarding performance and resource management as software applications. I have great results measuring the RT CPU load, so if they fix the ambisonics behavior they will probably become my favorite tool for the job. Reaper's own convolution plugin also handles HOA erroneously. I will post my findings here when I have concluded my research. |
|
06-15-2022, 08:53 PM | #21 | |
Human being with feelings
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
|
Quote:
__________________
mymusic http://music.darylpierce.com mywork http://production.darylpierce.com mypodcast https://youtube.com/@ultimatesoundtest |
|
10-08-2022, 08:54 AM | #22 | |
Human being with feelings
Join Date: Jul 2022
Posts: 4
|
Quote:
I tried to use the "wet sweep deconvolve" mode with a 4-channel sweep (B-format) for the wet sweep, and tried mono, stereo, and 4-channel sweeps for the dry sweep, the same length as the wet one. The .wav files it exports are not usable impulse responses. |
|
10-09-2022, 06:07 PM | #23 |
Human being with feelings
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
|
What seems to be the problem with the impulse output? I just tried it and the result seems to work fine for me. It does need to be trimmed, as the actual IR winds up in the middle (timewise) of an output file as long as the dry source sweep and the wet recorded sweep combined. The results are FuMa as opposed to AmbiX.
__________________
mymusic http://music.darylpierce.com mywork http://production.darylpierce.com mypodcast https://youtube.com/@ultimatesoundtest |
10-10-2022, 11:18 AM | #24 | |
Human being with feelings
Join Date: Jul 2022
Posts: 4
|
Quote:
I tried to deconvolve with the "Wet sweep deconvolve" mode, with a 4-channel inverted dry sweep .wav and a FuMa wet sweep (non-inverted) .wav file. Am I doing something wrong? The two files are exactly the same length (10 s files). Thank you for your help! |
|
10-10-2022, 08:17 PM | #25 |
Human being with feelings
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
|
Play the dry sweep file (Low to High) through the room/gear that you wish to get an impulse from. Record this ambisonically (FUMA).
Set GratisVolver to "Wet sweep deconvolve". Put the inverted dry sweep file into GratisVolver as the "Inverted dry sweep WAV-file". Load your recorded sweep into the "1, 2 or 4-channel wet sweep WAV-file" dialog. Select your output file. Now you should get the desired impulse (I hope).
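For the curious, the "wet sweep deconvolve" step can be sketched offline, assuming the classic exponential-sweep (Farina) method — the "room" here is a synthetic single echo, and the exact processing inside GratisVolver may differ. Note how the impulse lands roughly one dry-sweep length into the output, which is why the result needs trimming:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
T = 2.0                    # sweep duration in seconds
f1, f2 = 20.0, 20000.0     # low-to-high sweep range
t = np.arange(int(fs * T)) / fs
R = np.log(f2 / f1)

# Exponential (log) sine sweep, low to high.
sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1))

# Inverse sweep: time-reversed with a decaying amplitude envelope, so
# that sweep (*) inverse approximates a bandlimited impulse.
inverse = sweep[::-1] * np.exp(-t * R / T)
inverse /= np.abs(fftconvolve(sweep, inverse)).max()

# Pretend "recorded" wet sweep: the room is a single echo at 100 samples.
room = np.zeros(4096); room[100] = 1.0
wet = fftconvolve(sweep, room)

# Deconvolve: the IR appears near the middle of the output file.
ir_full = fftconvolve(wet, inverse)
peak = int(np.argmax(np.abs(ir_full)))
```

Here `peak` sits about `len(sweep)` samples in, so trimming the front of the output file recovers the usable impulse.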
__________________
mymusic http://music.darylpierce.com mywork http://production.darylpierce.com mypodcast https://youtube.com/@ultimatesoundtest |
11-21-2022, 12:59 PM | #26 |
Human being with feelings
Join Date: Jul 2022
Posts: 4
|
It is working great, thank you plush2
|
12-01-2022, 11:16 AM | #27 |
Human being with feelings
Join Date: Feb 2020
Location: San Diego, CA
Posts: 19
|
We are working on this article discussing how to use an ambisonic microphone to create spatial audio impulse responses of an acoustic space. We would love some feedback and, of course, are happy to answer any questions.
https://voyage.audio/impulse-respons...h-spatial-mic/
__________________
Download The Free Spatial Mic Reaper Session: https://voyage.audio/listen-to-spatial-mic/ |
12-05-2022, 09:20 AM | #28 | |
Human being with feelings
Join Date: May 2006
Location: Saskatoon, Canada
Posts: 2,110
|
Quote:
You could add the above-mentioned CATT GratisVolver as an existing tool for impulse creation. It would be nice to see more detail about the sine-sweep capture method, although it is a more complicated procedure.
__________________
mymusic http://music.darylpierce.com mywork http://production.darylpierce.com mypodcast https://youtube.com/@ultimatesoundtest |
|