Cockos Incorporated Forums > REAPER Forums > REAPER for Spatial Audio

Old 04-14-2022, 03:00 PM   #41
Joystick
Human being with feelings
 
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by Desa View Post
My only limitation now is the ADM file to send to distributors.
Well, they sure know where to put the paywall ;-)
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Old 04-14-2022, 03:03 PM   #42
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Desa View Post
This is exactly what I want to do... in fact, I often go immersive also in my stereo production (and here, EPS is a fantastic change for me, for the reason you mention).
My only limitation now is the ADM file to send to distributors.
What exactly do you mean by "going immersive in stereo productions"?
As for creating the ADM for distribution: as it stands now, you will probably need a Dolby Atmos production environment to do that. So depending on your system you could go with something like the Dolby Atmos Production Suite (which can be made to support REAPER to some extent), or (if you are on Windows) one of the built-in renderers (for example in Resolve Studio).
Old 04-14-2022, 03:12 PM   #43
Desa
Human being with feelings
 
 
Join Date: Apr 2021
Posts: 31
Default

Quote:
Originally Posted by Dodecahedron View Post
What exactly do you mean by "going immersive in stereo productions"?
As for creating the ADM for distribution: as it stands now, you will probably need a Dolby Atmos production environment to do that. So depending on your system you could go with something like the Dolby Atmos Production Suite (which can be made to support REAPER to some extent), or (if you are on Windows) one of the built-in renderers (for example in Resolve Studio).
Cubase just released a new version with Dolby Atmos (previously available only in Nuendo)... so that could be another option. Yep, I'm on Windows.
As for the immersive setup I sometimes use (depending on the type of project): I usually have some multichannel tracks where I spatialize sounds with ReaSurroundPan, then go into Inspirata Professional for immersive convolution reverb.
Clearly, I'll downmix everything to stereo at the end, but for me the result is better than using normal panning.
And everything is very correlated yet spacious.
Now, the new option is to use the binaural monitoring signal from EPS as a starting point for the downmix.
__________________
Sarah De Carlo

http://sarahdecarlo.it
Old 04-14-2022, 03:13 PM   #44
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Joystick View Post
Well, they sure know where to put the paywall ;-)
Well, yes and no. The specs for the Dolby profile are openly accessible. It's an open standard, so anyone can implement support for it. If you want to write a tool that can take, for example, a 7.1.4 channel-based input and create ADM metadata that conforms to the Dolby profile (which in this case means splitting the thing into a 7.1 bed + 4 objects), you absolutely can do that. I mean, you would still want to have an Atmos renderer capable of playing that file for QC, but that's a different story.
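For the channel-based case described above, the split itself is mechanical. Here is a minimal sketch; the channel order and the object positions are illustrative assumptions (not from the Dolby spec), and real ADM authoring would additionally need the BW64/axml metadata, which is omitted:

```python
# Hypothetical sketch of the "7.1.4 -> 7.1 bed + 4 static objects" split.
# Assumed channel order: L R C LFE Ls Rs Lrs Rrs Ltf Rtf Ltr Rtr.

def split_714(frames):
    """frames: list of 12-channel sample tuples -> (bed, objects)."""
    bed = [f[:8] for f in frames]       # 7.1 bed: first 8 channels
    height = [f[8:] for f in frames]    # 4 height channels become objects
    # Static object positions (azimuth, elevation in degrees) for the
    # four top speakers - illustrative values only.
    positions = [(45.0, 45.0), (-45.0, 45.0), (135.0, 45.0), (-135.0, 45.0)]
    objects = [
        {"audio": [f[i] for f in height], "azimuth": az, "elevation": el}
        for i, (az, el) in enumerate(positions)
    ]
    return bed, objects

frames = [tuple(float(c) for c in range(12))] * 4   # dummy 12-channel audio
bed, objects = split_714(frames)
```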
Old 04-14-2022, 03:19 PM   #45
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Desa View Post
Cubase just released a new version with Dolby Atmos (previously available only in Nuendo)... so that could be another option. Yep, I'm on Windows.
As for the immersive setup I sometimes use (depending on the type of project): I usually have some multichannel tracks where I spatialize sounds with ReaSurroundPan, then go into Inspirata Professional for immersive convolution reverb.
Clearly, I'll downmix everything to stereo at the end, but for me the result is better than using normal panning.
And everything is very correlated yet spacious.
Now, the new option is to use the binaural monitoring signal from EPS as a starting point for the downmix.
I haven't checked the renderer and workflow in Cubase yet, but if it is similar to the way it works in Nuendo, this should be perfectly fine.
Hm, I wouldn't use the binaural monitor for a stereo downmix for loudspeakers, tbh. But I am with you on the immersive reverb thing, although I must say that I am by far the most comfortable in HOA, especially when it comes to ambience, rooms and reverbs.
Old 04-14-2022, 03:24 PM   #46
Desa
Human being with feelings
 
 
Join Date: Apr 2021
Posts: 31
Default

Quote:
Originally Posted by Dodecahedron View Post
I haven't checked the renderer and workflow in Cubase yet, but if it is similar to the way it works in Nuendo, this should be perfectly fine.
Hm, I wouldn't use the Binaural monitor for a Stereo downmix for loudspeakers tbh. But I am with you on the immersive reverb thing, although I must say, that I am by far the most comfortable in HOA, especially when it comes to ambience, rooms and reverbs.
I'm demoing DearVR Pro (I have the Music edition). Is it a good option for HOA, or are the free options better?

Would you like to give me some advice on this too?

Thanks, and thanks to you too Pan.
__________________
Sarah De Carlo

http://sarahdecarlo.it
Old 04-14-2022, 03:39 PM   #47
Joystick
Human being with feelings
 
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by Dodecahedron View Post
If you want to write a tool…
Do you mean that anybody can write a tool that will export full-featured Dolby Atmos master files without any restriction or partnership with Dolby?

Then why has nobody written that tool already?
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Old 04-14-2022, 03:41 PM   #48
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Desa View Post
I'm demoing DearVR Pro (I have the Music edition). Is it a good option for HOA, or are the free options better?

Would you like to give me some advice on this too?

Thanks, and thanks to you too Pan.
I have used DearVR Pro, which, if my memory serves me right, can go up to third-order Ambisonics. Is this the same for the Music edition? I mostly use things like the IEM Plug-in Suite, the ambiX plugins and some other stuff, most of which is open source. They go up to seventh order, which you may or may not need, but you can do all kinds of very interesting things with them that, in many cases, go a little deeper than what you can do with the DearVR stuff. But the latter is perhaps more convenient, depending on your use case.
How much do you already know about HOA? Like I stated above, it's an insane rabbit hole, and I don't want to write a novel about Ambisonic music production in a thread about a different piece of software. But you can of course ask me specific questions, or maybe we can open a different thread, write over PM, on Discord or whatever.
Old 04-14-2022, 03:45 PM   #49
Joystick
Human being with feelings
 
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by Desa View Post
…thanks to you too Pan.
Anytime.

I'm also on the same page with Dodecahedron; I love the sound of Ambisonics. I have used 3rd and 5th order for all my productions for about 3 or 4 years now, maybe more.

I use the excellent Blue Ripple Sound plugins, and for higher than 3rd order I use the IEM suite.

I'm very happy now that EAR implemented the input for ambisonic tracks in the NGA pipeline. I think it's the best solution for immersive reverberation that translates well in any playback scenario, which is one of the main goals of MPEG-H.
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Old 04-14-2022, 03:55 PM   #50
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Joystick View Post
Do you mean that anybody can write a tool that will export full-featured Dolby Atmos master files without any restriction or partnership with Dolby?

Then why has nobody written that tool already?
It seems like it. They have the full specification online:
https://professionalsupport.dolby.co...language=en_US
They even state that it is meant to provide interoperability.
As for why no one has implemented it: probably because it's pretty involved, depending on what you want to do. Going from something like the EBU profile to the Dolby one is not an easy task. You'd have to deal with all kinds of things like automatic data reduction, coordinate conversion (not just spherical to Cartesian; the Dolby renderer is... let's say it's a little special), etc. Just encoding beds, and possibly beds + static objects as in 7.1.4, could be done relatively easily, and there are examples of this: the Mach1 format can be decoded to an Atmos bed, for whatever that's worth.
Old 04-14-2022, 04:03 PM   #51
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Joystick View Post
Anytime.

I'm also on the same page with Dodecahedron; I love the sound of Ambisonics. I have used 3rd and 5th order for all my productions for about 3 or 4 years now, maybe more.

I use the excellent Blue Ripple Sound plugins, and for higher than 3rd order I use the IEM suite.

I'm very happy now that EAR implemented the input for ambisonic tracks in the NGA pipeline. I think it's the best solution for immersive reverberation that translates well in any playback scenario, which is one of the main goals of MPEG-H.
Yes, that's exactly my experience regarding reverb. One thing I haven't tested much, though, is using HOA with the EPS binaural monitor.
Old 04-14-2022, 04:09 PM   #52
Joystick
Human being with feelings
 
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by Dodecahedron View Post
Yes, that's exactly my experience regarding reverb. One thing I haven't tested much, though, is using HOA with the EPS binaural monitor.
Sounds like time for a critical listening session! Let's arrange one and discuss on Discord at the same time.
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Old 04-14-2022, 04:15 PM   #53
Desa
Human being with feelings
 
 
Join Date: Apr 2021
Posts: 31
Default

Quote:
Originally Posted by Joystick View Post
Sounds like time for a critical listening session! Let's arrange one and discuss on Discord at the same time.
Would be great.
I just need some more excuses to procrastinate on the avalanche of projects with deadlines coming up.
__________________
Sarah De Carlo

http://sarahdecarlo.it
Old 04-14-2022, 04:16 PM   #54
Joystick
Human being with feelings
 
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by Dodecahedron View Post
It seems like it. They have the full specification online:
https://professionalsupport.dolby.co...language=en_US
They even state that it is meant to provide interoperability.
As for why no one has implemented it: probably because it's pretty involved, depending on what you want to do. Going from something like the EBU profile to the Dolby one is not an easy task. You'd have to deal with all kinds of things like automatic data reduction, coordinate conversion (not just spherical to Cartesian; the Dolby renderer is... let's say it's a little special), etc. Just encoding beds, and possibly beds + static objects as in 7.1.4, could be done relatively easily, and there are examples of this: the Mach1 format can be decoded to an Atmos bed, for whatever that's worth.
I know. That's why I said that they know where to put the paywall. Artists and creatives who want to create immersive content in commercial formats will eventually need to get Logic, DaVinci, Pro Tools, etc. I don't consider open = free. I would pay for a Dolby Atmos authoring solution that doesn't lock me into a specific DAW or OS. They should provide tools in the form of a plugin or similar, just like EAR does.

My hopes are that MPEG-H will become a commercial music standard too. It should be.
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Old 04-14-2022, 04:22 PM   #55
Joystick
Human being with feelings
 
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by Desa View Post
Would be great.
I just need some more excuses to procrastinate on the avalanche of projects with deadlines coming up.
Same here. Next month I'm releasing my own audio application: https://soundfellas.com/software/echotopia/

Maybe the motivation will come from the possibility of enhancing our workflow using open and evergreen technologies, so we produce once and render to many. It's the same dream the Unity game engine offers game developers: produce a game once and build it for different gaming platforms. I think we are very close to that. These are exciting times!
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Old 04-14-2022, 04:25 PM   #56
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Joystick View Post
Sounds like time for a critical listening session! Let's arrange one and discuss on Discord at the same time.
I'd love to do that. How would we go about this?
Old 04-14-2022, 04:31 PM   #57
Desa
Human being with feelings
 
 
Join Date: Apr 2021
Posts: 31
Default

Quote:
Originally Posted by Joystick View Post
Same here. Next month I'm releasing my own audio application: https://soundfellas.com/software/echotopia/

Maybe the motivation will come from the possibility of enhancing our workflow using open and evergreen technologies, so we produce once and render to many. It's the same dream the Unity game engine offers game developers: produce a game once and build it for different gaming platforms. I think we are very close to that. These are exciting times!
It looks like a huge, very interesting project.
My compliments and sincere good luck!
__________________
Sarah De Carlo

http://sarahdecarlo.it
Old 04-14-2022, 04:51 PM   #58
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Joystick View Post
I know. That's why I said that they know where to put the paywall. Artists and creatives who want to create immersive content in commercial formats will eventually need to get Logic, DaVinci, Pro Tools, etc. I don't consider open = free. I would pay for a Dolby Atmos authoring solution that doesn't lock me into a specific DAW or OS. They should provide tools in the form of a plugin or similar, just like EAR does.

My hopes are that MPEG-H will become a commercial music standard too. It should be.
I know what you mean. I just wanted to point out that it's not like you can't interface with the technology. And for channel-based media it shouldn't be too difficult.
Old 04-15-2022, 06:10 AM   #59
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by matt_f View Post
Hi all - just to answer a few questions on the EAR Production Suite;



The ADM and rendering standards don't support object-based reverbs at the moment. There has been some research in this area, however (see "AES E-Library: Object-Based Reverberation for Spatial Audio"), but it could be some time before we see such a solution implemented in NGA standards. At the moment, the best solution would probably be as you suggest - to bake your object reverb into a channel-based or HOA asset (HOA is supported from EPS version v0.7.0).
Regarding the question on limiting, the REAPER extension is fed audio for individual assets directly from the Scene plug-in during export, so any processing post-Scene is not reflected in the exported ADM. This is because it needs to ensure the assets are kept separated when exporting, and the Scene is generally the final point in the signal chain before the assets are mixed by downstream monitoring plug-ins. The topic of level, and particularly loudness, for NGA content is an active area within the industry, and it poses many challenges. As an interim solution, you could use a side-chain feed from your monitoring output to a limiter placed pre-Scene. However, it is not ideal as an all-round NGA solution, since this essentially bakes in your attenuation, which would be suitable only for that particular set-up.
Is that object based reverb you mentioned the same thing that has been implemented in the VISR Production Suite?
Regarding the limiter: What about just using a mono side chain?
Old 04-15-2022, 06:53 AM   #60
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by matt_f View Post
You are right in your second post - the REAPER pan controls become somewhat redundant in this paradigm. 3D position metadata according to the input plugin's parameters is sent from the input plugin to the Scene plugin, where it is collated with metadata received from other input plugins and passed on to monitoring plugins for rendering according to the ADM rendering specification.
Regarding reverbs, I guess this depends on what the use case is - and I'll preface this by saying there aren't solutions for many of the use cases yet (see my reply to krabbencutter). If you were going to include a pre-baked reverb in your ADM, then the object that feeds that reverb should not have any interactivity (because the reverb wouldn't be coherent if, for example, the object was moved or attenuated.) This limits the usefulness of objects, other than at least only occupying one channel of audio. If the position and gain of the object can not be modified through interactivity settings, then it might as well be baked-in as part of the channel-based/HOA reverb asset unless you want to allow the reverb to be turned off separately.
OK, I have no idea how much of a nightmare it would be to implement this, but what about sending positional metadata of objects via OSC messages to an external HOA panner? IEM, SPARTA and a couple of other tools already support OSC, which is very useful in a lot of situations, so maybe there's a way to establish communication with EPS. That would mean we could feed objects (including their positions) into an HOA reverb, or indeed any multichannel reverb.
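To make the OSC idea concrete, here is a minimal sketch. The address pattern (/source/1/azielev) and argument layout are invented for illustration - the actual addresses would have to match whatever the receiving panner documents. Only the encoding itself (null-padded strings, ",ff" type tags, big-endian float32) follows the OSC 1.0 message format:

```python
# Hypothetical sketch: pushing object positions to an external HOA
# panner as OSC-over-UDP messages, using only the standard library.
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float32 arguments."""
    def pad(b):  # OSC strings are null-terminated and padded to 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

def send_position(sock, host, port, source_id, azimuth, elevation):
    # The address pattern below is made up - check the target plugin's
    # OSC documentation for its real addresses and UDP port.
    msg = osc_message(f"/source/{source_id}/azielev", azimuth, elevation)
    sock.sendto(msg, (host, port))

msg = osc_message("/source/1/azielev", 30.0, 15.0)
```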
Old 04-16-2022, 03:31 AM   #61
Joystick
Human being with feelings
 
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by Desa View Post
It looks like a huge, very interesting project.
My compliments and sincere good luck!
Thanks! :-)
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Old 04-16-2022, 03:40 AM   #62
Joystick
Human being with feelings
 
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by matt_f View Post
If you were going to include a pre-baked reverb in your ADM, then the object that feeds that reverb should not have any interactivity (because the reverb wouldn't be coherent if, for example, the object was moved or attenuated.) This limits the usefulness of objects, other than at least only occupying one channel of audio.
A solution coming from the interactive audio sector, and specifically game audio, would be something we did back in the Demoscene days to save resources.

The interactive objects should have the reverb baked onto them.

Even if the reverb behaves differently around the scene, at least the object would fit the aesthetic. If the author limits the object's interaction to specific parts of the scene, then it should be easier for the author to choose a baked reverb that fits the aesthetic of those parts.

That way you don't limit the accessibility of your program, and you don't sacrifice the aesthetic either.

It's a technique that has been tried through decades of game production, and it works well enough.
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Old 04-16-2022, 11:09 AM   #63
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Joystick View Post
A solution coming from the interactive audio sector, and specifically game audio, would be something we did back in the Demoscene days to save resources.

The interactive objects should have the reverb baked onto them.

Even if the reverb behaves differently around the scene, at least the object would fit the aesthetic. If the author limits the object's interaction to specific parts of the scene, then it should be easier for the author to choose a baked reverb that fits the aesthetic of those parts.

That way you don't limit the accessibility of your program, and you don't sacrifice the aesthetic either.

It's a technique that has been tried through decades of game production, and it works well enough.
OK, so you mean basically a mono reverb? Because, while that's very easy to do, it would also mean that the reverb is very localized and not very enveloping.
Old 04-16-2022, 04:51 PM   #64
BPBaker
Human being with feelings
 
 
Join Date: Oct 2013
Location: Brooklyn, NY
Posts: 221
Default

Quote:
Originally Posted by Desa View Post
I'm demoing DearVR Pro (I have the Music edition). Is it a good option for HOA, or are the free options better?

Would you like to give me some advice on this too?

Thanks, and thanks to you too Pan.
I've just been digging into EPS, having mostly focused on HOA in the last few years. That led me to this thread, and also to testing out DearVR Pro for HOA reverb (which I bought a while back but hadn't really explored until just now).

Apart from the limitation of only having a mono input, the reflection modeling seems quite good to me in HOA output. That said, I seem to have found a rather glaring problem with the reverb in HOA:

When disabling the "position" and "reflection" gain and only using the "reverb" panel, a number of ambisonic output channels are completely disabled. When set to "ambix", channels 4, 5, 8, 11, 14, and 16 are completely silent. When set to FuMa, channels 2, 6, 9, 11, 14 and 15. (Stands to reason, as these are the corresponding spherical harmonics in FuMa and ACN ordering.) So as it stands, DearVR doesn't currently create a complete ambisonic image when used for reverb without reflections.
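(For anyone who wants to verify that parenthetical: mapping the silent ambix (ACN) channels to their FuMa positions, using the standard third-order orderings, reproduces exactly the silent FuMa list. A quick check:

```python
# Standard channel orderings for 3rd-order Ambisonics.
ACN_NAMES  = "W Y Z X V T R S U Q O M K L N P".split()   # ACN 0..15
FUMA_NAMES = "W X Y Z R S T U V K L M N O P Q".split()   # FuMa 1..16

ambix_silent = [4, 5, 8, 11, 14, 16]   # 1-based, as reported above
fuma_silent  = [2, 6, 9, 11, 14, 15]   # 1-based, as reported above

# Translate each silent ACN channel to its FuMa channel number.
mapped = sorted(FUMA_NAMES.index(ACN_NAMES[ch - 1]) + 1 for ch in ambix_silent)
# mapped should equal fuma_silent: both lists name the same six harmonics.
```
)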

I left an email through their website but haven't heard back yet. Here's hoping they can fix it! (And also add ambisonic/multichannel input.) ;-)

BTW @Desa - how's your experience using Inspirata? I gather it's multichannel but not HOA? Worth it?
Old 04-16-2022, 04:55 PM   #65
BPBaker
Human being with feelings
 
 
Join Date: Oct 2013
Location: Brooklyn, NY
Posts: 221
Default

Also count me similarly impressed with the EPS binaural output--I actually had that reaction before reading the other reactions on this thread. I also haven't done a close comparison with my other binauralization plugins yet but would like to.
Old 04-16-2022, 05:58 PM   #66
Desa
Human being with feelings
 
 
Join Date: Apr 2021
Posts: 31
Default

Quote:
Originally Posted by BPBaker View Post
BTW @Desa - how's your experience using Inspirata? I gather it's multichannel but not HOA? Worth it?

Hi Brendan.

I can confirm that Inspirata (from the Professional version up) works only with multichannel output (on their website you can find the specifications of all the supported speaker setups).

With the stability problems of the first versions now gone, Inspirata is a mature product, and as far as I'm concerned irreplaceable for some functions in my workflow, and not only in immersive use (where it certainly excels).

The first important thing to understand about Inspirata is that it works as an insert or a send like any other plugin, but when you use it exclusively as an insert (100% wet, for the reasons I'm about to explain), it also has the ability to recreate the direct signal in the room.

Let me explain: using it as a send (or as an insert, but with the normal percentage of wet signal you would use for other reverb plugins), you will only use the ER/late reverberant signal, exploiting, even if only for the reflections, the positions of the input signal and of the listener, plus the various options for directivity, crosstalk and signal width. Even in stereo use (for which the Personal version is sufficient), you already get a very realistic reverb, taking advantage of the many rooms they have recorded.
Where everything changes is when using it as a 100% wet insert: at that point, you can replace the normal dry signal with a new "direct" signal created from the characteristics of the room itself. Obviously, it is important in this case that the signal is totally wet (and therefore managed exclusively by the plugin), otherwise you would have the dry and the "direct" signal at the same time, with immediately obvious problems.
This way of working may not always be suitable for organic instrumental sounds, orchestras etc., because the way Inspirata transforms the dry signal into a "direct" signal completely saturates it with the sonic characteristics of its rooms, so it tends to be rather aggressive on orchestral sounds (at least, for my taste). For foley and, to my taste, synths too, this type of processing creates a completely new sound that is much more real and alive.
In fact, foley is where, for me, Inspirata excels and has become my first (and only) choice. It's hard to put into words; it's something you have to test.
Furthermore, using the direct signal, the audio source is localized in surround not only through the reflections but also through the panning of the signal itself.
Using the direct signal, you get a total perception of the localization of the sound.

An experiment I have done just in these days is to compare it with the reverb and reflections in DearVR Pro.
For workflow speed I obviously prefer DearVR Pro, since with Inspirata I have to use two plugins if I don't want to use the direct signal created by the room, and pan with ReaSurroundPan.
As for the quality and realism of the reflections, my choice is Inspirata without a shadow of a doubt.

I don't want to go too far off-topic, but I hope I have largely satisfied your curiosity.

Sorry for my English!
Ciao!
__________________
Sarah De Carlo

http://sarahdecarlo.it
Old 04-17-2022, 04:24 AM   #67
Joystick
Human being with feelings
 
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by Dodecahedron View Post
OK, so you mean basically a mono reverb? Because, while that's very easy to do, it would also mean that the reverb is very localized and not very enveloping.
Yes, but I propose it only as a solution for objects that offer interactivity to the end user - the kind of interactivity that would make the end result uncanny if the objects were used dry alongside other groups carrying their reflections.

It's a design decision where you trade off what you want against what you don't care about so much.

As a designer, I only strive for elegance by following the design intent. Trying to be perfect in every aspect of a product is a known production fallacy.
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Old 04-17-2022, 06:26 AM   #68
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by BPBaker View Post
I've just been digging into EPS, having mostly focused on HOA in the last few years. That led me to this thread, and also to testing out DearVR Pro for HOA reverb (which I bought a while back but hadn't really explored until just now).

Apart from the limitation of only having a mono input, the reflection modeling seems quite good to me in HOA output. That said, I seem to have found a rather glaring problem with the reverb in HOA:

When disabling the "position" and "reflection" gain and only using the "reverb" panel, a number of ambisonic output channels are completely disabled. When set to "ambix", channels 4, 5, 8, 11, 14, and 16 are completely silent. When set to FuMa, channels 2, 6, 9, 11, 14 and 15. (Stands to reason, as these are the corresponding spherical harmonics in FuMa and ACN ordering.) So as it stands, DearVR doesn't currently create a complete ambisonic image when used for reverb without reflections.

I left an email through their website but haven't heard back yet. Here's hoping they can fix it! (And also add ambisonic/multichannel input.) ;-)

BTW @Desa - how's your experience using Inspirata? I gather it's multichannel but not HOA? Worth it?
OK, good to know about Dear VR Pro. Sounds like an error indeed. Let us know if you hear back from them.
Also, what kind of tools do you normally use for Ambisonic or multichannel reverb?
Old 04-17-2022, 06:36 AM   #69
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Desa View Post
An experiment I have done just in these days is to compare it with the reverb and reflections in DearVR Pro.
For workflow speed I obviously prefer DearVR Pro, since with Inspirata I have to use two plugins if I don't want to use the direct signal created by the room, and pan with ReaSurroundPan.
As for the quality and realism of the reflections, my choice is Inspirata without a shadow of a doubt.

I don't want to go too far off-topic, but I hope I have largely satisfied your curiosity.
That sounds very intriguing. If you have any audio examples of how it compares to Dear VR, possibly through the EPS binaural monitor, I'd be extremely interested.
Old 04-17-2022, 06:41 AM   #70
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Joystick View Post
Yes, but I propose it only as a solution for objects that offer interactivity to the end user - the kind of interactivity that would make the end result uncanny if the objects were used dry alongside other groups carrying their reflections.

It's a design decision where you trade off what you want against what you don't care about so much.

As a designer, I only strive for elegance by following the design intent. Trying to be perfect in every aspect of a product is a known production fallacy.
OK, I get it now. Makes sense, of course. Have you found that the kind of reverb makes a difference here? I could imagine that such a thing might be less noticeable on shorter verbs, which would be more like just early reflections.
Dodecahedron is offline   Reply With Quote
Old 04-17-2022, 07:25 AM   #71
Desa
Human being with feelings
 
Desa's Avatar
 
Join Date: Apr 2021
Posts: 31
Default

Quote:
Originally Posted by Dodecahedron View Post
That sounds very intriguing. If you have any audio examples of how it compares to Dear VR, possibly through the EPS binaural monitor, I'd be extremely interested.
As soon as I have a moment of pause I'll prepare a comparative video.
__________________
Sarah De Carlo

http://sarahdecarlo.it
Desa is offline   Reply With Quote
Old 04-17-2022, 07:35 AM   #72
Joystick
Human being with feelings
 
Joystick's Avatar
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by Dodecahedron View Post
Have you found that the kind of reverb makes a difference here? I could imagine that such a thing might be less noticeable on shorter verbs, which would be more like just early reflections.
That boils down to psychoacoustics and cognitive aesthetics. Depending on the abruptness of the material's envelopes and its frequency content, it will affect the listener's ability to localize sources and to understand surface materials.

But a creator can just as easily decide those things to taste. There is a threshold below which differences in the end result are not perceivable, and different use-case scenarios (different devices, environments, and contexts) affect the perceived result as well.

Fidelity should be considered one of the pillars of a product; another is quality in use, and the rest is the ability to translate and serve the design intent in the context of the place and point of consumption.

I use what I like to call "The Mother of All Questions" when I manage production. One simple question that can help any creator answer every other one: the product was made to achieve something; does it achieve it? Yes = good. No = iterate to the next version so that it does.
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Joystick is offline   Reply With Quote
Old 04-17-2022, 12:36 PM   #73
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Desa View Post
As soon as I have a moment of pause I'll prepare a comparative video.
Great, thanks! I'm looking forward to it.
Dodecahedron is offline   Reply With Quote
Old 04-17-2022, 12:47 PM   #74
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

Quote:
Originally Posted by Joystick View Post
That boils down to psychoacoustics and cognitive aesthetics. Depending on the abruptness of the material's envelopes and its frequency content, it will affect the listener's ability to localize sources and to understand surface materials.

But a creator can just as easily decide those things to taste. There is a threshold below which differences in the end result are not perceivable, and different use-case scenarios (different devices, environments, and contexts) affect the perceived result as well.

Fidelity should be considered one of the pillars of a product; another is quality in use, and the rest is the ability to translate and serve the design intent in the context of the place and point of consumption.

I use what I like to call "The Mother of All Questions" when I manage production. One simple question that can help any creator answer every other one: the product was made to achieve something; does it achieve it? Yes = good. No = iterate to the next version so that it does.
Hm, I guess it would also depend on the kind of sound, and probably to an extent on the visuals.
Dodecahedron is offline   Reply With Quote
Old 04-17-2022, 12:50 PM   #75
Dodecahedron
Human being with feelings
 
Join Date: Apr 2022
Posts: 33
Default

While we're talking about reverb in an OBA context, this paper might be interesting for some of you:
https://www.orpheus-audio.eu/wp-cont...-broadcast.pdf
Dodecahedron is offline   Reply With Quote
Old 04-18-2022, 12:06 PM   #76
Desa
Human being with feelings
 
Desa's Avatar
 
Join Date: Apr 2021
Posts: 31
Default

I tried to compare the binaural monitoring of EAR (spectrum analyzer bottom left) and DearVR Monitor (spectrum analyzer bottom right), then compared both to the normal stereo downmix with ReaSurroundPan and the EAR stereo monitor (spectrum analyzers above).
The big difference, which I had already perceived when listening and can now also see in the spectrum analysis, is that the subsonic area is much more pronounced in DearVR Monitor and appears filtered in EAR binaural monitoring, where it is evident to me that a high-pass filter is present.
To get a similar listening experience, I need to insert a high-pass filter around 40-50 Hz at the DearVR Monitor output.

This would explain the qualitative impact when listening in EAR: the sound, which doesn't saturate the headphones with those subsonic frequencies, arrives much more spacious and up front.

The implications for the mixing phase certainly need to be taken into consideration if there is a lot of content in that frequency area, because if you mix in binaural monitoring, you will need to consider that that area will not be represented consistently.
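Out of curiosity: for A/B purposes, even a crude one-pole high-pass (about 6 dB/oct, stdlib only) is enough to approximate that kind of low-cut. This is just an illustrative sketch, not the actual filter EAR uses:

```python
import math

def one_pole_highpass(samples, cutoff_hz, sr=48000):
    """Crude first-order high-pass (~6 dB/oct), to approximate the low-cut
    heard in the EAR binaural monitor (assumed behaviour, not its real filter)."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sr
    a = rc / (rc + dt)  # feedback coefficient; closer to 1 = lower cutoff
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = a * (prev_y + x - prev_x)  # y[n] = a*(y[n-1] + x[n] - x[n-1])
        out.append(y)
        prev_x, prev_y = x, y
    return out
```

Feeding it a DC-heavy signal shows the subsonic content decaying away while transients pass through untouched.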

Gif here: (sorry, I don't know how to post the gif, any help?)

https://media.giphy.com/media/4Z4P1S...cyBQ/giphy.gif
__________________
Sarah De Carlo

http://sarahdecarlo.it

Last edited by Desa; 04-18-2022 at 12:14 PM.
Desa is offline   Reply With Quote
Old 04-18-2022, 12:51 PM   #77
Joystick
Human being with feelings
 
Joystick's Avatar
 
Join Date: Jul 2008
Location: Athens / Greece
Posts: 627
Default

Quote:
Originally Posted by Desa View Post
The implications for the mixing phase certainly need to be taken into consideration if there is a lot of content in that frequency area, because if you mix in binaural monitoring, you will need to consider that that area will not be represented consistently.
Don't forget that binaural rendering is a very destructive process, because it simulates some kind of head with ears (and in some cases even more parts of the upper body): not yours, and not your listeners'.

I've done blind tests with a group of listeners and found that you may get frequency-content differences, but you also get spatialization differences. So some plugins may have worse frequency fidelity but a more realistic spatial image.

After all, the sound is changed a lot by our morphology (head, ears, body, etc.).

So what you see as a difference in the spectra might also happen when your music reaches a listener's ears and head. It's not so easy to draw a conclusion simply by measuring spectra.

To measure how well a binaural renderer performs, you need at least three sets of data:

1) A model measured from a listener's head, ears, etc.
2) A measurement of how the renderer performs.
3) A measurement, made in an anechoic room, of the music played from speakers as delivered to that same listener's ears, using high-fidelity microphones placed inside the listener's ears.

An easier experiment would be a test where listeners from a random group first hear the original track without headphones, then hear different binaural renderings of the same song through headphones, with a clean version included as a control, and judge which version they think is closest to the original. This should be conducted with the original playing first and the rest in random order, with the listener in control to hear whatever he or she likes (without knowing which track is behind which button, only that track 1 is the reference), and with no eye contact with anyone else, so alone in a room.
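The randomized ordering in that last experiment is easy to script. A minimal sketch (all names are made up): button 1 is always the reference, and everything else, including the hidden clean control, is shuffled behind anonymous buttons:

```python
import random

def build_trial(renderings, seed=None):
    """Button 1 is always the reference; the candidate renderings (including
    a hidden clean control) follow in random order. Returns the button layout
    and the answer key, which only the experimenter should see."""
    rng = random.Random(seed)
    shuffled = list(renderings)
    rng.shuffle(shuffled)
    buttons = ["reference"] + shuffled
    key = {i + 1: name for i, name in enumerate(buttons)}
    return buttons, key

buttons, key = build_trial(["renderer_A", "renderer_B", "clean_control"], seed=7)
```

Fixing the seed per session lets you reconstruct later which button hid which rendering.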
__________________
Pan Athen
SoundFellas Creative Audio Studios www.soundfellas.com
Creator of Echotopia www.soundfellas.com/software/echotopia
Joystick is offline   Reply With Quote
Old 04-19-2022, 08:16 AM   #78
matt_f
Human being with feelings
 
matt_f's Avatar
 
Join Date: Nov 2018
Posts: 29
Default

On the EPS binaural renderer - this is based on the BEAR (Binaural EBU ADM Renderer) which is now open-source; https://github.com/ebu/bear. For those attending AES Europe next month, an e-Brief is due to be presented online; https://aeseuropespring2022.sched.co...-model-content
matt_f is offline   Reply With Quote
Old 04-19-2022, 08:20 AM   #79
matt_f
Human being with feelings
 
matt_f's Avatar
 
Join Date: Nov 2018
Posts: 29
Default

Quote:
Originally Posted by Dodecahedron View Post
Well, yes and no. The specs for the Dolby profile are openly accessible. It's an open standard, so anyone can implement support for it. If you want to write a tool that can take, for example, a 7.1.4 channel-based input and create ADM metadata that conforms to the Dolby profile (which in this case means splitting the thing into a 7.1 bed + 4 objects), you absolutely can do that. I mean, you would still want an Atmos renderer capable of playing that file for QC, but that's a different story.
For the C++ devs out there, and anyone who fancies building such a tool, the open-source libadm and libbw64 libraries are available and would take care of a large chunk of the work. libadm can author and export ADM metadata, and libbw64 can write Broadcast Wave 64-bit (ITU-R BS.2088) files, including the chunks required for ADM metadata.
See https://github.com/ebu/libadm and https://github.com/ebu/libbw64
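To make the "7.1 bed + 4 objects" split concrete, here's a hypothetical per-frame sketch of what such a tool would do before handing the metadata side to libadm. Both the interleaved channel order and the top-layer positions are assumptions for illustration, not values taken from the Dolby profile spec:

```python
# Assumed interleaved 7.1.4 order: L R C LFE Lss Rss Lrs Rrs Ltf Rtf Ltr Rtr
TOP_POSITIONS = {  # (azimuth deg, elevation deg): nominal, illustrative values only
    "Ltf": (45.0, 45.0), "Rtf": (-45.0, 45.0),
    "Ltr": (135.0, 45.0), "Rtr": (-135.0, 45.0),
}

def split_714_frame(frame):
    """Split one 12-channel frame into a 7.1 bed and 4 static objects."""
    assert len(frame) == 12
    bed = frame[:8]  # first eight channels stay a channel-based 7.1 bed
    objects = [(name, az, el, sample)
               for (name, (az, el)), sample in zip(TOP_POSITIONS.items(), frame[8:])]
    return bed, objects
```

The object tuples are the part you would then describe via libadm's audioObject/audioBlockFormat elements; the bed stays a plain channel-based pack.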

Quote:
Originally Posted by Dodecahedron View Post
Is that object based reverb you mentioned the same thing that has been implemented in the VISR Production Suite?
Regarding the limiter: What about just using a mono side chain?
I believe so - it's referenced from here; https://cvssp.org/data/s3a/public/VI...-reverberation
As for the limiter, yes, I think a mono sidechain feed applying attenuation to all 64 channels to the Scene should work, although I've not tried it. The problem might be a chasing-your-own-tail situation: the limiter analyses the monitoring output and applies attenuation at the Scene, which lowers the monitoring output, so the limiter releases, and the cycle begins again.
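The "one detector, one gain for all channels" idea looks roughly like this per block. A naive peak version just to show the linking; a real limiter would add attack/release smoothing and look-ahead:

```python
def linked_limit(block, threshold=0.9):
    """Apply a single shared gain to every channel of a block, driven by the
    loudest sample anywhere in the block (mono-sidechain-style linking), so
    the level balance between the Scene's channels is preserved."""
    peak = max((abs(s) for ch in block for s in ch), default=0.0)
    gain = min(1.0, threshold / peak) if peak > 0 else 1.0
    return [[s * gain for s in ch] for ch in block]
```

Because every channel gets the same gain, inter-channel ratios (and therefore the spatial image) survive the limiting, which is the whole point of linking.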

Quote:
Originally Posted by Dodecahedron View Post
OK, I have no idea how much of a nightmare it would be to implement this, but what about sending positional metadata of objects via OSC messages to an external HOA panner?
It's certainly possible. Not sure how much work it would be. There's a mechanism for sending frequent metadata from the Object plugins to the Scene which you could probably hook into and fire off an OSC message at the same time.
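If anyone wants to experiment with that, an OSC 1.0 message is simple enough to build by hand with the stdlib. The address pattern below is invented for illustration, and the actual hook point in the EPS code would still need investigating:

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC 1.0 message with float32 arguments (stdlib only)."""
    def pad(b):
        # OSC strings are NUL-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    tags = "," + "f" * len(floats)
    return (pad(address.encode()) + pad(tags.encode())
            + b"".join(struct.pack(">f", v) for v in floats))

# e.g. send a (hypothetical) object position to an external HOA panner:
msg = osc_message("/eps/object/3/pos", 30.0, 10.0, 1.0)
# then: socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 9000))
```

A UDP send of that payload is all an external panner listening for OSC would need.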
matt_f is offline   Reply With Quote
Old 04-19-2022, 08:21 AM   #80
matt_f
Human being with feelings
 
matt_f's Avatar
 
Join Date: Nov 2018
Posts: 29
Default

Quote:
Originally Posted by Joystick View Post
Same here. Next month I'm releasing my own audio application: https://soundfellas.com/software/echotopia/
I second Desa's comment - that looks fantastic!
matt_f is offline   Reply With Quote