02-28-2016, 04:44 PM   #41 | peter5992

Just stumbled across TransMIDIfier --

http://www.bewaryprods.com/software/...TransMIDIfier/

Anyone used this before?

This might just be what we need.

02-28-2016, 05:00 PM   #42 | BobF

Quote:
Originally Posted by peter5992 View Post
Just stumbled across TransMIDIfier --

http://www.bewaryprods.com/software/...TransMIDIfier/

Anyone used this before?

This might just be what we need.

That's a slick-looking piece of kit. Thanks for posting.
__________________
Reaper/Studio One Pro/Win10Pro x64
i7-6700@3.8Ghz/32G/43" 4K/UMC1820
Event PS8/KKS61MK2/Maschine MK3/K12U
02-28-2016, 05:50 PM   #43 | kerryg

Quote:
Originally Posted by peter5992 View Post
Just stumbled across TransMIDIfier --

http://www.bewaryprods.com/software/...TransMIDIfier/

Anyone used this before?

This might just be what we need.

I've used a number of these tools: MidiPipe, ControllerMate, Johan Loonenga's Midi Harmonizer, and others. This is exactly what I don't want: an external translator app that has to be launched separately, a patch recalled, and a corresponding patch loaded in an external app. No thanks; warts and all, Logic's and Sibelius's existing implementations would still be better than this, and would have the great advantage of saving all of it inside one project, to be recalled at need.
02-29-2016, 05:19 AM   #44 | IXix

Quote:
Originally Posted by schwa View Post
1. Percussion notation, or any situation where the written notation differs from the desired MIDI output. We could potentially expand the existing MIDI note name interface to support mappings like this:

36 "Kick" 64
44 "Hat pedal" 62 "X"

Meaning, a MIDI note with pitch 36 will be displayed in the piano roll with the name "Kick", and displayed in the notation editor with pitch 64 (F4). A MIDI note with pitch 44 will be displayed in the piano roll with the name "Hat pedal" and displayed in the notation editor with pitch 62 (D4) and an "X" note head.

Is this reasonable?
Yes, that would be great, BUT there are more note head types than just X, so please consider that. Off the top of my head there are triangles, squares, and crossed (X) circles.
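
To illustrate, here is schwa's proposed syntax from the quote above, extended with some imagined notehead names (the names "triangle", "square" and "circle-x" are guesses at what such a syntax might look like, not anything confirmed):

Code:
36 "Kick"       64
38 "Snare"      67
42 "Hat closed" 62 "X"
44 "Hat pedal"  62 "X"
49 "Crash"      69 "circle-x"
53 "Ride bell"  67 "triangle"
54 "Tambourine" 70 "square"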
02-29-2016, 05:42 AM   #45 | EvilDragon

There are even more head types than that.
02-29-2016, 06:40 AM   #46 | bob

Quote:
Originally Posted by BobF View Post
That's a slick looking piece of kit. Thanks for posting
There are some amazingly clever people out there. I'm a slow learner; this will take ages before it sinks in.
__________________
SoundCloud Channel
https://soundcloud.com/stream
02-29-2016, 07:37 AM   #47 | ceanganb

Quote:
Originally Posted by EvilDragon View Post
I have a hunch that this thread is going to end up (eventually) with Expression Maps implemented... In due time, in due time. Not for 5.20.
Sorry for spending a post here, but that would be so amazing.
__________________
Ceanganb
02-29-2016, 08:21 AM   #48 | memyselfandus

Yep! Would be insane.
02-29-2016, 01:04 PM   #49 | IXix

Quote:
Originally Posted by EvilDragon View Post
There are even more head types than that.
Oh yeah, I forgot smiley face, sad face, winking face...
02-29-2016, 02:11 PM   #50 | hamish

Quote:
Originally Posted by IXix View Post
Oh yeah, I forgot smiley face, sad face, winking face...
Option: use smileys to represent pitch (sad - low, happy - high)
02-29-2016, 03:26 PM   #51 | ijijn

Quote:
Originally Posted by hamish View Post
Option: use smileys to represent pitch (sad - low, happy - high)
Or detect an embarrassed wrong note while the others turn to stare menacingly.
02-29-2016, 04:47 PM   #52 | Icchan

I only ask that the community be able to create these presets for different VSTis, and that Cockos then add them to Reaper by default.

Right now there's not much that Cockos has added to Reaper from the community. It would of course mean some sort of license agreement, etc., but I'm sure it's possible.

That would lift the creation of presets and mappings from Cockos's shoulders; Reaper would get a huge bunch of them directly from users, and the best ones could be added to the default installer.
02-29-2016, 04:49 PM   #53 | ijijn

In other matters, I think the key point to creating MIDI events from notation, especially in the context of a DAW, is for such events to be unambiguous and unintrusive.

On one hand, you should be able to choose to ignore the markings completely: in this case they are simply there for human readability on the screen or (eventually) the printed page. On the other hand, the computer could take a more active role and crank out a steady stream of highly specific keyswitches, length modification, CCs and other funtimes. Both of these should ideally work side by side, on a per-channel basis.

As has been stated many times, every instrument has its own quirks and this will likely always be the case to some extent. Crescendo means different things to different instruments: CC1, CC7, CC11, aftertouch, a pre-recorded articulation activated with a keyswitch, or any combination of these. You may even wish to go beyond how the instrument was designed to be used, in order to exaggerate or soften a particular effect. Should pizzicato mean anything profound to a bass trombone in your current context? Maybe. Freedom of interpretation is paramount.

To my mind, text events (of some kind) make the most sense as a broker to get from blobs and scribbles on the page to pretty-sounding waveforms, as they can include as much information as necessary to get the job done and will go unnoticed if left alone. Power, flexibility and transparency. Yay.

But one thing we desperately need to make all this work is a consistent model for distinguishing between voice/staff-specific per-channel text events and system-specific per-track text events, so that the scope of instructions is kept to the right level. A con sord. marking in the first violins on channel 1 shouldn't send the second bassoon on channel 14 reaching for a spare sock. Generally speaking, per-channel makes the most sense as a default, unless you are using a one-instrument-per-track model or the instruction is meant to be universal: disco falls for everyone!

On a related note, adding a channel component to the notation text event structure itself would be a definite advantage. Perhaps a multi-channel event could simply broadcast the same message on multiple channels. Done.
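
Purely to illustrate the scoping idea, imaginary event lines might read (syntax invented on the spot):

Code:
TEXT technique "con sord."   chan=1     ; violins I only
TEXT technique "pizz."       chan=3,4   ; violas and celli together
TEXT technique "disco falls" chan=*     ; broadcast to every channel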

Following on from this, armed with the granularity of channel filter information, the notation view could even display the same event on multiple staves, rather than forcing the user to copy and paste. Perhaps events concerning all channels could appear at the top of the editor in nice big letters to show it's a global, while text for any subset of channels would appear in the usual places, or whatever other options people want to set, because Reaper is all about the options.

Finally, on a technical level, controlling the response per channel, or per track in some instances, by way of internal processing (with tweakable, curated presets) or external scripting is surely the most manageable approach to make everyone happy. Or me anyway. Aside from chasing, I had it working quite nicely via JSFX with a small subset of instructions* until a recent pre-release. Where did all the text events go?!

Thoughts, feelings, disagreements?

Thanks for reading, and love to you all.

ij

* give me some text events to hook into and a couple of weeks and I'll have a public version available for you
02-29-2016, 04:49 PM   #54 | hamish

Quote:
Originally Posted by ijijn View Post
Or detect an embarrassed wrong note while the others turn to stare menacingly.
Yes!!! Option: Bum note detection and shaming (requires smiley note heads on)
02-29-2016, 05:18 PM   #55 | reddiesel41264

Quote:
Originally Posted by Icchan View Post
I only ask that the community be able to create these presets for different VSTis, and that Cockos then add them to Reaper by default.

Right now there's not much that Cockos has added to Reaper from the community. It would of course mean some sort of license agreement, etc., but I'm sure it's possible.

That would lift the creation of presets and mappings from Cockos's shoulders; Reaper would get a huge bunch of them directly from users, and the best ones could be added to the default installer.
I don't think licensing would be an issue; it would be similar to users creating and sharing sound sets for Sibelius. I think it's a jolly good idea - especially since the devs don't have access to every single VI that everybody is using.
__________________
http://librewave.com - Freedom respecting instruments and effects
http://xtant-audio.com/ - Purveyor of fine sample libraries (and Kontakt scripting tutorials)
02-29-2016, 05:39 PM   #56 | ijijn

Quote:
Originally Posted by reddiesel41264 View Post
I don't think licensing would be an issue; it would be similar to users creating and sharing sound sets for Sibelius. I think it's a jolly good idea - especially since the devs don't have access to every single VI that everybody is using.
Absolutely, I think that's the golden ticket to success right there. Then it's a question of finding a visible and vaguely sensible means of sharing them while developing strategies to keep in step with evolving sound libraries and an improving infrastructure.
02-29-2016, 05:46 PM   #57 | peter5992

Quote:
Originally Posted by kerryg View Post
I've used a number of these tools: MidiPipe, ControllerMate, Johan Loonenga's Midi Harmonizer, and others. This is exactly what I don't want: an external translator app that has to be launched separately, a patch recalled, and a corresponding patch loaded in an external app. No thanks; warts and all, Logic's and Sibelius's existing implementations would still be better than this, and would have the great advantage of saving all of it inside one project, to be recalled at need.
I don't know; it was just a suggestion. I downloaded and installed the program, and it seems to reroute MIDI input from my various keyboards, which is not what I'm looking for. I thought it might be a go-between: Reaper sending MIDI messages from whatever is on a track, through this program, to whatever patch I choose to play. And if I could just save a template and load it up, that would be little extra effort.

Yes, I agree that the built-in sounds in Sibelius are easy to use - but here is the point: THEY GIVE SO LITTLE CONTROL OVER THE ULTIMATE OUTPUT. It is great for composition, but poor at best for production.
02-29-2016, 05:49 PM   #58 | peter5992

Quote:
Originally Posted by Icchan View Post
I only ask that the community be able to create these presets for different VSTis, and that Cockos then add them to Reaper by default.

Right now there's not much that Cockos has added to Reaper from the community. It would of course mean some sort of license agreement, etc., but I'm sure it's possible.

That would lift the creation of presets and mappings from Cockos's shoulders; Reaper would get a huge bunch of them directly from users, and the best ones could be added to the default installer.
Yes -- that's exactly what I'm talking about.

Delving into the depths of the various VSTs is well above and beyond what we can expect from the Cockos developers - too much work. We users gotta step up to the plate ourselves here.
02-29-2016, 07:14 PM   #59 | kerryg

Quote:
Originally Posted by peter5992 View Post
Yes, I agree that the built-in sounds in Sibelius are easy to use - but here is the point: THEY GIVE SO LITTLE CONTROL OVER THE ULTIMATE OUTPUT. It is great for composition, but poor at best for production.
I neglected to mention that I wasn't using Sibelius with its internal sounds but with NI Kontakt using the VSL Factory Patches, and using the Soundset Project's Kontakt 5 Factory Library mapping. http://www.soundsetproject.com/sound...ctory-library/

What they do with articulations is pretty cool - not 100% comprehensive, but quite useful. Point entirely taken about the lack of control over the final output; I'm hoping Reaper will do far better there.
03-01-2016, 07:04 AM   #60 | peter5992

Quote:
Originally Posted by kerryg View Post
I neglected to mention that I wasn't using Sibelius with its internal sounds but with NI Kontakt using the VSL Factory Patches, and using the Soundset Project's Kontakt 5 Factory Library mapping. http://www.soundsetproject.com/sound...ctory-library/

What they do with articulations is pretty cool - not 100% comprehensive, but quite useful. Point entirely taken about the lack of control over the final output; I'm hoping Reaper will do far better there.
Yeah, I am familiar with Jon's soundsets ... I have pretty much all of them for EastWest's VSTs, and Kontakt as well. He put a lot of effort into writing those. It's a step up from the default playback, but still a long way from what you might achieve if you put real effort into MIDI programming ... not to mention that setting it up is anything but intuitive, with lots of opportunity for things to go wrong (with all kinds of crazy effects, like the wrong instruments playing back, etc.). Right now I'm just using NotePerformer; it works pretty well and loads super fast.

I'm kind of hoping that with this Reaper notation editor we can bridge the gap between notation and playback, or at least make it a bit easier.
03-02-2016, 11:50 AM   #61 | kerryg

I think we might want to take particular note of external apps like ControllerMate, MidiPipe, MIDIHarmonizer and now TransMIDIfier that composers are turning to in order to solve problems like this and ask "what are they bringing to the game, what problems are they solving, that makes them compelling enough to overcome the extra complexity of using them as add-ons?" and think about what might be done to incorporate the same sort of functionality internally. This of course has much more far-reaching consequences than simply notation; it stands a chance of making the MIDI engine itself vastly more powerful.

What these external apps add is the ability to route and translate MIDI from one thing to another - basically the model for us might be the Environment in Logic, except with the addition of saveable presets that can be freely exported and swapped (this was often a whopping PITA with the environment). Basically such a layer would allow [any notation or MIDI gesture] mapped to [any notation or MIDI gesture] with flexibility around re-channelization, scaling of values etc. It would ideally be integrated into Reaper at a much more basic level than plugins, include an architecture to export and import settings and affect the notation editor as well.
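
For illustration only, such saveable, swappable mappings might read something like this (invented syntax, nobody's actual file format):

Code:
rule "string crescendo" : notation(cresc.)   -> cc(11, scale 40..110)
rule "trombone mute"    : text("con sord.")  -> keyswitch(C0)
rule "divisi split"     : note(ch=1, C3..B3) -> note(ch=5, transpose +12)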

This would be pretty ambitious, but it would allow for some extraordinary things, among them the capability to cover both percussion and articulation.
03-02-2016, 12:21 PM   #62 | mpb2016

Quote:
Originally Posted by kerryg View Post
I think we might want to take particular note of external apps like ControllerMate, MidiPipe, MIDIHarmonizer and now TransMIDIfier that composers are turning to in order to solve problems like this and ask "what are they bringing to the game, what problems are they solving, that makes them compelling enough to overcome the extra complexity of using them as add-ons?" and think about what might be done to incorporate the same sort of functionality internally.
Yes, I totally agree. Things like TransMIDIfier, BRSO Articulate, FlexRouter and all the others are basically trying to solve the lack of standardization among different VSTis for switching articulations, massaging CCs, etc., but ideally this should be possible right in the DAW, and that would be a huge jump forward on the MIDI side. And I think a lot of this functionality is almost already there in Reaper.

I can see the similarities between, for example, BRSO Articulate (http://www.syntheticorchestra.com/articulatereaper/) and the articulation selection menu in Reaper's notation editor. If it were possible to map articulations in a similar way to how it's done in BRSO, that would be great, IMHO.

03-02-2016, 02:59 PM   #63 | paaltio

Quote:
Originally Posted by schwa View Post
2. Linking articulation and dynamics to MIDI messages. For example, a staccato marking triggering a key switch, or a crescendo marking triggering CC volume messages.
This would be awesome!

Two things I should mention that weren't included in the example:

1) MIDI channel mapping would be great for non-keyswitching patches where you have to have articulations on different MIDI channels (e.g. being able to specify something like "staccato = change the note's MIDI channel to 3")

2) Supporting at least two different CC messages per articulation would be important, to support Vienna articulation matrices. Maybe it can just send all the messages if you specify multiple ones for the same articulation?
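
For what it's worth, point 2 can be mocked up today. Here's a minimal JSFX sketch that fires two CCs whenever an articulation is picked (the CC numbers and cell values are arbitrary placeholders, not VSL's actual defaults):

Code:
desc:Articulation to two CCs (Vienna-style matrix, sketch)

slider1:0<0,2,1{Legato,Staccato,Tremolo}>Articulation
slider2:20<0,127,1>Matrix X-axis CC number (placeholder)
slider3:21<0,127,1>Matrix Y-axis CC number (placeholder)

@slider
// arbitrary placeholder coordinates for each articulation's matrix cell
x = slider1 == 0 ? 0 : (slider1 == 1 ? 64 : 127);
y = slider1 == 2 ? 64 : 0;
queued = 1; // send on the next audio block

@block
queued ? (
  midisend(0, 0xB0, slider2, x); // column-select CC, channel 1
  midisend(0, 0xB0, slider3, y); // row-select CC, channel 1
  queued = 0;
);
// incoming MIDI passes through untouched because we never call midirecv()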
03-02-2016, 04:30 PM   #64 | peter5992

Quote:
Originally Posted by kerryg View Post
I think we might want to take particular note of external apps like ControllerMate, MidiPipe, MIDIHarmonizer and now TransMIDIfier that composers are turning to in order to solve problems like this ... [snip]
I love that idea - I wish a day had 48 hours so I could jump into it right away. Yes, we should definitely swap notes. If we set this up right it would be a killer, and serve us well for years to come ....
03-02-2016, 05:44 PM   #65 | ijijn

Quote:
Originally Posted by paaltio View Post
This would be awesome!

Two things I should mention that weren't included in the example:

1) MIDI channel mapping would be great for non-keyswitching patches where you have to have articulations on different MIDI channels (e.g. being able to specify something like "staccato = change the note's MIDI channel to 3")

2) Supporting at least two different CC messages per articulation would be important, to support Vienna articulation matrices. Maybe it can just send all the messages if you specify multiple ones for the same articulation?
Hey gang,

You can do a lot of similar stuff already via set-and-forget JSFX plugins. Case 2 is especially easy using VI Sculpt and VI Officer together and is exactly what they were designed for. Links are in my signature if you'd like to try.

VI Officer was originally inspired by VSL's matrices and is a 6x6 grid that creates an output stream from two input CCs. I typically use length and overlap, which are output options from VI Sculpt; this whole system could easily be adapted to take advantage of any notated articulation hooks should they arise. This new output can take the form of (latching or non-latching) keyswitches, a CC with varying values, or a range of CCs producing proportionally-variable values. These behaviours can be filtered by channel, so by using multiple instances you can cover a range of different options, all on the same track. Another advantage of using JSFX over any other system I know of becomes evident when dealing with pre-recorded material: via PDC it knows what's coming up next, so can adapt perfectly to every situation. It's almost like having ARA or direct access to the editor data.
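
To give the flavour of it, here's a stripped-down sketch of the grid idea (just the gist, not VI Officer's actual code): two incoming CCs are quantized into six zones each, and the resulting cell is emitted as a keyswitch:

Code:
desc:Two CCs to a 6x6 grid keyswitch (sketch)

slider1:20<0,127,1>Input CC A, column (placeholder)
slider2:21<0,127,1>Input CC B, row (placeholder)
slider3:24<0,127,1>Keyswitch base note (placeholder: C1 = 24)

@init
ccA = 0; ccB = 0; lastCell = -1;

@block
while (midirecv(offset, msg1, msg2, msg3)) (
  (msg1 & 0xF0) == 0xB0 && msg2 == slider1 ? ( ccA = msg3 );
  (msg1 & 0xF0) == 0xB0 && msg2 == slider2 ? ( ccB = msg3 );
  midisend(offset, msg1, msg2, msg3); // pass everything through

  cell = floor(ccA / 22) * 6 + floor(ccB / 22); // six zones per axis (0..5)
  cell != lastCell ? (
    midisend(offset, 0x90, slider3 + cell, 100); // keyswitch note-on
    midisend(offset, 0x80, slider3 + cell, 0);   // and its note-off
    lastCell = cell;
  );
);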

Speaking of case 1, this is also easily solved with simple translation and routing plugins, with the added benefit that you can still host other instruments on the same track that would otherwise use all of the remaining channels. It's a very modest task to implement this and once we have a model for how these notation events manifest, I'll be getting to work on hashing out a solution to share with y'all.
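
To make that concrete, here's a bare-bones JSFX sketch of such a translation plugin (the keyswitch pitch and target channel are arbitrary placeholders, and the keyswitch is assumed to arrive before the notes it should affect):

Code:
desc:Keyswitch to channel remapper (sketch)

slider1:24<0,127,1>Keyswitch pitch (placeholder: C1 = 24)
slider2:3<1,16,1>Target channel once the keyswitch is seen

@init
armed = 0; // until armed, notes stay on their original channel

@block
while (midirecv(offset, msg1, msg2, msg3)) (
  status = msg1 & 0xF0;
  status == 0x90 && msg3 > 0 && msg2 == slider1 ? (
    armed = 1; // swallow the keyswitch itself and start re-channeling
  ) : (
    armed && (status == 0x90 || status == 0x80) ? (
      msg1 = status | (slider2 - 1); // move note on/off to the target channel
    );
    midisend(offset, msg1, msg2, msg3);
  );
);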

When it comes to programming virtual instruments, if you expect the central DAW itself to do everything you want perfectly then a) you're going to be disappointed that it doesn't quite cover all of your (often wacky) use cases, or b) you're going to be waiting an extremely long time, then refer to scenario a). Besides, the JSFX platform is a part of Reaper, so the way I see it, actually putting it to good use is a flexible, native solution to the problem and why Justin included it in the first place. And continuing to speak personally, this is the prime reason why I use Reaper exclusively for such tasks. It's a happy accident that it also fits my workflow style perfectly.

I understand the mindset though. I really do. I like to keep things clean, integrated and focused. I never play unofficial mods of games. I respect the canon. I don't like installing a string of messy dependencies just to get an application to run. But sometimes we need to feel our way a little outside our emotional comfort zones, especially when it concerns the inherent modularity required for architecting complex solutions to complex problems, and super-especially when it would enrich our creative lives. We can still tuck the wires away when we're done. Nobody else will know, and maybe you'll even begin to forget in time. Meanwhile we can incorporate more and more built-in features to streamline the process.

Schwa is doing such a fabulous job with this evolving notation editor and with just a little vision and planning from the wider community, we can take the ball from the slavering maw of our neutrally-vowelled sock puppet master and run with it. I'll certainly be putting in the hours.

Finally, there are revamps for both of these plugins in the works. There are a lot of sliders in VI Sculpt but please don't let that put you off. That's just to give you extra control when you really need it. I generally only need to change a handful of sliders to set up any given instrument. There's a GUI version coming soon anyway, which should help matters too.

If you give it a go and have troubles or any other feedback, I'm more than happy to have a chat about it. Good luck to you all.

Sorry if this comes off as rather long-winded and rantish. I'm just (maybe a little too) passionate about this topic, as it's a major focus for me.

Let's get cooking!
ij
03-02-2016, 05:58 PM   #66 | kerryg

I don't want to propose that we succumb to "mission creep" and expand a simple request - how to display percussion notation - into something so huge that it'd divert significant resources from the many tasks on the devs' plates. But I thought this might be a good time to step back and ask if there's a way to kill all the birds with one stone. Schwa, it's your thread: is this crazy talk?
03-02-2016, 06:07 PM   #67 | hamish

So true. A default GM percussion map and an elementary link of dynamics to velocities vs the Logic Environment or Cubase expression maps: it'll be a long journey.

I see a lot more portable pre-releases on my desktop...
03-02-2016, 06:53 PM   #68 | schwa (Administrator)

The first step will be simply expanding the existing MIDI note name map to support mapping pitches (so C0 displays as E4 or whatever) and notation glyphs (so C0 displays as an X or triangle or whatever).
03-02-2016, 11:37 PM   #69 | paaltio

Quote:
Originally Posted by ijijn View Post
You can do a lot of similar stuff already via set-and-forget JSFX plugins. Case 2 is especially easy using VI Sculpt and VI Officer together and is exactly what they were designed for. Links are in my signature if you'd like to try.
I'm already using my own https://stash.reaper.fm/v/18047/artic...apper_v0.1.zip

Built-in will always be better.
03-03-2016, 03:30 AM   #70 | mpb2016

Quote:
Originally Posted by schwa View Post
The first step will be simply expanding the existing MIDI note name map to support mapping pitches (so C0 displays as E4 or whatever) and notation glyphs (so C0 displays as an X or triangle or whatever).

I was thinking more long-term: it would push Reaper ahead of other, less customizable DAWs that don't have notation or any good way of handling articulations. Expression maps were a huge deal for a lot of people when they appeared in Cubase, but a variety of more or less difficult-to-understand external solutions already exists, so I guess the same thing can already be accomplished in Reaper.

Personally, I'm happy just to have a notation view at all. Not being a coder, I understand it's probably an overwhelming task to implement major changes like that; I just got excited by the speed with which notation was introduced and saw further possibilities in it.

Thanks for your excellent work!
03-03-2016, 04:51 AM   #71 | ijijn

Quote:
Originally Posted by paaltio View Post
I'm already using my own https://stash.reaper.fm/v/18047/artic...apper_v0.1.zip

Nice one! Kind of reminiscent of this old chestnut of mine, but with a file-reading twist.

Isn't that almost the opposite of what you wanted here though?

Surely the new approach would look something like:
  1. You notate an articulation in Reaper's score view
  2. Reaper propagates this change, either as a change in length, some sort of "1:staccato" type message, or a combination of such things
  3. The instrument somehow responds appropriately, so there's some suitable translation going on somewhere, either within Reaper's midi pre-processor (for want of a better term) or via plugins
This is well within the realms of possibility now. Then, as an added bonus, you only need the one channel per instrument.

Quote:
Originally Posted by paaltio View Post
Built-in will always be better.
Hmm... maybe, whatever "better" means in this context, but built-in (in the strictest sense) will also always be a compromise on some level. Reaper is quite hackable via extensions in addition to the various scripting options, so isn't the appearance of "nativity" and overall quality/usability the most important thing?

I certainly agree that having integrated access to a wider variety of common tasks out of the box would be nice. Of course it would. But for dealing with the mind-boggling multitude of options out there, and especially in this early development phase, yada^3...

Unless I misinterpreted schwa's focus, which I thought was fairly clear, this thread was intended as a place to discuss data structures and basic approaches to deal with core tasks in a fairly elegant and future-proof, extensible way, rather than a bombardment of wishlist-style features that can be achieved right now with fairly little effort and a modicum of judicious "externalism". And further to that point, I would say that something I'm using right now to do an important job is "better" than something that doesn't exist yet, not only because it's helpful for actually getting things done but also because having access to these features with very low stakes in an open, flexible testing ground can be extremely useful for developmental purposes before setting things in stone later on; this sort of thing can potentially be beneficial for scripters, for Reaper as a host, and for the many fine sample library folks out there. And my sincerest apologies to all if I've added more noise to the signal than I meant to.

Further thoughts on data structures

Schwa, regarding the topic of options for mapping data back and forth, I would simply encourage you to keep following your nose with a sensible and strongly identifiable brokering layer structure that can then be easily interpreted by processes both internal and external alike, and, most importantly and I'm extremely eager to see this, for there to be some elementary per channel functionality options in terms of both display and performance directions.

Also, and maybe this could provide some amount of further inspiration, I've been toying with my own take on UACC as a logical hub for articulation switching, but using CC0 and CC32 together as a 14-bit value instead of a regular 7-bit CC32. That's probably not quite enough data for comprehensive coverage of everything you can do in music(!), but it's a start. Here's some of my thoughts behind the process...

Many common articulations can be expressed simply and conveniently as bit flags. An instrument can typically be muted or not, or in the case of brass, for example, different mutes are potentially available. So, one interpretation of "mute-ness" could take up 2 bits: 0 = none, 1 = regular con sord. or straight mute, 2 = Harmon mute and 3 = miscellaneous mute du jour. Or you could simply model con/senza sord. using 1 bit, depending on how detailed you want to be with it, but going this route would no doubt cause jazz-oriented libraries to suffer.

Tremolo is typically another 1-bit setting: you're doing it or you're not. But then do we model measured tremolo (of different lengths) vs scrubbing, and maybe add some more bits? Or could we handle measured tremolo in a different way by using those shorter actual note values as secondary data? In which case, 1 bit for generic, messy and/or ethereal scrubbing would be fine, along with some other wizardry for note subdivisions.

Note length/articulation is another multi-bit setting, incorporating:
  • staccatissimo/staccato (probably mutually exclusive, therefore different values within a single bit group)
  • tenuto lines (which would need to work both with or without the staccato dots)
  • slurs (picking this up for notes that are part of a slurred group, and ideally also the number of slurs deep, to disambiguate between phrase markings and bowings*, for example, and to inform the interpretation of other articulation markings), as well as
  • accents of varying types, etc.
Then there are trills: the bit value could correspond to the interval played: 0 = no trill, 1 = minor 2nd, ... up to whatever you need. Violins routinely go up to perfect 4th trills in many libraries and you would need a value of 5 to model this, so maybe allow 3 bits for this, which gives some headroom up to a perfect 5th.
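
To make the packing concrete, here's a minimal JSFX sketch (this particular bit layout is just one possible allocation, dreamed up for the example, not a proposal set in stone):

Code:
desc:Pack articulation flags into a CC0/CC32 pair (sketch)

// toy 14-bit layout:
//   bits 0-1 : mute (0 none, 1 straight/con sord., 2 Harmon, 3 other)
//   bit  2   : unmeasured tremolo on/off
//   bits 3-5 : length group (0 normal, 1 staccato, 2 staccatissimo, 3 tenuto, ...)
//   bits 6-8 : trill interval in semitones (0 = no trill, 7 = perfect 5th)

@init
function send_artic(chan, mute, trem, len, trill) local(v) (
  v = (mute & 3) + (trem & 1) * 4 + (len & 7) * 8 + (trill & 7) * 64;
  midisend(0, 0xB0 + chan, 0, floor(v / 128)); // CC0  = MSB
  midisend(0, 0xB0 + chan, 32, v % 128);       // CC32 = LSB
);
queued = 1;

@block
queued ? (
  // example: channel 1 (index 0), con sord., staccato, minor-3rd trill
  send_artic(0, 1, 0, 1, 3);
  queued = 0;
);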

Even if the instruments themselves can't understand this format, and obviously at this point in time none of them would, such an approach could serve as an intermediate ground after which it could be translated in some way, internally or externally, as desired. Then when interfacing with actual instruments, we can simply build up a database of available articulations (I have a plugin that detects this information for UACC-enabled instruments and could be retasked for such a job) and then find a best fit solution for any given articulation scenario.

In conclusion, I would find either the text-based or multi-bit internal interpretation of the notational input extremely useful. One could easily be converted to the other in most cases. I do feel that in this unfolding drama, text items probably have the edge as the ultimate source in that they can be assigned arbitrarily and thus have an infinite number of possibilities, but perhaps a secondary conversion process to bit flags could be useful somewhere in the pipeline. In addition, text events have the further advantage that they can be ignored most easily, for instant back-compatibility or if you'd like to turn off all tweaking of data at the notation end.

Oh yes, having distinct text event types, or even the same internal type with different header information, would be another step forward. Distinguishing expression text (mf, dolce) from technique text (pizz.) from lyrics and so on could assist the layout engine in deciding where to put things and also give processes more context to choose what to do with the information. Lyrics would be amazing to have for integration with choir libraries. A plugin (yes, I know...) could easily read the lyric strings and set things accordingly: I have about half a dozen different word-enabled choral offerings and will be testing this out on them soon.

One more thing: noteheads, articulation markings and other note-event specific gems could be represented as accompanying messages sent quasi-simultaneously (just prior) or transmit themselves parasitically inside the host message (possibly as something like a midi_recv with offset, msg1, msg2, msg3 AND msgArgs[]). I suppose consistency and coverage are the most important considerations here, along with fitting nicely into existing paradigms.

Anyway, those are my latest ideas on the subject. As always, looking forward to the next tasty pre-release.

All the best,
ij

*or these could be declared specifically within the editor

03-09-2016, 03:43 PM   #72 | hamish

Given that the note map has been deferred until after 5.20, and given that we have had a sneak peek with pre16, maybe now would be a good time to discuss any ideas it has raised.

P9 had this to say:

Quote:
Originally Posted by planetnine View Post
If you insert a note on the percussion stave on a position that two or more MIDI notes are mapped to, it would default to one of them, and vertical note movement with the mouse or numpad keys would scroll through the other mapped MIDI articulations (eg open/half/closed hihat). That's not rocket science to create from a mapping table.

It might get slightly more involved if the mapping source was a MIDI trigger articulation and a CC04 value was needed for the degree of HH "openness", but it's not insurmountable. The UX logic just needs to be thrashed out to make it workflow-friendly (the trigger is the HH note from electronic kits that uses the plate hit and pedal CC04 combined to determine the HH sound).
In addition to the snare and HH examples, there are quite a lot of kit libraries that have multi-note toms as well, i.e. L and R hands.

In this example we wouldn't attach a different note head, and probably don't even need any sticking indication, but the notation editor will need to be directed to send the correct MIDI note number.

How could that work? I guess with a connected articulation sign (L.H. or R.H.) that is optionally visible.

03-09-2016, 04:59 PM   #73 | ijijn

Quote:
Originally Posted by hamish View Post
Given that the note map has been deferred until after 5.20, and given that we have had a sneak peek with pre16, maybe now would be a good time to discuss any ideas it has raised.

P9 had this to say: ...

In addition to the snare and HH examples, there are quite a lot of kit libraries that have multi-note toms as well, i.e. L and R hands.

In this example we wouldn't attach a different note head, and probably don't even need any sticking indication, but the notation editor will need to be directed to send the correct MIDI note number.

How could that work? I guess with a connected articulation sign (L.H. or R.H.) that is optionally visible.
Yes indeed, these are certainly all really interesting thoughts.

As I see it, success in this area boils down to a clean separation between how it looks (noteheads, articulations, "handedness", etc.) and what it means, in terms of keyswitches, CCs, remapped pitches...

Once the appearance side is taken care of, via a uniform, predictable, tappable and easily extensible system, then it's a matter of plumbing the appearance to the meaning, which can be done via cunning internal approaches of Cockos's devising and/or external plugins of various kinds from all manner of sources for all manner of situations, which gives us the flexibility to get things done in the way that's best for each of us. Having these distinct layers is the key. Does that make sense to everyone? Or anyone?

Regarding bidirectionality, which is definitely a great idea, there could be an action/button/whatever to map all/selected notes back to the notation once a relationship is set up. Any conflicts or ambiguity could be resolved manually by choosing from the established options, although I don't imagine this would happen very often except for some very convoluted routing.

This process would be especially useful if you've been working with your notes for a while already in their official (extreme) pitch positions and would like to tidy up the notation, rather than starting from scratch in a new project, or as part of the process of auditioning sounds and deciding on suitable maps for them after the fact.
03-19-2016, 08:04 PM   #74 | ijijn

Okay team, here's a quick update...

As for how things stand in the notation camp on flexibility and extensibility, the good news is that the per-voice/voice-group staff display option is basically doable without changing anything under the hood: it's essentially a visual re-shuffling of the data and could be added later when the time is ripe. Such a view would be necessary before I could personally consider using the notation editor in a big way, and I would be hard pressed to overstate how keen I am to do that.

The reason for my reluctance is that even a modestly busy and fairly well-behaved multi-channel track gives the misleading surface impression of a Xenakis masterwork, largely on account of the fairly wide note ranges and rhythmic independence. Here's a sample track with 10 (primarily monophonic) channels on the go; even after assigning voices it still looks rather scribbly (there are really only 2 voices per staff to go around in terms of the visuals)...

[screenshot]

whereas being able to drill down into channel/voice mode and explode these lines into their own staves, without having to resort to separate tracks, would be simply glorious.


The slightly unfortunate news is that the clef system and general staff-ness properties would most likely need a slight overhaul to facilitate various scenarios (I outlined some possible solutions in the discussion of voice groups) and some of the various notation text events would require a subtly different (or at least elaborated) internal structure to mirror their context: a ppp marking on a staff that represents voices 5-6 of channel 1, for instance, should reflect as much in its parameters, so that it doesn't bleed into any of the other data streams and muddy the waters.

A simple NOTA dynamic ppp <channel/s> <voice/s> ypos... would do it, or NOTE <channel> <pitch> dynamic ppp... for a note-snapped event, and something like -1 could work as a placeholder meaning "all" whenever there is a genuinely global event that needs it. Then it would just be a question of slotting those values into the mix based on context. Plus other largely cosmetic but very useful things such as creating and storing a staff name in the absence of using the track name, probably via a simple text box.
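
Laid out as concrete (entirely invented) event lines, that might read:

Code:
NOTA dynamic ppp chan=1 voice=5,6 ypos=...
NOTE chan=1 pitch=64 dynamic ppp
NOTA dynamic fff chan=-1 voice=-1   ; -1 = all: a genuinely global event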

Bonus points are awarded for implementing linked and unlinked options within a group, which would immediately allow dynamic separation within the staff, so you could have pp for "voice 1" and mf for "voice 2", for example.


I'm still a little at a loss as to why, in the current implementation, voices are not being tapped for their incredible potential as sub-channel properties but rather as another sub-track paradigm, which is what channels are anyway, don't you think?

Separating lines visually by channel would have made just as much sense in the absence of voices, so voices don't actually add anything meaningful to the mix as it stands aside from specificity of up/down-ness in appearance. These channels could have spun out into more staves when their number increased past 2, 4, etc. with odd channels stem up and even channels stem down. By comparison, the use of voices in the current setup limits the options (only 2-3 rather than 16) more than it contributes to them.

I don't actually suggest that we use channels for this purpose, but I do suggest that we carefully consider the role of voices as yet another resource in the musical chain. There is a missed opportunity in the hierarchy here: if we think of voices within channels within tracks then our creative options are widened immensely and there is a less confusing sense of how everything fits together.

I'm holding out hope that this can be addressed at some point before the features are set in stone. It would be a terrible shame if such an amazingly powerful tool were not to see the light of day.
03-20-2016, 03:05 AM   #75 | planetnine

Quote:
Originally Posted by ijijn View Post
Yes indeed, these are certainly all really interesting thoughts.

[snip]

The bidirectional mapping thing just offers multiple articulations/source notes for each mapped stave position. It only splits one way (i.e. stave to source notes; no multiple stave positions for one note); the bidirectional description is only due to the fact that you set it up one way (raw note source to mapped stave) and the mapping is used in both directions (source to stave for positioning and appearance, and stave to source for editing or additions).

Without some form of mapping bidirectionality, you would only be able to edit the source in piano-roll or in Alt-4 with the mapping switched off and the notes in their raw positions. The mapping bidirectionality allows editing in the mapped state.

I agree that the mapping needs to be based on the source articulation or its channel and note, and the note head etc. should just be a result of the way that articulation/note position has been set up by the user. What it means is set by the MIDI source note and channel; what it looks like is set by the stave mapping (in fact, it would be good if any handedness lettering and suchlike could be set in addition to the note head in the mapping table).

For HH triggers and CC values, the CC would have to be in ranges, and each range would be a separate entry in the mapping table (eg. for closed/half/open HH). Complications would be issues like making sure there are no range overlaps.
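
For instance, entries in such a table might read like this (pure invention, just to make the range idea concrete):

Code:
note 46 + CC04 0-31     ->  D5, head "X"         ; closed hi-hat
note 46 + CC04 32-95    ->  D5, head "circle-X"  ; half-open hi-hat
note 46 + CC04 96-127   ->  D5, head "O"         ; open hi-hat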


I think we are thinking along similar lines...



__________________
Nathan, Lincoln, UK. | Item Marker Tool. (happily retired) | Source Time Position Tool. | CD Track Marker Tool. | Timer Recording Tool. | dB marks on MCP faders FR.

03-20-2016, 04:21 AM   #76 | mschnell

Quote:
Originally Posted by kerryg View Post
This is exactly what I don't want: an external translator app that has to be launched separately
Maybe a VST version of the thingy is available?!

(I did some of this stuff using JSFX programming: you can do everything you can imagine, but it does need some coding skills...)

-Michael
03-20-2016, 08:17 AM   #77 | memyselfandus

Quote:
Originally Posted by ijijn View Post
Okay team, here's a quick update...

[snip]
Great work!!! Very good. Are there any feature losses that might come with your suggestions?
03-20-2016, 10:24 AM   #78 | memyselfandus

Add the ability to create/load a custom tuning per track and microtonal tuning is pretty much done; the GUI isn't as important as the function. Please look at this:

[images]

ALSO: font loading for text.

[images]

PLEASE PLEASE PLEASE have a look at making these functions possible. Reaper is already microtonal-friendly:

- Extremely flexible MIDI and audio routing. No built-in softsynths except ReaSynth. No problems with sysex or multiple MIDI channels.

- The QWERTY keyboard can be used as a MIDI keyboard, like in most DAWs. But in Reaper, you can assign any MIDI note to any key, so you aren't stuck with 7 white and 5 black keys. Here's how: https://stash.reaper.fm/v/8772/reaper-vkbmap.txt

- The appearance of the piano roll in the MIDI editor can be completely customized, e.g. more than 12 notes per octave.

Other DAWs are behind Reaper with this stuff.
03-20-2016, 12:27 PM   #79 | ijijn

Quote:
Originally Posted by memyselfandus View Post
Great work!!! Very good. Are there any feature losses that might come with your suggestions?
Thanks youyourselfandyouguys! I'll try to digest your recent suggestions shortly. For a start, I imagine that for microtonality, which is a fascinating area and one I should explore more fully, additional accidentals paired with text meta-events, as with everything else so far, would be a solid foundation to build on.

Hmmm, as for feature losses, I can't think of any, but I'm certainly keen to hear if there's something I'm missing. The intention is for it to be an elegant superset of features around what exists already.

From a display angle, it's largely an exercise in hiding/combining staves to keep things sane. If we start with the assumption that we have access to 16 channels with 16 voices per channel on a single track, say, then this gives us 256 voices up our sleeves. We probably don't want all 256 (empty) voices to show up all the time, each on its own staff: that would be very silly.

Perhaps by default the notation editor could show "voice group 1" (usually voices 1 & 2) of channel 1, and then when we add notes and change their channel/voice numbers, more staves could spring up to accommodate. This way it's always very tidy while keeping everything accessible.

This architecture would require the grouping of voices into staves (defaulting to pairs, I'd imagine, but an override later would be useful) and staves into instruments: one staff for predominantly melodic instruments, but two or more for keyboard instruments, harp, etc. (using a brace, and what lovely braces they are after that recent update) or with a bracket/other connecting strategy for a related group of instruments, or ossia, or similar. Then clef events and sneaky non-global key signatures (and maybe even time signatures, that would be amazing to see at some stage) could be a per-staff thing, which reinforces the way notation is typically used.

Some of you may be wondering about the interaction of voices with things like CCs. Being a light traveller, I always generate my own CCs on the fly after routing voices to buses, so it doesn't concern me directly, but I suppose having per-voice CCs (via those text meta-events again, why mess with perfection?) would be another helpful step to shape each voice as you like, as well as providing a suitable link between visible notation and audible result when it comes to interpreting the notation in meaningful ways.

Finally, if you don't want to use voices at all, each channel still has the per-staff grunt it needs. Ditto for one line per track. It just scales up or down as needed.

03-20-2016, 05:09 PM   #80 | memyselfandus

Quote:
Originally Posted by ijijn View Post
Thanks youyourselfandyouguys! I'll try to digest your recent suggestions shortly. For a start, I imagine that for microtonality, which is a fascinating area and one I should explore more fully, additional accidentals paired with text meta-events, as with everything else so far, would be a solid foundation to build on.

[snip]
I love that you are here, ijijn, and I love reading your awesome posts. Great ideas!