The Big Bundles... (A question about VST effects/mixing bundles)

Yeah it’s the same thing.

You can set the loudness for Spotify etc.

Essentially making the Christmas tree into a sausage.

Don’t do everything in the limiter though. I usually have some saturation and compression as well.
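Just to make that chain order concrete outside any particular plugin, here's a minimal sketch using Spotify's open-source pedalboard library rather than FF or Ozone: a little saturation, then gentle compression, then the limiter last. File names and settings are placeholders, and the roughly -14 LUFS figure is just the commonly quoted Spotify normalization target.

```python
# Minimal mastering-chain sketch with the open-source "pedalboard" library.
# Not the FF/Ozone workflow itself, just the same idea:
# saturation -> compression -> limiter at the very end.
from pedalboard import Pedalboard, Distortion, Compressor, Limiter
from pedalboard.io import AudioFile

chain = Pedalboard([
    Distortion(drive_db=3.0),                      # gentle saturation
    Compressor(threshold_db=-18.0, ratio=2.0,
               attack_ms=10.0, release_ms=150.0),  # glue compression
    Limiter(threshold_db=-1.0, release_ms=100.0),  # catch the peaks last
])

with AudioFile("mix.wav") as f:                    # placeholder input file
    audio = f.read(f.frames)
    samplerate = f.samplerate

processed = chain(audio, samplerate)

with AudioFile("master.wav", "w", samplerate, processed.shape[0]) as f:
    f.write(processed)

# Spotify and co. will still normalize the track to roughly -14 LUFS,
# so there is little to gain from squashing it much harder than this.
```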


Yeah, I have to be way more mindful about how I utilize FF.

I take it that you set up the signal chain with the limiter at the end. Or on the master bus.

I’m definitely going to watch some of the vids to get trained.

How do you feel about setting saturation and some effects, then printing it to audio?

I would leave compression and limiting alone since those are active processes.

And thank you for all the help @monquixote and @alechko.

And thank you @CCMP for pushing me to get the full MacOS versions of FF.

It’s getting hard to not consider myself an actual musician with the amount of tools I own and know how to use to make music.


Fav comment on this forum since I joined :rofl:


There are two options:

Mix the track and export, and then do a separate mastering session on the bounced WAV.

Do everything in one and put the mastering chain on the master bus of your project.

I do the latter but I’m a beginner as well so I’m not saying it’s the right way.

This is how I do it. But I do little bounces in place when using multiple instances of the same instrument or plug-in.

The M1 MBP is a beast, but something happens in the render, especially with non-optimized stuff like Animoog Z. So, out of an abundance of caution, I try to commit in small steps as much as possible, and leave mastering to the master bus when it was a one-shot like Ozone.

Now that we’re having this conversation, I think that since it’s now a more involved process, and a bit more complicated with FF being broken up into a separate limiter and compressor, a final mixdown and bounce will be better.

Even with Pro-Q 3, unless I’m really shaping the sound, I think I may EQ on the final mixdown as well, just so I don’t have multiple EQs fighting each other.

My dumbass might be cutting and boosting the same frequency over multiple instances, wondering why I’m getting mud.

Throw an instance of Pro-Q 3 on the channels you’re having trouble with and use the built-in analyzer to see the clashes. It’s one of the best features ever.

It automatically detects other instances, and they show up under the Analyzer button. It colors the clashing frequencies in red, and it’s very easy to see.
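If you ever want to eyeball the same thing outside the plugin, here's a rough Python sketch of the general idea (this is not how Pro-Q 3 does it internally): take the spectrum of two stems and flag the bands where both carry a lot of energy. The file names and the 20 dB threshold are placeholders.

```python
# Rough sketch of spotting overlapping energy between two stems, i.e. the
# kind of clash Pro-Q 3's analyzer highlights (not its actual algorithm).
# "kick.wav" and "bass.wav" are placeholders; same sample rate assumed.
import numpy as np
import soundfile as sf
from scipy.signal import welch

def spectrum_db(path, nperseg=4096):
    audio, sr = sf.read(path)
    if audio.ndim > 1:                   # fold stereo to mono
        audio = audio.mean(axis=1)
    freqs, power = welch(audio, fs=sr, nperseg=nperseg)
    return freqs, 10 * np.log10(power + 1e-12)

freqs, kick_db = spectrum_db("kick.wav")
_, bass_db = spectrum_db("bass.wav")

# Call a bin a "clash" if both tracks sit within 20 dB of their own peak there.
clash = (kick_db > kick_db.max() - 20) & (bass_db > bass_db.max() - 20)
if clash.any():
    print(f"Shared energy roughly {freqs[clash].min():.0f}-{freqs[clash].max():.0f} Hz")
else:
    print("No significant overlap found")
```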


Oh shit. No way. Do they sum together or just throw a warning?

Meaning do they work together?

It’s only an analyzer, so it only shows the clashes, but it makes it very easy to create counter cuts. The stronger the clash, the more red it becomes. Super easy to use, just try it. You can also create dynamic ducking or boosting based on a sidechain or the frequency response.


Word.

I’m excited to get to it now.

I just got home. So after I get some tracks started, I can start digging in.

Those vids are really going to help.


I will say, though, I tend to learn more by fulfilling a need. So hopefully some problems creep up that help me apply the solutions at hand!

Whoa. First impressions.

Pro Q is WAYYYY better than Ozone’s EQ.

It’s more reactive and feels like it really, really adjusts the frequencies, not just in a kinda ham-fisted way.

This is night and day.

And it just sounds better.

Better than Soundtoys as well. Just my opinion, and I’ve only touched Pro-Q.


“One of us! One of us…”


Might that be a bit overly complex?

I pretty much never do that, and I’ve got a shitty mid-range Windows laptop.
That said, I can’t use Pigments as it wrecks my laptop.


This is actually what I mean… I was using a bunch of Pigments instances for the last DnB challenge… It was for the video game theme… and every time I needed another instance of Pigments, I bounced the old one in place…

Animoog as well, because it’s got these orbs, and they behave weird even after you stop pressing the keys… It’s not just an animation, it’s actually representing something happening under the hood… If one orb is still on a path, it messes up the beginning attack of whatever sound it usually makes.

I actually think committing to audio (printing FX) is a key step towards progress. Treat everything like a recorded instrument.

You can also do unlimited saves, so you can use incremental numbering as you go; it’s not like you’ve lost anything forever.
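If the host doesn't number the saves for you, a tiny script can handle the copy-and-increment part. This is just a sketch, assuming the project is a single file that is safe to copy; "MyTrack.als" is a placeholder name.

```python
# Tiny incremental-save helper: copies MyTrack.als to MyTrack_001.als,
# MyTrack_002.als, ... using the next free number.
# The project name is a placeholder and the project is assumed to be one file.
import shutil
from pathlib import Path

def save_increment(project_path: str) -> Path:
    src = Path(project_path)
    n = 1
    while True:
        dst = src.with_name(f"{src.stem}_{n:03d}{src.suffix}")
        if not dst.exists():
            break
        n += 1
    shutil.copy2(src, dst)
    return dst

print(save_increment("MyTrack.als"))   # e.g. MyTrack_003.als
```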


I like tweaking stuff as I go so I think I’d find it super annoying to keep dropping to audio, but maybe I should try it.

The only time I EQ things on the go is usually to subtract things out that are not needed. I rarely boost unless I’m in the mixing stages.

The exception to this is if I’m using the EQ to create a lo-fi effect. Maybe I want my drums to sound like an old recording, and I’ll use EQ to quickly shape them up: a boost with some heavy cuts.
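Written out as a chain, that "boost plus heavy cuts" move looks something like the sketch below, again using pedalboard rather than any specific plugin; the file name and exact frequencies are placeholders, not a recipe.

```python
# Quick "old recording" EQ sketch: heavy cuts at both ends plus a mid boost.
# File name and frequencies are placeholders.
from pedalboard import Pedalboard, HighpassFilter, LowpassFilter, PeakFilter
from pedalboard.io import AudioFile

lofi_eq = Pedalboard([
    HighpassFilter(cutoff_frequency_hz=200.0),                   # cut the lows
    LowpassFilter(cutoff_frequency_hz=4000.0),                   # cut the highs
    PeakFilter(cutoff_frequency_hz=1200.0, gain_db=4.0, q=1.0),  # boxy mid boost
])

with AudioFile("drums.wav") as f:
    drums = f.read(f.frames)
    sr = f.samplerate

with AudioFile("drums_lofi.wav", "w", sr, drums.shape[0]) as f:
    f.write(lofi_eq(drums, sr))
```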


Seriously try it. It’s freeing.

I would try it on something new though. But save/number/save/etc…

EDIT: Usually hosts have a way to add notes. I make a habit of writing in the instrument and, if I’m using a preset, writing in the preset name. I do this in Live.

There’s an additional benefit, and that’s compatibility. You never know when you might decide to revisit something, and that might be 10 or even 15 years later. In that time maybe you switched from Apple to Windows, or maybe there was a major processor change and you find your current plugins don’t load in old projects.

Another benefit is that you’ve also just created a massive audio library. Maybe you hit a dead-end on what you started, but down the road you need something or even just inspiration and now you have old recorded audio to fill in the gaps wherever needed.

This becomes harder and harder to do as time passes and as software ages out. Companies fold, dependencies change. WAV audio is a long-lasting standard.

Yep @monquixote, I’d go along with @elxsound here. I see it as being like developing a batch of photos (oops, just carbon-dated myself), aka you only have a few goes at something. I really enjoy a) sound design sessions where you punch in some effects and turn those into random samples to drop into tracks for texture, and b) giving myself a window of time to render an instrument into something finished. I might have a couple of goes at programming/playing a part, but after that I get the basic sound down and render. That can always be re-processed in multiple ways, of course.

But this idea of taking a bunch of plugins and rendering the output makes so much sense. I know the subscription services take a lot of heat (I quite enjoy Arcade), but if you use them for sample making it can be fun, because you can sometimes find loops within the mashups you made in the app. Same with Atlas too (although you own that one, of course!). That way you always have the recording natively available in your DAW, and you’ve effectively built your own sample library.
