Creating Randy Tucker and Whiskey Rodeo – Part 4

Noise, Cuts, and Mastering Madness

on 2025-05-31 in Insights

Thanks for joining me on my creative journey! As the creator behind “Randy Tucker” and “Whiskey Rodeo”, I’m excited to share my workflow with you! You can find other parts in this series of articles at the links below:

Creating Randy Tucker and Whiskey Rodeo Part 1
Creating Randy Tucker and Whiskey Rodeo Part 2
Creating Randy Tucker and Whiskey Rodeo Part 3

Balancing Noise and Creative Vision

One thing you learn quickly when working with generative AI is that randomness isn’t a bug—it’s a feature. If you’re trying to produce consistent, genre-specific AI music, that randomness can be a bit maddening. But if you know how to work with it, it becomes part of the creative process.

The Role of Randomness in Generative AI

AI platforms like Suno rely on large datasets tagged with musical features, not true genre comprehension. That means when you give it a well-structured lyric and say “Appalachian folk,” you’re at the mercy of whatever data the AI associates with those words. Sometimes you get bluegrass. Sometimes you get Irish jigs. Sometimes both, in the same track.

This isn’t necessarily bad. Sometimes it’s illuminating and showcases musical history. But it means the more precise you want your results to be, the more intentional you have to be up front.

When I write songs, I take the time to pay close attention to structure, rhyme, and meter. That gives the AI a better shot at generating clean phrasing. I also try to be clear and consistent with my musical style prompts—but even then, surprises happen. “Ghost of the Company Town,” for example, ended up with about 40 different versions during production of Whiskey Rodeo’s Corporate Cowboy and its companion Loose Threads. Despite tight prompts, Suno 4.0 blurred the line between Appalachian ballads and Celtic folk across takes.

Embracing Stochastic Noise

Sometimes, though, I don’t want precision. I want the AI to riff, to explore, to take creative license. In those cases, shorter or vaguer prompts leave more room for interpretation.

This is where it’s helpful to shift your mindset. Don’t think like a musician trying to force a song into existence. Think like a producer evaluating takes. I usually generate 4 to 8 versions of a song unless I have a very specific vision. More than that, and you risk creative fatigue. Fewer than that, and you might miss the gem in the noise.

If one of those takes grabs me—good vocals, interesting instrumentation, a cool accident—then I have something to work with. From there, it becomes about curation.

Cutting an Album

In music, “cutting” originally referred to recording a track to vinyl or tape. These days, I use it to describe the process of reviewing, ranking, and selecting songs for an album.

For each song, I take notes on every generated version and rank them.

Most of the time I go with my highest-ranked version, but sometimes a lower-ranked take fits better into the flow of the album. My goal is to create a coherent listening experience, where each song flows naturally into the next.

This means I end up with a lot of extra tracks. And honestly? That’s a feature, not a flaw. Some of my favorite versions don’t make the main album cut. That’s where companion releases like Loose Threads come in. These side albums let me showcase alternate versions that bring a different vibe or interpretation to a track. I relax my standards a little, but I still focus on quality and variety.

Mastering: Art, Science, and Guesswork

With the release of Suno 4.5, the AI tends to generate “pre-mastered” songs. It even adds studio fades sometimes (whether you want them or not). That said, there are still cases where mastering your own tracks makes a big difference, especially if you’re working with earlier models or want tighter control over your final mix.

Here’s my basic mastering workflow:

  1. Trim excess length – AI tracks sometimes run long. If the song drags, I trim and add a studio fade.
  2. Peak normalization – Normalize the track peaks to between -6 dB and -3 dB. This gives headroom for processing.
  3. Adjust dynamics – If parts of the track are way louder or quieter than others, I bring them closer together manually.
  4. Tone shaping – Use a multiband EQ and compression to dial in the sonic character. Tools like iZotope Ozone or MuseFX work well here.
  5. Loudness normalization – Most platforms target -14 to -16 LUFS. I go a bit louder: -12.0 LUFS usually sounds better across devices.
  6. Limiter – Cap the signal at -0.1 dB to prevent clipping.
  7. Final listen – I test the album on multiple devices: headphones, car stereo, phone speakers, home theater. I adjust any songs that sound off in the context of the whole.

For final tweaks, I may bring songs that seem too quiet up as high as -10.0 LUFS if needed to match adjacent tracks. There are no hard rules. Trust your ears.
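For readers who want to script parts of this workflow, here’s a minimal sketch of steps 2 (peak normalization) and 6 (the limiter) in Python with NumPy. This is my own illustrative code, not a specific tool’s API: it assumes you’ve already decoded the track to a float array in the range [-1, 1]. True LUFS measurement (steps 5 and the -12.0 LUFS target) requires K-weighted loudness metering, which a dedicated library such as pyloudnorm handles better than a hand-rolled formula.

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_dbfs: float = -6.0) -> np.ndarray:
    """Scale the track so its loudest sample sits at target_dbfs,
    leaving headroom for EQ and compression (step 2)."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silent track: nothing to scale
    target_amp = 10 ** (target_dbfs / 20)  # convert dBFS to linear amplitude
    return samples * (target_amp / peak)

def hard_limit(samples: np.ndarray, ceiling_dbfs: float = -0.1) -> np.ndarray:
    """Clamp any peaks above the ceiling to prevent clipping (step 6)."""
    ceiling = 10 ** (ceiling_dbfs / 20)
    return np.clip(samples, -ceiling, ceiling)

# Hypothetical mono track decoded to floats in [-1, 1]
track = np.array([0.9, -0.5, 0.2])
normalized = peak_normalize(track, target_dbfs=-6.0)
limited = hard_limit(normalized, ceiling_dbfs=-0.1)
```

A hard clip like `np.clip` is the bluntest possible limiter; real mastering limiters use lookahead and gain smoothing to avoid distortion, so treat this as a safety net rather than a substitute for a proper plugin.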

Final Thoughts

I had a lot of fun working on Whiskey Rodeo’s debut album Corporate Cowboy. At this point, I’ve made several full-length AI-generated albums, and my process has evolved through trial, error, and a lot of listening. Working with AI is about balance: between randomness and control, between input and interpretation, and between artistic intention and what the machine spits out.

Have something to say? AI can help you say it, but it can’t give your work meaning. It’s up to you to do that with your own creative vision. So don’t skip that step! Take the time to carefully think through your ideas and refine them for best results.

Hopefully this series of articles has been helpful. Stay tuned for more insights and updates on new projects. Until then, thank you for joining me on this musical adventure!

✒️ Modal Shift | ⚙️ ChatGPT