Forum
Prosper Proper - Summer22k
Hi
I just spent two months not finishing any musical project, but yesterday I made this in something like 6 hours: Audius
It came together pretty quickly and isn’t calculated at all, so it might seem long or have some weird transitions; feel free to tell me.
Have a good day
Prosper
1 post - 1 participant
[video] Syncing and sequencing VCV Rack with Renoise
A short tutorial I made on the subject, since I couldn’t find anything around the web (except for forum posts).
I know it’s the same as syncing Renoise to any other DAW, synth, etc., but VCV Rack works a little differently, since it uses CV, gates, and so on.
Have fun.
1 post - 1 participant
Suggestion: Community thread for swapping FX Chains :)
Hello Renoise’ers!
What do you think about an FX chain or FX effect swapping channel? Users could upload their self-made FX constructs with a short description and/or a short example sample. What do you think about this idea?
- Good
- Bad
- Not needed
- Exist already
happy tracking
2 posts - 2 participants
Playing a two-hour live set at a festival this weekend
Going live with Renoise again this weekend! I usually bounce back and forth between two instances, with various control parameters mapped to a small control surface. This will be my longest set yet, so wish me luck!
I’m closing out the festival, so it’s guaranteed to be a show-stopper
Here’s the festival if anyone’s interested: https://www.chilluminati.events/event/sacred-earth-open-air-2021/
I’m not listed on the poster because I was a late addition to the lineup. Psyched to score a spot, and psyched to play out again! There’s nothing quite like hearing your tunes getting bumped on a 20,000+ watt sound system…
1 post - 1 participant
Audiomodern Riffer sync issue in Renoise
Hi all,
I am a relatively new user to Renoise, and I’m currently loving the software. It’s a refreshing break from most contemporary DAWs.
However, I have run into a sync issue with Audio Modern’s Riffer when using the plugin as a MIDI trigger device. I routed the MIDI output of Riffer to an instance of Serum via the MIDI routing setting, Riffer is above Serum in the device list. Link to plugin below for reference.
Audiomodern™ Riffer | The Creative MIDI Sequencer (VST • VST3 • AU • AAX plugin for Mac, PC + iOS)
My issue is this: when starting playback in Renoise, Riffer is always out of sync and never starts the sequence at the very beginning; it starts about 3/4 of the way into the sequence. I can adjust this slightly by changing the Lines Per Beat setting and the Pattern Line Length, which seems to affect the starting point of Riffer’s playback. I can admittedly find a sweet spot at 60 BPM with Lines Per Beat set to 32 and Pattern Line Length set to 256; with those settings, Riffer loops from the correct starting point.
However, if a higher BPM is used, Riffer is out of sync again and starts to drift continuously, and no adjustments to the Lines Per Beat and Pattern Line Length seem to work. I have set Riffer to MIDI control mode, which means playback starts when a note is entered in the Renoise pattern tracker.
Can anyone give me advice on this? Is there a possible config workaround? Any advice would be extremely awesome - thanks to anyone who answers.
Matt
1 post - 1 participant
Track background blend not remembered
I built a template where I color the reverb tracks a bit:
After saving and reloading the template, the blend setting is lost for some tracks:
This also happens if I make a new group while the bass track is selected.
2 posts - 1 participant
3.3.2 still seems to use the old full screen on Mojave
3.3 shipped with some OSX fixes, including:
All builds are notarized now, a few drawing fixes for OSX Big Sur, replaced Renoise’s “custom” full screen mode with the standard OSX’ full screen mode for better compatibility in newer versions of OSX.
(⮚ Renoise 3.3 and Redux 1.2 released)
But I’m seeing 3.3.2 still use the old “faux full screen”, where it draws its UI on the same desktop it launched on, just “on top of everything else”, hiding the dock and status bar, rather than using the standard full screen mode that moves full-screen apps to their own virtual desktop so you can cmd-left/right your way to both your regular desktop and other full-screen apps (used by everything from FL Studio/Ableton to Adobe Lightroom to VS Code to all full-screenable macOS built-ins like Image Capture or Automator).
Note, I’m running into this on Mojave (mostly because old plugins mean I can’t upgrade this machine to the newer 32-bit-free versions), although as far as I understand, that functionality did not change between Mojave and Catalina/Big Sur.
1 post - 1 participant
Handtool shortcut for Waveform editor
For the Waveform editor I would like to request new keyboard shortcuts that allow navigating the zoomed in view left and right.
Some apps have a feature where you press down the mouse-wheel button and drag left and right; a shortcut for this behavior would be sweet. Alternatively, as in apps such as Photoshop, it could be possible to hold down H or Spacebar and use the LMB to drag the view left/right:
The sampler also has a minimap of the sample; maybe make it draggable or clickable so you can navigate the selected view from there.
2 posts - 2 participants
Renoise controlled by Cirklon
I have received my Cirklon and I am having a blast, but there is something that is really bugging me.
In Renoise, the octave switch (keyboard shortcuts / and *) is transposing ALL instruments, even when they receive MIDI note input from the Cirklon.
First of all, I am not using the Cirklon’s USB out, but rather the physical MIDI port, using the Roland Integra as a MIDI interface.
By selecting the input port and channel I can easily control the VST instruments, but why does the octave switch transpose the instruments, or the incoming MIDI data?
2 posts - 1 participant
Ultimatoni - Dawn Will Come (Alternative Pop)
Hey,
I just released a song that is the first one for my new solo project. It was almost entirely done in Renoise; I only needed to record some vocals in Reaper, because I found recording in Renoise a bit complicated due to heavy CPU use, and the latency got too high to record vocals properly.
2 posts - 2 participants
Blade Runner 2049: A Masterclass in Visual Storytelling
Hi there Renoise folk,
I just finished my new breakdown / video essay on Blade Runner 2049, taking a look at how director Denis Villeneuve and cinematographer Roger Deakins used the camera to help tell the story of the Blade Runner world and enhance the actors’ performances.
If you’re interested in that kind of thing, have a look please:
Ps. I replicated part of the score by Hans Zimmer and Benjamin Wallfisch, and all of the sound effects for this project, inside of Renoise.
1 post - 1 participant
Reason 12 is out!
my opinion:
the sampler is ‘meeeh’. NNXT was more than enough…
the HD GUI is a great improvement!
the improved Combinator… meeeeh…
I was actively following the changes… aaand not impressed at all for a new major version… I hope they implement Redux somehow! Then I will be satisfied.
Are you excited?
1 post - 1 participant
Electronic Starwalk (Playlist/EP)
Hi all, I’ve resurrected 3 old tracks from a few years ago, given them a tweak or two using things I’ve learnt on Renoise since then, and popped them into a playlist:
1 post - 1 participant
Rate my resampling algorithm!
Hi everyone,
I’m Mark, casual Renoise user and one-time mod musician in the mid-to-late 1990s (I went by the handle Arcturus - I was, at best, a footnote in the demoscene, and I’d be surprised if anybody remembered me - also, there may have been more than one Arcturus). I’m now a professional software developer and I was feeling nostalgic, so I decided to try writing a mod player - specifically, a player for 4-channel ProTracker mods - mostly just to see if I could do it. I decided to implement it in Kotlin running on the JVM, since that’s what I use a lot in my day job.
My goal is to get the player to play Space Debris correctly.
As you may be aware, the resampling algorithm is at the heart of any mod player. I was curious if my implementation is any good - it sounds fine so far, but I’m sure there’s probably a better way to do it. I’m posting this here to solicit feedback: What do you think of my implementation? Is there a better way to do it? Are there any obvious pitfalls? I don’t have a background in audio programming, but I know enough to get this far.
Anyway, here’s the repo: GitHub - mabersold/kotlin-protracker-demo
Direct link to the resampler code: kotlin-protracker-demo/ChannelAudioGenerator.kt at main · mabersold/kotlin-protracker-demo · GitHub
Summary of the code organization:
model package contains the data classes to hold information about the song.
player package contains the class that actually sends the audio to the output device.
pcm package contains the classes that convert the song into a pcm audio stream. This is where the resampler lives.
The basic idea is that a single AudioGenerator class keeps track of the position in the song. It has four ChannelAudioGenerator instances - one for each channel - each of which produces the audio for its channel. As each row passes, the main AudioGenerator sends data to the ChannelAudioGenerators (note commands and non-resampled audio data, basically), and the ChannelAudioGenerators send PCM data back to the main AudioGenerator, which mixes them and sends the result to the main class (which sends the data to the output device).
Summary of my algorithm:
The ChannelAudioGenerator has the following information: the period (basically the pitch that we need to resample to), an instrument number, and the audio data of the instrument.
The basic idea behind the algorithm is to do linear interpolation between each value of the original audio file to get the correct pitch. I’m operating under the assumption that I will not need to increase the frequency beyond what the original audio files already have (which seems to hold true so far from my testing) - I’ll only be reducing the frequency, and thus needing to interpolate data between the values of the existing audio data.
Calculate samples per second
It performs the following calculations. First, it takes the given period and calculates samples per second with the following formula:
samplesPerSecond = 7093789.2 / (period * 2)
7093789.2 is the PAL clock rate, in case you were curious. The period comes directly from the pattern data in the ProTracker mod. For example, 428 represents a C-2 note, which works out to 8287.1369 samples per second.
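As a quick sanity check, the conversion can be sketched in a few lines of Kotlin (names here are illustrative, not the ones from the repo):

```kotlin
// PAL Amiga clock rate used for ProTracker period-to-rate conversion.
const val PAL_CLOCK_RATE = 7093789.2

// Convert a ProTracker period value to a playback rate in samples per second.
fun samplesPerSecond(period: Int): Double = PAL_CLOCK_RATE / (period * 2)

fun main() {
    // Period 428 is a C-2 note.
    println(samplesPerSecond(428)) // ≈ 8287.1369 samples per second
}
```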
Find out how many bytes we need to interpolate
So now I have samplesPerSecond. Next, I need to calculate how many bytes I need to interpolate. To do this, I basically find out how many times samplesPerSecond fits into our sampling rate of 44100. The functions for this are as follows:
private fun getIterationsUntilNextSample(samplesPerSecond: Double, counter: Double): Int {
    val iterationsPerSample = floor(SAMPLING_RATE / samplesPerSecond).toInt()
    val maximumCounterValueForExtraIteration = (SAMPLING_RATE - (samplesPerSecond * iterationsPerSample))
    return iterationsPerSample + additionalIteration(counter, maximumCounterValueForExtraIteration)
}

private fun additionalIteration(counter: Double, maximumCounterValueForExtraIteration: Double): Int =
    if (counter < maximumCounterValueForExtraIteration) 1 else 0

“counter” is a double, starting at 0.0, that we continually add samplesPerSecond to as we resample. When it exceeds 44100, we subtract 44100 from it and switch to the next pair of bytes in the original audio data to interpolate. So, the number of times we can add samplesPerSecond to that counter before it exceeds 44100 is the number of bytes we need to interpolate between the first and the second byte (well, including the first byte, which we technically don’t interpolate).
For a C-2, if we start at the first two bytes of audio data from the instrument, these functions would conclude that there are six “iterations” before we switch to the next pair of bytes. So, we would start with the first byte value, and then interpolate five times before we reach the second byte. The number of these iterations will not be uniform across the audio data: for a C-2, sometimes it will be five, sometimes six.
So if, for example, the first two bytes were 6 and 18, we would need to interpolate five bytes in between those two values. It knows this because if we multiply 8287.1369 by five, it’s still under our sampling rate - we would need to multiply it by six before we exceed it.
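To make the counter behavior concrete, here is a minimal sketch (with made-up names, not the repo’s actual code) that counts how many output samples are emitted before we advance to the next pair of source bytes:

```kotlin
const val SAMPLING_RATE = 44100.0

// Count how many output samples are emitted for one pair of source bytes:
// we keep adding samplesPerSecond to the counter, and once it would pass
// the output rate, we advance to the next pair.
fun iterationsForPair(samplesPerSecond: Double, startCounter: Double): Int {
    var counter = startCounter
    var iterations = 0
    while (counter + samplesPerSecond <= SAMPLING_RATE) {
        counter += samplesPerSecond
        iterations++
    }
    return iterations + 1 // the add that passes the rate still emits a sample
}

fun main() {
    // C-2 (8287.1369 samples per second), starting from a zeroed counter:
    println(iterationsForPair(8287.1369, 0.0)) // 6, matching the example above
}
```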
Doing the actual interpolation
Finally, now that we know how many interpolations we need, we calculate the difference between the two bytes, calculate a slope, and then interpolate the bytes between them using a simple linear function. In practice it ends up looking like this:
(slope * currentIteration) + firstByte
So going back to the example of bytes with values 6 and 18, it would calculate a slope of 2, so we end up with interpolated values of 6, 8, 10, 12, 14, 16 before the counter exceeds 44100 and we move on to the next pair of bytes (18 and whatever comes after it).
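For what it’s worth, the interpolation step could look something like this in Kotlin (a sketch with invented names, not the actual repo code):

```kotlin
// Linearly interpolate `iterations` output bytes between two source bytes,
// including the first byte itself (which is emitted as-is).
fun interpolate(first: Byte, second: Byte, iterations: Int): List<Byte> {
    val slope = (second - first).toDouble() / iterations
    return (0 until iterations).map { i -> ((slope * i) + first).toInt().toByte() }
}

fun main() {
    // Source bytes 6 and 18 with six iterations per pair, as in the example:
    println(interpolate(6, 18, 6)) // [6, 8, 10, 12, 14, 16]
}
```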
A few notes:
- To reduce confusion, I only use the word “sample” to refer to the individual bytes in a PCM audio stream or collection. I do not use “sample” to refer to the instruments, for that I either use “instrument” or “audio data.”
- Effects aren’t implemented yet.
- For now, I’ve kept all the audio data at 8-bit, which is why I’m just dealing with bytes and not widening them to words/shorts.
- Part of my goal with this project is to make the code readable without too many bitwise operations, though some of that is unavoidable - I’m not going for performance.
- For now I’m just retrieving one byte at a time, but I may eventually switch to retrieving collections of bytes to reduce calculations.
1 post - 1 participant
Cymbal Choke/Poly Aftertouch
Using a Roland TD-20 with Superior Drummer. Everything works and records spot on, except I cannot figure out how to get the cymbal choke/poly aftertouch to work. The cymbal choke works fine in the Superior standalone program, but once I load it in Renoise as a VST I can no longer get the choke. I can see that Renoise is recognizing the poly aftertouch, though it does not choke the cymbal. On some occasions it will randomly choke and then never open back up. Am I missing a simple setting somewhere?
1 post - 1 participant
Observer when renoise has fully initialized
My tool throws an error because it’s trying to do something before Renoise has fully initialized. I know of renoise.tool().tool_finished_loading_observable and renoise.tool().app_new_document_observable, but both of those fire too early while Renoise is loading.
Question: what callback or variable can I use to know that Renoise has fully loaded?
1 post - 1 participant
EQs with all pass filter
Hi,
I am looking for an EQ that also provides allpass filters - but, like Disperser, an allpass filter with variable resonance and strength. The bigger Melda EQs have an allpass band option, but it seems quite limited: there is no amount control, and it does not sound as effective as Disperser. Any ideas?
1 post - 1 participant
How to remap arrow keys?
Is it possible to remap the arrow keys? I’m using a Ducky One 2 Mini keyboard, so no arrow keys. Whenever I try to map new keys I get a ‘Computer Keyboard Piano’ error message and I can’t find any way around it.
1 post - 1 participant