Feed aggregator

OSX 10.13: AU settings won't be saved/loaded?

Renoise Forum - January 9, 2019 - 12:06

Since High Sierra 10.13.2, AU settings (here, the Ambience freeware AU) aren't remembered anymore. If you load a song, the plugin will always have the default settings. Strangely, after reloading (and changing settings), it seems to be OK again. Also, loading some plugins now introduces a looooong pause with no visible HDD or CPU activity.

Could this be related to Apple's "new" way of managing AUs? There seems to be yet another new system service responsible for validating AUs.

EDIT: ABOVE IS OUTDATED, PLEASE READ BELOW

Categories: Forum

What You Watching ATM?

Renoise Forum - January 9, 2019 - 03:10

Title says it all.

Dunno if this has been done before, but seeing as how there's a "What You Listening To ATM?" thread, I thought it'd be cool to get a "What You Watching ATM?" going too.

I'll get the ball rolling. Found this documentary this evening randomly on YouTube and it's really fucking good. It's about homeless dudes collecting beer cans for a living and racing shopping carts down the hills of North Vancouver. Not your average documentary, but it's well shot and paced. Definitely worth a watch.

Categories: Forum

Renoise not playing with Steinberg UR22mkII on MacOS Mojave

Renoise Forum - January 9, 2019 - 00:58

Hi all,

I've upgraded to macOS Mojave and Renoise stopped working with the Steinberg UR22mkII audio interface. The play button doesn't work and I can't hear any audio (output or input). If I switch to the internal audio device in the preferences it works well, but not with the interface.

I checked the Steinberg site and the interface is supposed to be supported by the new OS. In fact, it works with Cubase, but not with Renoise. It's a shame, because Renoise is my favorite piece of code for electronic music and this audio interface has always served me well.

Do you know of any solution or workaround?

Cheers,

André

Categories: Forum

Duplex: Layout For Mackie Control Universal Pro

Renoise Forum - January 8, 2019 - 22:14

I finished my mappings between Renoise and my MCU Pro this week.

Thought I would share them in case someone needs them...

If you are used to Duplex, you should know what to do with these files.

I use the current Duplex version (not the beta), and I needed to modify two files in the Duplex code. Those files are in the "core modifs" folder.

I couldn't get the MCU's displays working yet. I'll keep investigating, but I'm kind of pessimistic about the result.

Anyway, this layout gives me good control during mastering and live sessions too.

Categories: Forum

Recording an internal track (NOT rendering) or MPE support

Renoise Forum - January 8, 2019 - 21:51

Hi

I'm using ROLI Blocks… and basically I can't record any of the gestures within the sequencer.

So unless there's a way to record complex MIDI messages… I figured I could maybe record my VST as I would record an external instrument.

Are there internal routing options in Renoise? (NOT RENDERING TO SAMPLES, as the gestures would be lost.)

I could easily do this with a high-end soundcard that has internal routing… but I don't own such a device.

Any ideas?

Categories: Forum

Renoise, Rewire and Reason Lite Problem

Renoise Forum - January 8, 2019 - 16:11

Hi

I've recently bought an Akai Mini MK2 bundled with Reason 10 Lite... but Renoise 3.1.1 (64-bit) can't recognize it via ReWire...

Please, can you help me?

Regards,

Fabrizio

Categories: Forum

stuff we actually need :(

Renoise Forum - January 8, 2019 - 00:20

Am I the only one thinking "why doesn't Renoise come with a native multi-band compression device"

for processing on the mixer tracks?

Do you guys use it at all? It seems that Renoise users don't multiband compress.

Categories: Forum

Pushing the envelope with Redhoot Oboemonger

Renoise Blog - April 17, 2018 - 16:30

Redhoot Oboemonger has just put out his second release on iTunes and Spotify. Not only that, but he has shared all 7 tracks from the album as Renoise playthroughs on YouTube (awesome!), and is also an accomplished visual artist.

Granularity (video edit) from redhoot on Vimeo.

Hello there Redhoot. Can you tell us about that artist name of yours?

"Redhoot Oboemonger", I've used a lot of different names over the years but for some reason this one stuck with me for the last couple of years.

And when did you start producing music?

I have an older brother who got into the demoscene in the late 80s/early 90s; he got me hooked on ScreamTracker2 (the text-mode one) when I was around 9-10. I always wanted to play with instruments and synths, but as a kid you can't really afford anything. ScreamTracker3 came along just around the time my paper-boy job bought me my first proper soundcard: the Advanced Gravis UltraSound. Then FT2, Buzz and Renoise. These days it's mostly Renoise for the main work, but with Reaktor, SuperCollider and PureData thrown into the mix to help out.

Can you tell us how you approach a new song?
Do you have an idea before you start, or do you experiment until something interesting catches your attention?

I love building procedural and fractal systems, for both my visual arts and music, then finding moments, sounds or sequences in them that I can develop into a track. Sometimes it starts with just samples that inspire me; other times it comes from some of the generative sequencers I've built that feed Renoise. I've built some really simple Lua scripts for Renoise to help me do rapid prototyping of ideas based on simple probability triggering. Phrases have been really cool in this regard for abstracting some of these workflows.

But in the end, I think how I start a track has the same answer as how I finish a track. It's mostly just happy accidents that I like and then take to completion. I have about 60+ hours of music kicking around that never got finished. Just by doing volume, SOMETHING has to be decent, right?

Starting out with trackers at such an early age was probably an advantage. So what do you think of the theory that we settle in our musical taste in the early teenage years? Or, to put it differently, which artists have you discovered since those early years that inspired you to rethink what music is, or could be?

There weren't really any alternatives to trackers when it came to music software for us on 286/386 PCs. My brother got a hold of a bunch of floppies with all these unnamed great Amiga mods on them, and it made me obsess over the idea of using the computer to make music. The idea that I had not just the song, but the entire formula behind it, right here in my bedroom was amazing to me. I would definitely say that post-pop 10-year-old me suddenly got very heavily influenced by the Amiga mods. Travolta/Spaceballs, Zodiak/Cascada, Purplemotion/FC were all on my walkman at some point.

Later in the 90s, as my interest in how music and sound are made got more and more esoteric, I have to give credit to Lassi Nikko (DUNE/Brothomstates) for opening my eyes to how far you can take trackers, mixing musical styles and sound design through minimalism. Other artists I was discovering at the time, like Aphex Twin and Squarepusher (this was around 95?-97?), were also pivotal, but Lassi made his mods available for anyone to look at. And seeing the inner workings of his songs made it even more enticing.

Trackers in general have a tendency not to focus on the graphical representation of sound, which in turn makes you focus even harder on the sounds and music. They abstract the composition into a kind of advanced-looking spreadsheet where you're not distracted by moving bars, sample waveforms, or instrumental representations like piano rolls and frequency analyzers. The notes, the composition and the structure are at the center, where everything is a first-class citizen. This is the same reason why I love SuperCollider and other programming tools for creating sound: you don't sit in front of your screen and look at the sound, you just sit and listen.

I appreciate and enjoy a lot of different music, from jazz to pop to weird noise performances. But when it comes to making music myself I just like to make something I haven't heard or made before.

Your sound is very complex and layered. Do you use a lot of processing on the sounds? The videos you've shared seem to reveal that the source samples are very "hot".

I love the Renoise sampler for how much you can mangle a sound, but I also generate a lot of my sounds through my own granular samplers I've built in Reaktor and SuperCollider, then tweak and finesse them in Renoise for use in the compositions. I have a pretty terrible sample library, but if you have some minor DSP skills you can turn that limited data-set into an infinite resource. And by that design, some of the sounds come out as "happy accidents". Sometimes that means that during recording the sound might be a bit messed up/hot/wrong phases etc. But if I like it, I like it, and just use it regardless. In the context of Renoise, where I make the actual tracks, the sounds usually work for me.

OK, it's sort of a tradition that these interviews also turn to Renoise, and what you'd like to see from it in the future. So, if you could name just one or two features, what would that (they) be?

My unrealistic wish would be to look at modular frameworks for building DSP instruments, samplers and effects; I would love to see a nodal approach to not just routing but low-level access to sound manipulation. It's a very large undertaking to develop something like this, so I've got my fetish satisfied elsewhere for the time being. But there's something to be said for having a framework that is almost entirely user-driven from a content point of view. I would argue that Reaktor earns its money's worth from its user content alone, despite the lackluster software updates. This has given some software a very long lifespan.

On a more realistic level, 3.0 brought so many new things that I'm still overwhelmed when it comes to exploring phrases and the new instrument possibilities. It's definitely made tracking a whole lot faster. That said, sample handling is at the heart of Renoise, so I'd love to see some more advanced options for pitch, time and other granular (prehehe) DSP functions. I know proper pitch and time warping are very complex procedures if you're going for quality, so as far as feature requests go, I'd be happy to just see envelopes shorter than 25ms so I can roll out my own crappy resynthesis phrases for now (ed: this just settled the headline)

That said, I think I could spend half an eternity just exploring the possibilities with the current features.

You mentioned having quite a lot of music in the redhoot vault. Are we going to have to wait another decade for your third album to arrive?

I’m hoping to do more frequent but smaller releases. I do a lot of visuals (https://www.instagram.com/redhoot_/ - NSFW) that tie into the music, so by doing smaller EPs there's room for even more experimentation in various other formats.

Redhoot Oboemonger’s new album is out now
The whole album as Renoise playthroughs

Category: Artists
Categories: Blog

Mutant Breaks #10 - Voting starts Dec.10th

Renoise Blog - November 17, 2017 - 23:41

The eclectic music competition known as Mutant Breaks (or MBC) has returned for the 10th time.

The competition is already in full swing, with a dozen entries and counting.
Join the challenge to vote, win prizes, or just for lulz. You decide!!

Deadline is December 10th

Forum link with rules, entries, etc.

Category: Competitions
Categories: Blog

Renoise 3.1 goes gold

Renoise Blog - January 12, 2016 - 19:10

We're happy to announce that another round of beta testing has passed. Renoise 3.1 is ready for production.

What's new?

In case you missed it the first time around, new features in 3.1 include:

  • Support for VST and AU MIDI generators: This means that you can use specialized tools such as harmonizers, note matrices or arpeggiators - things that can “drive” other instruments in Renoise.
  • Improvements to the sound engine: Completely new, rewritten filter section as well as optional oversampling and bandlimiting on sample playback. Various improvements to Renoise's native DSP devices.
  • More love for Phrases: Phrases have become a lot more powerful and streamlined too - when working within the phrase editor, you could describe it as “feeling more like the pattern editor”. And when working in the pattern editor, you have more options for controlling phrases.
  • Presets everywhere. And now, libraries too: Renoise 3.1 includes a more powerful preset system. You can now store/recall samples and keyzones as presets too, and the whole preset browsing experience has been improved.

... and much more. A detailed description of what's new can be found on the Renoise 3.1 launch page.

Category: Releases
Categories: Blog

Mutant Breaks #8

Renoise Blog - December 9, 2015 - 12:42

Mutant Breaks, the exceptionally well-disorganised annual Renoise competition, has returned!! The rules are simple – submit a Renoise track between 3 and 7 minutes long, based on this year’s theme.

You can win a programmable force field, sample packs and more...

Link to the forum topic

Category: Competitions
Categories: Blog

Renoise 3.1 beta-test has started

Renoise Blog - October 9, 2015 - 13:36

We are happy to announce that Renoise 3.1 is now in public beta. We would like to invite anyone with a Renoise license to download the software and try out its new features.

This version of Renoise represents the integration of features from Redux, the VST/AU plugin we released earlier this summer. This means that any instrument created with Redux can now finally be loaded into Renoise, and vice versa.

While Redux is a large part of the story, it’s not the whole story. We have squeezed in a few long-requested features too, in an attempt to make Renoise 3.1 the best possible release.

Check out the full release notes here

Category: Releases
Categories: Blog

The Renoise CDP Tool : An Installation Guide For Linux Users

Renoise Blog - September 30, 2015 - 18:12

Early in 2014 the excellent Create Digital Music/Motion news site published a brief notice regarding an interesting tool for the Renoise DAW. The tool incorporates the power of software created by the Composers Desktop Project (CDP), a collective of composers and programmers guided primarily by the work of Trevor Wishart. The CDP software includes a suite of audio processors that perform a fantastic variety of sound-altering functions, some of which are found only in the CDP suite. The software was sold as a commercial product until February 2014 when it was released as an open-source project licensed under the LGPL.

The following presentation provides detailed instructions on installing the CDP software and subsequently installing the CDP Tool for Renoise.

The CDP For Linux

I've assumed that your system has been set up for compiling software from source code. If not, see your distribution's documentation on configuring a software development environment. I've also assumed that you know what a terminal window, the command line, and a prompt are. Beyond those basics I'll walk you through the entire process of downloading, compiling, and installing the CDP software for Linux.

To begin, go to the CDP Free Downloads page :

https://www.unstablesound.net/cdp.html

Read the page, then download the Linux sources (beta, 64-bit compatible) :

https://www.unstablesound.net/downloads/CDPrelease7src-linuxbeta.tar.gz

Open the package in your source directory (e.g. /home/dave/src), and enter the newly created directory. If you don't already have a source directory either create one using your file manager or follow these steps at the command-line :

cd $HOME
mkdir $HOME/src
cd $HOME/src
# unpack the downloaded archive here (adjust the path to wherever you saved it)
tar xzvf /path/to/CDPrelease7src-linuxbeta.tar.gz
cd CDPrelease7src

Read the file named linux-howtobuild.txt, then read it again. If you've never compiled and installed software at the command-line you're going to need some help, so here's the step-by-step method. Follow it exactly as presented. First, let's satisfy the need for libaaio :

cd $HOME/src/CDPrelease7src/libaaio
bunzip2 libaaio-0.3.1.tar.bz2
tar xvf libaaio-0.3.1.tar
cd libaaio-0.3.1
./configure --prefix=/usr/local
make
sudo make install

The howto includes instructions on building the Portaudio library. Portaudio isn't strictly necessary for our purposes, so you can safely ignore the instructions. However, if you do want to build the CDP programs that require it, you should install Portaudio from your distribution's software repositories. Be sure to install the development package as well. On Fedora that package is called portaudio-devel, on Ubuntu it's portaudio19-dev, but you might have to use your package manager's search and info functions to find the correct files.
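For example, the search and install might look like this on Fedora-style and Ubuntu-style systems (the package names here are typical, but check your own repositories) :

dnf search portaudio
sudo dnf install portaudio-devel

apt-cache search portaudio
sudo apt-get install portaudio19-dev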

Now you're ready to compile the CDP processors. First, enter their source directory :

cd $HOME/src/CDPrelease7src/dev

Next, open a text editor and load the makeprograms.sh script. Make sure that the PABUILD variable is set to your preference - it's either "yes" or "no" - then save the file and close it. Now run this command to ensure it's an executable file :

chmod +x makeprograms.sh

Now run it :

./makeprograms.sh

The script will build the processors and install them to the Release directory. Enter that directory to view the binaries you've just built :

cd $HOME/src/CDPrelease7src/dev/Release
ls -l

The ls command should yield a listing like this one :

-rwxr-xr-x 1 dlphilp dlphilp 48469 Sep 18 11:36 abfdcode
-rwxr-xr-x 1 dlphilp dlphilp 52865 Sep 18 11:36 abfpan
-rwxr-xr-x 1 dlphilp dlphilp 52802 Sep 18 11:36 abfpan2
...

To verify your build, run one of the commands, e.g. :

./blur

You should receive a helpful text that tells you how to use the processor. For example, here's the help from running the blur effect :

./blur
CDP Release 7 2014
BLURRING OPERATIONS ON A SPECTRAL FILE
USAGE: blur NAME (mode) infile outfile parameters:
where NAME can be any one of
avrg blur suppress chorus drunk shuffle weave noise scatter spread
Type 'blur avrg' for more info on blur avrg..ETC.

Most processors have multiple modes of operation, and each mode has its own parameter set. For instance :

./blur avrg
CDP Release 7 2014
blur avrg infile outfile N
AVERAGE SPECTRAL ENERGY OVER N ADJACENT CHANNELS
N must be ODD and <= to half the -N param used in original analysis.
N may vary over time.

And here's the report for another mode of the blur effect :

./blur spread
CDP Release 7 2014
blur spread infile outfile -fN|-pN [-i] [-sspread]
SPREAD PEAKS OF SPECTRUM, INTRODUCING CONTROLLED NOISINESS
-f      extract formant envelope linear frqwise, using 1 point for every N equally-spaced frequency-channels.
-p      extract formant envelope linear pitchwise, using N equally-spaced pitch-bands per octave.
-i      quicksearch for formants (less accurate).
spread  degree of spreading of spectrum (Range 0-1 : Default 1).
spread may vary over time.

And so on and so forth. Given 147 processors, each with multiple modalities, that's a lot of possibilities. It's also a lot of parameters. An actual run of a processor can involve a lengthy command line with arcane parameters in which any misspelling or invalid data ends the run. Adding to the joy, some parameters take text files or specially formatted analysis files that require preparation with external software, i.e. a text editor or a phase vocoder.

The CDP processors were created in the spirit of the UNIX philosophy that favors one good tool for one job. However, UNIX also provides powerful scripting capabilities and the ability to pipe output from one process to another. Thus, a script can be written to automate a lengthy series of transformations upon a single soundfile. Your script could be written as a closed system, with no user input beyond running it, or it could be designed to accept user input. Input values can be specified as variables when the script is started :

foo.scr $1 $2 $3

where foo.scr is your script and $N takes a value intended for a processor parameter somewhere in the script.
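To make that concrete, here's a minimal sketch of such a script. It assumes the CDP pvoc program for the phase vocoder analysis and resynthesis steps, and uses the blur avrg processor shown earlier; the file names and parameters are only placeholders :

#!/bin/bash
# blurchain.scr - analyze, blur, resynthesize
# usage: ./blurchain.scr infile.wav N   (N = channels to average, must be ODD)
pvoc anal 1 "$1" step1.ana           # phase vocoder analysis (mode 1)
blur avrg step1.ana step2.ana "$2"   # average energy over N adjacent channels
pvoc synth step2.ana blurred.wav     # resynthesize to a new soundfile
rm -f step1.ana step2.ana            # clean up the intermediate files

Each stage writes a file that feeds the next, so extending the chain with another processor is just another line.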

Scripting a CDP processor chain is a powerful working method, but your workflow will likely require a soundfile editor and other GUI-based audio tools. And so at last we come to the CDP tool for the Renoise DAW.

The CDP Tool for Renoise

From this point forward I'll assume you know how to use Renoise.

The Renoise DAW is designed to load tools created by 3rd-party programmers coding in the Renoise Lua scripting language. The CDP interface is a good example of a Renoise tool, and yes, once again you'll need to download it and install it yourself.

First, let's get an annoyance out of the way. Open a terminal window and run these commands :

cd $HOME
vim .bashrc

I've assumed that your system shell defaults to one called bash and that a bash resource file exists in your home directory, i.e. $HOME/.bashrc on most Linux systems. Vim is merely my favored text editor. It doesn't matter what editor you use - gedit, emacs, vi/vim, LibreOffice, whatever - as long as it supports the plain text file format. Open the file and add the following lines anywhere before the end of the file :

# For the Renoise CDP tool
export CDP_SOUND_EXT=wav

The first line isn't really needed, it merely clarifies the purpose of the export, but it's a good idea to comment your code for clarity's sake. Save the file, exit the editor.

Now log out and log back in again. You've just set an important item called an environment variable, without which the CDP tool will fail to produce any output. You can set it at the prompt per session, but why bother ? Set it in the shell resource file, do the login/logout two-step, and let's proceed.
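If you want to be sure the variable took hold, open a terminal after logging back in and ask the shell for its value :

echo $CDP_SOUND_EXT

If it prints wav, you're set; if it prints nothing, re-check the lines you added to the resource file.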

Visit the Renoise tools repository and download the latest version :

https://www.renoise.com/tools/cdp-interface

Open Renoise, open your file manager. Find and drag your newly downloaded file onto Renoise, drop it anywhere in the program display, and voila, you have installed the tool.

Open the Sample editor and load some soundfiles into the samples browser. Add a few extra empty slots and rename them. Select the first file in the browser to load it into the editor. Now open the Renoise Tools menu where you should see an item labelled CDP Interface. Click to enable it, set the path to your CDP binaries, and you're good to go.

The tool defaults to the blur processor. It's a good example, so let's explore it before a first test. The EXE Filter is a drop-down menu of your CDP binaries. Next we see a drop-down menu for the modes of the processor, followed by buttons for parameter reset and activating the process. The terminal output box displays messages to let you know if things succeed or fail. Heed its error messages, they provide the keys to resolving failed runs.

Below the terminal output viewer you'll see the help for the selected processor, followed by the data entry widgets for its parameter set. The input and output selections default to the currently edited sample. The selectors on the left refer to macro definitions that we're going to ignore for now. We're interested in the selectors on the right. They default to the active sample for input and output, i.e. the transformation will happen in place, replacing your original file. You can change the output destination to any other slot listed in the drop-down menu. Likewise, you can select a different input file.

By the way, the help text is very informative. The tooltips also provide valuable assistance for every parameter of each processor. Even more help is available from the CDP home site where you'll find links to original documentation, workshop material, soundfiles, related sites, et cetera.

The blur processor depends on the results of a phase vocoder analysis of the input file. At the command-line the processor requires a separate analysis file, but the CDP tool automates that step in the Analysis Settings subpanel. If you don't know what those settings signify you can leave them in peace. If you know what you're doing you can fine-tune the analysis resolution. And again, if you don't know, you can just play with the settings and learn things.

So much for the blur processor. Other processors add a few other widgets, including those for text input and breakpoint data, but the format just described is the typical layout for the whole suite. If a text file is required, the tool kindly provides a pop-up text editor for creating, applying, loading, and saving such files. Yet another widget calls the active envelope generator in Renoise's Modulation editor.

The tool is seamlessly integrated into Renoise. When a process is complete it automatically loads the output into its designated sample slot. Which means, of course, that it is now subject to the sample editing powers abundantly provided by Renoise.

A Demo

And now, a brief demonstration in six figures.


Figure 1. Load a percussion loop into a sample slot, open the CDP tool.


Figure 2. From the blur mode menu select the Blur Shuffle mode.


Figure 3. Verify that your selected sample is the input, set the output to an empty slot.


Figure 4. Edit the domain-image definition. (Edited here to abcd-adcbabcda.)


Figure 5. Set analysis resolution. (2048 here. Value must be a power of 2.)


Figure 6. Process. If successful the output will automatically appear in the designated sample slot.

At this point your newborn sample is subject to the attentions of the Renoise sample editor, the operations of which are beyond the scope of this demonstration. See the Renoise documentation for detailed information about its sample editing capabilities.

Tool Tips

In lieu of a manual - and in no particular order - here are a few tips and tricks I picked up while learning to use the tool.

Add a series of empty sample slots in the Sample browser. Fill some with soundfiles but leave some empty. Use the empty slots as output destinations for the CDP transformations to avoid any accidental overwrite of your original samples. Add more samples and empty slots as needed. And by the way, the CDP tool doesn't pick up the default slot labels in the I/O drop-down menus; you'll need to rename them if you want visible names, which can be helpful.

If you receive an error regarding mismatched soundfiles, use the sample editor's Adjust Sample Format function to prepare your files with matching sample rates, channel number, and bit depth. Fast and easy, you never need to leave Renoise.

Avoid tedious slider positioning. For direct entry, right-click a parameter's numerical value at the right end of the slider. Type in the desired value, press Enter, move on.

Edit/Undo is your friend. The Ctrl-Z keystroke combination is your faster friend. Learn some of Renoise's valuable keystroke accelerators. Ctrl-Z (Undo) is a good combo to know, particularly when you're doing a lot of experimentation. And it's easy to assign new keystroke combinations. For example, the sample editor's playback control has no default keystroke assignment, but a quick visit to Edit/Preferences/Keys resolved that issue in under a minute. I assigned the alt-spacebar combination for playback start/stop; it's easy to operate with my left hand while my right stays in control of the mouse. Again, when you're doing a lot of tests and experiments you need an efficient workflow. Take advantage of Renoise's customization, learn some keystrokes.

If you add/remove samples or alter their order in the browser just click the button to the left of the Process button. Its tooltip says it recalculates the slider ranges, but it also updates the I/O file drop-down menus. Beats closing and re-opening the tool just to reset those menus.

The Powers Behind The Power

The CDP tool relies on three convergent development efforts. The CDP software is first among equals here, sharing the position with the Renoise DAW and its internal support for the Lua programming language. Lua is easy to learn, its integration with Renoise is splendid, and the Renoise community has responded to its implementation by creating a variety of add-on tools to extend the host's already-rather-awesome capabilities, another strong selling point for Renoise to the modern coding musician. Alas, I can say no more on the topic, but I'm happy to report that the Renoise developers have prepared an introduction to programming Lua-based extensions for their host. See the Resources listed with this article for more information.

Outro

The CDP software suite has been justly praised for its powers, but alas, those powers were available only to users proficient with a text-based interface and the command-line. Likewise, the project's Soundloom and SoundShaper GUI utilities are currently available only for Mac and Windows users, leaving Linux users with scripting the command-line processors. Scripting batch processes is a powerful method of using the CDP suite, but it is tedious and error-prone.

The CDP tool for Renoise eliminates those concerns entirely. The software's powers are at last easily accessible to any user, and the capabilities of the Renoise DAW are immediately at hand for viewing, auditioning, and editing your new sounds. Indeed, it's difficult to imagine a better workflow for experimenting with and using the CDP processors. Major respect to developers/users afta8, Djeroek, and EmreK for their work on the tool. Consider it highly recommended for regular Linux audio folk, consider it essential for anyone into unusual sound design possibilities.

Resources

Renoise

Lua Scripting for Renoise

The Composers Desktop Project (CDP)

CDP Downloads page

The CDP Tool for Renoise

Trevor Wishart biography on Wikipedia

Peter Kirn's article for CDM

Category: Tutorials
Categories: Blog

Aria Rostami - Interview & Album Release

Renoise Blog - September 30, 2015 - 16:41

Hello, tell us about yourself and what you do.

I'm both a musician and a producer so I have one foot in composition and performance and another foot in sound design. I've never been too interested in staying in a specific genre but I tend to lean in an experimental direction rather than a pop direction... both as a musician and as a producer.

Electronic music was something I fell into initially based on convenience, in the sense that I didn't need a band. Once I started recording at the age of 15 or 16 I didn't want to wait on collaborators to help me finish songs. I didn't have access to many instruments, so I ended up using bad synth patches and editing drums using mostly samples I recorded in my room with a computer mic. By the time I left high school I think I had recorded nearly 300 songs. I still work very fast and I work constantly.

I don't think I found any true grounding as an artist with something to say until I made the album "Form" at the age of 22. That album was about the illusion of control and both the production and compositions weave in and out from stable to unstable. It was the first group of songs I recorded after I got clean from drugs. It was the base of my simple understandings of adulthood and trying to break free from the cultural mindset and depictions of Millennials. That album was also the base of what the various themes of my music would be.

A year after I recorded "Form" my best friend and musical collaborator, Shawn Dickerson, disappeared. There are a lot of unsolved specifics to the case, but I have good reason to think he's no longer alive. Shawn wasn't the first death I had experienced, but he has definitely been the most important person I've lost. He was an amazing talent and unfortunately only released one track... a remix of my song "Cleare", which was released on "Uniform", and if you have time you should give it a listen. My EP "Peter" was a dedication to my relationship with Shawn and more or less based on the world we lived in together.

There are a few releases after those ones, but my creative work at this point centers around death, control, addiction and identity, and I try to switch back and forth between a light-hearted, playful aesthetic and a dark aesthetic so as to hopefully give these topics many dimensions to live in.

Could you give us an insight into your latest release, Sibbe?

"Sibbe" (pronounced Sibby) focuses on information, the technology that proliferates that information and cultural identity. Both of my parents were born in Iran but my brother and I were born in the United States. I've only been to Iran once when I was 12... this was pre 9/11 so the Middle East wasn't as large of a cultural topic as it is now. I was just old enough then to understand what Iran was and how it connected to me but not really mature enough to understand what identity is in a larger worldly sense. I especially wasn't ready to become a "professional" on the Middle East, Islam, religious fanaticism, international relations and Iran post 9/11.

Granted, I definitely knew more about these topics than my peers did, which actually became part of a problem. It took quite some time for me to understand that my knowledge of Iran, Islam and the Middle East was informed by a very specific diaspora. My family, their friends and other Persians I knew all come from a specific slice of the greater Iranian culture. In other words, I only heard one story and I heard it frequently. I had always thought I knew the full picture. Even the news I'd hear from Iran came through major Iranian cities... for example the media tends to focus on Tehran. It's not uncommon for people to ask me about Iran or the Middle East and my thoughts on specific things happening in the news because I am seen as an authority on the topic. In reality, I am removed from the true source and experience and in some ways even my sources are removed.

"Sibbe" is about that inaccuracy and also the wanting to understand. Some of the source material was sent to me from Iran by my dad and my girlfriend who were visiting at separate times and all the source material for the track "Sibbe III" was sent from Teipei by my friend Nicole who lives out there now. The source material was recorded secretly using cellphones. I wanted to use things like cellphones in this project because it is a way information is spread and collected whether that's reading the news, talking to people in other countries, or NSA spying and data-mining (which is also why I appreciated the recordings were done in secret.)

I've also been listening to a lot of re-releases of older music from all around the world which have become more and more popular in the last decade. It's always been easy for Western, English speaking countries to proliferate their media and influence to other countries but we're finally at a time and place where the Internet and the greater interest in information has opened up the doors for this music to come back to us.

I look at "Sibbe" as an American album through and through although a lot of it is influenced by music outside of the United States. I also made a point to nonsensically mishmash cultural tones and ideas to show ignorance, appreciation and a push for something new.

To what extent do you make use of Renoise in your music creation process and what is the blend of hardware and software in your setup?

The only D/I I use is an Alesis Ion synthesizer and then everything else is either recorded through a microphone or sampled. When making experimental music, Renoise is my main instrument. Every track on this album and even going back to all the tracks on my first album "Form" were made using Renoise. I may write segments on instruments but I never plan a full song before I start recording because I'd rather filter it through the creative process of production while I'm writing.

I don't know if this works for everyone but I'd definitely recommend not planning out an entire song before you record it if you're the only person working on the song. The benefit of having collaborators is that there are many minds working at the same time. You can recreate that effect if you record and then write with what you've laid out. The problem with a big idea for a song from beginning to finish is that you'll leave no room for flexibility or serendipity.

Do you involve live instruments in any way?

Yes. On the new album "Sibbe" for example I used piano, glockenspiel, violin, synthesizer, vocals, Turkish Tar and melodica. I'll record passages on instruments and then if I don't like the way it turns out I might cut them up and sample them to create something I would never have thought of just sitting in front of the piano, for example.

The track, Vietnamoses, caters to the dance floor, while the others are more ambient in nature. Was this planned in advance? Why not focus exclusively on one or the other?

I recorded many songs during this period, some of which had drum tracks. My EP "Czarat" was made during this time and the B-side was "Vietnamoses." I was originally only going to focus on the experimental non-percussive stuff for "Sibbe", but I thought "Vietnamoses" added something the other songs couldn't.

There may be a sense of elitism in experimental music that sees rhythmic music as lesser than. Or at least this may have been a driving point in the 20th century, when people really wanted to challenge what we know about music; now it's just part of the cultural understanding that songs with beats are in one box while ambient experimentalism is in another box. But truthfully, there is an overlap between the two. When done in a specific way, robotic rhythms and free-flowing sounds can each have a trance-like hypnotic effect, which I think "Vietnamoses" captures. You can see this connection in cultures that chant or create drones when praying or meditating, while other cultures will use drums and poly-rhythms in their religious practices.

This release just came out on the Audiobulb label. Do you have any upcoming shows or new releases planned?

I don't really play my solo stuff live because I always create it with home listening in mind, but I am a part of a duo that performs live. I collaborate with Daniel Blomquist and we have two shows planned in San Francisco. We'll be playing with Thomas Dimuzio and two other acts on Saturday, October 17th at Thee Parkside and we'll be doing a collaboration set with a yet to be determined acoustic musician or group curated by Danny Clay on Saturday, December 12th at the Center for New Music. Daniel and I have enough songs recorded for an album, but we haven't shopped anything out quite yet. Our songs are generally in the 10-15 minute range with long, slow builds and towering peaks.

I also have an LP called "Agnys" coming out digitally and on vinyl on Spring Theory sometime in January or February of 2016. "Agnys" is playful and joyful without being overstated or saccharine sweet and all in all pretty much the opposite in aesthetic when compared to "Sibbe"... all the songs have beats to them for example and they definitely give into a pop sensibility. "Agnys" was initially based around ideas Shawn and I never got to complete, but eventually grew into a bigger world of ideas as I developed it.

Who would you say have been your biggest musical influences and why?

Early on the two big ones were Nine Inch Nails and David Bowie. I was mostly into bad metal bands before the age of 14 and both of those guys showed me you could be dynamic with sound and style. I listen to a lot of music now but the artists that I am really inspired by would be people that are widely dynamic. To name a few I'd say John Zorn, Mike Patton, Trey Spruance, Moondog, Ryuichi Sakamoto, Yoko Kanno, Nobuo Uematsu, David Bowie, Aphex Twin, Daniel Lopatin, Bjork, Can, Demdike Stare, Dungen, Fennesz, Floating Points, Four Tet, Godspeed You! Black Emperor, The Haxan Cloak, William Basinski, Debussy, Chopin, Googoosh, Satie, Dave Brubeck, Bill Evans, Holly Herndon, Richie Cunning, Opiate, Pole, Andy Stott, Ata Ebtekar, Dariush Dolat-Shahi, Selda, Isao Tomita, Leyland Kirby, Dr. Lloyd Miller, Murcof, Pan Sonic, RAUM, Ryoji Ikeda, Secret Chiefs 3, Tim Hecker, Tortoise...

Are there any other things that you would say have an influence on your music?

I get bored very easily so my mind is always racing. I would never be able to pay attention in elementary school because I was always day-dreaming. Even up until college I never found school to be that challenging... it was always hard for me to pay attention. There are some serious downsides to not really having to work in school... mainly you just don't learn what it's like to struggle to accomplish something. But it taught me something about creativity I wouldn't have learned otherwise.

When I would daydream I would think in ways that were productive. One thing I still do is take some sort of visual stimulus or an object and think about what it would be if it were a sound. So not exactly what sound it makes if you hit it, but rather what tones, textures, notes, lengths, moods and so forth this non-sound-based stimulus would have. I don't mean that you then go ahead and make a concept album based on this or anything... I just did it to keep my mind thinking of sound in a different way.

One of my favorite musings was light trails on film from old footage of boxing matches in the 70/80's and what that would sound like. I also like to look at plain uninspiring things like pencils and post-its and think about their sound because it's a little more of a challenge.

How did you find out about Renoise and what attracted you to it in the first place?

Shawn had introduced me to it through a video Aaron Funk, aka Venetian Snares, posted on YouTube. I had no idea what a tracker was at the time. Renoise is much cheaper to buy than a lot of other equal quality DAWs, so I didn't see the downside of spending the money for a product good enough for someone like Venetian Snares.

I didn't touch the software for a while though. It wasn't until I went to visit my parents for a week sometime in 2007/8 that I actually used it. The first night I was in town I met up with a good friend and she was at some turning point of her life and needed advice. I didn't know what to say at the time. I went home and recorded an EP over the next 5 days using Renoise for the first time... I was forced to in a sense because I didn't have any instruments to record with. I named the EP "Advice" and I gave it to her at the end of the week. It was just some sweet gesture... I didn't know what else to do. Regardless, I've used Renoise ever since.

Finally, is there any particular new feature you'd like to see in a future version of Renoise?

The ability to automate instrument modulations and Autoseek tracks with pitch modulation. It'd also be nice if the pitch envelope could span more than 6.144 seconds like it did in previous versions.

(Editor's Note: Technically, you can get longer envelopes. Set the envelope to beat synced mode, extend it to the maximum 24 beats, set your song tempo to the lowest 32 BPM, then switch the envelope back to milliseconds. It will now be around 45 seconds in duration, and you can switch the song tempo back to normal.)

Sibbe is available now from Audiobulb Records.


Official Website
Soundcloud
Facebook

Category: Artists
Categories: Blog

I Am Robot And Proud - Interview & Album Release

Renoise Blog - September 17, 2015 - 16:57

Hello, tell us about yourself and what you do.

My name is Shaw-Han Liem, and I make music under the name “I Am Robot and Proud” in Toronto, Canada. I've been doing this a while, since 2001. I also work in other mediums like programming visual projections and designing video games.

Light and Waves by i am robot and proud

Could you talk a bit about your latest release, Light and Waves. Was there anything in particular that you wanted to achieve with it?

Immediately before this record I worked on one called “People Music”. The goal there was to create arrangements of my previous work for a live band (drums, bass, synth, guitar) that we could perform live as a group. In the process of doing so, I had to sort of explode my normal process of making songs (which previously revolved around Renoise). We recorded the album into Logic in the studio and I mixed it in a more traditional 'rock band' way. With “Light and Waves” I came back to my previous process, but I wanted to incorporate the new techniques that I used on “People Music” - doing a lot more live playing and improvisation. On the other hand, I also wanted to explore more 'computer assisted' techniques like using randomness in sample processing and the compositions. So I guess you could say “Light and Waves” is an attempt to have those two worlds living together - live musicians playing organically and using a computer as a compositional tool.

To what extent do you use Renoise in your creative process and what is the blend of hardware and software in your setup?

When I started out releasing albums in 2001, I did everything in trackers. I used FastTracker2 for my first two albums and Renoise after that. I don't think I had used a real hardware synth, VST instrument or effect until my fourth album, “Uphill City”. So Renoise really informed my process early on, like trying to stretch the most interesting sonic possibilities from a few pieces of sound. Since then, I've added a lot of things to my setup: hardware and software synths, drum machines, effects, etc. But much of my work still starts in a blank Renoise file.

You also perform your music live, but are live instruments involved much in your studio recordings?

Now I use a lot of live instruments during the recording process. Because Renoise has a built-in sampler, I tend to improvise loops on guitar or synth, then chop them into sample instruments that I can use in tracks. On “Light and Waves” there are two tracks recorded with the full band, which are essentially recorded completely live in a traditional studio process.

A video posted by i am robot and proud (@robotandproud) on Mar 26, 2015 at 3:23pm PDT

And has Renoise been used on stage at all?

Yes! I've been using Renoise to run my live show since 2000. Early on I used it basically to control the playback of the track, but now I control it with a Novation Launchpad and some custom scripts that handle instrument switching and live looping (recording into the pattern editor in real time to create tracks on the fly).

For my live visuals, the setup is fairly simple. I have Renoise sending MIDI data for each track to Processing (audio-visual software), and also a custom Lua script that sends some extra info over OSC (Open Sound Control) - things like BPM, current position in the song, etc. From there I make a new sketch in Processing for each song and interpret the incoming data as visuals. For example, I map audio characteristics like note pitch, velocity and instrument to visual things like shape, color and movement. I can have global changes happen as the song progresses, e.g. when we get to the chorus, reverse gravity, change the color scheme and speed up the camera movement. It's something I've been experimenting with for my last few tours, but I think only recently have I become comfortable with the process and started to make some interesting work. I'm currently working on the next iteration of my visuals for my Japan tour in October.

Your music is available to buy on your bandcamp web-store, but it also receives physical releases through various labels. Could you give us an insight into how this approach to selling music works for you?

It basically comes down to having the music available in as many ways as possible and giving people the option of buying it where they are most comfortable. Some people prefer to buy on bandcamp, because they know the money goes directly to the artist. Other people prefer iTunes or their local record shop. Some people prefer streaming on youtube or streaming services (for an artist like me, this provides almost no income), and others just download it for free. It's really a complicated thing and no-one has really figured out a stable model. But I think having your music available in as many ways as possible gives you the best chance of connecting with the people who want to support you.

From this experience, do you have any advice for other musicians looking to have their music released?

It's a weird time to be doing music. There really is no template or 'business model' now and there is definitely less money going around, but there are still more crazy musical ideas to be discovered. I prefer to focus on that aspect of it; creating something new is a feeling that you can't really get anywhere else in life.

Who would you say have been your biggest musical influences and why?

I grew up around a lot of other musicians - other teenagers who played in my local music scene (in and around Toronto). I would say that aspect of a local music community is the thing that influenced me the most. These days, with everything being online, the 'community' can span a larger geographical space, but I think the motivations and spirit are the same. And there is no substitute for standing in a room with your friends and having your mind blown by a cool band making weird music you've never heard before.

Are there any other things that you would say have an influence on your music?

I'm also influenced a lot by people in other fields - most recently, local game-maker friends who have to be both engineers and artists to create their work. I have become very interested in creating new tools for making my own music. I'm really inspired by the idea of an 'engineer/artist', who can use technology to create new tools and then use those tools to create new kinds of work - whether that be musical, visual or interactive.

How did you find out about Renoise and what attracted you to it in the first place?

I first started making music on a computer in high-school in the 90s - I think the first program I used was ModEdit on DOS. The reason was - it was free. You could literally make music even if you only had a PC and nothing else. From there I moved to FastTracker2 on Windows and I used that for a few albums. At some point I was playing a show and another artist on the bill saw I was using FastTracker2. He was like, "if you like that, you should check out Renoise!" From there I was basically hooked: all the comfortable micro-editing capabilities of a tracker, but with modern things like VST support and a scripting interface.

Is there anything about the software in particular that really helps you creatively?

As a musician and also a programmer and designer, I think the scripting interface (Renoise Tools) is one thing that really melted my brain the first time I tried it. It really opened up a lot of possibilities. I have made my own custom controllers on iPad that talk to Renoise over OSC, and written scripts that manipulate note data (for example, 'find all kick drums and randomly shift them forward by half a step' or 'take all the notes in the track and shift them up or down by a minor third'). I find that manipulating compositions using code/algorithms has created a super fun way to find musical ideas that I would never consider when playing an instrument in the traditional way.

The iPad controller sounds intriguing, could you tell us more about it?

I guess it's a way for me to solve that problem of having a 'zoomed out' view of my songs as I'm working on them. It's a transport control and arrangement view where I can use touch gestures to zoom in and out of certain sections - or zoom all the way out to see all the tracks and patterns on one screen. From there it's easy to audition sections and arrangement ideas. It also works over wi-fi and has an instrument view/selector - so I can carry it around my studio and record overdubs into my Renoise session without having to look at the computer screen. I made it as an experiment using a game engine called cocos2d (since I have a background in game programming, it was the easiest way for me to get started). I have a custom Renoise tool written in Lua that communicates with the iPad app over OSC to send and receive all the information (so that anything you change on your computer is instantly reflected in the app and vice versa).

I worked on it on and off for a few months, creating a version that I could use while making “Light and Waves” - but then I got too busy recording the album, preparing the live band for tour etc. and haven't been able to go back to it since. I would love to return to it and eventually release it for the Renoise community, but I'm not sure when I'll have time to do so!

Are there any new features you'd like to see in a future version of Renoise?

I think Renoise (and trackers in general) is really good at showing you the 'zoomed in/micro' view of your music. In a way that no other kind of DAW really does, you are looking at your composition at a scale that allows you to make tiny and subtle variations to notes and sequences pretty quickly and easily. I think where a tracker is lacking is in the 'macro/zoomed-out' view - where you have to look at the song as a whole and sculpt the arrangement at a larger granularity (maybe at the level of 'verse', 'chorus' etc). Is that really vague? To be honest I don't have an answer or feature suggestion to solve this, it's just something that I struggle with in the tracker world.

You've also been involved in other artistic mediums, such as games and midi-responsive visuals. Is this something you plan to continue doing?

I'm continuing to work in other fields like games, interactive art, visuals and procedural music. I basically see all of these things as related: using technology to create fun/interesting/important experiences.

And finally, do you have any upcoming shows or future releases planned that you can tell us about?

Once the album is out, I will be on tour all around Japan in October.

Light And Waves is available now in both digital and physical formats.
I Am Robot And Proud - Official Website

Category: Artists
Categories: Blog

Renoise Redux VST/AU released

Renoise Blog - June 5, 2015 - 13:51

We're happy to announce that the Redux VST/AU plugin from Renoise is now available for Win/OSX/Linux.

Want to try it out for free?

Demo versions can be downloaded from:
http://www.renoise.com/download

How much does it cost?

A Redux license is €58 (+VAT) or $65.
For owners of Renoise, a Redux license is €40 (+VAT) or $45 when ordered through the Renoise Backstage.

What's included?

Redux comes with a small but fine pack of example instruments, samples, DSP FX Chains, Modulation Sets and Phrases - all the components that make up an instrument in Redux. We highly recommend that you browse through the included presets to get an impression of what it can do for you.

There are 3 additional free content packs with more instruments, samples and other presets available to registered users in the Backstage.

Want to know more?

http://www.renoise.com/redux

Category: Releases
Categories: Blog

Instrument-building: Electric Piano

Renoise Blog - November 27, 2014 - 15:12

Photo by Roger Mommaerts / CC BY

In this tutorial we are going to create an authentic-sounding electric piano in Renoise.

Topics covered in this tutorial
  • How to render samples from a plugin (VST/AU)
  • How to layer sounds to create a thick, convincing sound
  • How to implement cross-fading between keyzone layers
  • How to use velocity-tracking for expressiveness

The electric piano - and its cousin, the electric clavinet - are truly some of the classic instruments of the 20th century. Famous models like the Fender Rhodes (the one that started it all), and later models such as the Wurlitzer and Hohner, are recognized all over the world by musicians of all ages, spanning all genres.
In these instruments, the sound is not created electronically, but rather by a hammer striking a string or tuning-fork-like apparatus, with a built-in pickup system amplifying that signal. Essentially, you could describe the electric piano as a cross between the piano and the electric guitar.

Unlike the electric organ, which we covered in the last chapter, the electric piano responds to how hard you strike it - the sound becomes stronger and fuller, but keeps a consistent character. This is something we can emulate pretty well in Renoise, using minimal resources.

Before we go through the steps of creating the actual instrument, let’s hear a quick sample of what the finished instrument might sound like (the link will open in a new window/tab):
Electric Piano demo (Doors - Riders On the Storm)

Step 1: Samples! - our basic building blocks

Since this is going to be a sample-based instrument, the first thing we need to decide upon is a good sound source. Of course, we could choose to sample a real-life Fender Rhodes, but I personally don’t have one standing in the corner of my living room (I’m sure someone reading this does, though). Rather, I would like to showcase a really cool feature of Renoise - the plugin renderer. The plugin renderer allows you to "freeze" the output of a plugin, creating a sample-based version at the push of a button. As the most suitable candidate for rendering our samples, I have chosen a plugin called Pianoteq.


The Pianoteq plugin lookin’ sweet in Electric Piano mode

Pianoteq is a great plugin for creating faithful emulations of a number of different instruments. It specializes in recreating the sound of both modern and historical pianos, but also comes with various percussion instruments installed. You can download the free demo of Pianoteq here.

Once you have installed the plugin (and possibly, rescanned for new plugins in Renoise), Pianoteq should be ready for use. Heading into the instrument list in the upper-right corner of Renoise, you can choose to load the plugin from the list of plugins right underneath (Instrument Properties), or by means of the plugin tab (Plugin).

Let’s begin by launching the plugin renderer by right-clicking on the plugin in the instrument list. First, we need to define the pitch & velocity range - this will in turn decide how many steps the keyzone is divided into. In our particular case, the low and high note should be set to C2 and C8, respectively. Tones outside that range are rarely used, and in doing this, we save a little bit of memory / hard-disk space. As for velocity, we are interested in capturing the maximum and (almost) minimum velocities of the electric piano. By default, however, the plugin renderer is set to just a single velocity level - we want to increase this to 4. Also, the instrument does not radically change its timbre from note to note, so we set the pitch step size to 6 (6 semitones = two samples per octave).
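If you want to double-check such settings, here is a quick back-of-the-envelope sketch (in Python) of how many samples this render job should produce. The C2/C8 note numbers are MIDI-style assumptions for illustration - the exact zone layout is of course up to Renoise:

  # Rough estimate of the render job - illustration only, not Renoise's code
  low_note, high_note = 36, 108   # C2..C8 as MIDI-style note numbers (assumed)
  pitch_step = 6                  # semitones per step = two samples per octave
  velocity_layers = 4

  pitch_steps = (high_note - low_note) // pitch_step + 1
  total_samples = pitch_steps * velocity_layers
  print(pitch_steps, total_samples)   # -> 13 52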

The rendering dialog should now look like this:

If you have not already done so, in the Pianoteq plugin select the Electric Piano “Tines R2 (Basic)” preset, and look for the little switch labelled Reverb (we want to turn off reverb before rendering).

If we hit the Render button, Renoise will start to record the plugin. Note that this happens using the sample rate specified in your audio settings. You might want to match that with the plugin to avoid any loss of quality when converting between rates. In our case, Pianoteq has a setting which allows you to choose the desired output rate in hertz - I have chosen 48kHz, with the renderer set to 16 bit. Normally, 16 bit is fine, but increase this value if you are planning to change the volume of the rendered samples afterwards - the extra bit depth allows for an increase in volume with no (perceptible) loss of quality.
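As a general rule of thumb (basic audio math, nothing Renoise-specific), each extra bit of sample depth buys you roughly 6 dB of dynamic range - which is exactly the headroom you gain for later volume changes:

  # Each bit of sample depth adds ~6.02 dB of dynamic range
  for bits in (16, 24, 32):
      print(bits, "bit ->", round(bits * 6.02, 1), "dB")
  # 16 bit -> 96.3 dB / 24 bit -> 144.5 dB / 32 bit -> 192.6 dB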

Having rendered the plugin, switch to the keyzone editor - it should now look like this:

This is how the plugin renderer creates its output - a tiled keyzone map with seven octaves and four velocity levels. When rendering a plugin with multiple velocity layers, the plugin renderer will automatically disable the link between velocity and volume (the VEL>VOL button located in the toolbar underneath the keyzone editor). This is fine, as any sample rendered from a note with a low velocity will - most likely - also have a correspondingly low volume.

And of course, when choosing our render settings we could have divided the velocity into even more steps, in an attempt to capture the character of the plugin more accurately - but here we make a creative decision: we are not really interested in a sound which is divided into 4, 8 or 16 velocity steps - rather, we are aiming to emulate the sound at all possible steps, using the minimum and maximum velocity levels as our “opposite poles” that we then crossfade between.

Because of this, we are now going to remove the second and third velocity layers from our rendered sample, keeping only the lowest and highest ones intact:

Note: I am using the SHIFT modifier to select multiple keyzones between mouse clicks

If you are now asking ‘but why didn’t we just render with two velocity levels in the first place?’, consider that the velocity levels are distributed equally. So, by rendering four levels we got samples at 25, 50, 75 and 100 percent velocity. By keeping the 25 and 100 percent levels, we got approximately the right velocities (with the right “timbre”) for our particular purpose.
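To illustrate the arithmetic (this sketches the reasoning above, not Renoise’s exact zone math):

  # Equal distribution of N velocity layers across the 0..127 MIDI range
  def layer_peaks(n_layers, v_max=127):
      return [round(v_max * (i + 1) / n_layers) for i in range(n_layers)]

  print(layer_peaks(4))   # [32, 64, 95, 127] - roughly 25/50/75/100 percent
  # Keeping only the first and last entries gives the two "poles" we crossfade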

Also note that the keyzones going from F#3 to B-3 might be empty. There is nothing strange about this, as the demo version of Pianoteq will skip certain notes. You probably want to delete those samples and resize the neighbouring zones to cover the “missing spot”.

Step 2: Adjusting the sample properties

It's always good to check the levels before you render a bunch of samples. But even if our levels were good for the electric piano at peak volume, it is of course an entirely different story for the samples we rendered at the lower velocity - right now they sound fine, but once we add our own velocity response curves, their volume would become too low.

A good way to boost the volume for a selection of samples is to apply a gain in the properties of the samples (anywhere between -INF and +12dB). This is easy - from within the keyzone editor, drag the mouse across the lower part of the keyzone to select those samples. Now you can “batch-apply” the new volume by choosing a different value in the Sample Properties:

What’s nice about applying the volume to the properties of a sample is that you are not changing the actual waveform data - the values specified in the sample properties only apply while the sample is being played.
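For reference, the +12dB ceiling corresponds to roughly a four-fold gain. The conversion is standard audio math, should you want to calculate it yourself:

  # Standard decibel-to-linear-gain conversion (general audio math)
  def db_to_gain(db):
      return 10 ** (db / 20)

  print(round(db_to_gain(12), 2))   # ~3.98 - about four times the amplitude
  print(db_to_gain(0))              # 1.0  - unchanged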

At this stage, you could also consider looping the samples. In the example instrument, I have done this - it saves a little bit of memory/disk space, and allows you to have a sustaining sound (which is not totally realistic for an electric piano, but a nice thing nevertheless). Note that this step is entirely optional - creating a good sample loop is an art and science in itself (and one which I am going to cover in my next article, btw.).

Step 3: Preparing modulation

With the basic ingredients in place, it's time to add modulation. Look at the following graph, and you will see that we are going to create two modulation sets: one called pp (pianissimo - very soft), and one called ff (fortissimo - very loud).

Therefore, we need to create two modulation sets, and apply different modulation to each of them. Again, we can select the samples in either the high or low part of the keyzone by dragging the mouse along the horizontal axis. Having selected the topmost samples, you can now choose “Mod: Create and assign new...” in the sample properties panel. Provide a name (ff), then repeat this process for the bottommost samples, adding another modulation set called pp.

Now we have the samples nicely divided into two modulation sets, and are almost ready to begin with the really interesting part: adding the modulation devices. But first, we need to make sure the two layers overlap each other.

Repeating the process from before, we drag the mouse horizontally to select the topmost samples*, and then, using the resize handle, we expand the range so that it covers almost the entire range, from 7F to 01. The same is done for the bottommost samples, but in the opposite direction - expanded upwards from 00 to 7E.

* Hint: If there isn’t any room for dragging the mouse (the keyzone being full), you can make room by resizing the two zones at the very left side. See also the picture below.


Making the upper and lower velocity levels overlap each other

There are two reasons for not making each layer cover the entire velocity range:

  1. If we are playing our instrument at full velocity, the quiet (pp) layer does not need to play (and vice versa for the ff layer). This will potentially save a little bit of CPU.
  2. Leaving a bit of space makes it possible to select samples via the keyzone and not just via the sample list (as you can see in the animated gif above).
Step 4: Implementing velocity response

At this stage, whenever we strike a key, the instrument plays two samples, always at full volume. But as long as we are working on just one of the modulation sets, having both layers sounding at the same time is a bit confusing...so let’s start by turning down the volume of the pp layer.

OK, that’s better. Now we only hear samples being played in the loud layer. In the modulation editor, click the ff > volume domain and add not one, but two velocity tracking devices.

Note: you can double-click to insert the selected device at the end of the modulation set

Actually, in most cases one velocity device would suffice, but we are looking for a slightly exponential curve, and the velocity/key-tracking devices are linear - chaining two devices with the same settings multiplies their responses, which achieves just that (see the sketch after the chain below). Also - in case you looped the samples - don’t forget to add an ADHSR device at the very end, or the sound could potentially keep playing forever.
The ADHSR should only turn down the volume when the note is released, so we want it to sustain at full level with all other controls at zero (but perhaps add a few milliseconds of release).

To sum up, this is what our modulation chain for ff > volume should look like:
[*] Velocity tracker: (default settings - clamp, min = 0, max = 127)
[*] Velocity tracker: (default settings - clamp, min = 0, max = 127)
[*] ADHSR (attack/decay/hold = 0ms, sustain = 1.00, release = 4ms)
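To see why the doubled-up tracker produces a curve, here is a small sketch. It assumes - and this is an assumption about the modulation chain, not documented behaviour - that each tracker contributes a normalized v/127 factor which is multiplied into the signal:

  # One linear tracker vs. two chained (multiplied) trackers
  def one_tracker(v):  return v / 127
  def two_trackers(v): return (v / 127) ** 2   # quadratic = gently "exponential"

  for v in (32, 64, 96, 127):
      print(v, round(one_tracker(v), 2), round(two_trackers(v), 2))
  # 32: 0.25 vs 0.06 | 64: 0.5 vs 0.25 | 96: 0.76 vs 0.57 | 127: 1.0 vs 1.0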

Next, we switch to the ff > cut domain and enable the LP Moog-style filter, a resonant filter type with a good, warm character. We want to make the cutoff behave more or less exactly like its Pianoteq counterpart. This might require a little bit of experimentation, but I found the following settings to be useful:

[*] Velocity tracking, set to “scale”, min = 45 and max = 127
[*] Velocity tracking (yes, another one), with default settings
[*] Envelope device with approx. the following curve across 6 seconds

Finally, head into the ff > resonance domain and adjust the Input to just about 0.200.

That’s it for the loud layer - now we want to switch to the pp layer and turn its volume back up (remember how we temporarily turned it down?).

This time, we want to implement a velocity response that fades the volume out at both low and high velocity levels. If we play the instrument right now, the ff layer sounds good at full volume, but at lower velocities it lacks “beef”. So this is what we intend to achieve: filling out the sound with our pp layer, which grows in intensity as the velocity increases from zero to “something”, and then gradually makes room for the sharper, stronger ff layer.

This is entirely possible to achieve - we have all the necessary components (modulation devices), we just need to put them together in the right way. Here is our magical custom velocity response for pp > volume:

[*] Velocity tracker: (clamp, min = 127, max = 0)
[*] Operand set to 2.00
[*] Operand set to 2.00
[*] Velocity tracker (default settings)
[*] ADHSR (attack/decay/hold = 0ms, sustain = 1.00, release = 4ms)

So, what’s going on here? Well, for starters, the ADHSR device at the end is just there in case we have looped our samples. What’s really interesting is that we start by interpreting the velocity “in reverse” - the first velocity-tracking device makes sure that the harder you hit, the lower the resulting value becomes. This is what makes the volume fade out as velocity approaches the maximum level. At the other end, the second velocity-tracking device works in the normal way - the harder you play, the louder the sound becomes. As for the two operands sitting in-between, they are simply there to boost the overall signal - otherwise, the sound would become too quiet.
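Put differently - and again assuming that each device multiplies a normalized 0..1 signal, with an Operand of 2.00 acting as a x2 gain - the pp chain computes a parabola that peaks at medium velocity:

  # pp > volume, as sketched above: reversed tracker * 2 * 2 * normal tracker
  def pp_gain(v):
      x = v / 127
      return (1 - x) * 2 * 2 * x   # = 4x(1-x), peaking at 1.0 for x = 0.5

  for v in (0, 32, 64, 96, 127):
      print(v, round(pp_gain(v), 2))   # 0.0, 0.75, 1.0, 0.74, 0.0
  # Without the two x2 operands the peak would only reach 0.25 -
  # that is why the sound "would become too quiet" without them.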

Now the instrument should be getting really close to what we want. Try playing it at different velocities, and hear how the timbre changes. Not perfect perhaps, but still a pretty good approximation of what an electric piano sounds like.

Step 5: Adding a macro to control falloff

As a finishing touch, we can now add a macro-controlled gradual volume falloff. This is done by adding a fader device to each of our volume domains. The macro can even be abused as a sort of pseudo-tremolo effect. Having a falloff is especially useful if the samples were looped to begin with (or they would never stop).

For the ff > volume domain, insert the fader device before the ADHSR device. Default settings are fine, as we are going to assign “duration” to a macro anyway. Repeat this for the pp > volume domain, also inserting it before the ADHSR device.

Then click the button in the instrument editor toolbar labelled Macros in order to show the macro panel, and hit the little icon next to the first rotary knob. This brings up the macro-assignment dialog.

With the macro dialog visible, we then click the Duration parameter of each fader device, and assign each mapping to this range: min = 32.00s, max = 100ms. Adjust the scaling too, and generally find the values that you like the most.
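If you are curious what such a mapping does numerically, here is a hypothetical sketch (the function name and the curve parameter are invented for illustration - Renoise’s actual scaling options may differ):

  # Map a macro position (0..1) onto a fader duration from 32 s down to 100 ms
  def fader_duration(macro, longest=32.0, shortest=0.1, curve=1.0):
      t = macro ** curve   # curve > 1 biases the knob travel toward long times
      return longest + (shortest - longest) * t

  print(round(fader_duration(0.5), 2))              # 16.05 (linear scaling)
  print(round(fader_duration(0.5, curve=2.0), 2))   # 24.03 (curved scaling)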

And that’s it. We have created a pretty convincing-sounding electric piano, and hopefully learned a thing or two about advanced instrument creation in Renoise. Don't forget to save your creation!

Download the final instrument from here

I would like to extend a thank-you to the makers of Pianoteq, who have kindly given me permission to distribute the Renoise instrument as part of this article.

Next installment - the perfect loop: In the next chapter, I will explain the workflow involved in making high-quality multi-sampled instruments. This involves choosing the right bit depth and sample rate, creating the “perfect loop”, and using scripting to make this process less cumbersome.

Category: Tutorials
Categories: Blog

Renoise 3 goes gold

Renoise Blog - April 10, 2014 - 18:44

Another round of beta testing has passed. Renoise 3 is ready for production.


In case you missed it the first time around, new features in Renoise 3 include:

  • Supercharged instruments: Per-sample envelopes and DSP effects, Keyzones with overlapping layer options (e.g. round-robin features)
  • New real-time performance options: Real-time input quantize and real-time applied harmonic scales
  • Instrument Phrases: Attach a whole note-sequence to an instrument and play this sequence in any pitch and tempo with a simple key press
  • Instrument automation & macros: Control an unlimited number of independently weighted parameters within an instrument
  • New Doofer DSP FX: a wrapper for other devices, enabling you to bundle complex DSP chains within a reusable “shell”
  • New Convolver DSP FX: Impulse response processor for simulating the reverberation of a physical or virtual space
  • Reorganoised GUI: Simplified tabs + layout, entirely redesigned (and detachable) Instrument Editor

A detailed description of what's new can be found on the Renoise 3.0 launch page.

Category: Releases
Categories: Blog

Mutant Breaks #6 Recap

Renoise Blog - January 25, 2014 - 16:18

Mutant Breaks is an unorganized and mostly unadvertised yearly contest where experimental breakbeat producers come together to judge and be judged - then the votes are thrown out the window and winners are chosen through random, poorly enacted performance art. Cash prizes!

The contest came, the contest happened, the contest is over.

  • The thread it happened in here!
  • The votepack (with XRNS) available here!
  • The votes here!

Winner announcement:


See you in 2014?

Category: Competitions
Categories: Blog

Renoise 3.0: Creating a layered instrument

Renoise Blog - January 21, 2014 - 18:50

Photo by Shunichi kouroki / CC BY

In this exercise, we will recreate one of the instruments that come with Renoise 3, a drawbar organ emulation. This instrument is useful as a general purpose tone-generator, able to create sounds with a wide range of harmonics. At the end of the exercise, we should be able to control our creation via macros, too.

Note that this is not an introduction to advanced instrument-design as such, but rather a quick guide to pick up on some of the aspects of the new Renoise 3.0 sampler - the sample list/properties, modulation and macros. You could easily take the resulting instrument into new territory by adding your own effects, using an alternative tuning scheme or by replacing the individual samples.

Tip: to listen to the final instrument, you can open Renoise right away and use the Disk Browser to navigate to the factory content (within the instrument tab). The file is called Drawbar Organ.xrni, located within the folder named “Electric”.

Step 1: The fundamentals

So, we have set out to recreate a drawbar organ. In practical terms, this means stacking 8 differently tuned sinewaves on top of each other. And since this is a traditional organ-style sound, we want to add subharmonics (fractions of the topmost frequency) in the following manner:

Layer 1 : 16      110 Hz
Layer 2 : 5 ⅓     330 Hz
Layer 3 : 8       220 Hz
Layer 4 : 4       440 Hz   ← our "perceived" base frequency
Layer 5 : 2 ⅔     660 Hz
Layer 6 : 2       880 Hz
Layer 7 : 1 ⅗    1100 Hz
Layer 8 : 1 ⅓    1320 Hz

The 1/1 ratio refers to the fundamental frequency - in this case we aim for 1760 Hz, exactly two octaves above the standard base frequency of 440 Hz. This makes the base frequency sit nicely and comfortably as the middle 4th layer, with the possibility to add extra harmonics both above and below this tone.
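In case you are wondering where these numbers come from: drawbar footage behaves like organ pipe length, so frequency is inversely proportional to it. A few lines of Python confirm the table (a sketch only - the ⅓/⅔/⅗ footages are written as plain fractions):

  # Frequency per drawbar layer: f = f(8') * 8 / footage, with 8' at 220 Hz
  FOOTAGES = [16, 16/3, 8, 4, 8/3, 2, 8/5, 4/3]

  for i, footage in enumerate(FOOTAGES, start=1):
      print(f"Layer {i}: {220 * 8 / footage:.0f} Hz")
  # -> 110, 330, 220, 440, 660, 880, 1100, 1320
  # The implied 1' register would sit at 1760 Hz, two octaves above A-4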

Step 2: Gathering ingredients

You can’t cook without ingredients, and in this case our main ingredient is a sine wave. Luckily, there is an instrument folder called “Elements” that contains basic sounds to build upon - using the ability to expand an instrument and load samples from within it, we can load the sample “Sine 16.351 Hz” from the instrument called Chip - Sine.xrni.

Now, if we want to be able to load the instrument into any song, it is good practice to settle on the standard tuning, in which the A-4 note represents 440Hz. However, the sine-wave sample itself plays at an extremely low frequency - 16.351 Hz is well below the limit of human hearing - so we first need to transpose the sample upwards.
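How far upwards? A quick sanity check (plain math, not a Renoise API) shows that the raw sample sits a full 57 semitones below our 440 Hz target:

  import math

  # Distance in semitones between the raw sample (16.351 Hz, a very low C)
  # and the 440 Hz we want to hear when playing A-4:
  print(12 * math.log2(440 / 16.351))   # ~57.0 semitones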

In Renoise, you can transpose samples in two ways - either by tweaking the transpose amount directly in the sample properties panel, or by opening the keyzone editor and adjusting the basenote. In our case, we want to adjust the basenote via the keyzone editor - all the way down to the lowest possible pitch:


In this picture, we are using both the numeric input, and clicking along the edge of the keyzone to adjust the basenote

To ensure that our sinewave is in fact tuned to the right frequency, try playing an A-4 note while looking at the spectrum analyzer (hint: to view the spectrum analyzer, switch to the mixer or pattern editor, and click this icon in the upper toolbar: ). Being a sine-wave, the sample should produce a clear peak, as you can see here

If triggering the note at A-4 plays the sample at the 440 Hz frequency, we are ready to continue.

Note: if you had loaded another type of sample - containing more complex harmonics - it would probably be a good idea to listen closely to the sound, and then adjust the transpose/finetune by ear. Still, a sine-wave would be a useful reference - a good “tuning fork”, so to speak.

Step 3: Creating the modulation sets

Opening the modulation editor, you can quickly add 8 sets by pressing the + button, and assigning the corresponding names: 16, 5 ⅓, 8, 4, 2 ⅔, 2, 1 ⅗ and 1 ⅓. Of course, these names are just there for reference; you could leave them at their default values too.

Having selected the Volume slot, we then add an Operand device...

...and use copy-paste to copy the contents of the Volume slot to the remaining sets. For copying the modulation set, we can either use the keyboard shortcuts CTRL+C/CTRL+V, or access the clipboard via the context menu (right-click)

Also, we want to add macro assignments for each volume operand. This is done by stepping through the various sets in macro-assignment mode. Quickly done, although at this stage the repetition might begin to feel a bit tedious. Don't worry, we are nearly done!

Step 4: Creating and tuning samples

With our basic sample and modulation layers in place, all that is missing now are the actual samples. We will need to create copies of the original sine-wave sample, and transpose each copy according to the table of frequencies from step 1. We know that our original sample plays A-4 exactly at 440 Hz, which makes it easy to calculate the relative transpose we need to apply to each copy (you can also use a reference table such as this one if you need to look up how frequencies map to notes in standard/MIDI tuning).

Layer 1 : -24 semitones
Layer 2 :  -5 semitones *
Layer 3 : -12 semitones
Layer 4 :   0 semitones
Layer 5 :  +7 semitones *
Layer 6 : +12 semitones
Layer 7 : +16 semitones
Layer 8 : +19 semitones

* Note that layers 2 and 5 do not accurately represent the pitch from the original table of frequencies, as they have been adapted to tempered tuning.
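These transpose amounts can be derived directly from the frequency table in step 1 - again just plain math:

  import math

  # Semitone offset from the 440 Hz reference to each layer frequency
  freqs = [110, 330, 220, 440, 660, 880, 1100, 1320]

  for i, f in enumerate(freqs, start=1):
      semitones = 12 * math.log2(f / 440)
      print(f"Layer {i}: {semitones:+.2f} -> {round(semitones):+d}")
  # Layers 2, 5 and 7 land on fractional values (-4.98, +7.02, +15.86),
  # so rounding them to equal temperament detunes them very slightly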

Creating the samples is done by selecting the sample in the sample list and hitting “Duplicate” seven times (CMD/CTRL+D) - don’t forget to focus the sample list first (alt/middle-click). Be sure to follow the steps in the following screen capture: first, the basic sample is copied 7 times, then each copy is assigned to a modulation set (this can also be done via the drop-down below the sample list), and finally each sample is transposed using the sample properties...

Step 5: Final steps

Now that each sample has been assigned to a unique frequency and modulation set, the instrument is practically done. You should be able to experiment with the macros to achieve various sounds and timbres (and, generally speaking, it is probably a good idea to check that each knob does in fact work, and is controlling the right layer).

Adjusting the volume of the instrument is recommended, too - playing this many sine waves on top of each other can easily get pretty loud, especially when playing chords. Of course, you could just turn down the global volume of the instrument, but perhaps a better approach is to select all samples in the sample list and “batch-apply” a volume to them, using the volume value within the sample properties panel. This ensures that you have some headroom for your instrument (it is always a good idea to aim for +0dB as the master volume, as it will make it easier to insert the instrument into a song/mix at a later stage).

And that’s it!

Congratulations on creating a layered instrument with the new sampler in Renoise. Now would probably be a good time to save your creation!

And perhaps this exercise taught you something useful? In any case, if you know the new sampler in Renoise well enough to create an instrument like this, you are well on your way to creating even more advanced instruments “from scratch”.

In the next Renoise 3.0 tutorial, we could perhaps look into creating a Doofer device, faithfully emulating a rotary speaker (a Leslie speaker). It could be combined with this instrument to create some really authentic jazz club vibes (cigarettes and alcohol not included).

Category: Tutorials
Categories: Blog
