Using Effects Plugins to Create Character Voices: Aliens, Demons, and Monsters

In our last article on processing vocals for speculative fiction podcasts, we covered dealing with synthetic voices such as Robots, AI, and Androids. In this article we will go over some techniques for creating non-human, non-synthetic voices, such as Aliens, Demons, and Monsters. Creating non-human character voices can be a lot of fun; it helps bring your podcast to life and makes it easier for the listener to distinguish between characters. Here, we will cover several techniques for using effects plugins to achieve the desired voices for your characters. I will go over a few of my go-to tools for processing these types of voices, but I also encourage you to try these effects in various combinations, as sometimes we need more than one tool to get the job done.

Reverse Reverb:

One of the first tools I reach for when dealing with ethereal, non-human voices is reverse reverb. Many DAWs have a built-in plugin for this effect, but it can also be done manually with a few basic steps. Simply reverse the audio file so that it plays backwards, add the desired amount of reverb, then reverse the track again so that the vocal once again plays as it was recorded. The tail of the reverb will now precede the vocal and add an otherworldly, ghostly lead-in to every word. This process is great for voices from beyond the grave or from another dimension, and it usually produces a haunting effect.
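If your DAW does not have a dedicated plugin and you enjoy scripting, the same three steps can be sketched in a few lines of Python. This is only a rough illustration, assuming NumPy and the soundfile library are installed; the file names are placeholders, and a burst of decaying noise stands in for whatever reverb you would actually use.

```python
import numpy as np
import soundfile as sf

# Load the vocal take; "vocal.wav" is a placeholder path (mono assumed for simplicity).
vocal, sr = sf.read("vocal.wav")

# Step 1: reverse the audio so it plays backwards.
reversed_vocal = vocal[::-1]

# Step 2: add reverb. A real plugin is the usual route; here a crude stand-in is
# convolution with two seconds of exponentially decaying noise.
ir_len = int(2.0 * sr)
impulse = np.random.randn(ir_len) * np.exp(-np.linspace(0, 6, ir_len))
wet = np.convolve(reversed_vocal, impulse)
wet /= np.max(np.abs(wet))  # normalize to avoid clipping

# Step 3: reverse again so the words play forward but the reverb tail precedes them.
reverse_reverb = wet[::-1]

sf.write("vocal_reverse_reverb.wav", reverse_reverb, sr)
```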

Bit Crushers:


Image 1) Screenshot of the Decimator interface in Audacity

Another fun tool to pull out for non-human voices is the bit crusher. For harsher-sounding voices, for instance demons or monsters, I often reach for a bit crusher plugin. Bit crushers work by reducing the amount of information (or resolution) in the audio file, thus creating distortion. The amount of bit reduction can be tailored to taste and can produce anything from subtle grit to extreme mangling. These plugins are easy to use and are included with many DAWs. Audacity, the robust free audio editor, has a free plugin called the Decimator, which allows you to reduce both the sample rate and the bit depth, and there are also many freeware plugin options such as Tritik’s Krush. These easy-to-use plugins may produce the desired effect on their own and require little finesse or experience.
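If you are curious what a bit crusher is doing under the hood, the math fits in a few lines. The following is a minimal Python sketch, assuming NumPy and soundfile are installed; the file name, bit depth, and downsampling factor are placeholder values to play with.

```python
import numpy as np
import soundfile as sf

def bitcrush(audio, bit_depth=6, downsample=4):
    """Rough bit crusher: quantize to `bit_depth` bits, then fake a lower
    sample rate by holding every `downsample`-th sample."""
    levels = 2 ** bit_depth
    # Quantize: snap each sample (range -1 to 1) to the nearest of `levels` steps.
    crushed = np.round(audio * (levels / 2)) / (levels / 2)
    # Crude sample-rate reduction: sample-and-hold every Nth sample.
    held = np.repeat(crushed[::downsample], downsample)[: len(audio)]
    return held

vocal, sr = sf.read("demon_vocal.wav")  # placeholder file name, mono assumed
sf.write("demon_vocal_crushed.wav", bitcrush(vocal), sr)
```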

Harmonizers:

Image 2) Screenshot of Harmonic Generator interface in Audacity

Harmonizers are one of my favorite tools for creating non-human character voices. They work by shifting the pitch of the original signal (either higher or lower) and then recombining the shifted signal with the original. Most harmonizers let you layer multiple versions of the altered signal and can be used to create thick and interesting effects. While these plugins can be used with musicality in mind, they can also create discordant and unsettling results, perfect for otherworldly voices. I often reach for these plugins (or hardware such as the POG, or Polyphonic Octave Generator) when working with alien characters and find the tool an easy choice for instant strangeness. Audacity has a free plugin called the Harmonic Generator, and its operation is simple and intuitive. This effect can also be achieved by adding duplicates of the original signal and pitch shifting them individually; this latter technique can be useful because you can mix the individual altered signals to taste, as in the sketch below.
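Here is a minimal Python sketch of that duplicate-and-shift approach, assuming the librosa and soundfile libraries are installed; the file name, intervals, and mix levels are placeholders. It makes two pitch-shifted copies of the vocal and blends them back with the original.

```python
import librosa
import soundfile as sf

# Load the vocal as mono; "alien_vocal.wav" is a placeholder path.
vocal, sr = librosa.load("alien_vocal.wav", sr=None, mono=True)

# Two shifted copies: a fifth below (-7 semitones) and a fourth above (+5 semitones).
# More dissonant intervals (+1, +6) push the result further toward "unsettling".
low = librosa.effects.pitch_shift(vocal, sr=sr, n_steps=-7)
high = librosa.effects.pitch_shift(vocal, sr=sr, n_steps=5)

# Mix to taste: original up front, shifted copies tucked underneath.
mix = 1.0 * vocal + 0.5 * low + 0.35 * high
mix /= abs(mix).max()  # normalize to avoid clipping

sf.write("alien_vocal_harmonized.wav", mix, sr)
```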

Combining Effects:

While a single plugin can be effective and get the job done, sometimes it is worthwhile to experiment with stacking effects. However, the order in which you process your audio files can drastically change the results. For instance, you may want reverb on your vocal track to create a sense of space, but should you add it before or after the other effects? In most cases, if you are using reverb to suggest an environment you will want to apply it after processing the track with other effects. But there are no hard rules for which order effects should be used in. I encourage you to experiment with stacking effects, and while you do so, try them in different orders. You will find that the results can vary greatly, and as always it is your sense of aesthetics (and the audibility of the track) that will determine the right tool or tools for the job. You may also want to take notes on how you created a certain character’s voice in case they come back in a subsequent story. This process can be really fun, so go wild and hear what happens as you explore the world of audio effects plugins.
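To make the point about ordering concrete, here is a tiny Python experiment (pure NumPy, with toy stand-ins for the effects discussed above and a sine wave standing in for a vocal). Running the two chains in opposite orders produces measurably, and audibly, different results.

```python
import numpy as np

def bitcrush(x, bits=5):
    levels = 2 ** bits
    return np.round(x * (levels / 2)) / (levels / 2)

def reverb(x, sr, decay=1.0):
    # One second of decaying noise as a stand-in impulse response.
    n = int(sr * decay)
    ir = np.random.default_rng(0).standard_normal(n) * np.exp(-np.linspace(0, 6, n))
    y = np.convolve(x, ir)
    return y / np.max(np.abs(y))

sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
voice = np.sin(2 * np.pi * 220 * t)      # stand-in for a vocal track

a = reverb(bitcrush(voice), sr)          # crush first, then place it in a space
b = bitcrush(reverb(voice, sr))          # reverb first, then crush the whole wash

print(np.max(np.abs(a[:sr] - b[:sr])))   # non-zero: the two chains are not the same
```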


Using Effects Plugins to Create Character Voices: Robots, AI, and Androids

One of my favorite challenges when producing speculative fiction podcasts is dealing with non-human characters. These can vary from robots to aliens, androids to animals, and everything in between. While some of these characteristics can be conveyed through the acting, sometimes additional processing is needed to create convincing non-human characters. It is always important in an audio setting to differentiate who’s who, and this can be done in many ways. The acting and the panning are the simplest ways to make one character stand out from another, but sometimes we also need to reach for the right tool to make the difference meaningful, especially when one actor is playing multiple characters in the story.

Robots, androids, and AI are some of my favorite characters to produce, and they come up often in speculative fiction. There are a variety of audio effects that are useful in the creation of these characters, and in this article I will cover a few of my favorite techniques for creating convincing voices. The first effect I reach for when working to create a synthetic-sounding voice is the comb filter (think C-3PO). This simple technique gives the voice a synthetic feel while retaining the clarity of the speech, and the plugin is available in most free and paid Digital Audio Workstations (DAWs).

Comb Filtering

Comb filtering occurs when a signal is delayed and added to itself. This can happen accidentally when using multiple microphones (set at different distances from the subject of the recording) and can cause problems with the audio quality. When it occurs, the frequency display of an FFT (a visual representation of the signal’s frequency content) will look like the teeth of a comb, hence the name (see Image 2). However, this layering of frequencies and the resulting phase cancellation can be used as a powerful tool for creating non-human voices. Many DAWs have a built-in, easy-to-use comb filter. If your DAW does not, you can achieve the same effect by duplicating the track and delaying the duplicate by a few milliseconds, as in the sketch after the images below. Play around with different amounts of delay until you find something that suits your taste.


Image 1) Vocal waveform before comb filter
Image 2) Vocal waveform after comb filter
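For those who like to see the arithmetic, here is a minimal Python sketch of the duplicate-and-delay version mentioned above, assuming NumPy and soundfile are installed; the file name and delay amount are placeholders.

```python
import numpy as np
import soundfile as sf

vocal, sr = sf.read("robot_vocal.wav")  # placeholder path, mono assumed

delay_ms = 4                            # a few milliseconds; adjust to taste
delay_samples = int(sr * delay_ms / 1000)

# Duplicate the track, delay the copy, and add it back to the original.
delayed = np.concatenate([np.zeros(delay_samples), vocal])
combed = np.concatenate([vocal, np.zeros(delay_samples)]) + delayed
combed /= np.max(np.abs(combed))        # normalize to avoid clipping

sf.write("robot_vocal_comb.wav", combed, sr)
```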

Another go-to effect for robot voices is the ring modulator (think the Daleks). The ring modulator is a tried-and-true effect for creating otherworldly voices, but it easily gets out of hand and needs to be treated gingerly so as not to be overdone. While the ring modulator is not my favorite technique, because it has been overused in cinema, if it is the right tool for the job, then by all means use it.

Ring Modulators

Image 3) Screenshot of a software interface for a ring modulator, highlighting the waveform function

A ring modulator multiplies the input waveform (in this case your voice) with a signal from an oscillator, typically a sine wave, though most ring mods will give you a choice of waveform (see Image 3). The output contains both the sum and the difference of the combined frequencies, but neither of the original signals, and it tends to give the performance a robotic cadence. It is easy for a ring modulator to get out of control, so alter your parameters with care and play around until you find something that fits your taste.
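Under the hood that description is just multiplication. The sketch below is a minimal Python version, assuming NumPy and soundfile are installed; the file name, carrier frequency, and mix amount are placeholders, and the wet/dry blend is there precisely because this effect gets out of hand so quickly.

```python
import numpy as np
import soundfile as sf

vocal, sr = sf.read("dalek_vocal.wav")  # placeholder path, mono assumed

carrier_hz = 30.0                       # low carriers give that classic metallic warble
mix = 0.8                               # 0 = dry only, 1 = fully ring-modulated

t = np.arange(len(vocal)) / sr
carrier = np.sin(2 * np.pi * carrier_hz * t)

# Ring modulation: multiply the voice by the carrier. The result contains the
# sum and difference frequencies but neither of the original signals.
ring = vocal * carrier
out = (1 - mix) * vocal + mix * ring

sf.write("dalek_vocal_ringmod.wav", out, sr)
```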

Text-to-Speech Applications

Another fun way of creating robot voices is to avoid the actor altogether and actually use a robot voice. Most devices have text-to-speech capabilities, and these can give you a genuine robot feel, but keep in mind that they tend to be tinny and need to be equalized for clarity. There are several apps that come in handy for this technique and offer a broad variety of voices; however, the accents can be pretty bad and can sometimes sound like a parody of the desired accent. This is not my favorite tool, but it can have its place. Keep in mind that with this approach you will not have much control over the inflection, and nothing can replace a good “human” actor. Text-to-speech apps are easy to come by, often free, and sometimes even built into your word processor.
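As one example of scripting this yourself, the sketch below uses the pyttsx3 Python package (an offline text-to-speech library, assumed to be installed) to render a line of dialogue to an audio file you can then EQ and process like any other track. The voice index, speaking rate, and file name are placeholders, and the voices actually available depend on your operating system.

```python
import pyttsx3

engine = pyttsx3.init()

# Slow the delivery slightly and pick a system voice; available voices vary by OS.
engine.setProperty("rate", 150)
voices = engine.getProperty("voices")
engine.setProperty("voice", voices[0].id)    # index 0 is a placeholder choice

line = "Unauthorized life form detected on deck seven."
engine.save_to_file(line, "robot_line.wav")  # placeholder output path
engine.runAndWait()
```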

The aforementioned techniques will serve you well for synthetic characters, but what about aliens? Since we don’t really know what aliens will sound like, this can be a fun and interpretive process, and the choices are endless as long as we stay within the human hearing range (20 Hz to 20 kHz). But that is the subject of a future article. Of course, there are many other useful effects for creating synthetic character voices, and I encourage you to experiment with pitch shifters, flangers, vocoders, and more. In the meantime, have fun with Robot, AI, and Android voices.



Mixing Spoken Word and Music – Finding the Right Balance for your Audio Drama

Music and sound effects are a wonderful way to bring stories to life. An original soundtrack can take your fiction podcast to the next level and engage your audience in a deeper way. But the blend of music and spoken word can be tricky to mix properly, and the two can interfere with each other if not handled carefully. Since the story is the most important aspect of any fiction podcast (science fiction or otherwise), the music should not overpower the words but rather add a layer that helps tell the story.

In this article I would like to show you several techniques that I have found immensely helpful for making sure that both the spoken word and the music are audible, so that the audience can enjoy the story without missing a beat or a single spoken word.

The first technique, and probably the most important, is using EQ (an equalizer) to remove overlapping frequencies. EQ can be thought of as frequency-specific volume control. When you add music to spoken word, many of the frequencies in the two overlap and tend to blur together, making both less clear. It is easy to fix this problem with some basic EQ settings, and if you are a podcaster, EQ is an essential and easy-to-use tool for improving the quality of your audio recordings.

The human voice tends to sit in specific frequency ranges. The fundamentals of higher-register voices tend to lie between 165 and 255 Hz, while lower-register voices tend to lie between 85 and 180 Hz. Because the human voice sits in this area of the lower midrange, these frequencies can be turned down in the music to help keep both elements clear. When the vocal frequencies and the same frequencies in the music are added together, both become difficult to hear, mostly at the expense of the clarity of the voice. We can use EQ to remedy this issue. Open your EQ plugin on the music track and set the points so they sit just outside the vocal range; from there you can lower the level, in dB, of the frequencies in between (see Pictures 1 & 2*, and the sketch after them). You may have to experiment a bit with how much to lower those frequencies, but once you find the right balance you will find that the vocal tracks are much easier to hear without affecting the overall perceived volume of the music.

Picture 1) Example of EQ dip for a higher-register voice

Picture 2) Example of EQ dip for a lower-register voice
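If you prefer to see the dip as code rather than a screenshot, here is a minimal Python sketch using a standard biquad peaking-EQ design (the Audio EQ Cookbook formulas), assuming NumPy, SciPy, and soundfile are installed; the file name, center frequency, width, and cut amount are placeholders roughly matching a higher-register voice.

```python
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

def peaking_eq(audio, sr, f0, gain_db, q=1.0):
    """Biquad peaking EQ (RBJ cookbook); a negative gain_db produces a dip."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], audio)

music, sr = sf.read("underscore.wav")  # placeholder path, mono assumed

# Dip the music a few dB around the center of a higher-register voice (~200 Hz).
ducked = peaking_eq(music, sr, f0=200, gain_db=-4, q=1.2)

sf.write("underscore_vocal_dip.wav", ducked, sr)
```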

Another issue that can come up is bass masking, which is also addressed using EQ. This happens when there is a buildup of low frequencies; these frequencies, along with their overtone series, can cause “masking,” which makes the mix sound muddy and gets in the way of the clarity of the vocal tracks. To fix this potential issue we again reach for the powerful tool of EQ. The human hearing range runs from 20 Hz to 20 kHz, and most of the bass frequencies that lead to masking lie at 100 Hz and below. Because we cannot hear the frequencies under 20 Hz, we can lower their volume without changing our perception of the bass, while at the same time removing potential masking issues. One might think that this removal of information would make the music sound less rich, but the truth is that when we convert our audio files to MP3 this frequency range tends to be removed anyway. One might also be tempted to simply let the MP3 compression take care of removing these frequencies; however, that would leave the overtone series intact and could still cause issues.

Dealing with masking can be an easy process: simply open your EQ plugin and roll off the frequencies under 20 Hz (see Picture 3*, and the sketch below it). This is often called high-pass filtering: letting the high frequencies through while reducing the amplitude of, or blocking altogether, the low frequencies.

Picture 3) High-pass filtering
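The same roll-off can be scripted in a couple of lines with SciPy’s Butterworth filter design. This is a minimal sketch, assuming SciPy and soundfile are installed; the cutoff, filter order, and file names are placeholders.

```python
import soundfile as sf
from scipy.signal import butter, sosfilt

music, sr = sf.read("underscore.wav")  # placeholder path, mono assumed

# 4th-order Butterworth high-pass at 20 Hz: passes everything we can hear and
# rolls off the sub-audible rumble that contributes to masking.
sos = butter(4, 20, btype="highpass", fs=sr, output="sos")
filtered = sosfilt(sos, music)

sf.write("underscore_highpassed.wav", filtered, sr)
```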

While these issues can get complicated, particularly the math involved, familiarizing yourself with EQ and playing around with these simple tricks can greatly improve the audio quality of your podcast. By all means experiment and see what EQ can do for your recordings; it is a simple tool to use, and once you get the hang of it, it can go a long way toward making your podcast listenable and clear.

* The EQ plugin shown in the examples is from Audacity (free, open-source audio software), but these settings will work the same in any EQ plugin in any DAW.


How To Make a Fictional Podcast

I’ve been a long-time fan of radio dramas, serialized fiction, Star Wars audiobooks, and other fantastic types of audio media, so when podcasts Hulk-smashed their way onto the scene in the early 2000s, I was thrilled. I contacted an author and mentor of mine, Nicole Kimberling, and asked if she would be interested in making a podcast. After googling what a podcast was, she signed on. Two years ago I worked with Nicole to make the podcast “Lauren Proves Magic is Real!” A few folks beat us to the punch with full-cast recordings, but I think we caught the first wave of fiction podcasts.

I’m going to break down the steps you can take toward creating your own podcast, because if you have a story to tell, there’s someone out there who needs to hear it.

The approach that I would take to creating a serialized fiction podcast starts with breaking it into two parts: the story and the sound engineering.

Part 1: The Story

The story should be written as a script. Audio scripts fall into a few standard types meant to operate without the aid of visual explanations:

Classic Radio Play: A narrator explains the settings, scenes, and any other parts of the story not revealed through dialogue.

Serial Documentary Drama: A self-aware story (meaning the narrator knows they’re recording a podcast) in which the main character is recording a podcast. Normally, the characters investigate something and the plot unfolds through their interviews, experiences, etc.

Theater of the Mind: A dialogue-based podcast without a narrator that relies entirely on soundscapes. Sound effects take the place of visuals. This can rely heavily on clear exposition (which can be corny).

Dear Diary: An audio diary told in first-person narration, because reasons. Maybe your character hates to write words, or maybe they are just one of those people who constantly take audio notes.

There are ways to merge these ideas. For example, the main character of “Lauren Proves Magic is Real!” was a podcaster who found the field recordings of a supernatural special agent. She then aired them as episodes of the podcast. So the podcast had to be self-aware, and have both narrative and audio diary elements. The self-aware podcast seems to be a popular choice, which might be because it can frequently be styled in a War of the Worlds way. The listener may experience a moment or two early on where they are not sure if the podcast is real or fiction.

It’s important to make the presentation style clear early on, as it will determine how much of the story is told through dialogue and how much is told through sound design.

An audio script, like any script, can be made simply with your computer, your typewriter, or some pen and paper. Like any good story, you’ll want a beginning, an inciting incident, a climax, maybe even a twist, and of course an ending. I personally gravitate toward cliffhangers in serial fiction episodes, so fans have something to wonder about.

Now that we’ve covered some basic story elements and your imagination can start putting together a story you’d like to tell, let’s cover some of the technical basics and the steps to take if you want your podcast on Apple Podcasts (‘the people’s platform’) or paid subscription networks like Stitcher.

Audio Recording Gear Land:

Microphones: How many mics you’ll need depends on how many characters will be speaking to each other in a single scene. I started with three mics and got to a point later on when we needed eight—for one scene. I’m not gonna lie: Mics can be expensive, depending on the sound quality you need. However, you can certainly start with cheap microphones, even gaming microphones that come with a desktop PC. You can also occasionally find microphones at secondhand stores. A classic mic that gives you a lot of bang for your buck is the MXL 990. The standard stage mic, SM58, will work too, as well as the standard stage instrument mic, the SM57.

USB interface: This is a little box into which you plug a fancy microphone; the box then connects to your computer with a USB cable. There’s a large variety out there, but here are some I’ve used: Focusrite, M-Audio, and the PreSonus AudioBox.

Software: There are a lot of audio programs out there. Your computer may already have one, like GarageBand. There’s also free audio editing software, like Audacity, and I’ve heard good things about Reason and Ableton Live Lite. Your audio software is where you will record your story, layer track over track, and edit the sound. You’ll record your dialogue using the power of acting, and the friends you can convince to act with you. Take your time to experiment with settings and be open to feedback. Eventually, you will become familiar with your software and be able to produce content very quickly.

There are a few workarounds for the creative person working with a very small budget. There are a number of smartphone apps that are decent for recording and sound mixing. If you want to start small, nothing is stopping you. You have the power to write, record, and mix on the device you are likely reading this article on.

Sounds & Music

Theme Songs: All good podcasts have a theme song. It can sometimes include clips of the dialogue cut out and edited like a movie commercial or be an original theme. Theme songs are a fantastic opportunity to collaborate with another like-minded creative. Find a musician friend and offer to use their music or ask them to write some. It’s hard to be vulnerable during the beginning of a creative process, but having a musician that you like on your team is really going to be worth it as you move forward in producing.

There are many opportunities to play music in a fictional podcast. There’s music that plays in cafes, music that denotes the passage of time, music from a car radio, music that sets a location. The more of this you can have originally created, the better. But there are also a ton of resources online for both music and sound effects. I use www.freesound.org for sound effects and small bits of royalty-free music.

There will inevitably be times where you need a sound effect that isn’t on the internet. I’ve had to create sound effects by recording myself running up and down stairs while knocking things over, dropping plates, scraping a razor against a bowl, hitting an iron railroad nail tied to a fishing string, etc. Be creative. You already can record. Ask yourself what’s in the room around you that makes a sound that could enhance your story.

Hosting: Once you have recorded several episodes and are committed to a release date, you’ll need a site on the internet. It can be a simple free Wix or Squarespace site. The only real requirement is that the site can host an RSS feed, which is where the mp3s of your project will live. Then you’ll either pay a third-party host like Podbean, or hop over to Apple Podcasts and submit your podcast for review. If it passes muster with the strange robots that review things, then bang! You’re live. It’s time to hit that share button and plead for likes on your social media.

The best advice I can give anyone about to make a fictional podcast is to start small, pick a release schedule, and meet that release schedule every week. That means being prepared and not waiting until the last minute to do anything. Enjoy yourself! You’re about to embark on the fairly unexplored medium of fiction podcasting, a form that is still taking shape. I’m excited to hear what you make.