Interview with Field Collapse

Field Collapse is an experimental instrument builder and composer for film, television, and podcasts. His work regularly appears on the Simultaneous Times podcast, as well as in television shows such as Altered Carbon, Mythbusters, and the White Rabbit Project. When not working on production music he can be found recording and touring with the experimental ambient group Less Bells. He is also the co-owner of Audiobrat Production Music, a provider of library music.

Can you explain what library music is and how you came to compose for libraries?

It’s essentially the music version of a stock photo website. Library music is music designed for TV and film, but it is more affordable because it is not specifically composed for the project. Years ago, I was working at a music store selling recording equipment. I met someone who worked in the business and gave them a demo that I had put together showing whatever meager range I was capable of at the time. Years later, the man admitted he came home and threw the CD in the trash only to pull it out and eventually give it a listen a full year later! He called me when he needed a particularly “modern” sounding electronic track and of course I lied and told him I had tons of that stuff lying around. That night I think I made ten tracks for him to choose from. If you get a chance like that you take it!

What are some of the ways in which composing library music differs from composing for podcasts and film?

In the library world, you never know what the music is going to be used for, so you can’t tailor it to fit the project like you can when you are working to picture. As with all music, you are writing to hit a certain vibe or feel, but you are working with broader strokes because you have no idea where it’s going to end up. For me it’s very freeing: I can imagine the narrative or TV show that I’m trying to score. The main drawback is that the library music world is more conservative. If you make music that is too weird it will never get used, whereas in the podcast world I can force people to listen to the weird stuff. Podcasts are especially interesting now because they are such a relatively new art form that the lexicon hasn’t been canonized. It’s still up in the air what podcasts are supposed to sound like.

When composing music for accompanying spoken word what things do you have to keep in mind?

One of the main mistakes I see a lot of young musicians make is concentrating on the “lead” instruments: playing too many notes and grabbing too much attention. Most people get into music because they WANT attention. You have to remember that the “lead” instrument in a film or podcast is the narration, and your music is there to support that, not to compete with it. You are basically setting a stage for the action to take place in.

What tips do you have for using music to heighten emotion in a story?

Well, I’m probably not the one to ask about this because I’m always experimenting with manipulating emotions and hilariously failing most of the time. In the old days, you would use key modulations and other music theory tricks for heightening emotions, and occasionally I still do but these days I am drawn to more technical tricks, like binaural beats and imperceptibly gradual tempo shifts that speed up or slow down the heart rate of the listener. I like these “subliminal” tools modern musicians have access to. I love finding manipulation techniques that haven’t been exploited yet.

What is the main difference between making music for storytelling mediums versus making stand-alone “music” records?

I have always been confused when people would buy the incidental music soundtracks from a film. Why would you want to feel like you’re in a furious action movie when you are in the kitchen making scrambled eggs? To me, stand-alone “music” and soundtracks are functionally different. Soundtracks will have musical cues that don’t make sense when applied to your normal life. Also a lot of my soundtrack work is structurally compromised when you take the action or narration away. I use silence a lot to emphasize or highlight important information or to make an emotion hit harder. That said, I do listen to the Beat Street soundtrack all the time when I am hanging out because I love to feel like I’m a breakdancer in 1983 Manhattan.

What tips do you have for using Foley and sound-effects?

Like I said, podcasts are a relatively young art form and the roles that Foley and music play have yet to be really nailed down. Right now, I am leaning more on the old “radio play” model of the pre-television age when it comes to Foley. In the “old days”, they would only use sound effects when they were deemed important to the story. If you didn’t do this you’d obviously just be listening to constant footsteps and shuffling feet! I tend to be very conservative with Foley. You are much more likely to hear a gun cocking than an actual gunshot because the click of a cocking pistol heightens the tension. People’s imagination will fill in the sound of doors being opened and closed, but I will give them a really CREAKY SCARY door if I want people to get excited about something. I suppose it’s like picking your battles. You can’t fill up the entire sound spectrum because it will just be a mess, so be very careful about what you use that space for.

You’re also an experimental instrument builder, do you have any advice for using non-conventional sound sources in composition?

The reason I started building my own instruments is because I was sick of everything sounding so uniform. ANYTHING you can throw into a mix that makes people’s ears prick up will serve you in the long run, even if the track is supposed to be ignored! It’s not about attracting attention as much as sounding fresh and new. Perhaps more importantly, traditional instruments are usually designed to sit in the frequency range humans prefer to listen to, that being the range that our voice sits in. When making music that will live with narration you want exactly the opposite of this. You want sounds that sit above or below this range. I build a lot of instruments that are higher or lower in pitch than the traditional version. A good example is a “baritone” guitar or an “alto” flute. Anything to get the musical information out of the way of the spoken information. Make way for the ducklings!

What music theory tricks do you take into consideration when composing for podcasts?

I’m not traditionally trained but I have picked up enough music theory to make a nuisance of myself. However, I’m always conscious of podcasts being a new art form. I really feel like all bets are off because the “standards” haven’t really been set. It has its roots in pre-television “radio plays” but it also has a lot of internet-age influences. So I really feel like it’s an important time to stretch the roles music and Foley play to see what hits and what doesn’t. I’m always trying to experiment with the form, which occasionally results in total failure but can also succeed in ways you were never expecting.


Tips for Recording Quality Spoken Word for Podcasts

Recording spoken word and dramatic readings can differ significantly from recording singing. In this article I’ll go over some simple tricks to get the best audio quality possible from your recording, and how to deal with some common issues that occur when recording spoken voices. Whether you have a cheap or expensive microphone the following tricks will help you to alleviate potential problems that can interfere with the audibility of your vocal.

Plosives:

Plosives can occur when we use syllables that contain the letters T, K, and P, and also sometimes with D, G, and B. The air pressure from the plosive can overload the microphone, causing an ugly “pop” in the audio. While this is undesirable, it is easily avoided by using a popper stopper to reduce the plosive. Popper stoppers are inexpensive to buy but can also be made easily at home using a nylon stocking and a coat hanger. Either way, it is an essential tool when recording quality vocals and should be used at all times. (See image 1 - pencil trick)

(image 1: picture of a wooden pencil attached to a recording microphone using an elastic band)

Smackiness:

Another issue that interferes with the quality of your recording is "smackiness". This sound is generated by the mouth and can be tricky to EQ out, but it can be dealt with easily. A common studio trick is to have the reader eat a small bite of green apple before recording. This may sound strange, but it temporarily alters the chemistry of the saliva and eliminates the unwanted smackiness. Another trick for dealing with this issue is to attach a pencil vertically to the front of the microphone with a rubber band. The width of the pencil happens to coincide with the wavelength of the offending smackiness and will block the frequency from being captured by the mic. However, the latter technique is less effective than a small bite of green apple. (see image 2 - popperstopper apple)

(image 2: picture of a green apple next to recording equipment)

Reflections and unwanted reverb:

Not every environment is ideal for recording and the setting you are recording in can drastically alter the quality of your recording. Mic placement is a part of the puzzle for dealing with this issue and it is worth exploring multiple placements before recording. Generally you will not want your microphone too close to the wall, as the hard surface will cause unwanted reflections. This can also be dealt with by tacking a blanket to the wall behind and around the mic. If you have hardwood or concrete floors, which will also create unwanted reflections, putting a blanket on the floor will block many of these reflections and improve the quality. You can always add reverb for desired effect, but it is nearly impossible to remove if it is in the original recording.

Sibilance:

Sibilance is another issue which can interfere with the clarity of your recording. It can cause the voice to sound lispy and more difficult to understand. This issue can usually be cleaned up with a plugin called a de-esser, which, as the name implies, reduces the “essy” quality of the vocal recording. Another easy way to reduce the problem is to use your equalizer to apply a small dip in the typical frequency range of sibilance. Sibilance is typically centered between 5 kHz and 8 kHz, and gently scooping out these frequencies will greatly improve the clarity of your vocal.
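For readers who like to see the idea in code rather than a plugin window, the EQ scoop can be sketched in a few lines. This is a minimal illustration, assuming Python with NumPy and SciPy; a real de-esser only applies the cut when sibilance is detected, whereas this static dip runs over the whole track:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def deess_dip(audio, sr, lo=5000.0, hi=8000.0, order=2):
    """Scoop out the typical sibilance band with a gentle band-stop EQ."""
    sos = butter(order, [lo, hi], btype="bandstop", fs=sr, output="sos")
    return sosfilt(sos, audio)

# Quick check with test tones: a 6.5 kHz tone (inside the scoop) is
# attenuated far more than a 1 kHz tone (outside it).
sr = 44100
t = np.arange(sr) / sr
sib = np.sin(2 * np.pi * 6500 * t)    # stands in for harsh "ess" energy
voice = np.sin(2 * np.pi * 1000 * t)  # stands in for the body of the voice
```

The band edges and filter order here are illustrative; as with the plugin version, adjust the depth of the cut by ear until the lisp disappears without dulling the voice.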

These are only some of the basic issues that can come up when recording spoken word, and all of these tools are worth experimenting with, both in isolation and in combination, until you find the ideal arrangement for recording quality vocal tracks. The cleaner you can get it to sound before doing any processing the better. While plugins are great for solving problems, the more you can do outside of the box the less of a headache you will experience when mixing. I also encourage you to familiarize yourself with the use of compressors, gates, de-essers, and noise reduction plugins, as these can all be powerful tools for making your spoken word tracks sound the best that they possibly can.


Composing for Podcasts & Film: An interview with RedBlueBlackSilver

RedBlueBlackSilver is a film and podcast/radio composer based in the Mojave Desert. He is a regular contributor to Desert Oracle Radio and Simultaneous Times podcast. As well as composing multiple podcast episodes each month, he is the composer for the films Hunt for the Skinwalker and Bob Lazar: Area 51 and Flying Saucers, and he regularly scores silent films for amazing live shows. For a time he also hosted his own podcast: SciFi Music with RedBlueBlackSilver. He can be found at https://redblueblacksilver.com/ and at his Bandcamp.

 

What brought you to composing for podcasts and films?

It started out as a sad story. In 2015, a truck crashed into the car I was driving on the freeway, and my head hit the steering wheel, which caused some persistent neurological problems. I was used to being able to play quickly and accurately prior to the accident. I moved to the desert and quit making music entirely after the crash, and although it made me deeply sad, I moved on to other interests. When after a while I wasn't making progress, I started to learn about music therapy and began to appreciate ambient and atmospheric music more. About two years after the crash, I heard the radio and podcast version of Desert Oracle (a beloved periodical field guide to the American deserts) and admired how the music blended in with Ken Layne's voice. I was so taken with the show that I dusted off my instruments and started to submit my own atmospheric music - things I could make despite the neurological issues - to the program. Shortly after, I started to make music for a filmmaker (Jeremy Kenyon Lockyer Corbell) who had been a guest on Desert Oracle Radio. A couple of months later I began to also contribute music to Simultaneous Times podcast.

 

How does composing for podcasts differ from film composition?

My favorite thing about making music for podcasts is the near-immediate feedback you get from the audience. There are sometimes only a few days (but no more than a few weeks) between when the music is written and recorded and when you get reactions from listeners. It is helpful in terms of calibrating your future work based on what people enjoyed. Musically though, there isn't a single thing I do in films that I don't also do for radio/podcasts. I reject the idea that film music should be bigger and more "cinematic", and it is disappointing to me that often podcast music is so thin and almost apologizing for existing. Of course, the music can't be so prominent that the voices aren't audible but there are a lot of tricks to make cinematic music that stays out of the way in the context of a podcast. Most of the other differences between film and podcast composition have to do with the particular styles of the people in charge, which would be different in someone else's situation.

 

What are some tips you have for podcast composers?

In terms of finding the right show(s) to work on, what worked for me was shamelessly contacting local podcast hosts and producers. It takes a lot more work than just writing introductory form emails - you have to listen to enough existing episodes to fully understand the show's aesthetic and personality, or to absorb whatever you can if it's a new show. I recommend including sample tracks that are custom tailored to that show, including leaving room for the host's vocal frequency in case they try to read something over your sample track. Most importantly, pick a show that you personally enjoy.

With respect to keeping yourself on a show, the basics of human interaction apply. Be accommodating when you can, meet deadlines (or decline offers in advance when you are too busy to handle them), remain open to new ideas, and listen carefully for criticism delivered gently "between the lines". Not everyone is equally likely to be direct with negative feedback, and sometimes you have to listen carefully to the text and the subtext. This is why, especially before the pandemic, I valued (and continue to value) working with locals. Once you meet someone in person and get to know them, it becomes easier to interpret what they tell you later on, even in writing. Helping the host/producer with promoting the show, and representing the show well in general, goes a long way.

 

What challenges have you faced when working with composition for spoken word accompaniment?

Most of my experience in accompanying spoken word has involved improvising with unfamiliar pieces but if you can get the material in advance, it is wise to read it thoroughly and discuss the emotional tone and big "moments" with the author. The author may not have the musical vocabulary to make specific suggestions, but they should have a good handle on the emotional tone of each part of the piece, and it's your job to translate that into what you play and when. If it's a live performance, watch the reader as if they were the lead singer in a band. The experienced ones constantly give out non-verbal cues but it may even help to work out basic signals in advance. For example, if the reader backs away from the mic, the accompaniment can be more prominent until they come back to the mic.

Mostly you are following their lead. If they get quiet, you get quiet. If they slow down, you slow down. You have to see what you are doing as an extension of what they are doing for it to work. Some people can't handle being in the following role and there are plenty of musical avenues for them, but being a scoring composer may not be ideal for the particularly extroverted or musicians who like to show off their shredding ability.

 

Do you have any pointers for matching emotion in fiction with audio elements?

Like in any music, you have to be aware of tonality, instrumentation/arrangement, time, and space. Everyone uses a different mix of the four for their own sound. Personally, I gravitate towards using tonality as my primary tool, but that is just my individual style. With tonality, it takes a little bit of learning how different types of chords are emotionally interpreted by the listener. The basic thing a lot of people know is that major chords/keys sound happier than minor, but it goes way beyond that - diminished chords, for example, create a scary, tense feeling.

For instrumentation and arrangement, strings can help reinforce the sound. A lone violin/viola can be used dramatically for a sad lonely feeling, and a string quartet sound can really drive home melancholy emotions. If you can get a good brass sound, that can help bring home a triumphant moment. I often use synthesizers to create thick bass drones that form the foundation of the instrumentation, and that alone can have its own emotional effect.

Time is important for feeling outside of the happy/sad continuum. For imparting a frantic feeling, a brisk piece that gradually gets faster works well. Similarly, a slow piece can reinforce the feeling of lethargy.

Finally, space is vital for giving either a frantic, crowded feel to the piece or making it feel isolated. This can be achieved through the use of panning as well as how you fill in the low, medium, and high frequencies.

 

What advice do you have for working with foley and sound effects?

I prefer to use a combination of existing sound effects, atmospheres, and foley (sounds recorded specifically for the piece). I prefer recording my own sounds when possible, but existing sound effects can be gathered from libraries or websites, and can often be the only practical way to achieve the desired outcome. For example, it may not be possible to record a real car crash or a real gunshot. The trick is to find ones that sound realistic and appropriate for the story, and you sometimes have to wade through a lot of low quality sounds to find the right one.

I personally like to record my own atmospheric sounds. Wind, rain, coyotes, birds... whatever you have local and available. Even traffic and mass transit sounds come in handy. Record them when you can and keep them labeled in a folder for later use. Even a cheap handheld recorder can get great results.

Foley is the most fun. It brings back the old era of radio when sound effects were produced live. Do what you can on your own. Footsteps, doors opening, and other household noises sound great when you record them yourself.

Whatever sounds you use, managing panning is important so that the sounds are more realistic to the human ear.

 

How do you use panning for dramatic effect and spatial dynamics?

Panning is extremely important. In the music itself, it allows you to emphasize or deemphasize instruments based on how close they are to the center of the stereo image. Similarly with sound effects and foley, panning gives the listener a sense of direction. The human ear is really sophisticated, and even subconsciously people can get overwhelmed if everything they hear is heard equally by both ears. In real life, if you hear a sound like the heater turning on, rarely is that heater directly in front of you - usually it's off to the left or the right. Not using panning for realism can turn people off without them even consciously knowing why. Panning sound effects can also be used creatively, like a car driving by may start off in the left channel, move to the center, and then to the right. It's just another way to immerse the listener in the fictional world.

 

What music theory tricks do you take into consideration when composing for podcasts?

The chords you learn at the very beginning of music theory instruction are the only theory elements I use daily. In my approach, it is vital to know the different basic types of tonality and their emotional characteristics: major, minor, diminished, augmented, and suspended. It may be oversimplified, but major sounds happy, minor sounds sad, diminished sounds scary/tense, augmented sounds unstable but open, and suspended chords are nicely emotionally ambiguous.

Another trick I use is pedal tones - keeping the same bass note through the chord progression, often by employing "slash" chords - to impart the feeling of being stuck in a bad situation. The constant, unchanging bass note feels like an anchor.

The important thing is to not get overwhelmed with too much theory. A lot of it is interesting to know, but may not apply to your situation. Learn in whatever way makes sense to you...an online tutorial, a class at a community college, or just reading on your own. The theory tools you use just become another way for you to sound different than anyone else, and can help you establish your own personal style.

 


Using Effects Plugins to Create Character Voices: Aliens, Demons, and Monsters

In our last article on processing vocals for speculative fiction podcasts, we covered dealing with synthetic voices such as Robots, AI, and Androids. In this article we will go over some techniques for creating non-human, non-synthetic voices, such as Aliens, Demons, and Monsters. Creating non-human character voices can be a fun experience and help bring your podcast to life, and also help the listener distinguish between the characters more easily. Here, we will cover several techniques for using effects plugins to achieve the desired voices of the characters. I will go over a few of my go-to tools for processing these types of voices, but I also encourage you to try out these effects in various combinations, as sometimes we need more than one tool to get the job done.

Reverse Reverb:

One of the first tools I reach for when dealing with ethereal, non-human voices is reverse reverb. Many DAWs have a built-in plugin for this effect, but it can also be done manually with a few basic steps. Simply reverse the audio file so that it plays backwards, add the desired amount of reverb, then reverse the track again so that the vocal is once again playing as it was recorded. The tail of the reverb will now precede the vocal and add an otherworldly, ghostly lead-in to every word. This process is great for voices from beyond the grave or coming from another dimension. It will usually produce a haunting effect.
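The reverse/convolve/reverse steps above can also be sketched outside a DAW. This is an illustrative sketch assuming Python with NumPy and SciPy, not any particular plugin; the impulse response here is just synthetic decaying noise standing in for a real reverb:

```python
import numpy as np
from scipy.signal import fftconvolve

def reverse_reverb(dry, ir):
    """Reverse the audio, convolve it with a reverb impulse response,
    then reverse again so the reverb tail *precedes* each sound."""
    wet = fftconvolve(dry[::-1], ir)   # reverb applied to the reversed audio
    return wet[::-1]                   # flip back: the tail now leads in

# Demo: a synthetic half-second "reverb" (decaying noise) applied to a
# single click standing in for a vocal transient.
sr = 44100
rng = np.random.default_rng(0)
ir = rng.standard_normal(sr // 2) * np.exp(-np.linspace(0, 6, sr // 2))
dry = np.zeros(sr)
dry[sr - 1] = 0.0          # (array ends silent)
dry[sr // 2] = 1.0         # the "vocal" click
wet = reverse_reverb(dry, ir)
```

With a real recording you would substitute a captured or plugin-generated impulse response for the decaying noise; the reversing logic is identical.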

Bit Crushers:

 

Image 1) Screenshot of the Decimator interface in Audacity

Another fun tool to pull out for non-human voices is the bit crusher. For harsher sounding voices, for instance demons or monsters, I often reach for a bit crusher plugin. Bit crushers work by reducing the amount of information (or resolution) in the audio file, thus creating distortion. The amount of bit reduction can be tailored to taste and can produce a variety of subtle to extreme effects. These plugins are easy to use and are included with many DAWs. The free, open-source Audacity has a plugin called the Decimator, which allows you to reduce both the sample rate and the bit depth. There are also many freeware plugin options such as the Tritik Krush. These easy to use plugins may produce the desired effect, and require little finesse or experience.
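For the curious, the two halves of a bit crusher - bit-depth reduction and sample-rate reduction - are simple enough to sketch by hand. This assumes Python with NumPy and audio normalized to the -1..1 range; plugin parameters vary, but the underlying math looks like this:

```python
import numpy as np

def bitcrush(audio, bits=8, downsample=4):
    """Reduce bit depth (quantize the amplitude) and sample rate
    (hold every Nth sample), the two classic bit-crusher controls."""
    levels = 2 ** bits
    # Quantize amplitudes onto a coarse grid (assumes audio in -1..1).
    crushed = np.round(audio * (levels / 2)) / (levels / 2)
    # Crude sample-rate reduction: sample-and-hold every Nth sample.
    crushed = np.repeat(crushed[::downsample], downsample)[: len(audio)]
    return crushed
```

Lower `bits` and higher `downsample` values push the voice from subtle grit toward full monster-radio distortion.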

Harmonizers:

Image 2) Screenshot of Harmonic Generator interface in Audacity

Harmonizers are one of my favorite tools for creating non-human character voices. They work by shifting the pitch (either higher or lower) of the original signal and then recombining the processed signal with the original. Most harmonizers allow you to combine multiple versions of the altered signal, and can be used to create thick and interesting effects. While these plugins can be used with musicality in mind, they can also be used to create discordant and unsettling results, perfect for otherworldly voices. I often reach for these plugins (or hardware such as the POG, or poly octave generator) when working with alien characters and find the tool to be an easy choice for instant strangeness. Audacity has a free plugin called the Harmonic Generator and its operation is simple and intuitive. This effect can also be achieved by adding duplicates of the original signal and pitch shifting them individually. This latter technique can be useful as you will be able to mix the individual altered signals to taste.
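The duplicate-and-shift approach in that last sentence can be sketched directly. The pitch shifter below is deliberately crude - it resamples, so the shifted copies also change duration, which is exactly why dedicated harmonizer plugins use more elaborate methods. A hedged sketch assuming Python with NumPy and SciPy:

```python
import numpy as np
from scipy.signal import resample

def crude_pitch_shift(audio, semitones):
    """Very crude pitch shift via resampling. Fewer samples played back
    at the same rate means higher pitch (and a shorter duration)."""
    ratio = 2 ** (semitones / 12)
    return resample(audio, int(len(audio) / ratio))

def harmonize(audio, intervals=(-12, 7), mix=0.5):
    """Mix the dry voice with pitch-shifted duplicates, trimmed to a
    common length (the shifted copies would otherwise end early)."""
    voices = [audio] + [crude_pitch_shift(audio, s) for s in intervals]
    n = min(len(v) for v in voices)
    out = voices[0][:n].copy()
    for v in voices[1:]:
        out += mix * v[:n]
    return out
```

An octave down plus a fifth up (`intervals=(-12, 7)`) is a classic starting point for thick, inhuman voices; each `mix` level can of course be set per copy in a fuller version.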

Combining Effects:

While single plugins can be effective and get the job done, sometimes it is worthwhile to experiment with stacking effects. However, the order in which you process your audio files can drastically change the results. For instance, you may want to have reverb on your vocal track to create a sense of space, but should you add it before or after the other effects? In most cases, if you are using reverb for environment you will want to add it after processing your track with the other effects. But there are no hard rules for the order in which effects can be used. I encourage you to experiment with stacking effects, and while you do so, try them out in different orders. You will find that the results can vary greatly, and as always, your sense of aesthetics (and the audibility of the track) will determine which is the right tool or tools for the job. You may also want to take notes on how you created a certain character’s voice in case they come back in a subsequent story. This process can be really fun, so go wild and hear what happens as you explore the world of audio effects plugins.


Using Effects Plugins to Create Character Voices: Robots, AI, and Androids

One of my favorite challenges when producing speculative fiction podcasts is dealing with non-human characters. These can vary from robots to aliens, androids to animals, and everything in between. While some of these characteristics can be dealt with in the acting, sometimes additional processing is required to create convincing non-human characters. It is always important in an audio setting to differentiate who’s who, and this can be done in many ways. The acting and the panning are the simplest ways to make one character stand out from another, but sometimes we also need to reach for the right tool to make the difference meaningful, especially when one actor is playing multiple characters in the story.

Robots, androids and AI are some of my favorite characters to produce, and they come up often in speculative fiction. There are a variety of audio effects that are useful in the creation of these characters and in this article I will cover a few of my favorite techniques for creating convincing voices. The first effect I reach for when working to create a synthetic sounding voice is the comb-filter (think C3PO). This simple technique gives the voice a synthetic feel, while retaining the clarity of the speech – and the plugin is available with most freeware and purchased Digital Audio Workstations (DAWs).

Comb Filtering

Comb filtering occurs when a signal is delayed and added to itself. This can often happen when using multiple microphones (set at different distances from the subject of the recording) and can cause problems with the audio quality. When this occurs, the frequency display on your FFT (a visual representation of the frequency content) will show up looking like the teeth of a comb, hence the name (See Image 2). However, this layering of frequencies and the subsequent phase cancellation can be used as a powerful tool for creating non-human voices. Many DAWs will have a built-in, easy to use comb filter. If your DAW does not, you can achieve the same effect by duplicating the track and delaying the duplicate by a few milliseconds. Play around with different amounts of delay until you find something that suits your taste.


Image 1) Vocal waveform before comb filter
Image 2) Vocal waveform after comb filter
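The duplicate-and-delay recipe can be written out in a few lines of Python with NumPy (an illustrative sketch, not any specific DAW's plugin). With a 5 ms delay, frequencies whose half-period matches the delay cancel out, while frequencies whose full period matches are reinforced - the teeth of the comb:

```python
import numpy as np

def comb_filter(audio, sr, delay_ms=5.0, mix=1.0):
    """Duplicate the signal, delay the copy by a few milliseconds,
    and add it back to the original (the manual comb-filter trick)."""
    d = int(sr * delay_ms / 1000)               # delay in samples
    delayed = np.concatenate([np.zeros(d), audio])
    padded = np.concatenate([audio, np.zeros(d)])
    return padded + mix * delayed
```

Sweeping `delay_ms` moves the comb's teeth up and down the spectrum; lowering `mix` makes the notches shallower and the effect more subtle.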

Ring Modulators

Another go-to effect for robot voices is the ring modulator (think the Daleks). The ring modulator is a tried and true effect for creating otherworldly voices, but it easily gets out of hand and needs to be treated gingerly so as not to be overdone. While the ring modulator is not my favorite technique, because it has been overused in cinema, if it is the right tool for the job, then by all means use it.

Image 3) screenshot of a software interface for a ring modulator, highlighting the waveform function

A ring modulator combines the waveform (in this case your voice) with a signal from an oscillator - typically a sine wave, though most ring mods will give you a choice of waveform (See Image 3). The output is both the sum and the difference of the combined frequencies, but contains neither of the original signals, and tends to give a robotic cadence to the original signal. It is easy for a ring modulator to go out of control, so alter your parameters with care and play around until you find something that fits your taste.
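The math behind that description is simple enough to sketch by hand (Python with NumPy, an illustration rather than any specific plugin). Multiplying the voice by a sine carrier produces exactly the sum and difference frequencies described above:

```python
import numpy as np

def ring_modulate(audio, sr, carrier_hz=30.0):
    """Multiply the input by a sine-wave carrier. The result contains
    the sums and differences of the input and carrier frequencies,
    with neither original signal surviving."""
    t = np.arange(len(audio)) / sr
    return audio * np.sin(2 * np.pi * carrier_hz * t)
```

Low carrier frequencies (tens of Hz) give a tremolo-like metallic warble; raising the carrier into the hundreds of Hz produces the classic inharmonic, Dalek-style clang.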

Text-to-Speech Applications

Another fun way of creating robot voices is to avoid the actor altogether and actually use a robot voice. Most devices contain text-to-speech capabilities, and these can give you the genuine robot feel, but it is important to keep in mind that they tend to be tinny and need to be equalized for clarity. There are several apps that come in handy for this technique and have a broad variety of voices; however, the accents can be pretty bad and can sometimes sound like a parody of the desired accent. This is not my favorite tool, but it can have its place. Keep in mind when dealing with this approach that you will not have much control over the inflection, and nothing can replace a good “human” actor. Text-to-speech apps are easy to come by, often free, or even built into your word processor.

The aforementioned techniques will serve you well for synthetic characters but what about aliens? Since we don’t really know what aliens will sound like this can be a fun and interpretive process and the choices are endless as long as we stay within the human hearing range (20-20k Hz). But this is the subject for a future article. Of course, there are many other useful effects for creating synthetic character voices, and I encourage you to experiment with: pitch shifters, flangers, vocoders, etc. In the meantime have fun with Robot, AI, and Android voices.


Mixing Spoken Word and Music – Finding the Right Balance for your Audio Drama

Music and sound effects are a wonderful way to bring stories to life. An original soundtrack can take your fiction podcast to the next level and engage your audience in a deeper way. But blending music and spoken word can be tricky, and the two can interfere with each other if not mixed properly. Since the story is the most important aspect of any fiction podcast (science fiction or otherwise), the music should not overpower the words but rather add a layer that helps tell the story.

In this article I would like to show you several techniques that I have found immensely helpful for making sure that both the spoken word and the music are audible, so that the audience can enjoy the story without missing a beat, hearing every word spoken.

The first technique, and probably the most important, is using an equalizer (EQ) to tame overlapping frequencies. EQ can be thought of as frequency-specific volume control. When you add music to spoken word, many frequencies in the two overlap and blur together, making both less clear. This problem is easy to fix with some basic EQ settings. If you are a podcaster, EQ is an essential and easy-to-use tool for improving the quality of your recordings.

The human voice tends to sit in specific frequency ranges: higher-register voices tend to lie between 165 and 255 Hz, and lower-register voices between 85 and 180 Hz. Because the voice sits in this area of the lower mid-range, these frequencies can be turned down in the music to keep both clear. When the vocal frequencies and the same frequencies in the music pile up, both become difficult to hear, and the clarity of the voice suffers most. We can use the EQ to remedy this. Open your EQ plugin on the music track, set the points to lie just outside the vocal range, and lower those frequency ranges by a few dB (see pictures 1 & 2*). You may have to experiment with how much to cut, but once you find the right balance you will find the vocal tracks much easier to hear, without noticeably changing the perceived volume of the music.

Example of EQ dip for higher register voice

Example of EQ dip for lower register voice
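For readers who like to see the math, here is a minimal sketch of the "EQ dip" in code, assuming a standard biquad peaking filter (the same coefficient recipe most DAW EQ plugins use internally). The sample values and the -8 dB cut depth are illustrative choices, not a prescription.

```python
import math

def peaking_eq(samples, fs, f0, gain_db, q=1.0):
    """Apply a biquad peaking EQ (a cut when gain_db < 0) centred on f0 Hz."""
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        out.append(y)
        x2, x1, y2, y1 = x1, x, y1, y
    return out

fs = 44100
# One second of a 200 Hz tone, sitting inside the spoken-voice range:
voice_band = [math.sin(2 * math.pi * 200 * n / fs) for n in range(fs)]
# An -8 dB dip centred on 200 Hz, like the EQ settings pictured above:
dipped = peaking_eq(voice_band, fs, f0=200, gain_db=-8)
```

After the dip, the 200 Hz tone comes out at roughly 0.4 times its original amplitude, while content well outside the dip (say, 2 kHz) passes through almost untouched. That selectivity is exactly why the trick works: the music only loses energy where it would collide with the voice.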

Another issue that can come up, also addressed with EQ, is bass masking. This happens when there is a buildup of low frequencies; these frequencies, along with their overtone series, can "mask" other sounds, making the mix muddy and getting in the way of the clarity of the vocal tracks. The human hearing range runs from 20 Hz to 20 kHz, and most of the bass buildup that leads to masking lies at 100 Hz and below. Because we cannot hear frequencies under 20 Hz, we can turn them down without changing our perception of the bass, while removing a potential source of masking. One might think this removal of information would make the music sound less rich, but when audio is converted to MP3 this frequency range tends to be discarded anyway. One might then think we could simply let the MP3 compression remove these frequencies for us, but that would leave the overtone series intact, along with its potential problems.

Dealing with masking can be an easy process: simply open your EQ plugin and roll off the frequencies under 20 Hz (see picture 3*). This is often called high-pass filtering: letting the high frequencies through while reducing, or blocking altogether, the low ones.

High-pass filtering
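To make the roll-off concrete, here is a minimal sketch of a first-order high-pass filter, the simplest possible version of what an EQ plugin's low-cut does. Real plugins use steeper slopes; the 5 Hz "rumble" and 440 Hz tone below are illustrative values chosen to show the effect.

```python
import math

def high_pass(samples, fs, cutoff=20.0):
    """First-order high-pass: attenuates content below `cutoff` Hz."""
    rc = 1.0 / (2 * math.pi * cutoff)
    dt = 1.0 / fs
    a = rc / (rc + dt)
    out = []
    prev_x = prev_y = 0.0
    for x in samples:
        y = a * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

fs = 44100
# A 5 Hz sub-bass rumble (inaudible, but mask-prone) mixed with a 440 Hz tone:
mix = [math.sin(2 * math.pi * 5 * n / fs)
       + 0.5 * math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
cleaned = high_pass(mix, fs)
```

With a 20 Hz cutoff, the 5 Hz component is knocked down to roughly a quarter of its level while the 440 Hz tone passes essentially untouched, which is exactly the "remove what you can't hear" behavior described above.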

While these issues can get complicated, particularly the math involved, familiarizing yourself with EQ and playing around with these simple tricks can greatly improve the audio quality of your podcast. By all means experiment and see what this powerful tool can do for your recordings; it is simple to use, and once you get the hang of it, it will go a long way toward making your podcast clear and listenable.

* The EQ plugin shown in the examples is from Audacity (free, open-source audio software), but these settings will work the same with any EQ plugin in any DAW.


How To Make a Fictional Podcast

I’ve been a long-time fan of radio dramas, serialized fiction, Star Wars audiobooks, and other fantastic types of audio media, so when podcasts Hulk-smashed their way onto the scene in the early 2000s, I was thrilled. I contacted an author and mentor of mine, Nicole Kimberling, and asked if she would be interested in making a podcast. After googling what a podcast was, she signed on. Two years ago I worked with Nicole to make the podcast “Lauren Proves Magic is Real!” A few folks beat us to the punch with full-cast recordings, but I think we caught the first wave of fiction podcasts.

I’m going to break down the steps you can take toward creating your own podcast, because if you have a story to tell, there’s someone out there who needs to hear it.

The approach that I would take to creating a serialized fiction podcast starts with breaking it into two parts: the story and the sound engineering.

Part 1: The Story

The story should be written as a script. Audio scripts fall into a few standard types, each designed to work without the aid of visual explanations:

Classic Radio Play: A narrator explains the settings, scenes, and any other parts of the story not revealed through dialogue.

Serial Documentary Drama: A self-aware story (meaning the narrator knows they’re recording a podcast) in which the main character is recording a podcast. Typically, the characters investigate something, and the plot unfolds through their interviews, experiences, etc.

Theater of the Mind: A dialogue-based podcast without a narrator that relies entirely on soundscapes. Sound effects take the place of visuals. This can rely heavily on clear exposition (which can be corny).

Dear Diary: An audio diary in first person narration, because reasons. Maybe your character hates to write words, or maybe they are just one of those people who constantly take audio notes.

There are ways to merge these ideas. For example, the main character of “Lauren Proves Magic is Real!” was a podcaster who found the field recordings of a supernatural special agent. She then aired them as episodes of the podcast. So the podcast had to be self-aware, and have both narrative and audio diary elements. The self-aware podcast seems to be a popular choice, which might be because it can frequently be styled in a War of the Worlds way. The listener may experience a moment or two early on where they are not sure if the podcast is real or fiction.

It’s important to make the presentation style clear early on, as it will determine how much of the story is told in dialogue and how much of the story is told with audio.

An audio script, like any script, can be made simply with your computer, your typewriter, or some pen and paper. Like any good story, you’ll want a beginning, an inciting incident, a climax, maybe even a twist, and of course an ending. I personally gravitate toward cliffhangers in serial fiction episodes, so fans have something to wonder about.

Now that we’ve covered some basic story elements and your imagination can start putting together a story you’d like to tell, let’s cover some of the technical basics and the steps to take if you want your podcast on Apple—‘the people’s platform’—or paid subscription networks like Stitcher.

Audio Recording Gear Land:

Microphones: How many mics you’ll need depends on how many characters will be speaking to each other in a single scene. I started with three mics and later got to a point where we needed eight, for one scene. I’m not gonna lie: mics can be expensive, depending on the sound quality you need. However, you can certainly start with cheap microphones, even gaming microphones that come with a desktop PC. You can also occasionally find microphones at secondhand stores. A classic mic that gives you a lot of bang for your buck is the MXL 990. The standard stage vocal mic, the Shure SM58, will work too, as will the standard stage instrument mic, the SM57.

USB interface: This is a little box into which you plug a fancy microphone; the box then connects to your computer with a USB cable. There’s a large variety out there, but here are some I’ve used: Focusrite, M-Audio, and PreSonus AudioBox.

Software: There are a lot of audio programs out there. Your computer may already have one, like GarageBand. There’s also free audio editing software, like Audacity. I’ve also heard good things about Reason and Ableton Live Lite. Your audio software is where you record your story, layer track over track, and sound-edit it all together. You’ll record your dialogue using the power of acting, and the friends you can convince to act with you. Take your time to experiment with settings and be open to feedback. Eventually, you will become familiar with your software and be able to produce content very quickly.

There are a few workarounds for the creative person working with a very small budget. There are a series of apps for smartphones that are decent for recording and sound-mixing. If you want to start small, nothing is stopping you. You have the power to write, record, and mix on the device you are likely reading this article on.

Sounds & Music

Theme Songs: All good podcasts have a theme song. It can sometimes be clips of the dialogue cut out and edited like a movie trailer, or it can be an original theme. Theme songs are a fantastic opportunity to collaborate with another like-minded creative. Find a musician friend and offer to use their music, or ask them to write some. It’s hard to be vulnerable during the beginning of a creative process, but having a musician that you like on your team is really going to be worth it as you move forward in producing.

There are many opportunities to play music in a fictional podcast: music that plays in cafes, music that denotes the passage of time, music from a car radio, music that sets a location. The more of this you can have originally created, the better. But there are also a ton of resources online for both music and sound effects. I use www.freesound.org for sound effects and small bits of royalty-free music.

There will inevitably be times where you need a sound effect that isn’t on the internet. I’ve had to create sound effects by recording myself running up and down stairs while knocking things over, dropping plates, scraping a razor against a bowl, hitting an iron railroad nail tied to a fishing string, etc. Be creative. You already can record. Ask yourself what’s in the room around you that makes a sound that could enhance your story.

Hosting: Once you have recorded several episodes and are committed to a release date, you’ll need a site on the internet. It can be a simple free Wix or Squarespace site. The only real requirement is that the site can host an RSS feed, which is what points listeners’ apps to the MP3s of your project. Then you’ll either pay a third-party host like Podbean, or hop over to Apple Podcasts and submit your show for review. If it passes muster with the strange robots that review things, then bang! You’re live. It’s time to hit that share button and plead for likes on your social media.
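Under the hood, that RSS feed is just an XML file listing your episodes. Here is a minimal sketch of its structure using Python's standard library; the show name and URLs are placeholders, and a real feed that directories like Apple Podcasts will accept needs more tags than this (descriptions, artwork, iTunes-specific fields, and so on).

```python
import xml.etree.ElementTree as ET

def build_feed(title, site_url, episodes):
    """Build a minimal podcast RSS feed string.

    `episodes` is a list of (episode_title, mp3_url, size_in_bytes) tuples.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = site_url
    ET.SubElement(channel, "description").text = "A serialized fiction podcast."
    for ep_title, mp3_url, size in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep_title
        # The enclosure tag is what podcast apps actually download:
        ET.SubElement(item, "enclosure",
                      url=mp3_url, length=str(size), type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")

# Placeholder URLs for illustration only:
feed = build_feed(
    "Lauren Proves Magic is Real!",
    "https://example.com",
    [("Episode 1", "https://example.com/ep1.mp3", 12345678)],
)
```

In practice your hosting service generates this file for you whenever you upload an episode; seeing the shape of it just demystifies what "submitting your feed" to a directory actually means.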

The best advice I can give anyone about to make a fictional podcast is to start small, pick a release schedule, and meet that release schedule every week. That means being prepared and not waiting until the last minute to do anything. Enjoy yourself! You’re about to embark on the fairly unexplored medium of podcast fiction, a form that is still taking shape. I’m excited to hear what you make.