Last time, we mentioned…
- What this project is: an independent project to push my skills further. The goal is to pick up 20 items, invoke particle systems, and tell a simple story.
- Setting Up: I've decided that the story should take place in the remains of a deserted village, where the protagonist's past and her destiny are revealed as she explores the ruins.
- Character: how I approached the character control mechanism.
So, without further ado, let’s continue.
Originally I thought that, given the short time allotted for my project, I would only need subtitles without an actual narrator. My first iteration used subtitles alone, and the results were obviously disappointing – the audio experience was awkward, and it felt really disconnected to read text without hearing a voice. That's when I decided to find a good voice actor for my small project.
In my imagination, my narrator is an old woman in her 80s, an elder of the clan who has seen more than enough in her life, and is ready to pass her own and the tribe's legacy on to future generations.
So the first question was: where could I find an old woman who speaks proper English? Fortunately, a good friend of mine does great voice acting. I must say she doesn't sound old at all, but her depiction of that hollow, hopeless feeling really blew me away.
You can find her Instagram here: https://www.instagram.com/tsi_tsi_tsai/
We took some time getting used to the microphone, and spent some more on directing and adjusting her voice acting. Most of the time I would tell her where to add emphasis and where a softer voice would fit better. It wasn't just me, though – we had a lot of discussions, for example settling the pronunciations of special words such as "Monakk" and "Tamak". It was indeed a fun time.
We took two days to finish the whole script. Now it’s time for me to head back to my workstation and work on the editing.
I use Audacity – it's powerful, easy to use and, most importantly, free. I begin my edit by reducing the noise level. All you have to do is select a segment of your recording that contains nothing but background noise, then feed it in via Effect / Noise Reduction… / Get Noise Profile. Then head back to the editor, select the entire recording, and open the Noise Reduction window again. Adjust the parameters to your liking, and you'll end up with a clean voice track with much less background noise.
Next, I had to edit her voice so that it sounded more mature, aged, and somewhat broken. The first approach that came to mind was to simply adjust the pitch. It didn't work: the result sounded comical, like the disguised recording of a criminal suspect.
For now, my workaround is the Equalizer: I boosted the lower frequencies as well as part of the higher range. Hopefully this creates a slightly more immersive listening experience for players.
Finally, I listen for unwanted noises that noise reduction can't simply remove, such as breathing, tongue clicks and other sounds captured by accident. I also listen to the entire recording over and over again, making sure her pacing is exactly what I expected. I add short clips of silence between breaks, or, conversely, shorten breaks that run too long.
Now our narrative clips are ready, and we need to integrate them with subtitles.
At first I thought I could treat audio clips the same way as animation clips: add keyframes to them and invoke subtitles at certain points. It turned out that it doesn't work that way, so I needed a workaround. (Or maybe I was right at the beginning and just didn't know how.)
Sometimes when you have nothing in mind, Google is your best friend. I did some research and found a good tutorial on how to create a script-controlled subtitle system. Here's the link to that great tutorial.
In brief, the script consists of basic text-file parsing and one audio clip. I grasped the concepts quickly and made some adjustments of my own. Here's how it works:
1. I used bracketed tags to mark special functions. For example, "</p>" indicates the start and the end of the subtitles for an audio clip. Like the tutorial, I used "<time/>" and the pipe symbol ("|") to indicate the start time of a subtitle line or an event. Finally, "</off>" marks the point where the subtitles should be turned off.
2. For the scripting, I load the entire script file up front. For the data structure, I created a custom container called SubtitleKey (I couldn't come up with a better name, sorry) that stores the time at which a certain subtitle is to be displayed, together with the subtitle text itself. I then save these SubtitleKeys into a list, which represents the entire block of subtitles for one audio clip. Finally, these lists are saved into _subtitleCollection, the collection of all subtitle blocks.
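To make the idea concrete, here is a minimal sketch of that parsing step. My actual implementation is a Unity C# script; this is a language-agnostic Python illustration, and the exact file syntax (an opening "<p>" tag, timestamps before the pipe, "</off>" as the clear-screen event) is my assumption about how the markers fit together, not a copy of the real file format.

```python
from dataclasses import dataclass

@dataclass
class SubtitleKey:
    """One timed subtitle line: when to show it, and what to show."""
    time: float
    text: str

def parse_script(raw: str) -> list[list[SubtitleKey]]:
    """Parse a whole subtitle script into blocks, one block per audio clip.

    Assumed (hypothetical) syntax: a block sits between "<p>" and "</p>",
    each line inside it looks like "<time/>SECONDS|TEXT", and a line whose
    text is "</off>" becomes an empty subtitle that clears the display.
    """
    collection: list[list[SubtitleKey]] = []  # this plays the role of _subtitleCollection
    block = None
    for line in raw.splitlines():
        line = line.strip()
        if line == "<p>":                      # start of a new subtitle block
            block = []
        elif line == "</p>":                   # end of the block: store it
            if block is not None:
                collection.append(block)
            block = None
        elif line.startswith("<time/>") and block is not None:
            stamp, _, text = line[len("<time/>"):].partition("|")
            if text == "</off>":               # the "turn subtitles off" event
                text = ""
            block.append(SubtitleKey(float(stamp), text))
    return collection
```

Feeding it a two-line block produces one list of SubtitleKeys, with the "</off>" event stored as an empty string so the display script knows to clear the text.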
Notice that I've highlighted two lines of code. These two lines cost me at least an hour of debugging, and I strongly suspect they break the Principle of Least Astonishment. You can see me rant about it here.
Finally, to utilize it:
The script also initializes two arrays of audio clips: one for the environmental background sounds (which I will cover later if I have time), and one for the narrative audio clips. Whenever the player interacts with an orb, it invokes the script to select an audio clip from the array in a fixed order and play it. It also passes the subtitle block to another script, which controls the UI.Text component and tells it to display each line according to the defined time frame.
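The flow above can be sketched as follows. Again, this is a Python illustration of the logic, not the actual Unity scripts: the class and method names (NarrativePlayer, on_orb_interact) are hypothetical, and the show_subtitle callback stands in for whatever drives the UI.Text component.

```python
from collections import namedtuple

# Minimal stand-in for the SubtitleKey container described above.
SubtitleKey = namedtuple("SubtitleKey", "time text")

class NarrativePlayer:
    """Hands out narrative clips in a fixed order and feeds timed
    subtitles to a display callback. A sketch of the flow only."""

    def __init__(self, clips, subtitle_blocks, show_subtitle):
        self.clips = clips              # e.g. audio clip names, in play order
        self.blocks = subtitle_blocks   # one list of SubtitleKeys per clip
        self.show = show_subtitle       # callback that displays a subtitle string
        self.next_index = 0             # fixed playback order

    def on_orb_interact(self):
        """Called when the player interacts with an orb: pick the next
        clip and its subtitle block, or None if the story is over."""
        if self.next_index >= len(self.clips):
            return None
        clip = self.clips[self.next_index]
        block = self.blocks[self.next_index]
        self.next_index += 1
        return clip, block

    def update(self, block, elapsed):
        """Show the most recent subtitle whose start time has passed.
        An empty string (the </off> event) clears the display."""
        current = ""
        for key in block:
            if key.time <= elapsed:
                current = key.text
        self.show(current)
```

In Unity this splits across two scripts (one selecting and playing clips, one driving UI.Text), but the sequencing and the time-based lookup are the same idea.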
And now you should have a working subtitle system that goes with your audio clips! (phew)