How does the sound of a video game come to fruition?

Which techniques, software and knowledge need to be acquired?

Not everyone perceives sound in the same way, but subconsciously our knowledge of designed sound comes from film, so we tend to conceive of sound as unfolding in a fixed chronological order.

Sound in video games is very different: a sound event in the game has no fixed time frame; it waits to be directed by the player.

Time thus becomes a variable that depends on the player's moves.

Another variable in video games is the surrounding space, because a sound can be positioned at a specific point in the game world.

Ultimately, though, it waits to be directed by the player, who can move closer, move away, walk around the sound, or be surrounded by it.

If something exists in space and time, it becomes something concrete: the sound event becomes an audio object.

Coding becomes essential to dictate the audio object's space, its timing and the conditions under which it operates. This task belongs to programmers, so a close collaboration between sound designer and programmer is required; what the sound designer must learn is how middleware works.

Examples of middleware are FMOD and Wwise, which act as a bridge between traditional sound editing and the game engine, the interactive environment in which programmers, animators and graphic designers develop the video game.
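To make the bridge concrete, here is a minimal sketch of the game-engine side using the FMOD Studio C++ API; the bank file names and the event path are invented for illustration and would in practice come from the sound designer's middleware project.

```cpp
#include <fmod_studio.hpp>

// Minimal sketch: the game engine loads the banks exported from the middleware
// and triggers an event authored by the sound designer. Error checks are
// omitted for brevity; "event:/Ambience/Forest" is a hypothetical event path.
int main() {
    FMOD::Studio::System* system = nullptr;
    FMOD::Studio::System::create(&system);
    system->initialize(512, FMOD_STUDIO_INIT_NORMAL, FMOD_INIT_NORMAL, nullptr);

    // Load the banks exported from the FMOD Studio project.
    FMOD::Studio::Bank* masterBank = nullptr;
    FMOD::Studio::Bank* stringsBank = nullptr;
    system->loadBankFile("Master.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &masterBank);
    system->loadBankFile("Master.strings.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &stringsBank);

    // Look up an event by its path and start an instance of it.
    FMOD::Studio::EventDescription* description = nullptr;
    system->getEvent("event:/Ambience/Forest", &description);
    FMOD::Studio::EventInstance* instance = nullptr;
    description->createInstance(&instance);
    instance->start();

    // A real game loop would call update() every frame; one call here for brevity.
    system->update();

    instance->release();
    system->release();
    return 0;
}
```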

The middleware makes it possible to build objects and attach various sound characteristics to them; the sound object is essentially an audio sample plus a set of scripts, exposed through APIs, that instruct the game engine on how and where to project the sound.
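As a small, hedged example of such an API call, this sketch tells the engine where a sound object sits in the world; it assumes an already created FMOD Studio event instance, and the coordinates are invented.

```cpp
#include <fmod_studio.hpp>

// Sketch: the engine tells the middleware where a sound object is placed.
// "instance" is assumed to be an already started FMOD::Studio::EventInstance;
// the position values are invented for illustration.
void updateEmitterPosition(FMOD::Studio::EventInstance* instance,
                           float x, float y, float z)
{
    FMOD_3D_ATTRIBUTES attributes = {};
    attributes.position = { x, y, z };           // where the sound object sits
    attributes.forward  = { 0.0f, 0.0f, 1.0f };  // orientation of the emitter
    attributes.up       = { 0.0f, 1.0f, 0.0f };
    attributes.velocity = { 0.0f, 0.0f, 0.0f };  // used for Doppler, if enabled
    instance->set3DAttributes(&attributes);      // called again whenever the emitter moves
}
```

The player's own position is handled separately as the listener, so the middleware can attenuate and pan the object as the player moves closer, away or around it.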

One crucial point is the amount of memory that the package of files making up a video game is allowed to occupy.

Of course, the less the better; however, middleware offers technical solutions for creating complex soundscapes from just a few samples.

An example is an environmental sound, like the chirping of birds, in a scene that could last five minutes or an hour.

The simplest solution would be to create a loop, but with storage optimisation in mind the loop would have to be short, and a short loop is not entirely credible.

This is where middleware exploits granular-synthesis algorithms and the randomisation of audio DSP parameters.

With these tools you can create a playlist of ten samples, each one a single chirp, played back in random order and superimposed.

Each playback is slightly altered in pitch, volume, reverb or timbre.

With the parameter values adjusted properly, those ten individual samples can generate hours of ambience that never repeats.
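In Studio or Wwise the sound designer authors this behaviour without writing code, but the principle can be sketched directly with the FMOD Core API; the file names and the randomisation ranges below are assumptions chosen for illustration.

```cpp
#include <fmod.hpp>
#include <array>
#include <chrono>
#include <random>
#include <string>
#include <thread>

// Sketch of the randomisation principle: ten tiny chirp samples, picked at
// random, each playback slightly detuned and re-levelled. File names and
// ranges are invented; error checks are omitted for brevity.
int main() {
    FMOD::System* system = nullptr;
    FMOD::System_Create(&system);
    system->init(64, FMOD_INIT_NORMAL, nullptr);

    // Ten short samples, each a single chirp.
    std::array<FMOD::Sound*, 10> chirps{};
    for (int i = 0; i < 10; ++i) {
        std::string path = "chirp" + std::to_string(i) + ".wav";
        system->createSound(path.c_str(), FMOD_DEFAULT, nullptr, &chirps[i]);
    }

    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int>    pick(0, 9);
    std::uniform_real_distribution<float> pitch(0.9f, 1.1f);   // slight detune
    std::uniform_real_distribution<float> volume(0.6f, 1.0f);  // slight level change
    std::uniform_real_distribution<float> gap(0.2f, 2.0f);     // seconds between chirps

    // Endless, never-identical ambience built from ten small files.
    while (true) {
        FMOD::Channel* channel = nullptr;
        system->playSound(chirps[pick(rng)], nullptr, false, &channel);
        channel->setPitch(pitch(rng));
        channel->setVolume(volume(rng));
        system->update();
        std::this_thread::sleep_for(std::chrono::duration<float>(gap(rng)));
    }
}
```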

The music, on the other hand, has an interactive arrangement that follows the plot and adapts its performance to the emotional arc of the story.
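A common way to wire this up, sketched here with the FMOD Studio API, is to expose a parameter that the game updates from its own state; the event path and the "Intensity" parameter name are hypothetical and would be defined by the composer in the middleware project.

```cpp
#include <fmod_studio.hpp>

// Sketch: the game feeds its emotional state to the music event through a
// parameter; the middleware maps the value to layers, transitions or stingers.
// "event:/Music/MainTheme" and "Intensity" are invented names.
void updateMusic(FMOD::Studio::System* system, float tension01)
{
    static FMOD::Studio::EventInstance* music = nullptr;
    if (music == nullptr) {
        FMOD::Studio::EventDescription* description = nullptr;
        system->getEvent("event:/Music/MainTheme", &description);
        description->createInstance(&music);
        music->start();
    }
    music->setParameterByName("Intensity", tension01);  // e.g. 0 = calm, 1 = combat
}
```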

The sound-editing workflow also differs from that of film.

In classic video, the working method is to compose the sound by layering elements in real time against the picture, so the final effect can be tested almost directly.

In video games, the process is much more indirect: the sound effect is first prepared in audio-editing software, then moved into the middleware, where the playback conditions are defined, and finally exported as a sound object, a package of files that is loaded and tested in the game.
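As a final, hedged sketch, this is roughly how that exported package might be loaded and inspected on the game side with the FMOD Studio API; "Ambience.bank" is a hypothetical file name, and the system is assumed to be already initialised.

```cpp
#include <fmod_studio.hpp>
#include <cstdio>
#include <vector>

// Sketch: load an exported bank and list the events it contains, to confirm
// what the sound designer packaged before wiring it into gameplay code.
void listBankEvents(FMOD::Studio::System* system)
{
    FMOD::Studio::Bank* bank = nullptr;
    system->loadBankFile("Ambience.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &bank);

    int count = 0;
    bank->getEventCount(&count);
    if (count <= 0) return;

    std::vector<FMOD::Studio::EventDescription*> events(count);
    bank->getEventList(events.data(), count, &count);

    for (int i = 0; i < count; ++i) {
        char path[256];
        events[i]->getPath(path, sizeof(path), nullptr);
        std::printf("Event in bank: %s\n", path);  // e.g. "event:/Ambience/Forest"
    }
}
```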
