Audio Engineering Society 49th International Conference on Audio for Games 


Titles and Abstracts

Keynote

Robin Rimbaud aka Scanner is an artist, writer, sound sculptor and composer working in London, whose work traverses the experimental terrain between sound, space, image and form, connecting a bewilderingly diverse array of genres – a partial list would include sound design, film scores, computer music, digital avant-garde, contemporary composition, large-scale multimedia performances, product design, architecture, fashion design, rock music and jazz.

Since 1991 he has been intensely active in sonic art, producing concerts, installations and recordings; his albums Mass Observation (1994), Delivery (1997), and The Garden is Full of Metal (1998) were hailed by critics as innovative and inspirational works of contemporary electronic music. He scored the hit musical comedy Kirikou & Karaba (2007), wrote Europa 25, a new national anthem for Europe, in 2005, premiered his six-hour show Of Air and Ear (2008) at the Royal Opera House in London, designed the sound for the new Philips Wake-Up Light (2009), a national cinema campaign for Sprint USA (2012) and Chanel’s Fall-Winter collection (2012), and wrote music for the Olympic Ceremony in London in 2012.



Tutorials

Crossing the Streams
Scott Selfon, Sr. Development Lead, Microsoft
It’s often easy to lock into the mindset of churning out the hundreds (or thousands) of sound assets that a game needs. But what happens when we step beyond audio as a reactive force, and instead inject audio into the title’s game design? Scott will show and discuss specific examples from a number of shipped titles of situations where audio has moved beyond the “event to wave file” archetype to more uniquely interact with and drive gameplay, both technically and creatively.

Myths, Facts and Techniques behind HDR Audio
Xavier Buffoni, Software Engineer, Audiokinetic
Due to the non-linearity and unpredictability of game audio, much thought has been given to automatic and intelligent mixing tools in recent years. HDR (High Dynamic Range) audio has received a lot of attention since a well-known franchise used it, to huge success, to improve the quality of its in-game audio mix.

The lecture will explain in detail what HDR audio consists of in theory and how game developers really use it in practice. Concrete examples will be presented for two different approaches. The first operates at the audio level, using a combination of dynamic effects such as compression and limiting. The second operates at the logical level, where all volume attenuations are computed before mixing.

Benefiting from real-world audio examples, attendees will gain a better understanding of the concepts and mechanisms behind HDR audio and how to implement such a system in their own sound engines.
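To make the second, logical-level approach concrete, the sketch below shows one way an HDR window can be computed before any mixing takes place: the loudest active sound pins the top of a fixed-size window, every sound is attenuated relative to that top, and anything falling below the window floor is culled. This is a minimal illustration, not Audiokinetic's implementation; the window size and all names are hypothetical.

```typescript
// Minimal sketch of logical-level HDR audio: volumes are decided
// before mixing, based on a sliding "window" pinned to the loudest
// active sound. Hypothetical types and names, for illustration only.

interface ActiveSound {
  name: string;
  loudnessDb: number; // the sound designer's authored loudness
}

const WINDOW_SIZE_DB = 50; // dynamic range of the HDR window (assumed)

/** Returns per-sound gains (in dB) after HDR windowing. */
function hdrGains(sounds: ActiveSound[]): Map<string, number> {
  const gains = new Map<string, number>();
  if (sounds.length === 0) return gains;

  // The loudest active sound defines the top of the window.
  const windowTop = Math.max(...sounds.map(s => s.loudnessDb));
  const windowFloor = windowTop - WINDOW_SIZE_DB;

  for (const s of sounds) {
    if (s.loudnessDb < windowFloor) {
      // Below the window: masked, so it can be culled entirely.
      gains.set(s.name, -Infinity);
    } else {
      // Shift everything down so the window top sits at 0 dB full scale.
      gains.set(s.name, s.loudnessDb - windowTop);
    }
  }
  return gains;
}

// Example: an explosion raises the window and ducks everything else.
console.log(hdrGains([
  { name: "explosion", loudnessDb: 120 },
  { name: "gunshot",   loudnessDb: 100 },
  { name: "ambience",  loudnessDb: 60 },
]));
// -> explosion: 0 dB, gunshot: -20 dB, ambience: culled (-Infinity)
```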

A (Fairly) Brief History of 3-D Audio in Games
Martin Walsh, R&D Director, DTS
3D audio has been an integral part of the gaming experience since the mid-1990s. However, the use of 3D audio APIs has significantly decreased in the past decade or so. This tutorial presents a technical overview of how we hear and why we care about 3D audio in games. The history of the use of 3D audio in games is also discussed, from the early A3D / EAX wars to techniques used in today’s latest first-person shooters. Once we are up to speed on the history of 3D audio in games, we venture into the future with next-generation technologies that could bring 3D audio once more to the forefront of gamers’ minds.

How Sound Affects Realities: Enhancing Narrative with Audio
Stephan Schütze, Director, Sound Librarian
The world we perceive defines what is real for each of us. The ability to influence or change the way an audience perceives the world is a powerful storytelling tool.

Of all the perceptive senses, sound is one of the most effective tools with which you can influence the perceptions of others. Each person will interpret the same sound in a different way, and the context in which the sound is played can further alter the experience and interpretation.

This session will introduce a range of concepts that deal with how people think when they listen and how creative teams can utilise the emotional triggers and instinctual behaviour that sound exposes in a gaming audience.

Going Old School: Chiptunes and Trackers in Games
Leonard Paul, Interactive Audio Specialist, Lotus Audio Corp.
This presentation will detail in-depth how all of the sound design and music was created for the multi-award-winning indie game Retro City Rampage. Trackers and chiptunes have been around since the golden age of video games, and there’s a reason why they’re still a very effective method of creating high-quality audio content for downloadable, mobile and online games. All tools and code presented are open source, so audience members can easily apply the same techniques in their own games and projects.




Workshops

Measuring Loudness in Interactive Entertainment
Garry Taylor, Audio Director, Sony Computer Entertainment Europe
Due to its non-linearity, measuring loudness in interactive entertainment is an inexact science. Recently, Sony’s Audio Standards Working Group (ASWG) released loudness recommendations for their first-party titles. Garry Taylor, Audio Director at Sony Computer Entertainment, looks at the work of the ASWG, the data they collected, and how that data influenced their recommendations. He looks at their first loudness paper and how their titles are measured and tested at Quality Assurance.
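As background, loudness recommendations of this kind are typically expressed in LUFS and measured per ITU-R BS.1770. The sketch below shows an ungated BS.1770-style measurement of a mono 48 kHz capture; real meters (and the ASWG's actual test procedure) add block-based gating and multichannel weighting, omitted here for brevity.

```typescript
// Sketch of an (ungated) ITU-R BS.1770 loudness measurement for a
// mono 48 kHz gameplay capture -- the kind of metric behind loudness
// recommendations. Gating and channel weighting are omitted.

type Biquad = { b: [number, number, number]; a: [number, number] };

// BS.1770 K-weighting at 48 kHz: head-effect shelf, then RLB high-pass.
const K_WEIGHTING: Biquad[] = [
  { b: [1.53512485958697, -2.69169618940638, 1.19839281085285],
    a: [-1.69065929318241, 0.73248077421585] },
  { b: [1.0, -2.0, 1.0],
    a: [-1.99004745483398, 0.99007225036621] },
];

function filter(x: Float32Array, { b, a }: Biquad): Float32Array {
  const y = new Float32Array(x.length);
  let x1 = 0, x2 = 0, y1 = 0, y2 = 0; // direct-form I state
  for (let n = 0; n < x.length; n++) {
    y[n] = b[0] * x[n] + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2;
    x2 = x1; x1 = x[n];
    y2 = y1; y1 = y[n];
  }
  return y;
}

/** Ungated loudness (LUFS) of a mono 48 kHz signal. */
function loudnessLufs(samples: Float32Array): number {
  const weighted = K_WEIGHTING.reduce(filter, samples);
  let sumSq = 0;
  for (const s of weighted) sumSq += s * s;
  return -0.691 + 10 * Math.log10(sumSq / weighted.length);
}
```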

Theoretical, Technical & Practical Frameworks for Interactive Mixing: A Moderated Panel Discussion
John Broomhall (BPL), AES 41 Keynote Speaker, Game Audio Creator/Producer, Music Writer & Commentator
Tom Colvin, Audio Lead, Ninja Theory
Garry Taylor, Sony Computer Entertainment Europe
Jon Olive, Assoc of Motion Picture Sound
Xavier Buffoni, Audiokinetic
Stephan Schütze, Sound Librarian
Mixing in video games is a huge area for potential discussion. It is also an increasingly important topic in the interactive audio landscape and is gaining much wider attention in the field. This panel has been assembled to take a step back and assess the field of game audio mixing in some new contexts, examining some of the many facets of the mix from style, philosophy, and approach, to technology, loudness, planning and implementation.

In this moderated panel discussion, several of the leading practitioners and technologists in the field of interactive mixing come together to discuss the emerging theoretical, artistic and technical frameworks for game mixing over the next few years.

New Standards for Web Audio
Olivier Thereaux, BBC and W3C Audio Chair
Jory Prum, studio.jory.org and HTML5 Audio blog
This session will explore the capabilities of the modern web browser as the next exciting platform for gaming and audio. Olivier Thereaux from the BBC and chair of the Audio Working Group at the World Wide Web Consortium (W3C) and Jory Prum from studio.jory.org, publisher of the HTML5 Audio blog (html5audio.org), will give an interactive overview of the emerging standards for audio on the web: the Web Audio API for audio processing, event-driven playback, and synthesis within the browser, and the Web MIDI API, bridging the browser and the multitude of MIDI devices for music creation and device control. The session will include many demos hinting at the great potential of bringing audio to the web, and the web to audio, and will conclude with a question-and-answer session exploring the opportunities and challenges of building games on the Open Web Platform.
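As a taste of what the session covers, the sketch below plays a decoded sample through a filter with the Web Audio API, and triggers it from a note-on message via the Web MIDI API. It assumes a browser implementing both specifications; "kick.wav" is a placeholder asset.

```typescript
// Minimal sketch of the two emerging standards: Web Audio for
// decoding, processing and playback, and Web MIDI for device input.

const ctx = new AudioContext();

// Web Audio API: fetch, decode and play a sample through a filter.
async function playSample(url: string): Promise<void> {
  const response = await fetch(url);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  const filter = ctx.createBiquadFilter(); // a simple processing stage
  filter.type = "lowpass";
  filter.frequency.value = 2000;

  source.connect(filter).connect(ctx.destination);
  source.start();
}

// Web MIDI API: trigger the sample from any connected MIDI device.
async function listenForMidi(): Promise<void> {
  const midi = await navigator.requestMIDIAccess();
  for (const input of midi.inputs.values()) {
    input.onmidimessage = (msg) => {
      const data = msg.data;
      // Status byte 0x9n is a note-on message.
      if (data && (data[0] & 0xf0) === 0x90) playSample("kick.wav");
    };
  }
}
```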

Future of Game Audio – a retrospective
Adele Cutting, Director, Soundcuts
John Broomhall, Game Audio Specialist, Music, Writer & Journalist, BPL
Nicky Birch, Head of Products, Somethin’ Else
Ciaran Rooney, Technical Director and Co-Founder, Pitch and Yaw
Jason Page, Senior Manager, Audio Department R&D, Sony Computer Entertainment Europe

Revisiting the predictions of the ‘Future of Games Audio’ panel in 2009: what came true, and what is the focus of tomorrow? A discussion on changes in creative and technical boundaries, developments in storytelling using audio, interactive mixing, manipulation of user data, real-time synthesis, the importance of content and much more. Our previous thoughts were informed by AAA titles, but the UK games industry now has many smaller teams working on iOS, download and mobile platforms with lower budgets and shorter development cycles. What does this mean for game audio, and what advancements are needed?

How Can a Background in Feature Film Help Design Audio for Games?
Vanesa Lorena Tate, Franchise Audio Director, Electronic Arts / Founder & Creative Director, Tate Post
Doug Cooper, Re-Recording Mixer, Electronic Arts
David Steptoe, Audio Lead Engineer, Electronic Arts

Planes, Trains & Automobiles: Creating & implementing vehicle sound systems for games
Mike Caviezel, Audio Director, Microsoft
This session will discuss some of the basic vehicle audio design concepts commonly found in games today. We’ll talk about system design, recording & sound design methodology, and various implementation techniques & tricks for making vehicles sound great in games.

Global game audio production: Where are we going?
Francesco Zambon, Audio Technical Manager, Binari Sonori
Roberto Pomoni, Audio Project Lead, Binari Sonori
The constant demand for tighter loops in sound design, music composition, and multi-language speech production is driving a change in global game development models. This evolution is becoming possible thanks to more effective, standardized audio middleware engines, which are now modifying traditional DAW-based production flows. This talk draws a possible trajectory of production methods from the point of view of global game audio creators.

MIDI vs. the Real Thing: Does It Still Make $ense To Record an Orchestra?
Laura Karpman, Composer
Now that sample libraries are getting very convincing, is there still a place for the orchestra in game music?

Composer Laura Karpman weighs in on the “big” question: when is recording a real orchestra necessary? What alternative recording solutions are there to fit various budgets? Laura will take us through the scoring and recording processes. She will look at games she has scored with diverse instrumental ensembles, from large orchestra with chorus to solo instruments from around the world. MIDI demos and their recorded orchestral counterparts will be contrasted, and she will look at scores where small sweetening sessions were enough. Laura will discuss the benefit of varied scoring approaches for specific games and for the field at large.

Behind the mix: An in-depth look at the audio engine in Hitman: Absolution
Mikkel Christiansen, Sound Designer, IO Interactive
Frans Galschiøt Quaade, Lead Sound Designer, IO Interactive
With its proprietary engine G2, IO Interactive allows artists to work freely in a high-level graphical programming environment, familiar from programs like Max/MSP and Pd. This approach allows artists to create interdisciplinary workflows and thereby ensure consistency between all game elements such as gameplay, VFX, and sound design. G2 allows the sound designer to freely build adaptive and interactive mixing and sound setups, which is the foundation of our attempt to create a living, breathing world.

The main takeaway is the advantage this paradigm shift, from a code-dependent setup to a design-driven approach, has brought to game development at IO Interactive. Through in-game examples, the talk will explain how the enhanced cross-disciplinary workflow is used to create the audio experience in Hitman: Absolution.




Paper Sessions

Game Music Systems

Talking Soundscapes: Automatizing voice transformations for crowd simulation
Jordi Janer, Music Technology Group, Universitat Pompeu Fabra
Roland Geraerts, Department of Information and Computing Sciences, Utrecht University
Wouter G. van Toll, Department of Information and Computing Sciences, Utrecht University
Jordi Bonada, Music Technology Group, Universitat Pompeu Fabra
The addition of a crowd in a virtual environment, such as a game world, can make the environment more realistic. While researchers have focused on the visual modeling and simulation of crowds, crowd sound production has received less attention. We propose generating the sound of a crowd by retrieving a very small set of speech snippets from a user-contributed database, then transforming and layering the voice recordings according to character locations in the crowd simulation. Our proof of concept integrates state-of-the-art audio processing and crowd simulation algorithms. The novelty resides in exploring how we can create a flexible crowd sound from a reduced number of samples, whose acoustic characteristics (such as people density and dialogue activity) could be modeled in practice by means of pitch, timbre and time-scaling transformations.
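A rough sense of the layering idea, sketched here with the Web Audio API (this is an illustration, not the authors' system): each talking character reuses one of a few snippets, individualized by playback-rate variation and positioned by pan and gain from the simulation. Note that playbackRate couples pitch and time, whereas the paper applies pitch, timbre and time-scale transformations independently; all names are hypothetical.

```typescript
// Sketch: a handful of voice snippets are reused across many
// simulated characters, each instance varied and spatialized.

interface CrowdCharacter { x: number; y: number; talking: boolean }

const ctx = new AudioContext();

function voiceForCharacter(
  snippets: AudioBuffer[],   // the "very small set" of recordings
  c: CrowdCharacter,
  listenerX: number
): void {
  if (!c.talking) return;

  const source = ctx.createBufferSource();
  // Reuse one of a very small set of snippets...
  source.buffer = snippets[Math.floor(Math.random() * snippets.length)];
  // ...but individualize it: roughly +/- 3 semitones of variation.
  source.playbackRate.value = Math.pow(2, (Math.random() * 6 - 3) / 12);

  // Position from the crowd simulation: pan by azimuth offset...
  const panner = ctx.createStereoPanner();
  panner.pan.value = Math.max(-1, Math.min(1, (c.x - listenerX) / 20));

  // ...and attenuate with distance.
  const gain = ctx.createGain();
  gain.gain.value = 1 / (1 + Math.hypot(c.x - listenerX, c.y));

  source.connect(panner).connect(gain).connect(ctx.destination);
  source.start();
}
```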

The Future of Adaptive Game Music: The Continuing Evolution of Dynamic Music Systems in Video Games
David M. Young, David M. Young Music
This paper examines what the future may hold for adaptive music in video games. Discussions are focused on technical developments in music production software, game audio middleware, and gaming interfaces, and what these could mean for dynamic music systems. Specifically, it explores the heralding of an industry-standard transferable interactive audio file type, the increasingly standardized functionality and appearance of game audio middleware, the blurring of the lines between DAW and middleware, improved real-time audio effects, generative music and MIDI-based capabilities in game engines, and the use of new player-state-based data input streams to inform and personalize music experiences on a player-by-player basis.


Perception of Interactive Audio

Can Interactive Procedural Audio Affect the Motorical Behaviour of Players in Computer Games with Motion Controllers?
Niels Bøttcher, Medialogy(AAU-CPH), Aalborg University Copenhagen
This paper presents the design and implementation of a procedural sword sound model controlled with the Nintendo Wii remote. A prototype of a first-person sword game was developed in order to test whether the use of procedural audio, in comparison to pre-recorded audio, could change the motor behavior of the players. A test indicated that some of the participants were influenced by the procedural audio, but no common measures could be found in the test.

Preliminary Investigation of Self-reported Emotional Responses to Approaching and Receding Footstep Sounds in a Virtual Reality Context
Erik Sikström, Niels Christian Nilsson, Rolf Nordahl, and Stefania Serafin, Department of Architecture, Design and Media Technology, Aalborg University Copenhagen
The emotional impact of approaching and receding sound sources has previously been studied in seated laboratory experiments, with and without accompanying visual stimuli. This paper investigates emotional responses to approaching and receding footstep sounds in an interactive virtual reality setting using a head-mounted display, 24-channel surround audio and a novel walking-in-place device that uses acoustic detection of the user’s input. Based on self-reports using the Self-Assessment Manikin, the subjects gave post-experiment evaluations of seven-second-long footstep sequences approaching and receding from outside the participants’ field of view. The participants’ sensation of presence was also studied using a SUS questionnaire. The results showed that approaching footstep sequences at the beginning of the experiment elicited a higher level of arousal than receding footsteps at the beginning of the experiment, and than periods when there were no footstep sequences.

Auditory Feedback to Improve Navigation in a Maze Game
Kevin Dahlstrøm, Nicolai Gajhede, Søren K. Jacobsen, Nicklas S. Jakobsen, Søren Lang, Magnus L. Rasmussen, Erik Sikström and Stefania Serafin, Medialogy, Aalborg University Copenhagen
In this paper we investigate whether sound design guidelines used to improve navigation in a train station can be applied in providing navigation guidelines in a maze game. For this purpose, we designed a maze game and augmented it with auditory cues useful for the navigation of the maze. A between-subject experiment showed that auditory cues significantly reduce the time needed to complete the maze.

Rhythm-Action Games: The sonic interaction perspective
Cumhur Erkut, Department of Signal Processing and Acoustics, Aalto University
Hüseyin Hacihabiboğlu, Informatics Institute, Middle East Technical University
This paper provides game audio researchers and practitioners a short background on rhythmic interaction, with application to rhythm-action games. Based on our previous experiments and observations, we point out technical challenges and our current solutions. A special focus is on how these concepts can be used in education, reflecting on the relevant sessions at the IEEE SPS Summer School on Game Audio (September 3–6, 2012, Ankara, Turkey). The technical and educational implications of rhythmicity for game audio are discussed.

Spatial Audio in Games

Modular Architecture for Virtual-World Parametric Spatial Audio Synthesis
Tapani Pihlajamäki, Mikko-Ville Laitinen, and Ville Pulkki, Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering
An adaptation of a parametric spatial audio coding method, Directional Audio Coding (DirAC), has been previously developed and validated for virtual-world applications. Although the quality in most cases is very good, it was noticed that some auditory scenes, e.g., ones containing multiple sources in acoustically dry conditions, are not produced optimally. In this paper, the architecture of this virtual-world DirAC is restructured and modified to avoid previous problems. In addition, these modifications achieve better scalability for the algorithm.

Integrating Custom 3D Audio Rendering into Game Sound Engines
Fritz Menzer, MN Signal Processing
While basic positional 3D audio for multichannel loudspeaker setups and headphone playback is provided by currently available game sound engines and specialized APIs, the needs of some game developers can go beyond just placing sounds in space, requiring also the simulation of diverse acoustical environments such as rooms, caves, or forests. This paper explores the possibility of addressing the game developers’ needs by adding custom 3D audio rendering into game sound engines by using their plugin systems and evaluates the performance of different plugin topologies.

Virtual Sound Source Positioning by Differential Head Related Transfer Function
Dominik Štorek, Dept. of Radioelectronics, Czech Technical University in Prague
This article deals with a new approach to virtual sound source positioning. The usual modern advanced method, based on applying a Head-Related Transfer Function to both stereo channels, is replaced by a method that affects only one channel of the stereo signal. The proposed method claims that only the difference in spectral features between the two channels is essential for perceiving the position of a sound source in the horizontal plane. This allows the required positioning data, and the computing operations, to be reduced by half. In this paper, the proposed method is introduced, compared to the standard method, and verified by a listening test.
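A minimal sketch of the one-channel idea follows, assuming the differential impulse response for the target azimuth is already available (how it is derived is the paper's contribution): only one channel is convolved with the differential response, while the other passes through unfiltered, so both the stored positioning data and the filtering work are roughly halved.

```typescript
// Sketch: position a mono source by filtering ONE channel with a
// "differential" impulse response carrying the interaural difference,
// instead of convolving both channels with left/right HRIRs.

function convolve(x: Float32Array, h: Float32Array): Float32Array {
  const y = new Float32Array(x.length + h.length - 1);
  for (let n = 0; n < x.length; n++)
    for (let k = 0; k < h.length; k++)
      y[n + k] += x[n] * h[k];
  return y;
}

function positionDifferential(
  mono: Float32Array,
  diffHrir: Float32Array // assumed given for the target azimuth
): { left: Float32Array; right: Float32Array } {
  return {
    left: convolve(mono, diffHrir), // all positional cues live here
    right: mono,                    // reference channel, unfiltered
  };
}
```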

A Framework for the Development of Accurate Acoustic Calculations for Games
Panagiotis Charalampous and Panos Economou, P.E Mediterranean Acoustics Research and Development
Despite the rapid development of acoustics calculation software over the last couple of decades, such advances have not been achieved uniformly. Varying demands in different disciplines have shifted the focus to different aspects of the calculations. Methods in game development have focused on speed and optimized calculation times to achieve interactive sound rendering, whilst engineering methods have concentrated on achieving the accuracy needed for reliable predictions. This paper presents a flexible, expandable and adjustable framework for the development of fast and accurate acoustics calculations for both game development and engineering purposes. It decomposes the process of acoustic calculations for 3D environments into distinct calculation steps and allows third-party users to adjust calculation methodologies according to their needs.
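The decomposition might look something like the hypothetical interfaces below: each calculation step is a swappable module, so a game can plug in a fast approximation where an engineering tool would plug in an accurate method. This is an illustration of the framework's stated goal, not its actual API.

```typescript
// Hypothetical sketch of acoustic calculation decomposed into steps.

interface Vec3 { x: number; y: number; z: number }
interface PropagationPath { delaySec: number; gainDb: number }

// Each step is an interchangeable module.
interface PathFinder {          // e.g. fast ray tracer vs. image sources
  findPaths(source: Vec3, listener: Vec3): PropagationPath[];
}
interface AirAbsorptionModel {  // e.g. lookup table vs. ISO 9613-1
  attenuate(path: PropagationPath): PropagationPath;
}
interface Renderer {            // e.g. delay lines vs. offline IR export
  render(paths: PropagationPath[]): void;
}

// The pipeline itself is fixed; only the methodologies vary.
class AcousticPipeline {
  constructor(
    private finder: PathFinder,
    private absorption: AirAbsorptionModel,
    private renderer: Renderer
  ) {}

  run(source: Vec3, listener: Vec3): void {
    const paths = this.finder
      .findPaths(source, listener)
      .map(p => this.absorption.attenuate(p));
    this.renderer.render(paths);
  }
}
```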

Posters

Use of 3D Head Shape for Personalized Binaural Audio
Philip J. B. Jackson and Naveen K. Desiraju, CVSSP, Dept. of Electronic Engineering, University of Surrey
Natural-sounding reproduction of sound over headphones requires accurate estimation of an individual’s Head-Related Impulse Responses (HRIRs), capturing details relating to the size and shape of the body, head and ears. A stereo-vision face capture system was used to obtain 3D geometry, which provided surface data for boundary element method (BEM) acoustical simulation. Audio recordings were filtered by the output HRIRs to generate samples for a comparative listening test alongside samples generated with dummy-head HRIRs. Preliminary assessment showed better localization judgements with the personalized HRIRs by the corresponding participant, whereas other listeners performed better with dummy-head HRIRs, which is consistent with expectations for personalized HRIRs. The use of visual measurements for enhancing users’ auditory experience merits investigation with additional participants.

Geometric and Wave-Based Acoustic Modelling Using Blender
Jelle van Mourik and Damian Murphy, AudioLab, University of York
Geometric and wave-based acoustic algorithms have been shown as appropriate for the auralisation of room acoustic models. In particular they hold significant potential to be used in interactive virtual environments as a means of real-time sound rendering, with possible applications ranging from aiding architectural acoustic design to enhancing computer game audio. This paper presents a tool for developing acoustical scenes in Blender, an open source 3D development programme, based on 3D acoustic modelling using ray-tracing and/or FDTD methods. With the potential for real-time interaction and walk-through auralisation by means of the Blender Game Engine we demonstrate how Blender can be used as part of the acoustical design process.

Plausible Mono-to-Surround Sound Synthesis in Virtual-World Parametric Spatial Audio
Tapani Pihlajamäki and Mikko-Ville Laitinen, Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering
The control of diffuseness of sound is a tool for the sound designer to synthesize surrounding sound from a monophonic signal in virtual-world audio. Virtual-world Directional Audio Coding offers this with a specific diffuseness parameter. However, the diffuseness parameter value measured from sound recordings often has spurious short-term fluctuations which have to be synthesized to obtain natural reproduction. This data is not readily available when upmixing a monophonic signal into a multi-channel setup. In this paper, a method is proposed for estimating the fluctuation of diffuseness parameter from a monophonic signal and synthesizing a multi-channel output based on it. This algorithm is based on the estimation of the reverberant energy in a signal. A formal listening test was performed to compare the relative quality of the proposed method to constant diffuseness cases. The results show that the proposed method increases the perceptual quality of the synthesis.

Modeling and Real-Time Generation of Pen Stroke Sounds for Tactile Devices
Hanwook Chung, Institute of New Media and Communication, Department of Electrical Engineering and Computer Science, Seoul National University
Hoon Heo, Music and Audio Research Group, Seoul National University
Dooyong Sung, Music and Audio Research Group, Seoul National University
Yoonchang Han, Music and Audio Research Group, Seoul National University
Kyogu Lee, Music and Audio Research Group, Seoul National University
In a real-world situation, pen strokes produce specific sounds that help to make interactions more natural in a virtual environment, such as using tactile input devices for education or games. In this paper, we describe a method for modeling and generating pen stroke sounds in real time. Since the proposed method is based on recorded signals, not only a specific pen sound but also various sound sources can be used. The difference in sound due to the change of speed of a pen movement is modeled by real-time resampling which is a simple and practical method. Acoustical resonant characteristics of the body below the surface and the pen are also identified. We conducted an experiment by implementing the proposed method on a tactile device and verified the performance.
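The speed-to-playback-rate idea can be sketched as follows (a simplification of the paper's method, with hypothetical names): a looped pen-stroke recording is resampled with linear interpolation, and the resampling rate is driven by the measured pen speed, so faster strokes sound faster and brighter.

```typescript
// Sketch: real-time resampling of a looped pen-stroke recording,
// with the rate driven by pen movement speed.

function resampleBlock(
  recording: Float32Array, // looped pen-stroke recording (assumed given)
  readPos: number,         // fractional read position into the loop
  rate: number,            // playback rate derived from pen speed
  out: Float32Array        // output block to fill
): number {
  for (let n = 0; n < out.length; n++) {
    const i = Math.floor(readPos);
    const frac = readPos - i;
    const a = recording[i % recording.length];
    const b = recording[(i + 1) % recording.length];
    out[n] = a + frac * (b - a); // linear-interpolation resampling
    readPos += rate;
  }
  return readPos % recording.length; // carry position to the next block
}

// Hypothetical mapping from pen speed (px/s) to a rate around 1.0.
const rateFromSpeed = (pxPerSec: number) => 0.5 + pxPerSec / 400;
```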

Granular Analysis/Synthesis for Simple and Robust Transformations of Complex Sounds
Jung-Suk Lee, Music Technology Area, Schulich School of Music, McGill University; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT); Broadcom Corporation;
François Thibault, Audiokinetic Inc.
Philippe Depalle, Music Technology Area, Schulich School of Music, McGill University; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT)
Gary P. Scavone, Music Technology Area, Schulich School of Music, McGill University; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT)
In this paper, a novel and user-friendly granular analysis/synthesis system particularly geared towards environmental sounds is presented. A granular analysis component and a grain synthesis component were intended to be implemented separately so as to achieve more flexibility. The grain analysis component segments a given sound into many ‘grains’ that are believed to be microscopic units that define an overall sound. A grain is likely to account for a local sound event generated from a microscopic interaction between objects. Segmentation should be able to successfully isolate these local sound events in a physically or perceptually meaningful way. The second part of the research was focused on the granular synthesis that can easily modify and re-create a given sound. The granular synthesis system would feature flexible time modification with which the user could re-assign the timing of grains and adjust the time-scale. Also, the system would be capable of cross-synthesis given the target sound and the collection of grains obtained through an analysis of sounds that might not include grains from the target one.
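The synthesis half might be sketched as below (an illustration, not the authors' system): grains carry their analysed onset times, are re-scheduled on a stretched or compressed timeline, and are overlap-added with short fades to avoid clicks.

```typescript
// Sketch of grain re-scheduling with overlap-add. Grain extraction
// (the analysis component) is assumed to have been done already.

interface Grain { samples: Float32Array; onsetSec: number }

function synthesize(
  grains: Grain[],
  timeScale: number, // >1 stretches the timeline, <1 compresses it
  sampleRate: number
): Float32Array {
  const end = Math.max(...grains.map(g =>
    g.onsetSec * timeScale * sampleRate + g.samples.length));
  const out = new Float32Array(Math.ceil(end) + 1);

  for (const g of grains) {
    // Re-assign the grain's timing on the scaled timeline.
    const start = Math.round(g.onsetSec * timeScale * sampleRate);
    const fade = Math.min(256, g.samples.length >> 1);
    for (let n = 0; n < g.samples.length; n++) {
      // Trapezoidal window avoids clicks where grains overlap.
      const w = Math.min(1, n / fade, (g.samples.length - 1 - n) / fade);
      out[start + n] += g.samples[n] * w;
    }
  }
  return out;
}
```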

Individualized HRTFs Simulation Using Multiple Source Ray Tracing Method
Dooyong Sung, Music and Audio Research Group, Seoul National University
Nara Hahn, Institute of New Media and Communication, Department of Electrical Engineering and Computer Science, Seoul National University
Kyogu Lee, Music and Audio Research Group, Seoul National University
Head-related transfer functions (HRTFs) capture the spatial auditory characteristics of a human listener and can be used in various applications such as spatial audio and 3D games. Since non-individualized HRTFs cause large elevation localization errors and front/back confusion, individualized HRTFs are required for more precise three-dimensional localization. However, HRTF measurement for each individual is expensive and time-consuming. In this paper, we use ray tracing techniques to simulate individualized HRTFs. Ray tracing techniques, however, show limited performance in simulating the diffraction of sound. To solve this problem, the Kirchhoff-Helmholtz integral is applied to ray tracing, in a method called Multiple Source Ray Tracing. We conducted experiments using binaural and spectral cues as performance measures, and verified that the proposed method yields performance comparable to measured HRTFs.




Hands-On Sessions

The Fabric of Time
Fabric is a v2.0 audio toolset for Unity3D. It extends Unity’s audio functionality and provides an extensive set of high-level audio components that allow the creation of complex and rich audio behaviours. Sign-ups for sessions will be available here closer to the conference.


FMOD Studio
FMOD provides a hands-on look at their latest game audio authoring environment. Sign-ups for sessions will be available here closer to the conference.


Mix Genius
Mix Genius have developed an intelligent game audio engine that will make autonomous decisions regarding the mixing of content. It will adjust levels, equalise, pan and compress all sources to achieve an optimal mix, while still complying with the constraints of the game. A video demonstration will be provided, showing the tools operating on game audio content, and comparisons will be made with standard rendering.
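One such autonomous decision, level balancing under game constraints, might be sketched as follows. This is a hypothetical illustration, not Mix Genius's engine: each source is driven toward a target level plus a game-supplied offset, with the correction clamped to a per-source limit.

```typescript
// Sketch: drive each source toward an equal perceived level, but
// respect per-source constraints supplied by the game (e.g. dialogue
// must stay on top, a fader may only move so far). Names hypothetical.

interface Source {
  name: string;
  measuredRmsDb: number;   // short-term measured level
  targetOffsetDb: number;  // game constraint, e.g. dialogue +6 dB
  maxCorrectionDb: number; // never move this source further than this
}

function autoMixGains(sources: Source[], targetDb = -18): Map<string, number> {
  const gains = new Map<string, number>();
  for (const s of sources) {
    const wanted = targetDb + s.targetOffsetDb - s.measuredRmsDb;
    const clamped = Math.max(-s.maxCorrectionDb,
                             Math.min(s.maxCorrectionDb, wanted));
    gains.set(s.name, clamped);
  }
  return gains;
}
```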




Social Events

Dolby Reception
Dolby will be holding a drinks reception at their Soho Square offices and demoing some of their latest innovations.
