Doctoral thesis in Computer Science
Supervised by Stéphane Natkin.
Defended in 2009
at CNAM, Paris.
Middleware architecture for sound creation in video games
When a composer writes music for a linear medium such as film, he delivers a well-defined sound composition to accompany what happens on the screen. Because the composer knows the sequence and timing of all events, he can construct his work according to this knowledge. The sound designer of a video game has no such certainty when creating the game's sound composition. He must think of his work as a dynamic structure, integrated into the game, that adapts to the game states he has previously chosen to highlight aurally. The links established between the game system and the sound system are designed to enhance the player's immersion and the consistency of the game universe. This dynamic, real-time approach to sound composition requires the designer to adjust and rethink his musical production so that it can fulfill its role in the game. To make this possible, sound design tools must take into account the specificities of writing sound for video games, and thus enable the sound designer to define real-time processes that compute sound, relying on musical logic and sound synthesis, according to the states of the game. These tools must allow the integration of a dynamic soundtrack, possibly based on procedural audio techniques. The procedural approach should allow the creation of rich sound universes, and should also ease the production of current games, whose complexity keeps growing. In this context, issues related to procedural content are becoming increasingly central to all creative work in games (animation, graphics, ...).
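The idea of linking the game system to the sound system can be sketched as a mapping that is re-evaluated in real time, turning a snapshot of the game state into parameters that drive synthesis or mixing. The sketch below is purely illustrative and assumes hypothetical names (`GameState`, `SoundParams`, `map_state_to_sound`); it is not the middleware described in the thesis.

```python
# Hypothetical sketch of state-driven dynamic sound:
# the sound engine periodically re-evaluates a mapping from
# game state to real-time music parameters.

from dataclasses import dataclass

@dataclass
class GameState:
    """A minimal snapshot of the game, as seen by the sound system."""
    enemies_nearby: int
    player_health: float  # 0.0 (dead) .. 1.0 (full health)

@dataclass
class SoundParams:
    """Real-time parameters driving one music layer (illustrative)."""
    tempo_bpm: float
    intensity: float  # 0.0 .. 1.0, e.g. layer volume or filter cutoff

def map_state_to_sound(state: GameState) -> SoundParams:
    """Dynamic mapping: more danger -> faster, more intense music."""
    danger = min(1.0, state.enemies_nearby / 5.0) * (1.0 - state.player_health * 0.5)
    return SoundParams(
        tempo_bpm=90.0 + 60.0 * danger,  # 90 bpm when calm, up to 150 bpm
        intensity=danger,
    )

# Each frame (or on each relevant game event), the mapping is re-evaluated:
calm = map_state_to_sound(GameState(enemies_nearby=0, player_health=1.0))
combat = map_state_to_sound(GameState(enemies_nearby=5, player_health=0.4))
```

In real middleware this mapping is authored by the sound designer rather than hard-coded, but the principle is the same: sound parameters are computed from game states identified in advance as aurally significant.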