
Master's thesis

Full text in PDF format.

I-Sounds - Emotion-based Music Composition for Virtual Environments


From concert halls to our pockets, music has a ubiquitous presence in our lives. Due to its observable, yet not fully understood, expressive abilities, it is a very effective communication tool, explored in computer games, films, and virtual and interactive systems. In spite of the growing demand for flexibility, most applications today use pre-composed soundtracks, which are aesthetically refined but less flexible.

A new trend takes this paradigm a step further, developing automatic composition systems able to deliver real-time, contextualised music while the user is presented with an ever-changing interactive experience. This work is above all an exploratory proof of concept and a step towards that objective, proposing a computational architecture and implementation for an emotion-based composition system. The goal is to provide a development framework for composition algorithms, as well as a run-time environment for integration with affective systems. In parallel, this work proposes a composition algorithm able to express happiness, sadness, anger and fear using the properties of rhythm and the diatonic modes.
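The idea of expressing emotions through rhythm and diatonic modes could, purely for illustration, be sketched as a lookup from a target emotion to musical parameters. The mode and tempo choices below are hypothetical examples of this kind of mapping, not the actual rules defined in the thesis:

```python
# Hypothetical sketch of an emotion-to-music-parameter mapping, pairing each
# target emotion with a diatonic mode and a tempo range. The specific values
# are illustrative assumptions, not the thesis's algorithm.

EMOTION_PARAMS = {
    # emotion: (diatonic mode, tempo range in BPM)
    "happiness": ("ionian",   (110, 140)),  # major mode, lively tempo
    "sadness":   ("aeolian",  (50, 70)),    # natural minor, slow tempo
    "anger":     ("phrygian", (130, 170)),  # darker mode, fast tempo
    "fear":      ("locrian",  (60, 100)),   # unstable mode, irregular feel
}

def parameters_for(emotion: str) -> tuple[str, tuple[int, int]]:
    """Return the (mode, tempo range) pair for a target emotion."""
    if emotion not in EMOTION_PARAMS:
        raise ValueError(f"unsupported emotion: {emotion}")
    return EMOTION_PARAMS[emotion]

mode, tempo = parameters_for("sadness")
print(mode, tempo)  # aeolian (50, 70)
```

A table-driven design like this keeps the emotional vocabulary easy to extend: adding a new emotion only requires a new entry, while the composition logic that consumes the mode and tempo stays unchanged.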

Together, both systems are expected to enlarge the affective bandwidth of an Interactive-Drama application in which children build up stories with the collaboration and participation of computer-controlled characters. While the collected empirical data does in fact support the hypothesis, it also uncovers some aspects requiring further refinement, such as the illustration of anger and fear. The experiments have also validated the I-Sounds framework as a suitable development and integration tool for composition algorithms.
