Preprint, Working Paper, Year: 2022

An adaptive music generation architecture for games based on the deep learning Transformer model

Abstract

This paper presents an architecture for generating music for video games based on the Transformer deep learning model. The system generates music in several layers, following the standard layering strategy currently used by composers of video game music. The music adapts to the psychological context of the player, according to the arousal-valence model. Our motivation is to customize the music to the player's tastes: the player can select a preferred style of music through a set of training examples. We discuss current limitations and prospects for the future, such as collaborative and interactive control of the musical components.
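
To make the layering and arousal-valence ideas more concrete, the sketch below shows one possible way a game-side mixer could weight independently generated music layers from the player's current arousal-valence state. It is only an illustration under assumed conventions (per-layer intensity/mood annotations, gains in [0, 1]); the names MusicLayer and layer_gains are hypothetical and are not taken from the paper or its code.

# Illustrative sketch only: weighting generated music layers by the
# current arousal-valence state. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class MusicLayer:
    name: str
    intensity: float  # how energetic the layer sounds, in [0, 1]
    mood: float       # how positive the layer sounds, in [-1, 1]

def layer_gains(layers, arousal, valence):
    """Return a gain in [0, 1] per layer, favoring layers whose
    intensity/mood are close to the current arousal/valence state.
    Assumes arousal in [0, 1] and valence in [-1, 1]."""
    gains = {}
    for layer in layers:
        # Simple distance in the arousal-valence plane (valence rescaled to [0, 1] width)
        distance = abs(layer.intensity - arousal) + abs(layer.mood - valence) / 2.0
        gains[layer.name] = max(0.0, 1.0 - distance)
    return gains

if __name__ == "__main__":
    layers = [
        MusicLayer("percussion", intensity=0.9, mood=0.0),
        MusicLayer("melody", intensity=0.5, mood=0.8),
        MusicLayer("pad", intensity=0.2, mood=-0.3),
    ]
    # A tense but positive game moment: high arousal, positive valence
    print(layer_gains(layers, arousal=0.8, valence=0.6))

In an adaptive soundtrack, such gains would typically be smoothed over time and applied as mix levels to the layers, so that changes in the player's state produce gradual rather than abrupt musical transitions.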
Main file
2207.01698.pdf (566.84 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03715919 , version 1 (06-07-2022)

Identifiers

  • HAL Id : hal-03715919 , version 1

Cite

Gustavo Amaral Costa, Augusto Baffa, Jean-Pierre Briot, Bruno Feijo, Antonio Luz Furtado. An adaptive music generation architecture for games based on the deep learning Transformer model. 2022. ⟨hal-03715919⟩