Call for Abstracts, please forward:
Versatile Sound Models for Interaction in
Audio-graphic Virtual Environments:
Control of Audio-graphic Sound Synthesis
Workshop @ Conference on Digital Audio Effects DAFx-11
Friday September 23, 2011 at Ircam, Paris
The use of 3D interactive virtual environments is becoming more
widespread in areas such as games, architecture, urbanism, information
visualization and sonification, interactive artistic digital media,
serious games, and gamification. The limitations of sound generation in
existing environments are becoming increasingly obvious in the face of
current requirements.
This workshop will look at recent advances and future prospects in
sound modeling, representation, transformation and synthesis for
interactive audio-graphic scene design.
Several approaches to extending sound generation in 3D virtual
environments have been developed in recent years, such as sampling,
modal synthesis, additive synthesis, corpus-based synthesis, granular
synthesis, description-based synthesis, physical modeling, etc. These
techniques can be quite different in their methods and results, but
they may also prove complementary towards the common goal of versatile
and understandable virtual scenes, covering a wide range of object
types and of interactions both between objects and with them.
The purpose of this workshop is to sum up these different approaches,
to present current work in the field, and to discuss their
differences, commonalities, and complementarities.
Authors of accepted abstracts will be invited to submit an extended
version to a special issue of the Springer Journal on Multimodal User
Interfaces (JMUI) or the SpringerOpen EURASIP Journal on Audio,
Speech, and Music Processing.
Detailed information about the workshop can be found here:
http://www.topophonie.fr/event/3
http://dafx11.ircam.fr/?page_id=224
The workshop is free for attendees of the DAFx conference and, by
invitation, for non-DAFx attendees. Registration for the DAFx
conference can be found here:
http://dafx11.ircam.fr
Call for Abstracts
------------------
Abstracts (max. 1 A4/Letter page, PDF format)
on the topics of the workshop should be sent by July 17, 2011,
to Diemo Schwarz (schwarz(a)ircam.fr).
Submissions will be reviewed by the program committee, and accepted
contributions will be presented at the workshop.
Authors will be notified by the end of July 2011 at the latest.
Important Dates
---------------
* July 17, 2011: Abstract Submission Deadline
* July 31, 2011: Notification of Acceptance
* September 23, 2011: Workshop
Program Chairs
--------------
Roland Cahen, ENSCI-les Ateliers
Diemo Schwarz, IRCAM
Christian Jacquemin, LIMSI-CNRS & University Paris Sud 11
Hui Ding, LIMSI-CNRS & University Paris Sud 11
Program committee
-----------------
Nicolas Tsingos (Dolby Laboratories)
Lonce Wyse (National University of Singapore)
Andrea Valle (University of Torino)
Hendrik Purwins (Universitat Pompeu Fabra)
Thomas Grill (Institut für Elektronische Musik IEM, Graz)
Charles Verron (McGill University, Montreal)
Topics in detail
----------------
What alternatives to traditional sample triggering exist to produce
comprehensive, flexible, expressive, and realistic sounds in virtual
environments? How can rich interaction with scene objects be achieved,
for instance with physically informed models for contact and friction
sounds? How can audio-graphic scenes be edited and structured beyond
mapping one event to one sound? There is no standardized architecture,
representation, and language for auditory scenes and objects, as
OpenGL is for graphics. The workshop will treat higher-level questions
of the architecture and modeling of interactive audio-graphic scenes,
down to the detailed questions of sound modeling, representation,
transformation, and synthesis. These questions cannot
be detached from implementation issues: novel and hybrid synthesis
methods, comparison and improvement of existing platforms, software
architecture, plug-in systems, standards, formats, etc.
New possibilities regarding the use of audio descriptors and dynamic
access to audio databases will also be discussed.
Beyond these main questions, the workshop will cover other recent
advances in audio-graphic scene modeling such as:
* audio-graphic object rendering, and physically and geometrically driven
  sound rendering,
* interactive sound texture synthesis, based on signal models or
  physically informed approaches,
* joint representation of sound and graphic spaces and objects,
* sound rendering for audio-graphic scenes:
  * level of detail, a very advanced concept in graphics but rarely
    treated in audio,
  * representation of space and distance,
  * masking and occlusion of sources,
  * clustering of sources,
* audio-graphic interface design,
* sound and graphic localization,
* cross- and bi-modal perceptual evaluations,
* interactive audio-graphic arts,
* industrial audio-graphic data:
  * architectural acoustics,
  * sound maps,
  * urban soundscapes...
* platforms and tools for audio-graphic scene modeling and rendering.
These areas are interdisciplinary in nature and interrelated. New
advancements in each area will benefit the others. This workshop will
allow participants to exchange the latest developments and to point
out current challenges and new directions.
--
Diemo Schwarz, PhD --
http://diemo.concatenative.net
Real-Time Music Interaction Team --
http://imtr.ircam.fr
IRCAM - Centre Pompidou -- 1, place Igor-Stravinsky, 75004 Paris, France
Phone +33-1-4478-4879 -- Fax +33-1-4478-1540