*Apologies for cross-postings*
# Fifth Annual Web Audio Conference - 2nd Call for Submissions
https://www.ntnu.edu/wac2019
The fifth Web Audio Conference (WAC) will be held 4-6 December, 2019 at the
Norwegian University of Science and Technology (NTNU) in Trondheim, Norway.
WAC is an international conference dedicated to web audio technologies and
applications. The conference addresses academic research, artistic
research, development, design, evaluation and standards concerned with
emerging audio-related web technologies such as the Web Audio API, WebRTC,
WebSockets and JavaScript. The conference welcomes web developers, music
technologists, computer musicians, application designers, industry
engineers, R&D scientists, academic researchers, artists, students and
people interested in the fields of web development, music technology,
computer music, audio applications and web standards. The previous Web
Audio Conferences were held in 2015 at IRCAM and Mozilla in Paris, in 2016
at Georgia Tech in Atlanta, in 2017 at the Centre for Digital Music, Queen
Mary University of London in London, and in 2018 at TU Berlin in Berlin.
The internet has become much more than a simple storage and delivery
network for audio files, as modern web browsers on desktop and mobile
devices bring new user experiences and interaction opportunities. New and
emerging web technologies and standards now allow applications to create
and manipulate sound in real-time at near-native speeds, enabling the
creation of a new generation of web-based applications that mimic the
capabilities of desktop software while leveraging unique opportunities
afforded by the web in areas such as social collaboration, user experience,
cloud computing, and portability. The Web Audio Conference focuses on
innovative work by artists, researchers, students, and engineers in
industry and academia, highlighting new standards, tools, APIs, and
practices as well as innovative web audio applications for musical
performance, education, research, collaboration, and production, with an
emphasis on bringing more diversity into audio.
## Keynote Speakers
We are pleased to announce our two keynote speakers: Rebekah Wilson
(independent researcher, technologist, composer, co-founder and technology
director for Chicago’s Source Elements) and Norbert Schnell (professor of
Music Design at the Digital Media Faculty at the Furtwangen University).
More info available at: https://www.ntnu.edu/wac2019/keynotes
## Theme and Topics
The theme for the fifth edition of the Web Audio Conference is Diversity in
Web Audio. We particularly encourage submissions focusing on inclusive
computing, cultural computing, postcolonial computing, and collaborative
and participatory interfaces across the web in the context of generation,
production, distribution, consumption and delivery of audio material that
especially promote diversity and inclusion.
Further areas of interest include:
* Web Audio API, Web MIDI, WebRTC and other existing or emerging web
standards for audio and music.
* Development tools, practices, and strategies of web audio applications.
* Innovative audio-based web applications.
* Web-based music composition, production, delivery, and experience.
* Client-side audio engines and audio processing/rendering (real-time or
non real-time).
* Cloud/HPC for music production and live performances.
* Audio data and metadata formats and network delivery.
* Server-side audio processing and client access.
* Frameworks for audio synthesis, processing, and transformation.
* Web-based audio visualization and/or sonification.
* Multimedia integration.
* Web-based live coding and collaborative environments for audio and music
generation.
* Web standards and use of standards within audio-based web projects.
* Hardware and tangible interfaces and human-computer interaction in web
applications.
* Codecs and standards for remote audio transmission.
* Any other innovative work related to web audio that does not fall into
the above categories.
## Submission Tracks
We welcome submissions in the following tracks: papers, talks, posters,
demos, performances, and artworks. All submissions will be single-blind
peer reviewed. The conference proceedings, which will include both papers
(for papers and posters) and extended abstracts (for talks, demos,
performances, and artworks), will be published open-access online with
Creative Commons attribution, and with an ISSN number. A selection of the
best papers, as determined by a specialized jury, will be offered the
opportunity to publish an extended version in the Journal of the Audio
Engineering Society.
**Papers**: Submit a 4-6 page paper to be given as an oral presentation.
**Talks**: Submit a 1-2 page extended abstract to be given as an oral
presentation.
**Posters**: Submit a 2-4 page paper to be presented at a poster session.
**Demos**: Submit a work to be presented at a hands-on demo session. Demo
submissions should consist of a 1-2 page extended abstract including
diagrams or images, and a complete list of technical requirements
(including anything expected to be provided by the conference organizers).
**Performances**: Submit a performance making creative use of web-based
audio applications. Performances can include elements such as audience
device participation and collaboration, web-based interfaces, Web MIDI,
WebSockets, and/or other imaginative approaches to web technology.
Submissions must include a title, a 1-2 page description of the
performance, links to audio/video/image documentation of the work, a
complete list of technical requirements (including anything expected to be
provided by conference organizers), and names and one-paragraph biographies
of all performers.
**Artworks**: Submit a sonic web artwork or interactive application which
makes significant use of web audio standards such as Web Audio API or Web
MIDI in conjunction with other technologies such as HTML5 graphics, WebGL,
and Virtual Reality frameworks. Works must be suitable for presentation on
a computer kiosk with headphones. They will be featured at the conference
venue throughout the conference and on the conference web site. Submissions
must include a title, 1-2 page description of the work, a link to access
the work, and names and one-paragraph biographies of the authors.
**Tutorials**: If you are interested in running a tutorial session at the
conference, please contact the organizers directly.
## Important Dates
March 26, 2019: Open call for submissions starts.
June 16, 2019: Submissions deadline.
September 2, 2019: Notification of acceptances and rejections.
September 15, 2019: Early-bird registration deadline.
October 6, 2019: Camera ready submission and presenter registration
deadline.
December 4-6, 2019: The conference.
At least one author of each accepted submission must register for and
attend the conference in order to present their work. A limited number of
diversity tickets will be available.
## Templates and Submission System
Templates and information about the submission system are available on the
official conference website: https://www.ntnu.edu/wac2019
Best wishes,
The WAC 2019 Committee
While experimenting with window functions for spectral analysis, I
compared Hann, sine and Lanczos. It is easy to notice that Hann is
really the same as sin(x)^2. Lanczos is a tiny bit better, because its
ends are slightly smoother than those of sin(). It seems that the
unsmoothed corners where sin() meets the zero axis are the reason its
sidelobes are so high compared to Hann. Hamming and, even more so,
Gaussian have ideally smooth ends, but are narrower in the middle
(probably one reason why their central lobe is wider).
Just as an experiment I tried changing sin(x)^2 to sin(x)^f, where
1.0 < f < 2.0. It looks like any f > 1 makes the derivative zero at
the zero axis (d/dx sin(x)^f = f*sin(x)^(f-1)*cos(x), which vanishes
wherever sin(x) does, as long as f > 1). The only thing the exact
value in this range affects is how fast the window approaches zero.
While this is easy to see against the Hann example, a factor around
1.1 or 1.2 is hard to notice without very deep zoom. With f = 1.25 or
1.26 it nearly reproduces Lanczos, though a difference can be seen if
they are plotted together.
Though I don't yet have a precise enough integral for the gain
correction, I noticed that the sidelobes fall off slightly faster than
for Hann.
Now I'm curious: is such a function already in use? I don't know what
to call it in a search request. E.g., after reinventing the Welch
window by just multiplying y = 2x with y = 2 - 2x, I at least knew it
was a parabola. For sin(x)/x I know it is sinc. But what is sin(x)^y,
at least for some y between 1 and 2?
I feel this is also something reinvented, just like when I wrote
sin(x)^2 and later discovered it was Hann. Need help.
One of my professors, who is still up to date on signal processing,
advised me to read this book (I found it translated into Russian):
https://www.scirp.org/(S(351jmbntvnsjt1aadkposzje))/reference/ReferencesPap…
but I still have to find time to study it (besides deepening my math
knowledge).
Hello.
I'm attempting to write a serious application, considering the
language (C), the toolkit (GTK3, Cairo, Pango, etc.) and that it is an
audio app. Although most audio things use a simple "float" for audio
data, I noticed some posts leaning towards integer math. While that is
not my case (my app is JACK-based, and audio data is floating point
everywhere), I got slightly sidetracked from the audio side.
My app is for now a spectrum analyzer, which should later evolve into
a couple of spectral-editing helper utilities (more exactly, to
generate a spectrogram and apply changes either by resynthesis or by
filtering with a difference spectrogram).
I got sidetracked into experiments with graphics post-processing for
instant spectrum-view rendering. Due to Cairo's nature, color data in
a Cairo image surface can at best be ARGB32, RGB24 or RGB30 (for now I
implemented support only for the first two). As a result, the
post-processing is all done in integer arithmetic. I can understand
this, as I noticed that the channel order in memory depends on
endianness, which suggests Cairo relies on masks and bit shifts rather
than byte arrays where possible.
I decided to look in a debugger at how GCC optimizes these
operations, and noticed the following. Below is a code chunk from kdbg
with some asm blocks expanded. There is only one line where I tried to
use floating-point ops, just for comparison (plus some context for
better understanding).
for (int l = 0; l < 3; l++)
for (int c = 0; c < 3; c++)
{
unsigned char
* p = bpix[l][c];
col[0] += p[0] ,
0x5b27 movzbl 0x60(%rsp),%esi
col[1] += p[1] ,
0x5a34 movzbl 0x65(%rsp),%eax
col[2] += p[2] ,
0x5a42 movzbl 0x62(%rsp),%edx
col[3] += p[3];
0x5a47 movzbl 0x67(%rsp),%esi
0x5a4c mov 0x10(%rsp),%rbp
}
col[0] /= 9.0, col[1] /= 9.0, col[2] /= 9.0, col[3] /= 9.0;
0x5a39 pxor %xmm0,%xmm0
op[0] += col[0] ,
0x5b8d add %sil,0x0(%rbp)
op[1] += col[1] ,
0x5b94 add %cl,0x1(%rbp)
op[2] += col[2] ,
0x5b97 add %dl,0x2(%rbp)
op[3] += col[3];
0x5b91 add %al,0x3(%rbp)
for (int l = 0; l < 3; l++)
ip[l] += 3;
op += ch_n;
}
Notice how the line containing the chain of divisions is compiled to a
single SSE operation. What is interesting, this is the only asm line
involving an xmm register - I can't find anything else, not even a
mov, touching those registers.
With division by the integer 9 it is still one line, but no xmm
register is involved:
col[0] += p[0] ,
0x5b17 movzbl 0x64(%rsp),%eax
col[1] += p[1] ,
0x5a35 movzbl 0x65(%rsp),%eax
col[2] += p[2] ,
0x5a4a movzbl 0x7e(%rsp),%esi
col[3] += p[3];
0x5a4f movzbl 0x7f(%rsp),%ecx
}
col[0] /= 9, col[1] /= 9, col[2] /= 9, col[3] /= 9;
0x5a3a mov $0x38e38e39,%r8d
op[0] += col[0] ,
0x5b78 add %dl,0x0(%rbp)
op[1] += col[1] ,
0x5b3a add %dil,0x1(%rbp)
As for GCC options, I used the unusual combo "-march=native -O3 -g" in
order to get exhaustive optimization while still keeping the result
readable in kdbg's disassembly view.
While searching for info about SSE integer support, I was pointed to
the Wikipedia page on x86 instructions, where I found an almost
sufficient instruction set: logical ops, add, sub and mul, signed and
unsigned, on words and bytes (at least in SSE2 which, according to the
description, allowed the MMX instruction set to use the SSE registers;
integer support was the primary purpose of MMX).
So I'm in a maze: why doesn't GCC use such ops in the integer case?
Could it be that what it did is actually better than SSE?
I'm about to eventually try more FP ops in this place, but I'm unsure
about the possible conversions, since source and destination are Cairo
surfaces with integer data anyway.
On 4/14/19 5:42 PM, Tim wrote:
> Hi list.
> When I first boot each day, this is what I get,
> no hi-res timer:
>
> cat /proc/asound/timers
> G0: system timer : 4000.000us (10000000 ticks)
> P0-0-0: PCM playback 0-0-0 : SLAVE
> P0-0-1: PCM capture 0-0-1 : SLAVE
> P2-0-1: PCM capture 2-0-1 : SLAVE
>
>
> But after I start Jack, I get this:
>
> cat /proc/asound/timers
> G0: system timer : 4000.000us (10000000 ticks)
> G3: HR timer : 0.001us (1000000000 ticks)
> Client sequencer queue -1 : running <<< Jack I believe
> P0-0-0: PCM playback 0-0-0 : SLAVE
> P0-0-1: PCM capture 0-0-1 : SLAVE
> P2-0-1: PCM capture 2-0-1 : SLAVE
>
>
> But curiously, after I *quit* Jack and Jack dbus
> and ensure they are not running, I still get this:
>
> cat /proc/asound/timers
> G0: system timer : 4000.000us (10000000 ticks)
> G3: HR timer : 0.001us (1000000000 ticks)
> P0-0-0: PCM playback 0-0-0 : SLAVE
> P0-0-1: PCM capture 0-0-1 : SLAVE
> P2-0-1: PCM capture 2-0-1 : SLAVE
>
> Notice the hi-res timer is now still available.
> What's happening?
> I can only see that Jack uses pcm and seq but no timers.
> Seems by virtue of Jack using pcm/seq, ALSA loads a module
> or something.
Ah, that would be snd_hrtimer I suppose.
Module Size Used by
snd_hrtimer 16384 1 <<< Jack
It's been a while since I had to force a module to load or
think about these things. Let's see how to do it these days...
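One way these days, at least on systemd-based distros, seems to be a modules-load.d fragment so the module gets loaded at every boot (the filename is my choice):

```
# /etc/modules-load.d/snd-hrtimer.conf
snd-hrtimer
```

An application package could presumably ship such a file, or fall back to running "modprobe snd-hrtimer" with root privileges at install time, so users never have to load the module by hand.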
And how to do it through our app.
Could it be considered a bug that it is not available?
Sorry for the noise.
Tim.
>
> How can I ensure that the hi-res timer is available
> always from boot up? Must I manually load a module?
> Most *importantly*, can I do this through our application
> software so that users do not have to load a module?
>
> Thanks.
> Tim.
On Thu, Apr 11, 2019 at 12:17:58PM +0200, linux(a)justmail.de wrote:
> On Thu, 2019-04-11 at 08:37 +0100, Will J Godfrey wrote:
> > On Thu, 11 Apr 2019 08:16:29 +0000
> > John Rigg <ladev9(a)jrigg.co.uk> wrote:
> > > A Korg GA-1 tuner can go down to 5 semitones flat. It's quite common
> > > in the heavier styles of rock music to downtune a few semitones.
> > Interesting. Thanks for that.
>
> Assuming the guitar tuner is a chromatic tuner, dropped and lowered
> guitar tunings don't require anything other than A = 440 Hz and, if
> you dislike 440 Hz, a range from plus half a semitone (+50 cents) to
> minus half a semitone (-50 cents).
>
> https://en.wikipedia.org/wiki/List_of_guitar_tunings#Dropped
> https://en.wikipedia.org/wiki/List_of_guitar_tunings#Lowered
That's all very well, but tuning quickly on stage at a live gig is a
lot easier if your tuner goes down to the right pitch with minimal
fuss. (Speaking from long experience as a gigging guitarist and
bassist.)
The GA-1 tuner I mentioned isn't a true chromatic tuner, but
its ability to shift the standard guitar tunings down several
semitones is very useful. In modern metal genres C or B tunings
are probably more common than the standard EADGBE, so this isn't
just an edge case.
John
Currently in 'Scales' Yoshimi can set this anywhere between 1Hz and 2kHz, which
is frankly ridiculous.
This doesn't appear at all in the Scala documentation, so that's no guide.
I've had suggestions ranging from +-1/2 semitone to +-half an octave
as being more than enough, considering that there is also a semitone
master key shift covering +-3 octaves (it used to be 5!) along with a
fine detune of +63/-64 cents.
What have other synth people here set for this? Does anyone else actually have
the setting?
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.