Hi Michel,
Happy New Year to everyone!
Forwarded to linux-audio-dev and linux-audio-user
Stéphane
> On 4 Jan 2022, at 18:41, michel buffa <micbuffa(a)gmail.com> wrote:
>
> Hello and Happy New Year! I have sent you the CFP for WAC 2022; could you relay it to the audio mailing lists you follow?
>
> Thanks in advance!
>
> Michel
> Web Audio Conference 2022, Cannes (France) – Call for Submissions
>
> The 7th Web Audio Conference (WAC 2022) will be held in Cannes on July 6-8, 2022, at Université Côte d'Azur.
>
> The WAC is an international conference dedicated to web-based audio technologies and applications. The conference addresses research, development, design, artworks, and standards concerned with emerging audio-related web technologies.
>
> The conference includes two days of talks, poster presentations, demos, installations, and performances, and a third day of workshops and tutorials. Attendees are invited to a banquet on the evening of the second day.
>
> Please refer to the WAC 2022 website at https://wac2022.i3s.univ-cotedazur.fr/ for up-to-date information about the conference.
>
> For any questions and requests, contact us at wac2022(a)i3s.unice.fr or via the Twitter account @webaudioconf (https://twitter.com/webaudioconf).
>
> Theme and Topics
>
> Emerging web standards such as Web Audio and MIDI, WebRTC, Media Capture and Streams, Media Source Extensions, Timing Object and many others open a multidisciplinary field of innovation that connects state-of-the-art audio techniques with the unique opportunities afforded by the web in areas such as social collaboration, user experience, cloud computing, and portability.
>
> The Web Audio Conference focuses on innovative work by artists, researchers, and engineers in industry and academia, highlighting new standards, tools, APIs, and practices as well as innovative design and applications.
>
> The theme for this edition is "innovative audio narrative".
>
> The scope of the conference encompasses, but is not restricted to, the following areas:
>
> Web Audio API and other existing or emerging web standards for audio and music
>
> Tools, practices, and strategies for web-based audio application development
>
> Innovative web-based audio and music applications
>
> Web-based audio production, delivery, and experience
>
> Audio processing and rendering techniques
>
> Frameworks for audio synthesis, processing, and transformation
>
> Audio data and metadata formats and network delivery
>
> Cloud/HPC for audio production
>
> Audio visualization and/or sonification
>
> Multimedia integration
>
> Web-based live coding and collaborative environments for audio and music generation
>
> Web standards and use of standards within web-based audio projects
>
> Hardware and tangible interfaces in web applications
>
> Codecs and standards for remote audio transmission
>
> Any other innovative work related to web audio that does not fall into the above categories.
> Keynote Speakers
>
> WAC 2022 features two keynotes: Mark Sandler (Queen Mary University of London) and Jari Kleimola (freelance audio developer).
>
> Submission Types
>
> We welcome submissions in the following tracks: papers, talks, demos, and artistic works. All submissions will be single-blind peer reviewed. The conference proceedings will be published open-access online and will include papers, abstracts of talks and demos, and program notes of artistic works. When submitting a paper or talk, you should also consider submitting a demo of your work (this requires a separate submission in the demo track).
>
> Papers (Plenary or Poster): Submit a paper to be given as a plenary presentation (max. 6 pages) or a poster (max. 4 pages). Paper submissions have to use the provided templates.
>
> Talks: Submit a talk to be given in a plenary session. Talk submissions consist of an abstract and a description including an outline of the talk and a detailed overview of the presented work or idea (max. 2 pages), together with links to additional documentation.
>
> Demos: Submit a work to be presented at a hands-on demo session. Demo submissions consist of an abstract and a detailed description of the presented work and setup (max. 2 pages), links to additional documentation, and a complete list of technical requirements.
>
> Artistic Works (Performance or Artworks): Submit a performance or artwork/installation making creative use of web audio standards. Works can include elements such as audience device participation and collaboration, web-based interfaces, and/or other imaginative approaches to web technology. Apart from an abstract, submissions to this track consist of a description of the work (max. 2 pages), links to audio/video/image documentation, a complete list of technical requirements, short program notes (max. 5000 characters), and one-paragraph biographies of the authors (max. 1000 characters per author).
>
> Workshops and Tutorials
>
> The third day of the conference (July 8) is dedicated to workshops and tutorials. It’s the perfect time to dive deeper into topics from the conference with a more hands-on approach. If you are interested in running a tutorial session or a workshop at the conference, please contact the organizers directly at wac2022workshops(a)i3s.unice.fr with a short description of your tutorial or workshop.
>
> Free Attendance and Assistance for Contributors
>
> The WAC is a community-run conference with a limited budget. Nevertheless, we would like to make sure that at least one author of each submission selected in the peer-review process can attend the conference. For submissions that are not affiliated with an institution or corporation, authors can request a waiver of the conference fee (the request is limited to one author per submission and is hidden from reviewers).
>
> If you are from an underrepresented group and need further financial assistance to present your work at the conference, you can apply for financial aid at wac2022chairs(a)i3s.unice.fr.
>
> Companies and institutions covering travel expenses and tickets for presenters without being involved in the presented work will be included on the sponsors page of the conference website.
>
> Important Dates
>
> January 1, 2022: Open call for submissions starts.
> March 15, 2022: Submission deadline for Papers (research track); updates possible until March 23, 2022.
> April 15, 2022: Submission deadline for Talks, Posters, Demos, Performances, and Artworks; updates possible until April 23, 2022.
> May 17, 2022: Notification of acceptance or rejection for Papers (research track).
> May 17, 2022: Notification of acceptance or rejection for Talks, Posters, Demos, Performances, and Artworks.
> June 10, 2022: Early-bird registration deadline.
> June 10, 2022: Camera ready submission and presenter registration deadline.
> July 6-8, 2022: The conference.
> Templates and Submission System
>
> Submissions for WAC 2022 are handled through the conference management system (EasyChair) at https://easychair.org/conferences/?conf=wac2022. If you need assistance with your submission, please contact us at wac2022program(a)i3s.unice.fr.
>
> All submissions of papers and abstracts to be published in the conference proceedings have to use the WAC 2022 templates for Word or LaTeX, available at https://mega.nz/file/ClJ0ERRK#x5bLrbNVPE1A5mwYswSTK6tZk2L63S9EYKgg13y2iQY.
>
> Code of Conduct
>
> By submitting your work to the WAC, you agree to abide by our code of conduct, available on the conference website (https://wac2022.i3s.univ-cotedazur.fr/). Violations of the code of conduct will result in exclusion from the conference.
>
> Best wishes,
> The WAC 2022 Committee
>
Hey hey,
is there any way of finding out whether a big SysEx message is incoming,
before the normal callback is invoked or, I suppose, one of the buffers is
full? Might it be viable to reduce the buffer size (version 5.0.0) and
increase the number of buffers? The manual for this function mentions that
this will not change anything on most APIs, since they handle buffers
internally.
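
To make the question concrete, here is roughly the kind of setup I mean. This
is only a minimal, untested sketch, and it assumes the library in question is
RtMidi 5.0.0 with its usual C++ callback API:

#include <cstdio>
#include <vector>
#include "RtMidi.h" // header location may differ, e.g. <rtmidi/RtMidi.h>

// Called once per message handed over by RtMidi; the size of a SysEx block
// is only known here, i.e. once the callback has already fired.
static void midiCallback(double /*deltatime*/,
                         std::vector<unsigned char> *message,
                         void * /*userData*/)
{
    if (!message->empty() && message->front() == 0xF0)
        std::printf("SysEx, %zu bytes\n", message->size());
}

int main()
{
    RtMidiIn midiin;
    if (midiin.getPortCount() == 0)
        return 1;
    midiin.openPort(0);
    midiin.setCallback(&midiCallback, nullptr);
    midiin.ignoreTypes(false, true, true); // do NOT filter out SysEx
    std::getchar();                        // wait; messages arrive via the callback
    return 0;
}

The buffer size/count function I referred to is left out of the sketch.
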
Many thanks for any pointers!
Best wishes,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Top down, on the strip
Lookin' in the mirror
I'm checkin' out my lipstick <3
(Britney Spears)
Thanks Roman. For the actual pitch quantisation I used a similar approach in
the end. For the MIDI case, though, I stuck with the temporary 128-element
array, which is simply calculated from the same set of 12-element arrays.
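
In case it is useful to anyone, here is a rough, untested sketch of that idea
(the names are only illustrative, not the actual code from my sequencer):
notes in the scale map to themselves, everything else maps up to the next
scale note.

#include <cstdint>

// Semitone offsets from the root that belong to the scale (here: natural minor).
static const bool natural_minor[12] = { true, false, true, true, false, true,
                                        false, true, true, false, true, false };

static bool in_scale(const bool scale[12], int root, int note)
{
    return scale[((note - root) % 12 + 12) % 12];
}

// Fill lookup[0..127]: in-scale notes map to themselves, all others map to
// the next higher scale note (falling back downwards at the top of the range).
void build_lookup(const bool scale[12], int root, uint8_t lookup[128])
{
    for (int note = 0; note < 128; ++note) {
        int c = note;
        while (c < 127 && !in_scale(scale, root, c)) ++c;
        while (c > 0 && !in_scale(scale, root, c)) --c;
        lookup[note] = (uint8_t)c;
    }
}

// e.g. uint8_t table[128]; build_lookup(natural_minor, 2 /* D */, table);
// then simply play table[incoming_note].
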
Best wishes,
Jeanette
Jan 1 2022, Roman Sommer has written:
> Hi Jeanette,
>
> If you want to reduce the number of table elements, you could calculate
> the offset from the key's base note, take it mod 12, and have a 12-element
> array for every scale you want to implement (pseudo-C ahead):
>
> bool minor[] = { true, false, true, true, false, true, false, true,
>                  true, false, true, false };
> uint32_t offset = 2; // key: D
> // add an offset instead of subtracting, so the result stays positive
> uint32_t additive_offset = 12 - offset;
> uint32_t index = (note + additive_offset) % 12;
>
> if (minor[index])
>     play_note(note);
>
> It probably won't make much of a difference in performance, but it
> should make it easier to implement all scales in all keys.
> Hope that helps! :)
>
> keep grooving
>
> Roman
>
> "Jeanette C." <julien(a)mail.upb.de> writes:
>
>> Good morning Fons!
>> Dec 31 2021, Fons Adriaensen has written:
>> ...
>>> There is code doing this [quantise frequency to scale] in zita-at1 (the autotuner). It has some
>>> refinements such as an optional preference for the previous note.
>>> I will look this up and isolate it - it may be difficult to find
>>> as it is integrated with other functionality.
>> Many thanks, this is great! You are, as always, very kind.
>>
>> Best wishes,
>>
>> Jeanette
>>>
>>> Ciao,
>>>
>>> --
>>> FA
>>>
>>>
>>>
>>
>> --
>> * Website: http://juliencoder.de - for summer is a state of sound
>> * Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
>> * Audiobombs: https://www.audiobombs.com/users/jeanette_c
>> * GitHub: https://github.com/jeanette-c
>>
>> I thought love was just a tingling of the skin <3
>> (Britney Spears)
>> _______________________________________________
>> Linux-audio-dev mailing list
>> Linux-audio-dev(a)lists.linuxaudio.org
>> https://lists.linuxaudio.org/listinfo/linux-audio-dev
>
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Our imagination
Taking us to places
We have never been before... <3
(Britney Spears)
On Fri, Dec 31, 2021 at 12:58:31AM +0100, Jeanette C. wrote:
> OK, the project I'm working on is a monophonic step sequencer. You will
> find similar functionality in some master control keyboards, softsynths
> and other DAWs. It's mostly for convenience's sake or to help people
> with less knowledge
In that case, one option would be to just disable the unwanted notes.
This provides immediate feedback, so people will actually learn the
set of allowed notes, and I guess that will happen quite fast.
If instead you replace the unwanted ones, the user will learn
either nothing, or the wrong things, e.g. that C# is a valid note
in a C-major scale.
> Initial thoughts on the MIDI note case included creating a 128-element
> array, filling it with notes only in the scale, and then using it as a
> lookup table. So element 60 (middle C) would map to 60, whereas element
> 61 (C#) might map to 62 (D). Such a table could relatively easily be
> defined from some kind of scale definition and root note number. The
> process did seem inelegant, though.
It isn't. I don't think you could find a general-purpose algorithmic
approach taking less than 128 bytes to code it.
> I think I once wrote a quantiser that did quantise any frequency to the
> nearest note in the western chromatic scale, which wasn't too difficult,
> but I can't see a way to perform the same feat with any kind of diatonic
> scale, even though finding the relevant frequencies in that scale is
> almost as easy as setting up the MIDI scales above.
There is code doing this in zita-at1 (the autotuner). It has some
refinements such as an optional preference for the previous note.
I will look this up and isolate it - it may be difficult to find
as it is integrated with other functionality.
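
Until I find it, the basic idea (without those refinements, and certainly
not the zita-at1 code itself) can be sketched like this, assuming A4 = 440 Hz:

#include <cmath>

// Scale membership per semitone offset from the root (here: C major).
static const bool major[12] = { true, false, true, false, true, true,
                                false, true, false, true, false, true };

// Snap an arbitrary frequency to the nearest pitch of the given scale.
double quantise_freq(double freq, const bool scale[12], int root)
{
    // Fractional MIDI note number of the input frequency.
    double note = 69.0 + 12.0 * std::log2(freq / 440.0);

    // Search the notes around it for the nearest one that is in the scale.
    int centre = (int)std::lround(note);
    int best = centre;
    double best_dist = 1e9;
    for (int m = centre - 11; m <= centre + 11; ++m) {
        if (!scale[((m - root) % 12 + 12) % 12])
            continue;
        double d = std::fabs((double)m - note);
        if (d < best_dist) { best_dist = d; best = m; }
    }
    return 440.0 * std::pow(2.0, (best - 69.0) / 12.0);
}

A preference for the previous note could then be added by biasing best_dist
in its favour.
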
Ciao,
--
FA