[Sorry for cross-posting, please distribute]
We are happy to announce the next edition of the Linux Audio Conference
(LAC), May 1-4, 2014 @ ZKM | Institute for Music and Acoustics, in
Karlsruhe, Germany.
http://lac.linuxaudio.org/2014/
The Linux Audio Conference is an international conference that brings
together musicians, sound artists, software developers and researchers,
working with Linux as an open, stable, professional platform for audio
and media research and music production. LAC includes paper sessions,
workshops, and a diverse program of electronic music.
*Call for Papers, Workshops, Music and Installations*
We invite submissions of papers addressing all areas of audio processing
and media creation based on Linux. Papers can focus on technical,
artistic and scientific issues and should target developers or users. In
our call for music, we are looking for works that have been produced or
composed entirely/mostly using Linux.
The online submission of papers, workshops, music and installations is
now open at http://lac.linuxaudio.org/2014/participation
The deadline for all submissions is January 27th, 2014 (23:59 HAST).
You are invited to register for participation on our conference website.
There you will find up-to-date instructions, as well as important
information about dates, travel, lodging, and so on.
This year's conference is hosted by the ZKM | Institute for Music and
Acoustics (IMA). The IMA is a forum for international discourse and
exchange and combines artistic work with research and development in the
context of electroacoustic music. By holding concerts, symposia and
festivals on a regular basis it brings together composers, musicians,
musicologists, music software developers and listeners interested in
contemporary music. Artists in Residence and software developers work on
their productions in studios at the institute. Their creations range from
digital sound synthesis, algorithmic composition and live electronics to
radio plays, interactive sound installations and audiovisual productions,
covering a broad span of what digital technology can inspire musically.
The ZKM is proud to host the LAC for the fifth time, having initiated the
conference in 2003.
http://www.zkm.de/musik
We look forward to seeing you in Karlsruhe in May!
Sincerely,
The LAC 2014 Organizing Team
>> In my lv2 plugin, I need to communicate information from the DSP to
>> the UI, but I don't want to break the DSP/UI separation principle (no
>> Instance or Data access). On top of that, I'm using LVTK.
> he, he, yeah it can get a little confusing... maybe this will help.
> // you're sending things in an atom sequence so get the size information
> // from the port buffer
>
> LV2_Atom_Sequence* aseq = (LV2_Atom_Sequence*) p (p_notify);
> m_forge->set_buffer ((uint8_t*) aseq, aseq->atom.size);
>
> m_forge->sequence_head (m_notify_frame, 0);
>
> // sequences need a timestamp for each event added
> m_forge->frame_time (0);
>
> // after forging a frame_time, just write a normal float (no blank object needed)
>
> m_forge->write_float (1604);
> Your ttl file has atom:Float as the buffer type. I've never used
> anything besides atom:Sequence. I imagine this buffer type doesn't need a
> sequence head forged first. Maybe David will jump in on how atom:Float
> bufferType'd ports are supposed to be forged into and out of.
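(A side note on the receiving end, which the thread doesn't show: below is a
minimal sketch of a UI-side port_event handler using the plain LV2 UI C API.
It assumes the usual setup where the host forwards notify-port events to the
UI via atom:eventTransfer (ui:portNotification in the .ttl); the struct name
and URID members are illustrative, and p_notify is assumed to be the same
port index enum the plugin uses.)

#include <lv2/lv2plug.in/ns/ext/atom/atom.h>
#include <lv2/lv2plug.in/ns/extensions/ui/ui.h>

struct ScopeUI {
    // URIDs mapped at instantiate() via the host's urid:map feature
    LV2_URID atom_eventTransfer;   // LV2_ATOM__eventTransfer
    LV2_URID atom_Float;           // LV2_ATOM__Float
    // ... widgets etc.
};

static void port_event(LV2UI_Handle handle, uint32_t port_index,
                       uint32_t buffer_size, uint32_t format,
                       const void* buffer)
{
    ScopeUI* ui = (ScopeUI*)handle;

    // The host delivers each sequence event as a single atom with
    // format == atom:eventTransfer.
    if (port_index != p_notify || format != ui->atom_eventTransfer)
        return;

    const LV2_Atom* atom = (const LV2_Atom*)buffer;
    if (atom->type == ui->atom_Float) {
        float value = ((const LV2_Atom_Float*)atom)->body;
        // ... update the scope display with `value`
        (void)value;
    }
    (void)buffer_size;
}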
OK, so I changed my code the way you proposed (switching the ttl buffer
type to Sequence), but I still can't make it work.
I'm wondering if there isn't something wrong with the way I set up the
Forge in the first place. I'm a bit confused about how to interact
with the map object in LVTK.
Scope::Scope(double rate)
    : Plugin<Scope, URID<true>, Options<true>>(p_n_ports)
{
    m_forge = new AtomForge(p_map);
}

void Scope::run(uint32_t nframes)
{
    // you're sending things in an atom sequence so get the size information
    // from the port buffer
    LV2_Atom_Sequence* aseq = (LV2_Atom_Sequence*) p (p_notify);
    m_forge->set_buffer ((uint8_t*) aseq, aseq->atom.size);

    m_forge->sequence_head (m_notify_frame, 0);

    // sequences need a timestamp for each event added
    m_forge->frame_time (0);
    m_forge->write_float (1604);
}
> I recommend, if you want to use LVTK to do atom forging, that you subclass
> lvtk::AtomForge and add appropriate methods to it...
>
> Here's a snippet that shows how to write patch get/set messages with a
> subclassed AtomForge. It also shows how to write raw midi.
>
> http://pastebin.com/C1LYtXpv -- the code in there uses the nullptr keyword.
> Just change those to "0" if you're not using C++11.
Could you tell me the advantages of doing that?
I had a look at the code, but I still need to understand how to work
with LV2_URID_Map in LVTK (I cannot find any examples using it).
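For reference, here is a minimal sketch of the same forging done with the
plain LV2 C API that LVTK wraps, which also shows where LV2_URID_Map enters
the picture (the host passes it as a feature at instantiation and the forge
is initialised with it once). This assumes a notify output port declared as
an atom:Sequence; the struct, member and port names are illustrative, not
taken from the Scope code above.

#include <cstring>
#include <lv2/lv2plug.in/ns/ext/atom/forge.h>
#include <lv2/lv2plug.in/ns/ext/urid/urid.h>

struct ScopeSketch {
    LV2_Atom_Forge      forge;
    LV2_Atom_Sequence*  notify_port;   // connected in connect_port()

    // instantiate(): find the urid:map feature and initialise the forge once.
    void init(const LV2_Feature* const* features) {
        LV2_URID_Map* map = nullptr;
        for (int i = 0; features[i]; ++i)
            if (!std::strcmp(features[i]->URI, LV2_URID__map))
                map = (LV2_URID_Map*)features[i]->data;
        lv2_atom_forge_init(&forge, map);
    }

    // run(): point the forge at the output buffer, open a sequence,
    // time-stamp the event, write the float, and close the sequence.
    void run(uint32_t nframes) {
        const uint32_t capacity = notify_port->atom.size;
        lv2_atom_forge_set_buffer(&forge, (uint8_t*)notify_port, capacity);

        LV2_Atom_Forge_Frame seq_frame;
        lv2_atom_forge_sequence_head(&forge, &seq_frame, 0);
        lv2_atom_forge_frame_time(&forge, 0);
        lv2_atom_forge_float(&forge, 1604.0f);
        lv2_atom_forge_pop(&forge, &seq_frame);
        (void)nframes;
    }
};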
Hello all Users & Devs of linux-audio-land,
Moving forward from the topic on Aeolus and forking projects, perhaps it is
wise to look at how the community as a whole can grow from this situation:
1) It seems the frustration around forks is mainly due to a lack of communication.
2) Had appropriate communication been in place, patches could have been
merged.
3) If 1) and 2), then the community flourishes as a whole.
In the Aeolus thread on LAD, Michel Dominique wrote (and I feel it's
relevant here):
"That imply we must communicate more with each other"
"I think this is a big problem, and not only related to Fons work, or the
LAD, but to the whole community."
The mailing list you're reading from now is one of the central hubs for the
community:
The -developers list is the perfect place to announce projects, forks,
patches etc.
The -users list is good for asking users and interested parties questions.
I will try to announce more patches / code, to contribute upstream, and
hopefully benefit the community.
Cheers, -Harry
Hi All,
I've written up a blog post on some recent changes to the JAudioLibs'
AudioServer API [1]. This is a Java callback audio API loosely
inspired by PortAudio, and the recommended approach for adding JACK
support to a Java application with JNAJack. The AudioServer API makes
it easy to switch between JACK support and JavaSound support without
requiring code changes.
The recent code additions provide for better runtime service discovery
and optional extension features. For example, it is now possible to
control JACK connections, server autostart and client ID more easily,
and, for the first time, to access the JackClient directly if
necessary. More info is in the blog post.
The source code on GitHub [2] is now up-to-date for testing, though a
new binary download is not yet available.
Other changes include fixes to JNAJack to build against JNA 3.5+
(binary downloads already work), and minor improvements to the
JavaSound server performance, particularly on Linux (ALSA /
PulseAudio).
Comments and feedback welcomed.
Thanks and best wishes,
Neil
[1] http://praxisintermedia.wordpress.com/2013/11/06/jaudiolibs-audioservers-a-…
[2] https://github.com/jaudiolibs/
--
Neil C Smith
Artist : Technologist : Adviser
http://neilcsmith.net
Praxis LIVE - open-source intermedia development - www.praxislive.org
Digital Prisoners - interactive spaces and projections -
www.digitalprisoners.co.uk
OpenEye - the web, managed - www.openeye.info
Greetings,
Noticed on the Audacity list:
https://code.google.com/p/r8brain-free-src/
Sample-rate conversion from Voxengo, "free" as defined under the MIT
license, not the GPL/LGPL.
Best,
dp
hi all
Does anyone know if qmidiroute is still alive, and if so, who is currently
maintaining it?
I found this git repo: https://github.com/royvegard/qmidiroute
but I can't find a way to contact the owner of this repo (Roy Vegard Ovesen).
grtz
Thijs
--
follow me on my Audio & Linux blog <http://audio-and-linux.blogspot.com/> !
Hi
I would like to save an internal audio buffer to a file on exit, to reuse
it after a new start.
Currently I use stdio fopen/fwrite/fread and save the binary data from
the array.
That works well.
Now I've started to play with libsndfile. First I used SF_FORMAT_WAV |
SF_FORMAT_FLOAT, which works as nicely as stdio. The file size is the same
as with plain binary data.
To save some bytes on disk I tried SF_FORMAT_FLAC | SF_FORMAT_PCM_24,
which reduces the size to nearly half. The drawback is that the floats
in the internal buffer can go outside the range -1.0 to 1.0, which leads
to crackles when the buffer gets refilled. As long as the values are in
range, FLAC works very well.
I tried it with the sf_command (sndfile, SFC_SET_NORM_FLOAT, NULL, SF_TRUE)
for write and read, but that didn't help.
Yes, I use sf_write_float (and sf_read_float) for the FLAC file, even
though FLAC doesn't support floats. If I understand the libsndfile API
correctly, libsndfile will handle that conversion.
So my question is: what is the common format for writing FLAC files
with libsndfile, and is there a way to write out-of-range values into a
FLAC file?
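For reference, a minimal sketch of the write path described above (the
buffer name, the mono layout and the helper function are assumptions, not
the actual code). One detail that may be relevant: by default libsndfile
does not clip, so out-of-range floats converted to an integer subtype like
SF_FORMAT_PCM_24 can wrap around; SFC_SET_CLIPPING switches that to
saturation.

#include <sndfile.h>

bool save_buffer(const char* path, const float* buf, sf_count_t frames,
                 int rate)
{
    SF_INFO info = {};
    info.samplerate = rate;
    info.channels   = 1;                                 // assuming mono
    info.format     = SF_FORMAT_FLAC | SF_FORMAT_PCM_24;

    SNDFILE* snd = sf_open(path, SFM_WRITE, &info);
    if (!snd)
        return false;

    // Saturate instead of wrapping when a float is outside -1.0 .. 1.0.
    sf_command(snd, SFC_SET_CLIPPING, NULL, SF_TRUE);

    sf_count_t written = sf_write_float(snd, buf, frames);
    sf_close(snd);
    return written == frames;
}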
regards
hermann
Re-posting. Maybe I can get some responses here :)
---------- Forwarded message ----------
From: Rafael Vega <email.rafa(a)gmail.com>
Date: Sun, Nov 3, 2013 at 3:31 PM
Subject: Measuring phase and frequency response of a filter.
To: linux-audio-user <linux-audio-user(a)lists.linuxaudio.org>
Hi.
I'm building a little ear-training application for EQing, for which I'm
building some filter banks in Pure Data. Can someone point me to a practical
way of measuring and plotting my filters' frequency and phase responses?
JACK apps, Pd patches or code would be fine.
Thanks!
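One practical approach (only a sketch under my own assumptions, not
something prescribed in the thread): send a single 1.0 sample followed by
zeros through the Pd patch, record N samples of the filter's impulse
response, then take its FFT. The bin magnitudes give the frequency response
and the bin arguments give the phase. The code below uses FFTW (fftw3f) and
prints columns that can be plotted with gnuplot; the function name is
illustrative.

#include <cmath>
#include <cstdio>
#include <vector>
#include <fftw3.h>

void print_response(const std::vector<float>& impulse_response,
                    float sample_rate)
{
    const int n = (int)impulse_response.size();
    std::vector<float> in(impulse_response);     // FFTW wants writable input
    fftwf_complex* out = fftwf_alloc_complex(n / 2 + 1);

    fftwf_plan plan = fftwf_plan_dft_r2c_1d(n, in.data(), out, FFTW_ESTIMATE);
    fftwf_execute(plan);

    // One line per bin: frequency (Hz), magnitude (dB), phase (radians).
    for (int k = 0; k <= n / 2; ++k) {
        const float re     = out[k][0];
        const float im     = out[k][1];
        const float freq   = k * sample_rate / n;
        const float mag_db = 20.0f * log10f(hypotf(re, im) + 1e-12f);
        const float phase  = atan2f(im, re);
        printf("%g\t%g\t%g\n", freq, mag_db, phase);
    }

    fftwf_destroy_plan(plan);
    fftwf_free(out);
}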
--
Rafael Vega
email.rafa(a)gmail.com