As some of you know already, I've been busy porting OpenMusic to Linux
over the last few months.
There's a beta release of OpenMusic 6.7 available as an RPM package here:
http://repmus.ircam.fr/openmusic/linux
The standard functionality is in place. Over the last few weeks it has
been tested, by me and others, with most of the standard OM libraries
from IRCAM as well as quite a few third-party libs.
The development of OM for Linux is continuing in collaboration with the
main OM-developers at IRCAM.
Please check back at the download-area for updates every now and then.
There's a mailing list where announcements may appear more often:
http://repmus.ircam.fr/openmusic/contact
OM installs and uses its source code while running. If you want to
build your own or get into developing this further, it's presumably more
effective to work on a local checkout of the svn tree, available through
the "Sources/Developer" pane on the same page.
Please send feedback, bug reports, issues, and questions.
Thanks,
-anders
These questions are really directed to Paul Davis (as the
main Ardour dev), Erik de Castro Lopo (libsndfile author),
and anyone with experience in this field.
Imagine a real-time audio processing app reading (or writing)
lots of audio files, possibly evaluating a complex timeline
consisting of many separate pieces. To make things work, some
(or a lot of) buffering and lookahead will be necessary.
There are at least three distinct places where this can be
done:
1. the file system(s) and kernel,
2. any library used to access audio files,
3. the application itself.
Of these, only (1) will be aware of any hardware related
issues, and only (3) will be aware of what is expected to
happen in the (near) future. (2) sits somewhere between
the two.
In view of this, what is currently the best way for an
app to read/write audio files: the basic read() and write()
calls, or the stdio interface?
More specifically, if one were to write a library to access
a particular audio file format (not supported, or only
partially supported, by e.g. libsndfile), how 'smart' in terms of
buffering, lookahead etc. should that library be, or not
try to be, in order to perform well with apps like e.g.
Ardour? What form would the preferred API take?
Ciao,
--
FA
A world of exhaustive, reliable metadata would be a utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
It seems to be a problem with the GUI; Aeolus 0.8.4 and 0.9.0 both have it.
The backtrace below was obtained with 0.9.0. I first reported it directly to
Fons, but got no reply so far. I don't know whether this is the accepted
channel for such bug reports.
(gdb) run
Starting program: /usr/bin/aeolus -J
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7ffff7ffa000
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff7faa700 (LWP 9934)]
[New Thread 0x7ffff7f29700 (LWP 9935)]
[New Thread 0x7ffff7ea8700 (LWP 9936)]
[New Thread 0x7ffff55fd700 (LWP 9937)]
[New Thread 0x7ffff55ec700 (LWP 9938)]
[New Thread 0x7ffff55db700 (LWP 9939)]
Reading '/usr/share/aeolus/stops/Aeolus/definition'
[New Thread 0x7ffff55ca700 (LWP 9940)]
[New Thread 0x7ffff4ecb700 (LWP 9941)]
Reading '/home/nick87720z/.aeolus-presets'
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff4ecb700 (LWP 9941)]
0x00007ffff5d78da1 in ?? () from /usr/lib/x86_64-linux-gnu/libfreetype.so.6
(gdb) backtrace
#0 0x00007ffff5d78da1 in ?? () from /usr/lib/x86_64-linux-gnu/libfreetype.so.6
#1 0x00007ffff5d7dd1f in ?? () from /usr/lib/x86_64-linux-gnu/libfreetype.so.6
#2 0x00007ffff5d7f5af in ?? () from /usr/lib/x86_64-linux-gnu/libfreetype.so.6
#3 0x00007ffff5d2bbd5 in FT_Render_Glyph_Internal () from /usr/lib/x86_64-linux-gnu/libfreetype.so.6
#4 0x00000039bd40c670 in XftFontLoadGlyphs () from /usr/lib/x86_64-linux-gnu/libXft.so.2
#5 0x00000039bd409c64 in XftGlyphExtents () from /usr/lib/x86_64-linux-gnu/libXft.so.2
#6 0x00000039bd40a06a in XftTextExtentsUtf8 () from /usr/lib/x86_64-linux-gnu/libXft.so.2
#7 0x00000039a840e7ff in X_textip::textwidth(int, int) () from /usr/lib/libclxclient.so.3
#8 0x00000039a840ed6d in X_textip::update(bool) () from /usr/lib/libclxclient.so.3
#9 0x00007ffff6533023 in Instrwin::show_tuning(int) () from /usr/lib64/aeolus_x11.so
#10 0x00007ffff653c974 in Xiface::handle_mesg(ITC_mesg*) () from /usr/lib64/aeolus_x11.so
#11 0x00007ffff653cab7 in Xiface::thr_main() () from /usr/lib64/aeolus_x11.so
#12 0x00000039a7c0309a in P_thread_entry_point () from /usr/lib/libclthreads.so.2
#13 0x00007ffff7974e9a in start_thread (arg=0x7ffff4ecb700) at pthread_create.c:308
#14 0x00007ffff6c873fd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#15 0x0000000000000000 in ?? ()
(gdb) kill
Dear all,
QMidiArp 0.5.3 fixes a number of bugs and should from now on replace 0.5.2. It also has some minor functional improvements; everything is listed below.
With thanks to all reporters, contributors and translators.
And....enjoy!
Frank
------------------------------------
QMidiArp is an advanced MIDI arpeggiator, programmable step sequencer and LFO for Linux with ALSA and JACK MIDI backends.
------------------------------------
Downloads are available at
http://qmidiarp.sourceforge.net/
direct link:
http://sourceforge.net/projects/qmidiarp/files/qmidiarp/0.5.3/qmidiarp-0.5.…
------------------------------------
qmidiarp-0.5.3 (2013-11-26)
New Features
o Random functions for sequencer and LFO steps and arp repeat mode
(feature request #5 Keith Milner)
Improvements
o NSM support now handles import/export/clear to facilitate
getting started (Roy Vegard Ovesen)
o Tempo is now MIDI-controllable (MIDI-learn)
o Sequencer transpose slider is now MIDI controllable (MIDI-learn)
(feature request #7)
o Sequencer pattern maximum length extended to 32 bars
(feature request #6)
Fixed Bugs
o LFO offset jumped back to fixed value when MIDI controlled
(bug #6 distrozapper)
o Arp trigger behavior was not practical with chords pressed on keyboard
(bug #7 Burkhard Ritter)
o JACK Transport no longer worked when no JT Master tempo was present
(bug #5 Barney Holmes)
o Deleting an arp pattern in text window while running caused crash
o Note lengths were not consistent between ALSA and JACK backends
o Note lengths did not account for current tempo
o Sequencer did not honor "D" button when MIDI controlled
o Seq note length is now a 16th at half slider scale
Hi!
I wrote a quick&dirty cmdline tool to dump and restore the internal
mixer state of an RME card (no matter whether it's handled by snd_hdsp
or snd_hdspm, so this should apply to almost all RME cards except the
new MADIFX).
I call it hdspdump for now, if you like, grab it here:
http://adi.loris.tv/hdspdump.c
Almost no sanity checks; it's a proof of concept atm. For your
convenience, I'll also attach it to this mail (who knows what will
happen to the URL above in the future...).
The idea is as follows:
1. Run hdspmixer ONCE to make your desired mix.
2. Run hdspdump hw:DSP > dumpfile to dump the current mixer state.
3. Run hdspdump hw:DSP write < dumpfile to restore the mixer state.
Of course, you can have multiple files for different mixes.
It's most likely useful for headless setups with static signal routing.
I have only tested it on my Multiface, so if you have other RME gear and
have a few minutes to waste, I'm happy to hear from you.
Cheers
/* compile with gcc -std=gnu99 -o hdspdump hdspdump.c -lasound */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <alsa/asoundlib.h>

void usage(char *name) {
	fprintf(stderr, "usage: %s cardname [write]\n", name);
	fprintf(stderr, "Example: %s hw:DSP > dumpfile\n", name);
	fprintf(stderr, "Example: %s hw:DSP write < dumpfile\n", name);
}

void error_and_out(int err) {
	fprintf(stderr, "ALSA error: %s\n", snd_strerror(err));
	exit(err);
}

int main(int argc, char **argv) {
	int err;
	char *cardname;
	snd_ctl_elem_id_t *id;
	snd_ctl_elem_value_t *ctl;
	snd_ctl_t *handle;
	int write = 0;

	if (argc < 2) {
		usage(argv[0]);
		exit(1);
	}
	cardname = argv[1];
	if (argc > 2 && (0 == strcmp("write", argv[2])))
		write = 1;

	snd_ctl_elem_value_alloca(&ctl);
	snd_ctl_elem_id_alloca(&id);

	/* The hdsp/hdspm drivers expose the mixer matrix as a single
	   HWDEP control element named "Mixer", addressed by three
	   integers: source, destination, gain. */
	snd_ctl_elem_id_set_name(id, "Mixer");
	snd_ctl_elem_id_set_interface(id, SND_CTL_ELEM_IFACE_HWDEP);
	snd_ctl_elem_id_set_device(id, 0);
	snd_ctl_elem_id_set_index(id, 0);
	snd_ctl_elem_value_set_id(ctl, id);

	if ((err = snd_ctl_open(&handle, cardname, SND_CTL_NONBLOCK)) < 0)
		error_and_out(err);

	for (int source = 0; source < 128; source++) {
		for (int dest = 0; dest < 64; dest++) {
			snd_ctl_elem_value_set_integer(ctl, 0, source);
			snd_ctl_elem_value_set_integer(ctl, 1, dest);
			if (write) {
				char buf[1024];
				if (!fgets(buf, sizeof(buf), stdin)) {
					fprintf(stderr, "unexpected end of input\n");
					snd_ctl_close(handle);
					exit(1);
				}
				snd_ctl_elem_value_set_integer(ctl, 2, atoi(buf));
				if ((err = snd_ctl_elem_write(handle, ctl)) < 0) {
					snd_ctl_close(handle);
					error_and_out(err);
				}
			} else {
				if ((err = snd_ctl_elem_read(handle, ctl)) < 0) {
					snd_ctl_close(handle);
					error_and_out(err);
				}
				printf("%li\n", snd_ctl_elem_value_get_integer(ctl, 2));
			}
		}
	}
	snd_ctl_close(handle);
	return 0;
}
[Sorry for cross-posting, please distribute]
We are happy to announce the next edition of the Linux Audio Conference
(LAC), May 1-4, 2014 @ ZKM | Institute for Music and Acoustics, in
Karlsruhe, Germany.
http://lac.linuxaudio.org/2014/
The Linux Audio Conference is an international conference that brings
together musicians, sound artists, software developers and researchers,
working with Linux as an open, stable, professional platform for audio
and media research and music production. LAC includes paper sessions,
workshops, and a diverse program of electronic music.
*Call for Papers, Workshops, Music and Installations*
We invite submissions of papers addressing all areas of audio processing
and media creation based on Linux. Papers can focus on technical,
artistic and scientific issues and should target developers or users. In
our call for music, we are looking for works that have been produced or
composed entirely/mostly using Linux.
The online submission of papers, workshops, music and installations is
now open at http://lac.linuxaudio.org/2014/participation
The Deadline for all submissions is January 27th, 2014 (23:59 HAST).
You are invited to register for participation on our conference website.
There you will find up-to-date instructions, as well as important
information about dates, travel, lodging, and so on.
This year's conference is hosted by the ZKM | Institute for Music and
Acoustics (IMA). The IMA is a forum for international discourse and
exchange and combines artistic work with research and development in the
context of electroacoustic music. By holding concerts, symposia and
festivals on a regular basis it brings together composers, musicians,
musicologists, music software developers and listeners interested in
contemporary music. Artists in Residence and software developers work on
their productions in studios at the institute. With digital sound
synthesis, algorithmic composition and live electronics through radio
plays, interactive sound installations and audiovisual productions,
their creations cover a broad range of what digital technology can
inspire in the musical imagination.
The ZKM is proud to be the place of the LAC for the fifth time after
having initiated the conference in 2003.
http://www.zkm.de/musik
We look forward to seeing you in Karlsruhe in May!
Sincerely,
The LAC 2014 Organizing Team
>> In my lv2 plugin, I need to communicate information from the DSP to
>> the UI, but I don't want to break the DSP/UI separation principle (no
>> Instance or Data access). On top of that, I'm using LVTK.
> he, he, yeah it can get a little confusing... maybe this will help.
> // you're sending things in an atom sequence so get the size information
> // from the port buffer
>
> LV2_Atom_Sequence* aseq = (LV2_Atom_Sequence*) p (p_notify);
> m_forge->set_buffer ((uint8_t*) aseq, aseq->atom.size);
>
> m_forge->sequence_head (m_notify_frame, 0);
>
> // sequences need a timestamp for each event added
> m_forge->frame_time (0);
>
> // after forging a frame_time, just write a normal float (no blank object needed)
>
> m_forge->write_float (1604);
> Your ttl file has atom:Float as the buffer type. I've never used
> anything besides atom:Sequence. I imagine this buffer type doesn't need a
> sequence head forged first. Maybe David will jump in on how atom:Float
> bufferType'd ports are supposed to be forged into and out of.
OK, so I changed my code the way you proposed (switching the ttl to
Sequence), but I still don't manage to make it work.
I'm wondering if there isn't something wrong with the way I set up the
Forge in the first place. I'm a bit confused by the way to interact
with the map object in LVTK.
Scope::Scope(double rate)
    : Plugin<Scope, URID<true>, Options<true>>(p_n_ports)
{
    m_forge = new AtomForge(p_map);
}

void Scope::run(uint32_t nframes)
{
    // you're sending things in an atom sequence so get the size information
    // from the port buffer
    LV2_Atom_Sequence* aseq = (LV2_Atom_Sequence*) p (p_notify);
    m_forge->set_buffer ((uint8_t*) aseq, aseq->atom.size);
    m_forge->sequence_head (m_notify_frame, 0);

    // sequences need a timestamp for each event added
    m_forge->frame_time (0);
    m_forge->write_float (1604);
}
> I recommend, if you want to use LVTK to do atom forging, that you subclass
> lvtk::AtomForge and add appropriate methods to it...
>
> Here's a snippet that shows how to write patch get/set messages with a
> subclassed AtomForge. It also shows how to write raw midi.
>
> http://pastebin.com/C1LYtXpv -- the code in there uses the nullptr
> keyword; just change those to "0" if you're not using C++11
Could you tell me the advantages of doing that?
I had a look at the code, but I still need to understand how to work
with LV2_URID_Map in LVTK (I cannot find any examples using it).
Hello all Users & Devs of linux-audio-land,
Moving forward from the topic on Aeolus and forking projects, perhaps it is
wise to look at how the community as a whole can grow from this situation:
1) It seems the frustration of forks is mainly due to lack of communication.
2) Had appropriate communication been in place, patches could have been
merged.
3) If 1) and 2), then the community flourishes as a whole.
In the Aeolus thread on LAD, Michel Dominique wrote (and I feel it's
relevant here):
"That imply we must communicate more with each other"
"I think this is a big problem, and not only related to Fons work, or the
LAD, but to the whole community."
The mailing list you're reading from now is one of the central hubs for the
community:
The -developers list is the perfect place to announce projects, forks,
patches etc.
The -users list is good for asking users and interested parties questions.
I will try to announce more patches / code, to contribute upstream, and
hopefully benefit the community.
Cheers, -Harry