Hi LADs,
After some time in quarantine (it has now been almost 40 days since
its last public appearance), the frivolous debutante has matured a
bit, but not that much. Truth is, it is not quite healed; in fact, it
is getting seriously bloated ;)
Qtractor 0.1.1 (futile duchess) has been released!
Qtractor is an audio/MIDI multi-track sequencer application, written
in C++ around the Qt toolkit. Its primordial target platform is
Linux, where the JACK Audio Connection Kit (JACK) for audio and the
Advanced Linux Sound Architecture (ALSA) for MIDI are the main
infrastructures on which it evolves as a fairly featured desktop
audio/MIDI workstation GUI, specially dedicated to the personal home
studio. It sits comfortably tagged as for the techno-boy bedroom
home studio. There's no genre segregation here; it also applies to
techno-girls ;).
Now seriously, it even has its own Wikipedia entry already:
http://en.wikipedia.org/wiki/Qtractor.
Back to business, these are the major highlights for this release:
- Draft user manual, contributed by James Laco Hines.
- Native Linux VST plug-in support.
- Initial DSSI plug-in support (audio effects only atm.)
- User configurable keyboard shortcuts.
- JACK server auto-start.
- Clip fade-in/out relative resizes.
- Auto time-stretch now optional.
- MIDI capture/record input quantize.
- Major plug-in infrastructure rewrite.
- Seamless plug-in drag-and-drop.
Check it out, from the official project web site:
http://qtractor.sourceforge.net
http://sourceforge.net/projects/qtractor
Direct link for the source tarball download:
http://downloads.sourceforge.net/qtractor/qtractor-0.1.1.tar.gz
The new user manual draft is also made available:
http://downloads.sourceforge.net/qtractor/qtractor-0.1.1-user-manual.pdf
Literal change-log follows, since the last 0.1.0 (frivolous
debutante) alpha release:
- After much user demand, keyboard shortcuts are finally
configurable, found provisionally under Help/Shortcuts...,
for the main application menu and for the MIDI editor as well.
- Debian package gets SSE optimization disabled as default.
- Some transport actions, such as Play and Record, are now
non-auto-repeatable when their keys are held down for too long,
avoiding the tumbling imposed by the keyboard.
- For the first time ever, jackd auto-start is now allowed (!).
- OSC service support through liblo is now optional at configure
time, leading the way to proper DSSI plug-in hosting.
- The number of plug-in widget controls is now capped at one hundred.
- Plug-in paths setup is now available in the options dialog,
overriding each of the respective default settings implied by
the LADSPA_PATH, DSSI_PATH and VST_PATH environment variables
(see View/Options.../Display/Plugin Paths).
- Clip fade-in/out lengths are now kept relative to tempo changes
and also to clip offset and length changes (clip resizes).
- Automatic time-stretching of all audio clips when the session
tempo changes may now be enabled/disabled as a global session option
(see View/Options.../Audio/Playback/Automatic time-stretching).
- Double-clicking on an empty area (de)selects all clips on track.
- MIDI capture (record) quantization is now an option, possibly
handy for some jerky performing musicians, such as yours truly ;)
(see View/Options.../MIDI/Capture/Quantize).
- The global options dialog (View/Options...) has seen its Display
tab page being moved back and to the right.
- Major rewrite of the plug-in infrastructure, adding primordial
support for DSSI and native VST plug-in flavors.
- Drag-and-drop of plug-in instances is now allowed intra- and
inter-mixer-strip chains, either on tracks or buses.
- Turning track record off while recording is rolling was leaving
the session in an inconsistent recording status; now fixed.
- A random but instant crash upon audition/pre-listening player
onset has hopefully been fixed.
Cheers && Enjoy
--
rncbc aka Rui Nuno Capela
Quoting Anders Dahnielson <anders(a)dahnielson.com>:
> Frustrated I took a stab at writing a simple JACK client for the
> Freeverb3
> convolution reverb. But I can't get it to work properly. Could anyone
> knowledgeable take a look at it and help us out? I've posted the code of
> my
> (more or less copy-and-paste) attempt here:
>
> http://bb.linuxsampler.org/viewtopic.php?f=7&t=33#p291
Regarding recording impulses, one should really check out Aliki at
http://www.kokkinizita.net/linuxaudio/ (manual and download on the
download page).
Sampo
Hello all,
A few days ago I asked Paul Davis what he knew about the AES32
and using the hdspmixer tool to access its matrix. Currently it
doesn't support it, and given that I haven't been able to contact
anybody who develops hdspmixer or the AES32 driver, he was the only
one I could find who could at least answer any questions. The
basic answer was "there isn't anybody to fix it for you." :P
So, I've spent the last two days messing with the hdspmixer code and
have confirmed my initial assumption that I have no idea what the hell
I'm doing. I started by trying to fix it so that the hdspmixer would
support cards from both the hdsp and hdspm drivers. I'm sure it could
be done, but the drivers' code has diverged enough to make it
more work than I really want to do. So I yanked out all code that is
specific to hdsp and made it so that it will compile against the hdspm
code. Basically it's a version of hdspmixer for the AES32 and MADI
cards. (I'm not sure how the channel mapping in MADI works out, so
that's not in it yet)
The good news is that it will compile. The bad news is that it
immediately segfaults. And gdb doesn't really tell me anything useful,
but then again I really don't know how to use it.
Therefore I'm shamelessly soliciting any help that I can get. I really
want to be able to use the AES32, but I'm starting to get in over my
head. The hdspm driver is a bit silly without a functional matrix
mixer for it. I can send the code to anybody who would be willing to
give advice, give me code fixes, fill in the missing bits for MADI,
take over the project, or just provide more confirmation that I really
don't know what the hell I'm doing.
Oh, and as a little bonus I discovered that the hdspm driver only has
a dummy typedef for hdspm_peak_rms_t (as well as a few other
functions) so there is no code for input_peaks, output_peaks,
input_rms, playback_peaks or playback_rms. :P
-Reuben
Dear all,
The Linux Audio Conference 2008 in Cologne (Feb 28th - Mar 2nd 2008)
is just one month away now. The programme is shaping up, concerts are
being organized and coffee is about to be ordered.
To help us plan LAC2008, we kindly ask you to register now
at the conference website. This helps us estimate how many visitors
to expect and who the audience is made of, and allows us to
produce name tags for all attendees so that it becomes easier to
identify each other.
To register, please use the "Registration" form at
http://lac.linuxaudio.org
We have also put accommodation info plus some maps of the
conference location online. You can find these at
http://lac.linuxaudio.org under "Visitor Info".
Finally if you're living in Cologne or nearby: We are looking for
volunteers who would like to help out in any way, e.g. to host artists
and paper presenters in their flat. If you want to offer your help,
please contact the LAC2008 orga team at lac(a)linuxaudio.org
The LAC2008 chairs are looking forward to having another great
conference with you all.
All the best
--
Frank Barknecht and Martin Rumori
Chairs of LAC2008
Hi,
I was planning to spend some effort at LAC this year to get JACK
support in my video editor right, and I was wondering what would be
the best model to implement.
My app has a simple timeline that can have an arbitrary number of
tracks, and each track can hold clips at arbitrary positions. Each
clip is one audio file or the audio track from a video file, which
does not make much of a difference. I may also perform per-clip
operations like resampling or filtering.
Potential Models:
A)
1 thread that connects to the JACK callback with a ringbuffer and
does everything: mixing, effects, disk I/O, etc.
Advantage: probably not very difficult to implement.
However, compute-intensive stuff that is not I/O-bound happens
outside the JACK thread, which I guess is not optimal.
B)
1 disk thread + ringbuffer per clip. Advantage: compute-intensive
stuff is in the JACK thread. Disadvantage: does not scale to long
timelines with many clips and therefore many threads.
C)
1 disk thread + ringbuffer per track. Advantage: could possibly be
implemented such that compute-bound stuff happens in the JACK
thread, or at least that "some" of it does. The number of
threads is likely to be small enough to handle that.
D)
1-n disk thread(s) for everything, and 1 ringbuffer per track.
Limit the number of disk threads, possibly adding more for
multi-core/SMP, and let them feed the per-track ringbuffers
according to some home-brewed scheduling algorithm. This wouldn't
waste a thread per track.
E)
Be lazy and reuse something that someone else has written. ;-)
So, what suggestions can you make about what I should do? I know
there will be some not-so-nice corner cases: seeking, the necessity
to reset ringbuffers, scrubbing during playback, and so on.
My latency requirements are probably fairly moderate, so I could
compromise in that respect.
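For what it's worth, the disk-thread-to-audio-thread handoff shared by models B, C and D boils down to a single-producer/single-consumer ring buffer. JACK already ships one as jack_ringbuffer_t; the sketch below is a plain, JACK-free stand-in using C11 atomics, just to make the idea concrete (all names here are made up for illustration):

```c
#include <stdatomic.h>
#include <stddef.h>

#define RB_SIZE 1024   /* capacity in floats; must be a power of two */

/* Lock-free single-producer/single-consumer ring buffer of samples.
   The disk thread is the only writer and the audio callback the only
   reader, so no locks are needed: each position counter is modified by
   exactly one thread.  Counters grow monotonically; indices are masked
   on access, so wraparound is handled by unsigned arithmetic. */
typedef struct {
    float data[RB_SIZE];
    atomic_size_t write_pos;   /* advanced only by the producer */
    atomic_size_t read_pos;    /* advanced only by the consumer */
} ringbuf_t;

/* Producer side (disk thread): copy up to n samples in; return how many fit. */
static size_t rb_write(ringbuf_t *rb, const float *src, size_t n)
{
    size_t w = atomic_load_explicit(&rb->write_pos, memory_order_relaxed);
    size_t r = atomic_load_explicit(&rb->read_pos, memory_order_acquire);
    size_t space = RB_SIZE - (w - r);
    if (n > space) n = space;
    for (size_t i = 0; i < n; i++)
        rb->data[(w + i) & (RB_SIZE - 1)] = src[i];
    atomic_store_explicit(&rb->write_pos, w + n, memory_order_release);
    return n;
}

/* Consumer side (audio callback): copy up to n samples out; return how many. */
static size_t rb_read(ringbuf_t *rb, float *dst, size_t n)
{
    size_t r = atomic_load_explicit(&rb->read_pos, memory_order_relaxed);
    size_t w = atomic_load_explicit(&rb->write_pos, memory_order_acquire);
    size_t avail = w - r;
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++)
        dst[i] = rb->data[(r + i) & (RB_SIZE - 1)];
    atomic_store_explicit(&rb->read_pos, r + n, memory_order_release);
    return n;
}
```

In models C and D the JACK process callback would call rb_read() on each track's buffer and mix the results, while the disk thread(s) keep the buffers topped up with rb_write(); the rb_read() return value tells the callback when a buffer has underrun.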
Another use case I would like to implement is project nesting,
preferably in real time, with rendering the nested project into a
file only as an additional option. For this I think it would be
OK to have a big enough buffer between the current project and the
embedded project, and do the on-the-fly rendering + I/O in the
disk thread.
What do you think?
Cheers
-Richard
--
Don't contribute to the Y10K problem!
We are jubilant to announce CLAM 1.2, the 'GSoCket plugged-in'
release. We had to wait some months to make this release as we had
to redeploy the multi-platform release infrastructure [1]. Thus, the
feature buffer for this release is pretty full. It incorporates both
the results of the Summer of Code [2] students' work and the
involvement of David and Pau with the Barcelona Media Foundation
Audio Research Lab [3].
We want to thank GSoC students Hernan Hordiales [4],
Bennet Kolasinsky [5], Greg Kellum [6], Andreas Calvo, Roman Goj [7]
and Abe Kazemzadeh, Google Inc., and the Barcelona Media audio lab
members for their precious involvement in CLAM.
[1] http://clam.iua.upf.edu/testfarm/
[2] http://clam.iua.upf.edu/wikis/clam/index.php/GSoC_2007
[3] http://www.barcelonamedia.org/index.php/linies/10/en
[4] http://h.ordia.com.ar
[5] http://bennettdoesclam.blogspot.com
[6] http://gregkellum.com
[7] http://ro-baczek.blogspot.com
A summarized list of changes follows. See also the CHANGES file [8]
for details, or the development screenshots [9] for a visually guided
tour. As usual, binary packages for Windows, Mac OS X and several
flavors of Linux are available for download.
[8] http://iua-share.upf.edu/svn/clam/trunk/CLAM/CHANGES
[9] http://clam.iua.upf.edu/wikis/clam/index.php/Development_screenshots
Summary of changes:
The most exciting feature is the new plugin system (acalvo),
which enables third-party algorithms to be distributed separately
from the core binaries. LADSPA plugin support has been enhanced,
and a first iteration of FAUST [10] integration is in place. The
wiki [11] contains very nice how-tos that cover most of that.
[10] http://faust.grame.fr/
[11] http://clam.iua.upf.edu/wikis/clam
Most of the GSoC work comes as plugins: an SMS synthesizer (gkellum),
voice synthesis/analysis (akazem) and some cool guitar effects
(hordia). Also, not as plugins but in the main repository,
several enhancements have been made to the SMS transformations
(hordia) and the tonal analysis (rgoj).
Some interesting work has been done at the Barcelona Media Audio Lab
on a system to simulate 3D room acoustics, which can be reproduced
on several exhibition systems. Some precomputed room databases are
available to try. Check the NetworkEditor tutorial on the wiki for
more information.
Regarding the applications, Network Editor incorporates new usability
enhancements, a new on-line tutorial and a new spectrogram-like view.
The Annotator received Bennet Kolasinsky's attention, improving the
flexibility of its interface; the practical effects are multiple
segmentation and low-level descriptor panes, and we are pretty
close to visualization and auralization plugins.
Enjoy.
The CLAM Team
The PCI card I'm working with was made to receive the signal as one
32-bit frame (16 bits per channel) each time I write to the file
device, so I have to turn each pair of src_data.data_out samples
(2 floats) into a single 32-bit unsigned long and write it:
1) "unsigned long frame;" : I declare an unsigned long that will
receive the entire frame.
2) "frame = (unsigned long) ((src_data.data_out[sent]))>>16;" : I
convert the float at index sent and initialise the frame with it, so
my frame looks like 0000000000000000XXXXXXXXXXXXXXXX, where the Xs
represent one channel's signal.
3) "frame |= (((unsigned long) (src_data.data_out[sent+1])) & 0xFFFF0000);" :
I mask off the 16 low bits of the converted data_out value and OR it
with frame. It should look like this:
YYYYYYYYYYYYYYYY0000000000000000 | 0000000000000000XXXXXXXXXXXXXXXX = YYYYYYYYYYYYYYYYXXXXXXXXXXXXXXXX
where the Ys are the other channel's signal.
Is it clear enough? I hope so :)
I have to emphasize that this scheme seems to work, because when I send the src_data.data_in frame buffer this way, the signal is perfect.
> Message du 11/02/08 14:12
> De : "Erik de Castro Lopo" <mle+la(a)mega-nerd.com>
> A : linux-audio-dev(a)lists.linuxaudio.org
> Copie à : "Mathieu Dalexis" <dev.audioaero(a)orange.fr>
> Objet : Re: [LAD] (no subject)
>
> Mathieu Dalexis wrote:
>
> > unsigned long frame;
> > frame = (unsigned long) ((src_data.data_out[sent]))>>16;
> > frame |= (((unsigned long) (src_data.data_out[sent+1])) & 0xFFFF0000);
>
> Sorry, but what is all this stuff?
>
> Erik
> --
> -----------------------------------------------------------------
> Erik de Castro Lopo
> -----------------------------------------------------------------
> "Projects promoting programming in natural language are intrinsically
> doomed to fail." -- Edsger Dijkstra
Hello,
I am trying to perform dynamic upsampling on a signal (sample rate 44.1 kHz) to double the sample rate.
Here is the code I wrote to test it. It works using a raw audio file (cdparanoia -r 1) with a 1 kHz sine signal.
When I look at my output signal with an oscilloscope it looks quite good, but I get some noise (glitches?) that makes it sound really bad.
Is there something I am doing wrong? Is it a libsamplerate problem?
**************************
******My code*************
**************************
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/select.h>
#include <samplerate.h>

int
main(int argc, const char *argv[])
{
    /* Buffer sizes in samples, both channels interleaved.  (The
       original self-assignment "int TAILLEBUF = TAILLEBUF;" was
       undefined; 4096 is an arbitrary working size.) */
    const int TAILLEBUF = 4096;
    const int TAILLEBUF_OUT = TAILLEBUF * 2;   /* room for 2x upsampling */

    int16_t *p_readbuf = malloc(sizeof(int16_t) * TAILLEBUF);
    float *pf_readbuf = malloc(sizeof(float) * TAILLEBUF);
    float *pf_writebuf = malloc(sizeof(float) * TAILLEBUF_OUT);
    int finiRead = 0;
    int fd, fdread;   /* sound device and input file descriptors */
    struct timeval tv;
    int error;
    SRC_STATE *src_state;
    SRC_DATA src_data;

    if (argc < 4) {
        fprintf(stderr, "missing ratio argument\n");
        exit(1);
    }
    src_data.src_ratio = atof(argv[3]);   /* atoi() would truncate fractional ratios */
    src_data.input_frames = 0;
    src_data.end_of_input = 0;
    src_data.data_out = pf_writebuf;
    src_data.output_frames = TAILLEBUF_OUT / 2;   /* stereo frames, not samples */
    src_state = src_new(SRC_SINC_FASTEST, 2, &error);

    /* open sound device; I'm working on my own sound card */
    fd = open("/dev/AudioDriver/data", O_WRONLY);
    if (fd < 0) {
        perror("open of /dev/AudioDriver/data failed");
        exit(1);
    }
    /* my raw sound file */
    fdread = open("./cdda.raw", O_RDONLY);
    if (fdread < 0) {   /* was testing fd instead of fdread */
        perror("open of cdda.raw failed");
        exit(1);
    }

    fd_set writefds;
    while (!finiRead) {
        /* read TAILLEBUF/2 stereo frames */
        if (read(fdread, p_readbuf, TAILLEBUF * sizeof(int16_t))
                != (ssize_t)(TAILLEBUF * sizeof(int16_t)))
            finiRead = 1;
        /* Convert from 16-bit integer to float in [-1, 1]; libsamplerate
           expects normalized floats, so scale instead of shifting left. */
        for (int i = 0; i < TAILLEBUF; i++)
            pf_readbuf[i] = p_readbuf[i] / 32768.0f;
        src_data.data_in = pf_readbuf;
        src_data.input_frames = TAILLEBUF / 2;
        src_data.end_of_input = finiRead;

        /* wait until the device is ready for writing */
        do {
            tv.tv_sec = 0;
            tv.tv_usec = 120000;
            FD_ZERO(&writefds);
            FD_SET(fd, &writefds);
        } while (select(fd + 1, NULL, &writefds, NULL, &tv) != 1);

        if ((error = src_process(src_state, &src_data)) != 0)
            printf("error: %s\n", src_strerror(error));

        /* output_frames_gen counts frames; each stereo frame is 2 floats */
        for (long sent = 0; sent < src_data.output_frames_gen * 2; sent += 2) {
            /* Scale each float back to 16 bits, then pack both channels
               into one 32-bit frame.  A plain cast of the raw float (as
               before) truncates its numeric value instead of yielding a
               16-bit sample, hence the glitches. */
            int16_t left  = (int16_t) lrintf(src_data.data_out[sent]     * 32767.0f);
            int16_t right = (int16_t) lrintf(src_data.data_out[sent + 1] * 32767.0f);
            uint32_t frame = (uint16_t) left | ((uint32_t)(uint16_t) right << 16);
            if (write(fd, &frame, sizeof(frame)) != sizeof(frame))
                finiRead = 1;
        }
    }
    exit(0);   /* was inside the loop, stopping after the first buffer */
}
****************************
****************************
****************************
I hope that you can understand my English and that my code is clear enough for you to help me.
Thanks in advance.
Mathieu
I received a private reply to my request regarding
a jack-midi equivalent to pmidi. This may be of
interest to others as well.
Jpmidi
<http://www.geocities.com/kellinwood/jpmidi/index.html>
does the job nicely.
Unlike pmidi, jpmidi is a 'resident' program: it
creates its own command prompt, allowing you e.g. to
connect it and start/stop playback (it controls
and follows JACK's transport).
Given this, it's odd that it doesn't have a 'load'
command. To switch to another MIDI file you have
to quit and restart.
--
FA
Laboratorio di Acustica ed Elettroacustica
Parma, Italia
Lascia la spina, cogli la rosa.
LAC 2008: bandwidth to burn and volunteers needed
The 6th annual Linux Audio Conference is taking place in Cologne, Germany, Feb
28th to March 2nd, 2008. As in each previous year, this year's conference will
be streamed live over the Internet in Ogg Theora via Icecast. The stream server
is up at: http://lac2008.khm.de:8000/
There is nothing to see at the moment, but keep checking over the coming days
as we hope to have a test stream up soon.
This year we are in the unique situation of having a Gigabit link donated by
CITIZENMEDIA: http://www.ist-citizenmedia.org/ They have asked us to use up as
much of their bandwidth as we can so they can see how well the link performs.
This year the core team, Joern Nettingsmeier and myself, is recruiting
volunteers to spread the workload. To that end we have set up a mailing list
and an IRC channel to coordinate our efforts. We will also have a wiki shortly.
If you will be coming to Cologne for the conference please consider signing up
to help. If you are not coming, please enjoy the fruits of our labors by
watching the streams and participating via irc.
stream team mailing list: http://zhevny.com/mailman/listinfo/lac-streams
general conference chat: #lac2008 on irc.freenode.net
stream team tech talk: #lac2008-tech on irc.freenode.net
Thanks,
Eric Rz.