dssi-vst 0.5 released!
======================
The 0.5 release of dssi-vst is now available.
dssi-vst is a DSSI plugin wrapper for Win32 VST effects and instruments
with GUI support, allowing them to be loaded into any DSSI host.
dssi-vst is available from the download page at
http://dssi.sourceforge.net/
The 0.5 release now comes with Javier Serrano Polo's VST-compatibility
header, as previously distributed in LMMS. (Actually, this header was
already compatible with dssi-vst -- no modifications to dssi-vst were
necessary -- it's just that the header is now included in the package.)
This permits dssi-vst to be compiled without the official VST SDK and
distributed under the pure GPL. No guarantees are made as to the
reliability of the results; your feedback is welcome, but please bear
in mind that I will not do any development work on the compatibility
header myself for legal reasons.
The 0.5 release is also (finally) compatible with version 2.4r2 of the
official SDK, should you wish to use it.
Chris
Dave Robillard wrote:
> Why not make it satisfy most everyone by being extensible?
It *is* extensible. Note that commands 0x00-0x6F and 0x73-0x7F are
unused, so further extensions are free to define them (perhaps we need a
scheme for binding extension URIs to command numbers, to make it more
LV2-ey). And the 0x72 command can be used for pretty much any data larger
than 8+2 octets.
While it's "efficient first" and not "generic first", so to speak, it
should be fine for the intended uses.
> The idea of a generic event port is not a bad one,
I think it's not just "not a bad one". The other possibility (multiple
event ports) is less efficient, and speed is crucial here. It's also
more complex from a plugin author's perspective. So I had little
choice.
> idea at all (no matter what, someone is going to want to put something
> in there you didn't think of.
Please don't jump to conclusions, and take more time to read and analyse
the proposal.
Of course, it is possible to add new event types with arbitrary length
data, and the limitation of 8 octets per extended block is not that bad,
because you can always fit an interface pointer (32-bit or 64-bit) there.
Just look how binary data extension is implemented.
Notice that I just took the approach of optimizing for the most common
case (MIDI events), and tried to maximize functionality while keeping the
block size small and constant (to avoid the pointer arithmetic that was
complicating Lars' proposal a bit).
> Trying to pre-define things from the top down like this is un-lv2-ey).
Well, sometimes you need to find the right tradeoff between being
efficient (memory- and speed-wise) and generic. I think I've found an
acceptable tradeoff (definitely favouring speed, but not losing
generality and not very memory-hungry).
However, I had to make some assumptions about how it will be used
(mostly implemented by inexperienced people, mostly used for MIDI and
float parameters, seldom used for advanced stuff). Oh well, I'm
repeating myself here :)
I think those are correct assumptions, but you seem to have a different
angle for looking at those things. Well, it took me years (and
failed/inadequate designs) to grow out of the "everything should be as
generic as possible" approach, so I understand why you're doing that,
but I still prefer the priority-based optimization approach that I've used.
I still think my proposal could be improved, and I don't like some
decisions that I made (basically, I made them because the alternatives
looked even more nasty), but stripping off optimizations is not the way
to go, IMO.
> Something more appropriate (IMO) might be like:
> struct LV2_EVENT
> {
>     ev_stamp_t time;     ///< (ignoring the timestamp type issue)
>     ev_type_t  type;     ///< (again ignoring type issue)
>     size_t     buf_size; ///< size of buf in bytes
>     char*      buf;      ///< raw event data
> };
You're suggesting a "classic" textbook chunked data approach, which
works, no doubt. However, it has some problems with it, which might not
be considered very major, but seem to make my approach slightly more
favourable:
- too much data to be accessed in the most common use case (in 32-bit
environment, 16 bytes of header plus event data possibly in distant
memory); we don't need to save every byte of RAM, but when you need to
read and write twice as much RAM as you could, then maybe it's worth
rethinking it
- separation of event header and event data in the most common case; it
would be better not to cause cache thrashing too much
- it encourages memory fragmentation (experienced people will allocate
event data for all events in the same buffer; one wonders about
inexperienced ones - one malloc per event data? :) )
- it doesn't deal with large data properly (because the plugin cannot
start "owning" the raw event data instead of copying it from the buffer
provided); imagine copying a video buffer in the process() function of a
plugin!
I'm not saying that approach is Really Bad - just that it's kind of a
pre-optimization version of my proposal (I made MIDI data very
efficient, float parameter data slightly less efficient, float parameter
data with deltas even less efficient, and binary data are pretty
inefficient :) ).
The fact that an event has to be handled at all is annoying enough on its
own :) - I have to end the inner loop, store state information somewhere,
etc. - I don't want additional, unnecessary memory accesses which may
throw sample data and buffers out of the cache.
> (Obviously just a quick generic knock-off to get the idea across). In
> networkey terms, this is separating transport from contents, which is
> pretty firmly established as a Good Idea.
In a network context, yes. However, _optimizing_ for the uncommon case is
not a preferable approach to me.
The arbitrary binary data command (0x72) mentioned in my proposal can
give you practically everything you need, and can be used in a
network-transparent way, as long as data in the "binary data" chunks are
self-contained (don't refer to other buffers).
However, my proposal lacks any mechanism to be used for serializing
arguments of future commands defined by extensions.
Still, that problem has been solved many times in history, by deriving
extra interfaces for the new commands from an interface that provides
reference counting and marshalling - IPersistStream-type stuff, for the
victims of Microsoft APIs.
It is a bit complex (or at least not as trivial as plain MIDI), but it
would be only used in hosts and complex plugins that use extension
events, so I guess it'd be fine.
> I very strongly feel that if 'more than MIDI' events are going to be
> mixed with MIDI events in the same port (s/MIDI/whatever), then the
> event transport mechanism needs to be 100% event type agnostic.
On the other hand, "100% generic" means "almost 100% unoptimized". By
throwing away extra information, you often throw away the chances for
optimization, so to speak.
Instead of thinking in terms of MIDI vs non-MIDI, try thinking of "my"
event types as "short" (8 octets), "medium" (16 octets) and "large"
(arbitrary-sized blobs). The fact that all short events are MIDI events
is, I think, less important. It's also not set in stone.
> It's the same approach LV2 takes with ports, and it works beautifully there.
On the other hand, it deals with a trivial problem, and solves it in a
complex way. That's not an engineer's dream :)
Regards,
Krzysztof
Hello !
In my application (LiveMix - http://livemix.codingteam.net/) I have just
implemented MIDI support.
To initialize MIDI I do:
snd_seq_open(&seq, "default", SND_SEQ_OPEN_DUPLEX, 0);
snd_seq_set_client_name(seq, "LiveMix");
m_iPort = snd_seq_create_simple_port(seq, "control", 0,
SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_SOFTWARE |
SND_SEQ_PORT_TYPE_APPLICATION);
m_iMidi = snd_seq_create_simple_port(seq, "control",
SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ |
SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE,
SND_SEQ_PORT_TYPE_APPLICATION);
Then, with Patchage, I connect the MIDI port to other ports...
Then I want to know which port is connected to which other port, so that
I can reconnect them on the next LiveMix run.
I see the functions:
int snd_seq_connect_from(snd_seq_t *seq, int my_port, int src_client, int
src_port);
int snd_seq_connect_to(snd_seq_t *seq, int my_port, int dest_client, int
dest_port);
to make the connections, but I cannot find the function that tells me
where I am currently connected.
CU & thanks in advance.
Stéphane
--
Stéphane Brunner
mail : stephane.brunner(a)gmail.com
instant messaging: stephane.brunner(a)gmail.com (
http://talk.google.com)
--------------------------------------
http://www.ubuntu-fr.org - Linux distribution
http://fr.wikipedia.org - community encyclopedia
http://mozilla-europe.org - web browser / mail client
http://framasoft.net - directory of free (libre) software
http://jeuxlibres.net - free games
--------------------------------------
There are 10 kinds of people: those who understand binary, and the
others.
If Microsoft invented something that doesn't crash, it would be a nail.
If someone ever tells you that your work is not professional work,
remember:
amateurs built Noah's Ark, and professionals built the Titanic.
A MIDI application for MIDI control and note playing, with a twist (of
the Wiimote): <http://miidi.sourceforge.net>
Yannis Gravezas,
Athens, Greece
>> All of MIDI can be reduced to two messages: set-control (addressed by
>> Channel, Voice and Controller ID) .. and SYSEX - for data-dumps.
>What you say is of course a true statement, but it is ignoring what the
>founding fathers of MIDI attempted to accomplish regarding efficiency on
>a bandwidth-limited connection and the likelihood of certain messages
>appearing more often than others ...
Hi Jens,
Very true. MIDI was an excellent solution at the time and made good use of
limited bandwidth.
But I think as MIDI got extended it got too complicated, and now with
software music systems we don't have such limited bandwidth anymore. The
8-bit messages look dated as far as simplicity and especially resolution
go.
The MMA are addressing the resolution aspect with "High definition MIDI",
but it's implemented as SYSEX messages and therefore just adds more
complexity to writing a comprehensive MIDI implementation.
I hope in future we'll have a simplified system...instead of CCs and RPNs
and NRPNs and SYSEX Controllers, have a single unified controller with a
32-bit Controller-Number range. Map all the existing MIDI messages into
that range.
NEW-MIDI would have only one message type; it would look like:
[Timestamp][Channel][Voice Number][Controller-Number][Value]
Anyway, I got way off-topic. My intention was to say that LV2 may not need a
per-note control extension because MIDI already has it via "Key Based
Instrument Controllers", which is a SYSEX message for sending a CC to one
specific note-number.
Best Regards,
Jeff
> However, it (MIDI) also has its own age. And limitations. In
> particular, the amount of per-note control is pitiful.
> .... In the meantime, maybe the MIDI guys will decide for us :D
They did, "Key Based Instrument Controllers" provide for per-note
controllers.
Granted MIDI remains stupid. MIDI is 15 different ways of saying "set this
control".
All of MIDI can be reduced to two messages: set-control (addressed by
Channel, Voice and Controller ID) .. and SYSEX - for data-dumps.
Jeff McClintock
Message: 13
Date: Fri, 30 Nov 2007 00:30:58 +0000
From: Krzysztof Foltman <wdev(a)foltman.com>
Subject: Re: [LAD] "enhanced event port" LV2 extension proposal
To: Dave Robillard <drobilla(a)connect.carleton.ca>, LAD
<linux-audio-dev(a)lists.linuxaudio.org>
Message-ID: <474F59C2.1050805(a)foltman.com>
Content-Type: text/plain; charset=ISO-8859-2; format=flowed
Dave Robillard wrote:
> I /really/ don't like screwing around with MIDI. Just make the events
> pure, raw MIDI. Jack MIDI events are 'just n bytes of MIDI', Alsa has
> functions to get at 'just n bytes of MIDI', and... well, it's just MIDI.
>
However, it also has its own age. And limitations. In particular, the
amount of per-note control is pitiful.
I can always use hacks to get around the limitations, or introduce
per-note control via a separate "set note parameter" event type. But
hacks are ... hacky, and an extended extension of an extension for every
single feature is a bit inelegant too.
Anyway - so far, I have no code that would make use of this, so we might
keep it as plain MIDI. And then we have the next 5 years to decide the
details of the feature. In the meantime, maybe the MIDI guys will decide
for us :D
Krzysztof
Hi All
Thanks to all of you for your answers.
Now the DSP code seems to be well implemented.
I use a small FFT to estimate the fundamental frequency and then find
the exact pitch around this frequency by AutoCorrelation.
I still have to improve it and implement some optimisations, like
windowing, filtering or downsampling, in order to reduce CPU load.
But my plugin is still not visible, neither with listplugins nor with
Ardour. I'm pretty sure that the _init() function is good, so I don't
know where the problem comes from.
Could it come from my compilation options, my linking options, or my
installation path (/usr/lib/ladspa)?
My library is linked against fftw3f, but I can't tell whether the link is
dynamic or static.
I attach my source code, my Makefile and the compiled library (x86).
If somebody could have a look, or just give some advice on how to make a
plugin recognized, that would be a great help for me.
Thanks
Rémi
Dear all,
the deadline for the call for papers and music for the Linux Audio
Conference 2008 (LAC2008) has been extended.
The new and final deadline for paper and music submissions is now
Thursday, December 6, 2007, 24:00 UTC
(This is equal to Friday, December 7, 2007, 0:00 UTC)
We invite submissions of papers addressing all areas of audio
processing based on Linux and open source software. Papers can focus
on technical, artistic or scientific issues and can target developers
or users.
We are also looking for music that has been produced completely or
mostly under Linux and/or with open source software from every genre:
compositions, Electronica, Chill-Out, Ambient, etc.
For paper submissions, please use the online form at
http://lac.linuxaudio.org/openconf
For music submissions and further details on both calls, please refer
to the calls below and on the web at
http://lac.linuxaudio.org
We are looking forward to many interesting submissions for the 6th
International Linux Audio Conference 2008 and we hope to see you in
Cologne 2008!
Please feel free to forward this e-mail to anybody who might be
interested.
On behalf of the LAC2008 organisation team,
Frank Barknecht and Martin Rumori
----------
Linux Audio Conference 2008 Cologne, Germany
28.02.-02.03.2008
Call for Papers
===============
http://lac.linuxaudio.org/download/lac2008_callforpapers.txt
We invite submissions of papers addressing all areas of audio
processing based on Linux and open source software. Papers can focus
on technical, artistic or scientific issues and can target developers
or users. This includes (but is not limited to) the following
categories:
* Computer Music
* Music Production
* Instruments
* Drivers and Sound Architecture
* Audio Distributions
* Generic (Usage, Documentation etc.)
The conference is held in English.
Length of a paper is 4-8 pages. Papers have to include an abstract
(50-100 words). The abstract will be published separately on the
conference website once the paper has been accepted. Also, papers
should include up to 5 keywords.
In general talks should take 20-30 minutes followed by 5 minutes
discussion.
Please notify us if you need a special technical setup. The technical
standard setup will be:
* microphone/head set
* projector with XVGA input (resolution 1024x768)
* stereo speaker setup with mini jack input
If you are not able to bring your laptop along with you, please notify
us in advance.
How to submit
--------------
* Do not send papers via email as with the past LAC conferences!
* Instead please use the paper upload form in our conference
management system at:
http://lac.linuxaudio.org/openconf
* File format is PDF, formatted for A4 paper. Make use of the
templates for paper formatting available at:
http://lac.linuxaudio.org/download/lac2008_templates.tar.gz
* Authors of papers selected to be included in the printed
conference proceedings will also have to supply supplemental
material like illustrations needed to layout the printed
proceedings separately.
* Deadline for paper submissions is December 6, 2007, 24:00 UTC
Important Dates
----------------
06 Dec 2007: Paper submission deadline
21 Dec 2007: Notification of acceptance
11 Jan 2008: camera ready version
28 Feb - 2 Mar 2008: Linux Audio Conference in Cologne
----------
Linux Audio Conference 2008 Cologne, Germany
28.02.-02.03.2008
Call for Music
==============
http://lac.linuxaudio.org/download/lac2008_callformusic.txt
The conference will include several concerts. We are looking for music
that has been produced completely or mostly under Linux and/or with
open source software from every genre: compositions, Electronica,
Chill-Out, Ambient, etc.
If you want to participate, either send your composition(s) to this
address:
LAC2008 - Call for Music
Kunsthochschule für Medien
Martin Rumori
Peter-Welter-Platz 2
D-50676 Köln
Germany
or send your submission via email to: lac(a)linuxaudio.org
Please do not attach any media files to email submissions, only
provide a URL to where the piece can be downloaded.
Make use of one of the following media formats:
* Media: Audio-CD, DVD, DVD-R, CD-R, Website download
* File formats: aiff, wav, flac, ogg, mp3
* Samplerate: 44.1 or 48 kHz
* Resolution: 16 or 24 bit
* Number of channels: 1 to 8 channels
* Channel format: multi-channel interleaved, multi-mono
Include the following items with your submission (in English):
* A filled-out and signed printout of the form available here:
http://lac.linuxaudio.org/download/lac2008_musicagreement.pdf
The form can be filled out with a computer and printed out afterwards
for signing.
For the printed program and to be published online and on the
conference CD, in continuous text (no table or list please):
* short commentary on the composition(s) (each ca. 150 words)
* short Curriculum Vitae (ca. 100 words)
Deadline for submissions is December 6, 2007, 24:00 UTC
A jury will select the compositions that will be performed/played.
Besides artistic criteria and technical reasons, these criteria apply
for the selection:
Tape pieces or pieces which are performed by the composers themselves
will generally have more chances to get included. If we get more
pieces than we can include in the program, composers who are attending
the conference are preferred.
Terms and conditions for participation can be found in the form
mentioned above. This form includes among other things:
I will receive no fees whether my composition is played or not. GEMA
fees (in case of performance) will be paid by the organizer. The
material I send to the LAC organisation team will not be returned.
Important Dates
----------------
06 Dec 2007: Music submission deadline
21 Dec 2007: Notification of acceptance
28 Feb - 2 Mar 2008: Linux Audio Conference in Cologne
On Sat, 2007-12-01 at 19:32 +0100, David Olofson wrote:
> On Saturday 01 December 2007, Dave Robillard wrote:
> > On Fri, 2007-11-30 at 11:23 +0100, David Olofson wrote:
> > > On Friday 30 November 2007, Krzysztof Foltman wrote:
> > > [...several points that I totally agree with...]
> > > > If you use integers, perhaps the timestamps should be stored as
> > > > delta values.
> > >
> > > That would seem to add complexity with little gain, though I
> > > haven't really thought hard about that...
> >
> > It does have the significant advantage of eliminating the hard upper
> > bound on the range of time that can be present in a buffer (and with
> > 'null' events, eliminates any such limit entirely, ala SMF). More
> > annoying to work with though..
>
> Yeah; someone has to add the "null" events, and the delta nature of
> it, obviously. You could hide that in event handling
> calls/macros/inlines of course, but still...
Yeah - unless there are some compelling cases where splitting the cycle
has a negative side effect, it's not worth it.
-DR-
Hi,
I've just done as much as I can to make sure wcnt-1.26 is working. I
would appreciate it if anybody would try compiling it on other platforms
besides Debian Etch on i686...
http://www.jwm-art.net/wcnt-1.26-test.tar.bz2
...before I upload it to sf.net, etc.
new features in wcnt-1.26:
* uses libsndfile for file I/O - wcnt's first library dependency :)
* several modules interface with LADSPA plugins from the SWH and CAPS
plugin sets (make sure the LADSPA_PATH environment variable is set :)
  glame lowpass and highpass filters,
  bode frequency shifter,
  plate reverbs (mono input & stereo input versions),
  sc1 compressor,
  fast lookahead limiter,
  dc offset remover
* orbit module - implements an orbit fractal which iterates on trigger
input. hopalong, threeply and quadrup orbit types. it also auto-scales
the fractal output after n test iterations. output_x & output_y.
* adsr_scaler - scales individual sections (i.e. entire attack, decay or
release sections) of an adsr module.
// please fwd this to LAU if that's a more appropriate list as I'm not
// subscribed there.
Please reply offlist with reports of compilation problems, etc, to:
james att jwm-art dott net
Regards, cheers,
James