-----Forwarded Message-----
From: Son of Zev <sonofzev(a)labyrinth.net.au>
To: linux-audio-dev(a)music.columbia.edu
Cc: Alsa-dev <
Thanks Vincent.
Your reply is appreciated.
However, I have been involved with the Ardour lists for over two
years. I have spent much time reading about potential problems and
responded to everyone who had the courtesy to reply. I have also
learnt about, and spent much time investigating, problems that have
nothing to do with the production of music, simply to try and help
developers iron out problems. Credit goes to those who have helped,
especially Takashi Iwai, Robert Jonsson, Tommi Ilmonen and a few
others.
If you look back in the archives a couple of years, you'll see a lot
of development went into adding MIDI sync to some software
(especially MusE) to accommodate users like myself who depend on
this sync to make the software compatible with real studios.
BUT. In the last few weeks I have spent much time trying to
configure and compile Ardour (potentially the greatest audio
recording software available to us), but all my responses after I
sent an opinion to the thread "ahem" (concerning Steinberg having
produced, but not released, versions of their commercial software
for our much-loved platform) have been completely ignored... except
this one.
My responses have not been ignorant nor lacking in information. As I
mentioned, I have spent much time learning about the Linux platform
and contributing when I can...
My annoyance is that when I was asked to provide more information, I
did, and no response came. Then, when I tried to resolve it myself,
no further response came either. That's when I could only put two
and two together and see that a response I made to the thread "ahem"
could have been related.
cheers
Allan
On Thu, 2003-02-13 at 00:50, Vincent Touquet wrote:
> I'm not a "real coder" either (so one might argue I shouldn't be on
> linux-audio-dev, but I'm just interested in the discussions), so I think
> I understand the root cause of your grief.
>
> I think if (if) the Ardour developers are _expecting_ quality feedback
> from normal users (not just programmers) at this stage, they should
> provide tarballs of the stable CVS snapshots they want to be tested.
>
> But maybe they want to wait for 1.0 before letting end users test it?
> In that case, maybe you tried to use it too early in its development?
> I think they honestly appreciate your effort to participate, there
> should be no doubt about that, but maybe Ardour is still changing too
> rapidly for you to be able to track it.
>
> I think only the developers of Ardour can clear up this question.
>
> best regards,
> Vincent
Does anyone keep an actual list of required features with motivations?
If not, it might be a good idea to start writing one. We tend to stay
very technical around here, and the required feature set is more or
less something we "maintain" by guarding our own pet features. This
might work for us, but it's probably better to make an official,
prioritized list.
We can use it for XAP of course, and - which is why I had the idea of
writing this post - we could throw it into the GMPI discussion.
BTW, the GMPI list is up now, and there's already some traffic on it.
Consider joining if you haven't had enough of plugin API discussion
already. ;-)
http://www.freelists.org/cgi-bin/list?list_id=gmpi
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
This release includes lots of bugfixes to improve stability and
several new features:
* Added grid and grid-snap features. The grid spacing is
adjustable on each module.
* Added reference tempo (bpm) parameter for the Delay Time grid
calculation. It includes units based on both msec and beats,
which allows for much easier rhythmic manipulation if you know
the bpm (see the sketch after this list).
* Pitch now uses semitones as units
* Adjustable maximum delay time choices (up to 20 seconds)
* Min/max legend text on plots. Thus you can see where you are
zoomed (zoom with Ctrl-Alt-Shift left-drag)
* Changing FFT sizes on the fly now stable
* Fixed many nasty crashing bugs
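As an aside, the tempo-based Delay Time grid boils down to the
standard beats-to-milliseconds conversion; a minimal sketch (the
function name is mine, not FreqTweak's):

    float beats_to_msec(float beats, float bpm)
    {
        // one beat lasts 60000 / bpm milliseconds
        return beats * 60000.0f / bpm;
    }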
http://freqtweak.sourceforge.net/
Happy tweaking,
jlc
Hi all,
I am in the process of revamping my app as well as adding more features.
The problem my app currently has is that the main window is not
resizable, which may cause problems on displays with limited
resolution. I've decided to fix this issue once and for all but am
not sure which path to take. The app in question is RTMix and its
sure which path to take. The app in question is RTMix and its current
layout is split into two parts:
Top with timers and buttons
Bottom with the main notification widget and a bunch of tabs for the
editor and other functionality.
The app looks something like this:
 _________
| buttons |
|  timer  |
|---------|
|  tabs   |
| widgets |
 ---------
Now I am ready to begin working on a resizable version (which should
mostly not be a problem, other than being time-consuming) and am
wondering whether I should separate the two widgets altogether and
therefore have them fully customizable (size-wise). While this
option seems very easy to implement and also attractive, I am
concerned that the app will easily lose its default looks and may
quickly become overwhelming for the first-time user (since it is
designed to provide a performer-computer interface with the least
amount of familiarization required). I would especially appreciate
comments from people who got to use RTMix as to what they would like
to see, but of course I would appreciate anyone else's opinions on
this matter as well.
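To make the first option concrete, here is a minimal sketch of the
kind of layout I have in mind, using a QSplitter so both halves stay
resizable (modern Qt headers; the widgets are placeholders, not
RTMix's actual classes):

    #include <QApplication>
    #include <QSplitter>
    #include <QTabWidget>
    #include <QWidget>

    int main(int argc, char* argv[])
    {
        QApplication app(argc, argv);

        // Vertical splitter: top = buttons/timer, bottom = tabbed widgets.
        QSplitter splitter(Qt::Vertical);
        QWidget* topPanel = new QWidget;          // placeholder: buttons + timer
        QTabWidget* bottomTabs = new QTabWidget;  // placeholder: notification/editor tabs
        splitter.addWidget(topPanel);
        splitter.addWidget(bottomTabs);

        // Give the bottom half the extra space by default, so the app
        // still opens looking like the current fixed layout.
        splitter.setStretchFactor(0, 0);
        splitter.setStretchFactor(1, 1);

        splitter.resize(640, 480);
        splitter.show();
        return app.exec();
    }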
The other option I've been thinking of is to move the two widgets
side-by-side, rather than having one over the other, which would
certainly make maximizing the window easier and would utilize the
desktop space more efficiently. However, the app also has a bunch of
deployable floating sub-windows, so this may not prove to be the
best solution, and would probably look pretty useless in a
relatively small window.
Finally, I can simply leave the app as it is but then the notification
interface (the bottom part) as well as other tabbed widgets of the
bottom part (such as the script editor) will not be as efficient in
utilizing the desktop space as they could possibly be.
I also thought of having the two widgets designed in such a way that
they can be "glued" together (i.e., XMMS-like) or separated as
needed, but I am afraid that this option may take too much effort,
since I would then have to get rid of the default window decorations
(in order to make them "gluable") and make my own custom-designed
window handles, which seems too much of a nuisance to implement.
I am sure someone might have yet another idea regarding this issue that
I am simply unable to think of so I would greatly appreciate your input
on this matter.
The bottom line is that I am looking to make this app as flexible as
possible in terms of desktop space utilization while sacrificing the
minimum amount of the "standardization" it currently has to offer.
Your help is greatly appreciated! BTW I am using Qt (if that becomes
relevant at some point).
P.S. I have a bunch of screenshots on my site, but the network is
currently screwed up on the University campus so it may end up not being
available for the next 8-24 hrs. Hence, I am unable to give you the
screenshots. However, you may wanna try the URL in my sig -- who knows,
maybe the network will be up by the time this hits your inboxes :-).
Again thanks for your help! Sincerely,
Ivica Ico Bukvic, composer, multimedia sculptor,
programmer, webmaster & computer consultant
http://meowing.ccm.uc.edu/~ico
============================
"To be or not to be" - Shakespeare
"To be is to do" - Socrates
"To do is to be" - Sartre
"Do be do be do" - Sinatra
"2b || ! 2b" - ?
"I am" - God
I know there are a few pythonistas around these parts...
Are any of you going to PyCon? I am!
Would love to meet other linux-audio-dev regulars and lurkers.
http://www.python.org/pycon/
--
Paul Winkler
http://www.slinkp.com
Look! Up in the sky! It's THE FEMINIST!
(random hero from isometric.spaceninja.com)
Hi all,
New stuff. Good.
* you can now use plugins with unequal numbers of input and output ports.
Any excess ports get exposed as extra jack ports with names like
"amplif_1-1_i2", of the form
"<shortened plugin name>_<rack position>-<plugin copy>_{i,o}<port>"
* wet/dry controls for the output of each plugin
* logarithmic controls
* much more resilient file loader
http://pkl.net/~node/jack-rack.html
Bob
--
Bob Ham <rah(a)bash.sh>
Hello everybody,
I'm one of the PTAF authors and I've just subscribed to
this list, seeing that the standard is being discussed here.
I will try to answer your questions and collect feedback.
I apologize for this long mail.
> * Three states; created, initialized and activated.
> This may be useful if plugins have to be instantiated
> for hosts to get info from them. Why not just provide
> metadata through the factory API?
I preferred to keep the factory API quite simple. In most
cases, the factory would produce only one kind of plug-in.
Detaching the plug-in capabilities from the plug-in itself
may become a pain. With factory-provided metadata and only
two states, the trivial solution is to instantiate the
plug-in internally to get its properties. If instantiation
brings it directly into the Initialized state, it will
likely increase the overhead.
There is also an advantage when the plug-in is a wrapper,
as the factory can just reflect the contents of a directory,
possibly filtered and checked.
Lastly, plug-in properties may depend on contextual data,
like OS or host versions, date, CPU, environment variables,
etc. - things sometimes useful for copy protection.
Hmm, that makes me think about the issue of mutual
recursion in property checking (host-plugin-host-plugin,
etc.) leading to infinite loops.
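To illustrate the "instantiate internally" path mentioned above, a
minimal sketch (names are hypothetical, not the actual PTAF factory
API):

    // The Created state is cheap; init() would move the plug-in to
    // the expensive Initialized state, which this query never reaches.
    struct Plugin {
        virtual ~Plugin() {}
        virtual int num_audio_inputs() const = 0;
    };

    struct Factory {
        virtual ~Factory() {}
        virtual Plugin* create() = 0;  // returns a plug-in in Created state

        // Trivial metadata query: create, ask, destroy.
        int query_num_audio_inputs()
        {
            Plugin* p = create();
            const int n = p->num_audio_inputs();
            delete p;
            return n;
        }
    };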
> * Assuming that audio and sequencer stuff is in different
> threads, and even have plugins deal with the sync
> sounds like a *very* bad idea to me.
I think there is a misunderstanding here. The "Sequencer"
thread is not a thread sending notes or events or whatever
related to the sequence. It is the generic thread for the
host, being not the GUI thread nor the audio thread. The
word "Sequencer" is badly choosen, someone already told me
that too. If anyone has a better idea for its name...
Of course I'm 100% for having audio and sequencing in the
same thread. That's why there is only one opcode to process
audio and sequence data at once. However... some VST hosts
actually dissocitate them, allowing process() function to
overlap with setParameter() or ProcessEvents(), which is
1) confusing 2) source of bugs, hard to track 3) difficult
to synchronize.
> * GUI code in the same binaries is not even possible on
> some platforms. (At least not with standard toolkits.)
I'm personally not familiar with Linux GUI toolkits (I'm
confused by Gnome, KDE, X, Berlin, etc.; sorry for my
ignorance); is there a problem with them? In that case,
wouldn't it be possible to launch the GUI module from
the plug-in? What would you recommend?
> Agreed, this is not ideal; Ohm Force are a host-based
> processing shop, so haven't thought about these issues.
> The TDM people will obviously have different ideas.
Yes, we have thought about it. TDM is not a problem here
because Digidesign is absolutely opposed to wrappers or
non-Digi-controlled standards; it is part of the SDK
license agreement :). UAD cards are a more interesting
case because their plug-ins show up as VSTs. It seems that
the current code architecture isn't an issue for them.
Actually this is mostly a design choice. If the plug-in
must assume that its GUI and DSP code may run in
different locations (on two computers, for example), both
the API and the plug-ins should be designed to take that
into account, especially instantiation and communication
between both parts. The current design is oriented toward
a single object, making the API simpler and allowing
flexible and hidden communication between the GUI and the
plug-in core. Anyway, the idea is certainly interesting.
> * Using tokens for control arbitage sounds pointless. Why
> not just pipe events through the host? That turns GUI/DSP
> interaction into perfectly normal control routing.
Several APIs (MAS for example, and recent VST extensions)
use a similar system; it works like a charm. I don't
understand how just sending events to the host would
work. If several "parameter clients" act at the same
time, how would you prevent the parameter from jumping
continuously?
One solution is to make the host define implicit priority
levels for these clients, for example 1 = automation,
2 = remote controls, 3 = main GUI. This is good enough
for parameter changes arriving simultaneously, but is
not appropriate for "interleaved" changes.
Another solution is to use tokens. This can act like the
first solution, but also allows the host to know that the
user is holding a finger on a knob, maintaining it at a
fixed position, even though the plug-in doesn't have to
send any movement information. This is essential when
recording automation in "touch" mode.
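A minimal sketch of the token mechanism (hypothetical names; the
real PTAF calls differ):

    // Each parameter has at most one token holder at a time.
    struct ParamToken {
        int holder;  // client id: 1 = automation, 2 = remote,
                     // 3 = main GUI; -1 = free
    };

    // The GUI requests the token when the user grabs a knob...
    bool acquire(ParamToken& t, int client)
    {
        if (t.holder != -1 && t.holder != client)
            return false;  // another client is controlling the parameter
        t.holder = client;
        return true;
    }

    // ...and while it holds the token, the host knows automation must
    // be overridden, even if no movement events are sent (the user may
    // simply be holding the knob still).
    void release(ParamToken& t, int client)
    {
        if (t.holder == client)
            t.holder = -1;
    }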
> * Why use C++ if you're actually writing C? If it won't
> compile on a C compiler, it's *not* C.
It was for the structured naming facilities offered by C++
(namespaces). It makes it possible to use short names when
in the right scope, and to be sure that these names won't
collide with other libraries.
I agree that it's not essential and everything could be
written in C.
> * I think the VST style dispatcher idea is silly
Performance of the calls is not a critical issue here.
In the audio thread, there is roughly only one call per
audio block. And the final plug-in or host developer will
likely use a wrapper, adding more overhead.
But it's the easiest way to:
1) Ensure bidirectional compatibility across API versions.
2) Track every call when debugging. By having a single entry
point you can monitor call order, concurrency, called
functions, etc.
The system is simple and can be wrapped to your solution at
a very low cost (just bounds checking and a jump through a
table).
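For illustration, a minimal single-entry-point dispatcher of this
kind (the opcodes and signature are made up for the sketch):

    enum Opcode { OP_INIT = 0, OP_PROCESS, OP_SET_PARAM, OP_COUNT };

    typedef long (*OpFunc)(void* plugin, void* args);

    static long op_init(void*, void*)      { return 0; }
    static long op_process(void*, void*)   { return 0; }
    static long op_set_param(void*, void*) { return 0; }

    static const OpFunc op_table[OP_COUNT] =
        { op_init, op_process, op_set_param };

    // Single entry point: one place to log call order and concurrency,
    // and to reject opcodes added by a newer API version.
    long dispatch(void* plugin, int opcode, void* args)
    {
        if (opcode < 0 || opcode >= OP_COUNT)
            return -1;  // unknown opcode: fail gracefully
        return op_table[opcode](plugin, args);
    }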
> * Is using exceptions internally in plugins safe and portable?
It depends on your compiler. To be specific, the use of
exceptions is allowed within a call only if it has no side
effects visible to the caller, whatever they are. I think
most C++ compilers comply with this rule; it seems to be
the case for all the compilers I've worked with.
The sentence about exceptions came after many plug-in
developers' questions on mailing lists: "can I safely use
exceptions in my plug-in?" The answer generally given was
"Yes, but restricted to certain conditions; I don't know
them well enough, so I wouldn't recommend their use."
In the PTAF API, we wanted to clarify this point.
And no, C++ is not required for PTAF plug-ins, because the
API defines a communication protocol at the lowest possible
level. C/C++ syntax is just an easy and common way to
write it down.
> * UTF-8, rather than ASCII or UNICODE.
Actually it IS Unicode, packed in UTF-8. This is the
best of both worlds: strings restricted to the ASCII
character set (the lower part of Latin-1) are stored in
UTF-8 exactly as in "unpacked" form, while "exotic"
characters are represented using multi-byte sequences.
> * Hosts assume all plugins to be in-place broken. Why?
> * No mix output mode; only replace. More overhead...
There are several reasons for these specifications:
1) Copy/mix/etc. overhead on plug-in pins is *very* light
on today's GHz computers. Think of computers in 5 or 10
years... This would have been an issue if the plug-ins
were designed to be building blocks for a modular synth,
requiring hundreds of them per effect or instrument.
However this is not the goal of PTAF, which is intended
to host mid- to coarse-grained plug-ins. A modular synth
API would be completely different.
2) This is the most important reason: programmer failure.
Having several similar functions differing only by += vs
= is massively prone to bugs, and this has been confirmed
in commercial products released by major companies.
Implementing only one function makes things clear and
doesn't require tricky copy/paste or semi-automatic
code generation.
3) Why no in-place processing? Same reasons as above,
especially when using block processing. Moreover, allowing
"in-place" modes can lead to weird configurations like
crossed channels, to say nothing of multi-pin /
multi-channel configurations with different numbers
of inputs and outputs. Also, leaving the input buffers
intact is useful for implementing internal bypass or a
dry mix.
4) One function with one functioning mode is easier to
test. When developing a plug-in, your test hosts probably
won't use every function in every mode, so there are bugs
which cannot be detected. In concrete terms, this is a
real problem with VST.
I think it's a good idea for an API specification to guard
against programming errors when performance is not
affected much. Let's face it, none of us can avoid bugs
when coding even simple programs. End users value
reliable software, for sure.
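To make reason 2 concrete, a sketch of the single replace-only
convention (the function names are mine):

    #include <cstddef>

    // The plug-in always overwrites its outputs; there is no "+=" variant.
    void process_replace(const float* in, float* out, std::size_t nframes)
    {
        for (std::size_t i = 0; i < nframes; ++i)
            out[i] = in[i] * 0.5f;  // example DSP: a fixed gain
    }

    // When a mix is wanted, the *host* accumulates into its own bus,
    // so every plug-in is exercised through the same single code path.
    void mix_into_bus(const float* plugin_out, float* bus, std::size_t nframes)
    {
        for (std::size_t i = 0; i < nframes; ++i)
            bus[i] += plugin_out[i];
    }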
> * Buffers 16 byte aligned. Sufficient?
I don't know. We could extend it to 64? From what I've
read about the 64-bit architecture of upcoming CPUs,
128-bit registers are still the widest. Enforcing 16-byte
alignment ensures that SIMD instructions can be used, and
with maximum efficiency. SSE/SSE2 still works when data is
not 16-byte aligned, but seems slower; however, Altivec
produces a bus error, and that instruction set is the only
way to get decent performance on PPCs.
So for me, 16 bytes is a minimum. It could be increased,
but to which size? It requires a visionary here ;)
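For reference, a sketch of how a host might guarantee the alignment
on POSIX systems (Windows hosts would use _aligned_malloc() or
similar):

    #include <cstdlib>

    // Returns a 16-byte-aligned buffer suitable for SSE or Altivec,
    // or a null pointer on failure. Release it with free().
    float* alloc_aligned_buffer(std::size_t nframes)
    {
        void* p = 0;
        if (posix_memalign(&p, 16, nframes * sizeof(float)) != 0)
            return 0;
        return static_cast<float*>(p);
    }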
> * Events are used for making connections. (But there's
> also a function call for it...)
This is the PTAF feature which embarrasses me... Having
two ways to control one "parameter" is awful in my
opinion.
I'm thinking about replacing all these redundant functions
with just one, callable only in the Initialized state. It
would be kind of similar to the audio processing opcode,
but would only receive events, which wouldn't be dated.
> * Tail size. (Can be unknown!)
Yes. Think about feedback delay lines with resonant
filters or waveshapers, etc. The tail may be only a few
ms or infinite. The tail size is not essential because
the plug-in can report whether it has generated sound at
each processing call. It is rather a hint for the host.
> * "Clear buffers" call. Does not kill active notes...
I can't remember exactly why I wrote that... A fair
requirement would be "can be called only when there are
no active notes any more".
> * Ramping API seems awkward...
What would you propose? I designed it to let plug-in
developers bypass the ramp correctly, because chances are
that ramps would be completely ignored by many of them.
Indeed, it is often preferable to smooth transitions
according to internal plug-in rules.
> * It is not specified whether ramping stops automatically
> at the end value, although one would assume it should,
> considering the design of the interface.
Hmmm... I'm starting to see your point. You mean that it
is impossible for the plug-in to know whether the block
ramp is part of a bigger ramp extending over several
blocks?
> * Note IDs are just "random" values chosen by the sender,
> which means synths must hash and/or search...
Yes. It is easy to implement and still fast as long as
the polyphony is not too high, but I agree, there is
probably a more convenient solution. You were talking
about VVIDs... what is the concept?
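For what it's worth, the lookup implied by sender-chosen note IDs
looks roughly like this (a sketch with hypothetical names):

    #include <cstdint>
    #include <unordered_map>

    struct Voice { /* oscillator and envelope state... */ };

    class VoiceMap {
    public:
        // Every incoming note event pays for a hash lookup, since
        // the ID is an arbitrary value chosen by the sender.
        Voice* find(std::int32_t note_id)
        {
            auto it = voices_.find(note_id);
            return it == voices_.end() ? nullptr : &it->second;
        }

        Voice& note_on(std::int32_t note_id)  { return voices_[note_id]; }
        void   note_off(std::int32_t note_id) { voices_.erase(note_id); }

    private:
        std::unordered_map<std::int32_t, Voice> voices_;
    };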
> * Hz is not a good unit for pitch...
Where have you read that pitch is expressed in Hz?
The pitch unit is the semitone, relative or absolute
depending on the context, but always logarithmic
(compared to a Hz scale).
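In other words, converting between the two scales is a one-liner;
sketched here using the MIDI-style reference of note 69 = 440 Hz
(the actual PTAF reference point may differ):

    #include <cmath>

    float semitones_to_hz(float pitch)
    {
        return 440.0f * std::pow(2.0f, (pitch - 69.0f) / 12.0f);
    }

    float hz_to_semitones(float freq)
    {
        return 69.0f + 12.0f * std::log2(freq / 440.0f);
    }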
> * Why both pitch and transpose?
The final note pitch is made of two parameters: Base Pitch
and Transpose. Both are of the Pitch family (semitones).
Base Pitch is a convenient way to express the note
frequency, a drum hit ID, or any other way to identify a
note within the timbral palette of an instrument. This
solution refers to a debate on the vstplugins mailing list
some months ago (I think it was that list; correct me if
I'm wrong).
If you want only a drum number, just round Base Pitch to
the nearest integer and do what you want with the
fractional part.
If you want to synthesize a piano with its "weird" tuning,
remap the Base Pitch to this tuning to get the real pitch
in semitones, possibly using a table and interpolation.
If you want to make an instrument imposing a specific
micro-tuning, remap Base Pitch as you want, by stretching
or discretization.
These are examples of Base Pitch use, giving backward
compatibility with MIDI note IDs.
Now you can use the Transpose information for pitch bend,
here on a per-note basis (unlike MIDI, where you had to
isolate notes into channels to pitch-bend them
independently). Thus the final pitch can be calculated
very easily, and switching/crossfading between log-spaced
wavetables remains fast.
I think we can cover most cases by keeping these two
pieces of information separate.
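A sketch of how the two components combine (the names are mine):

    #include <cmath>

    // Both values are in semitones; Transpose carries per-note pitch bend.
    float final_pitch(float base_pitch, float transpose)
    {
        return base_pitch + transpose;
    }

    // A drum instrument ignores the bend and rounds Base Pitch to pick
    // a hit, keeping the fractional part for whatever it likes.
    int drum_hit_id(float base_pitch)
    {
        return static_cast<int>(std::lround(base_pitch));
    }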
> * Why [0, 2] ranges for Velocity and Pressure?
As Steve stated, 1.0 is the medium, default intensity:
the middle of the parameter course as specified in the
MIDI specs. But I don't get why it is "wrong"? IMHO it
isn't more wrong than the 0..64..127 MIDI scale; there is
just a constant factor between the two representations.
> * TransportJump: sample pos, beat, bar. (Why not just ticks?)
But musically speaking, what is a tick, and what is it
relative to? If you want to locate a position in a piece,
you need two pieces of information: the absolute time and
the musical time, if relevant. The bar has a strong
rhythmic value, and referring to it is important. Beat is
the musical position within the bar, measured... in beats.
The tick trick is not needed since the beat value is
fractional (near-infinite accuracy).
> * Parameter sets for note default params? Performance
> hack - is it really worth it?
Good question.
> * Why have normalized parameter values at all?
> (Actual parameter values are [0, 1], like VST, but
> then there are calls to convert back and forth.)
The normalized parameter value is intended to represent
something like the potentiometer course, in order to have
significant sound variations along the whole range, along
with constant control accuracy. This is generally what the
user wants, unless they are a masochist. Who wants to
control IIR coefficients directly? (OK, this example is a
bit extreme, but you get my point.) Automation curves also
make more sense; they are no longer concentrated in 10% of
the available space.
So parameter remapping is probably unavoidable, whether
done by the host or by the plug-in. Let the plug-in do it;
it generally knows better what to do :)
Making the parameters available in a "natural" form is
intended to facilitate numerical display (48% of the knob
course is not relevant information for a gain display, but
-13.5 dB is) and copy/paste operations, as well as
parameter exchange between plug-ins.
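For example, a gain plug-in might map the normalized course to dB
internally (a sketch; the range is arbitrary):

    #include <cmath>

    const float kMinDb = -60.0f;
    const float kMaxDb = 12.0f;

    // Linear in dB is an exponential taper in linear gain, which keeps
    // control accuracy roughly constant along the knob course.
    float normalized_to_db(float x)  // x in [0, 1]
    {
        return kMinDb + x * (kMaxDb - kMinDb);
    }

    float db_to_normalized(float db)  // for display and copy/paste
    {
        return (db - kMinDb) / (kMaxDb - kMinDb);
    }

    float db_to_gain(float db)  // applied to the samples
    {
        return std::pow(10.0f, db / 20.0f);
    }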
> * The "save state chunk" call seems cool, but what's
> the point, really?
This is not mandatory at all; there are a lot of plug-ins
which can live without it. It is intended to store the
plug-in instance state when you save the host document.
Let's take a simple example: how would you make a sampler
without it? You need to store the sample information
somewhere (the data itself or pathnames). It is not
possible to do this in global storage, like a file pointed
to by an environment variable, the Windows registry, Mac
preference files or whatever.
Actually the content is up to you: GUI data (current
working directory, tab index, skin selection, etc.),
control data (LFO phases, random generator states, MIDI
mappings, parameters which cannot be represented the
classic way...), audio data (samples, even delay line
contents)... everything you find useful for the user,
sparing him repetitive operations each time he loads a
project.
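A sketch of the sampler example, with a deliberately naive chunk
format (the real content and layout are entirely up to the plug-in):

    #include <sstream>
    #include <string>
    #include <vector>

    // The host stores the returned bytes opaquely in its document and
    // hands them back verbatim when the project is reloaded.
    std::string save_chunk(const std::vector<std::string>& sample_paths)
    {
        std::ostringstream out;
        for (const std::string& path : sample_paths)
            out << path << '\n';  // one pathname per line
        return out.str();
    }

    std::vector<std::string> load_chunk(const std::string& chunk)
    {
        std::vector<std::string> paths;
        std::istringstream in(chunk);
        for (std::string line; std::getline(in, line); )
            paths.push_back(line);
        return paths;
    }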
> * Just realized that plugin GUIs have to disconnect
> or otherwise "detach" their outputs, so automation
> knows when to override the recorded data and not.
> Just grabbing a knob and hold it still counts as
> automation override, even if the host can't see any
> events coming... (PTAF uses tokens and assumes that
> GUI and DSP parts have some connections "behind the
> scenes", rather than implementing GUIs as out-of-
> thread or out-of-process plugins.)
The token is managed by the host and requested explicitly
by the plug-in GUI. There is no need for hidden
communication here. Or am I missing something?
-- Laurent
==================================+========================
Laurent de Soras | Ohm Force
DSP developer & Software designer | Digital Audio Software
mailto:laurent@ohmforce.com | http://www.ohmforce.com
==================================+========================
Jack Audio Connection Kit 0.50.0 Released
The Jack team is pleased to announce the release of version 0.50.0 of
the Jack low-latency audio server. Jack allows applications to share
data and audio devices in synchronous operation, and has already seen
a year of hard testing and refinement. The API has stabilized for
the foreseeable future, although backwards compatibility is not
guaranteed.
More information on Jack is available at the group's web site,
[1]http://jackit.sourceforge.net/.
Source packages for Jack 0.50.0 are available [2]here.
What's new:
* Audio block sizes are fixed during runtime so clients can have more
efficient algorithms (see the sketch after this list).
* No partial blocks will be delivered. Again for efficient client
algorithms.
* Thread scheduling hidden from clients for better portability.
* Cleanly compiles with gcc-3.3.
* Works on 64-bit platforms.
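For client authors, the fixed-block-size guarantee means per-block
resources can be sized once at startup; a sketch (using the client
calls of this era as I recall them, so check your headers):

    #include <jack/jack.h>
    #include <cstdio>

    // nframes is now the same on every call, and no partial blocks are
    // delivered, so FFT plans and scratch buffers never need resizing.
    static int process(jack_nframes_t nframes, void* /*arg*/)
    {
        // DSP work for exactly nframes frames goes here.
        return 0;
    }

    int main()
    {
        jack_client_t* client = jack_client_new("fixed_block_demo");
        if (!client)
            return 1;

        jack_set_process_callback(client, process, 0);

        // Allocate per-block resources once, using the fixed size.
        std::printf("block size: %u frames\n", jack_get_buffer_size(client));

        jack_activate(client);
        /* ... run until done ... */
        jack_client_close(client);
        return 0;
    }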
Work is ongoing to improve transport control.
Developers and users interested in Jack should sign up to
[4]jackit-devel, our mailing list.
References
1. http://jackit.sourceforge.net/
2. http://jackit.sourceforge.net/releases/current/
3. http://jackit.sourceforge.net/apps/
4. http://jackit.sourceforge.net/lists/