>
>are different versions of gcc3 ABI-compatible?
>
AFAIK all versions of gcc3 are compatible, except version 3.0, which had a
bug.
- Stefan
>On Fri, Oct 18, 2002 at 06:47:15 +0000, Stefan Nitschke wrote:
> > -O3 with C is broken, I got an endless loop!
>
>What gcc version? What flags did you use with C?
>
I used gcc 3.2 as shipped with SuSE 8.1.
Today I changed the initial values to a=0.5; b=0.0000001; x=0.1; and now
-O3 works!?? Here are the results:
C: -O3 -march=pentium
user 0m11.380s
C++: -O3 -march=pentium
user 0m11.960s
BTW, to my surprise, today I was able to use ardour without it freezing my
machine as it always did last week. I didn't change the system and used the
same binaries.
I have never seen such a random problem on a Linux box before.
>The test I did had the C code using a struct, and I used the . syntax for
>C++ method calls, FWIW. I'll dig out the code in a minute.
>
>I think it was loop unrolling that was crappy in C++.
>
That would be a bad thing.
- Stefan
> Things like jack have to be graphically wrapped or hidden too, no
> scrolling text windows of xruns. The occasionally discussed jack session
> saving gizmo would be a knock-dead feature.
And any offering to the general public that doesn't contain this feature
will probably end up just plain dead. Be patient.
Tom
>
>erm, sorry, but why not use pointers?
>
Just out of curiosity I made a benchmark comparing C and C++ with
gcc3. I don't have a clue about x86 assembler, so I made a measurement.
Here is the C code (not really useful, as real code would need a
struct and a pointer operation to call the filter() function) and the
C++ code.
Both "simulate" a low-pass filter and are compiled with:
gcc -O2 -march=pentium -o filter filter.xx
-O3 with C is broken, I got an endless loop!
---------
double x1, y1, a, b;

const double filter(const double x)
{
    register double y;
    y = a*(x + x1) - b*y1;
    x1 = x;
    y1 = y;
    return y;
}

int main()
{
    double x = 1;
    int i;
    x1 = y1 = 0;
    a = b = 0.5;
    for (i = 0; i < 1000000000; ++i) {
        x = filter(x);
    }
}
---------
class LowPass {
public:
    LowPass() { x1 = y1 = 0; a = b = 0.5; }
    ~LowPass() {}
    const double filter(const double x);
private:
    double x1, y1, a, b;
};

inline const double LowPass::filter(const double x)
{
    register double y;
    y = a*(x + x1) - b*y1;
    x1 = x;
    y1 = y;
    return y;
}

int main()
{
    //LowPass* LP = new LowPass();
    LowPass LP;
    double x = 1;
    for (int i = 0; i < 1000000000; ++i) {
        //x = LP->filter(x);
        x = LP.filter(x);
    }
}
---------
The results on my AthlonXP machine are:
C++ with member:
real 0m11.847s
user 0m11.850s
sys 0m0.000s
C++ with new() and pointer:
real 0m12.337s
user 0m12.330s
sys 0m0.000s
C:
real 0m16.673s
user 0m16.670s
sys 0m0.000s
Well, I will stay with pointer-less C++ :-)
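For reference, the struct-and-pointer C variant mentioned above would look
roughly like this (a sketch only; it was not among the timed versions):
---------
typedef struct {
    double x1, y1, a, b;
} lowpass_t;

/* Same filter, but the state lives in a struct passed by pointer. */
double filter_s(lowpass_t *f, double x)
{
    double y = f->a * (x + f->x1) - f->b * f->y1;
    f->x1 = x;
    f->y1 = y;
    return y;
}

int main(void)
{
    lowpass_t lp = { 0.0, 0.0, 0.5, 0.5 };  /* x1, y1, a, b */
    double x = 1;
    int i;
    for (i = 0; i < 1000000000; ++i) {
        x = filter_s(&lp, x);
    }
    return 0;
}
---------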
- Stefan
On Friday 18 October 2002 04:25, Patrick Shirkey wrote:
> My opinion is that there are not enough people working on the
> promotional side of ALSA and Linux Audio.
ALSA/LAD is too geeky and too die-hard techno hardcore to appeal to
anyone but geeks. IMHO music geeks are the worst type of geeks and
Linux geeks are the worst type of computer geeks. This isn't an
ALSA/LAD problem - it's a general Linux image problem. Out with the
beards and fusty academia, kernel hacking and tweakery and in with the
primary colours and the big square buttons to get The Kids into it all.
The Linux desktop is now sufficiently advanced that this should all be
possible.
I argued all this with a guy from Newsforge when we showed at Linux Expo
last week. The article has typos all over it (including in my name, of
course) and it's a little light on quotes, but:
http://newsforge.com/newsforge/02/10/14/196240.shtml?tid=23
The point is that selling Linux Audio isn't just about Linux Audio -
it's about selling the whole desktop. It's about letting people know
that if they want to make music they can just get on and make music.
People shouldn't have to have a degree to install music software and to
start using it. This problem is bigger than LAD/ALSA or even AGNULA,
and it crosses that distasteful line between hobbyist twiddling and
research and big business. It treads on a lot of toes.
Yes, this is a troll and not a very original one at that, but it's time
that there was a clear distinction between Linux Sound/Audio and Linux
for Music. The latter has a clearly defined marketplace, the former
doesn't.
B
> -----Original Message-----
> From: nick [mailto:nixx@nixx.org.uk]
>
> > I think gcc is in general not the best choice when you want to have
> > highly optimized code. I had no problems with C++ so far. You should
> > avoid using pointers whenever possible and use references instead.
> > RTSynth is written in C++ and it performs quite well, I think...
> >
> > - Stefan
>
> erm, sorry, but why not use pointers?
It's dangerous... null pointers, memory leaks etc. The tendency is not to use
pointers unless absolutely necessary...
As for the context above, I don't think it has anything to do with
performance (it should be the same).
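For illustration (process_block is a made-up helper name), a reference
parameter rules out the null case at the call site, while a pointer forces
the callee to decide what a null argument means; this assumes the LowPass
class from the C++ benchmark earlier in this digest is in scope:

// Illustration only; not from any of the programs discussed above.
void process_block(LowPass& lp, float* buf, int n)
{
    // A reference cannot be null, so no check is needed here.
    for (int i = 0; i < n; ++i)
        buf[i] = (float) lp.filter(buf[i]);
}

void process_block(LowPass* lp, float* buf, int n)
{
    if (!lp) return;   // the pointer version has an extra failure mode
    for (int i = 0; i < n; ++i)
        buf[i] = (float) lp->filter(buf[i]);
}

Performance-wise the two should indeed compile to the same code.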
erik
Oh yeah, I forgot!
And there's another question I _really_ want to know the answer to:
Until we have such an instrument plugin API, what is the
right way to implement the system
(30 softsynths working together)
with what we have now?
I mean a bunch of software synths, /dev/midi -> /dev/dsp.
Can I use these together right now?
Is there a right way to control them all via
a single sequencer and to get their output
into one place?
nikodimka
--- nick wrote:
> Hi
>
> IMO running each synth in its own thread with many synths going is
> definitely _not_ the way forward. The host should definitely be the only
> process, much as VST, DXi, Pro Tools et al. work.
>
> No, there is no real "instrument" or "synth" plugin API. But since my
> original post I have been brewing something up. It's quite VST-like in
> some ways, but I've been wanting to make it more elegant before
> announcing it. It does, however, work, and is totally C++ based ATM. You
> just inherit the "Instrument" class and voila. (OK, so it got renamed
> along the way)
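
A minimal sketch of what such an "Instrument" base class might look like
(the names below are guesses for illustration, not the actual
work-in-progress API):

// Guesswork for illustration only.
// The host owns the audio thread and hands each plugin a block request
// plus the MIDI-style events for that block; the plugin just renders audio.
struct MidiEvent {
    unsigned int frame;                 // offset within the current block
    unsigned char status, data1, data2; // raw MIDI bytes
};

class Instrument {
public:
    virtual ~Instrument() {}
    virtual void setSampleRate(int rate) = 0;
    // Render nframes of audio, consuming this block's events.
    virtual void process(float* out, int nframes,
                         const MidiEvent* events, int nevents) = 0;
};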
>
> Although in light of Tommi's post (Mustajuuri) I have to reconsider
> working on my API. My only problem with Mustajuuri is its dependence on
> Qt (if I'm not mistaken), sorry.
>
> If people would like to see my work-in-progress, I could definitely use some
> feedback ;-)
>
> This discussion is open!
>
> -Nick
>
> On Thu, 2002-10-17 at 20:53, nikodimka wrote:
> >
> > Guys,
> >
> > This answer appeared just after I decided to ask the very same question.
> >
> > Is it true that there is no _common_ "instrument" or "synth" plugin API on Linux?
> >
> > Is it true that there is no similar mechanism for out-of-process instruments?
> >
> > I see that there are a few possible plugin APIs:
> > -- MusE's LADSPA extensions
> > -- mustajuuri plugin
> > -- maybe there's some more (MAIA? OX?)
> > -- I remember Juan Linietsky working on binding a sequencer to softsynths,
> >    but I don't remember hearing anything about the results
> >
> > So can anyone _please_ answer:
> >
> > What is the right way to use multiple (e.g. thirty)
> > softsynths together simultaneously with one host?
> > I mean working completely inside my computer,
> > with just one MIDI keyboard (or even none) as input,
> > so that all the synthesis, mixing and processing goes on inside,
> > and one audio channel is sent out to a sound card.
> >
> >
> > thanks,
> > nikodimka
> >
> >
> > =======8<==== Tommi Ilmonen wrote: ===8<=================
> >
> > Hi.
> >
> > Sorry to come in very late. The Mustajuuri plugin interface includes all
> > the bits you need. In fact I already have two synthesizer engines under
> > the hood.
> >
> > With Mustajuuri you can write the synth as a plugin and the host is only
> > responsible for delivering the control messages to it.
> >
> > Alternatively you could write a new voice type for the Mustajuuri synth,
> > which can lead to smaller overhead ... or not, depending on what you are
> > after.
> >
> > http://www.tml.hut.fi/~tilmonen/mustajuuri/
> >
> > On 3 Jul 2002, nick wrote:
> >
> > > Hi all
> > >
> > > I've been scratching my head for a while now, planning out how I'm going
> > > to write amSynthe (aka amSynth2).
> > >
> > > Ideally I don't want to be touching low-level stuff again, and it makes
> > > sense to write it as a plugin for some host. Obviously in the Win/Mac
> > > world there's VST/DXi/whatever - but that doesn't really concern me as I
> > > don't use 'em ;) I just want to make my music on my OS of choice..
> > >
> > > Now somebody please put me straight here - as far as I can see, there's
> > > LADSPA and JACK (and MusE's own plugins?). Now, I'm under the
> > > impression that these only deal with the audio data - only half of what I
> > > need for a synth. Or can LADSPA deal with MIDI?
> > >
> > > So how should I go about it?
> > > Is it acceptable to (for example) read the MIDI events from the ALSA
> > > sequencer in the audio callback? My gut instinct is no, no, no!
> > >
> > > Even if that's feasible with the ALSA sequencer, it still has problems -
> > > say the host wanted to "render" the `song' to an audio file - using the
> > > sequencer, surely it would have to be done in real time?
> > >
> > > I just want to get on, write amSynthe and then everyone can enjoy it,
> > > but this hurdle is bigger than it seems.
> > >
> > > Thanks,
> > > Nick
> > >
> > >
> >
> > Tommi Ilmonen Researcher
> > >=> http://www.hut.fi/u/tilmonen/
> > Linux/IRIX audio: Mustajuuri
> > >=> http://www.tml.hut.fi/~tilmonen/mustajuuri/
> > 3D audio/animation: DIVA
> > >=> http://www.tml.hut.fi/Research/DIVA/
> >
>From: Steve Harris <S.W.Harris(a)ecs.soton.ac.uk>
>
>Hmmm... My experiments with C++, DSP code and gcc (a recent 2.96) did not
>turn out very well. For some reason the optimiser totally chokes on C++
>code. I only tried one routine, and I'm no C++ expert, so it's possible I
>screwed something up, but it did not look encouraging. I will revisit this
>and also try gcc3, which has much better C++ support IIRC.
>
>- Steve
I think gcc is in general not the best choice when you want to have highly
optimized code. I had no problems with C++ so far. You should avoid using
pointers whenever possible and use references instead. RTSynth is written
in C++ and it performs quite well, I think...
- Stefan
OK, here I go ranting about the same thing again... Can't help it :-),
tired of fighting the issue. :-(
Here's a simple proposal that I have been thinking about (even though my
computing skills are not so good when it comes to system stuff). Please
tell me whether this is a good idea:
There should be just a simple sound daemon running 24/7, constantly
reading from the /dev/dsp inputs and writing into the outputs with a
small circular buffer that keeps on recycling itself (e.g. 64 bytes, to
allow for low-latency work if needed). Then, when an app that does not
care at all what is behind the /dev/dsp resource addresses /dev/dsp, it
gets rerouted to the appropriate buffers provided by the sound daemon.
This way, any number of apps could read from and write into the same
buffer (writing being a bit trickier, obviously), with everything
down-mixed in software. If an app works with larger buffers, it simply
reads from the buffer for longer and, by the same token, writes into the
audio buffer as needed (the daemon would adapt the buffer to the app's
needs by reading its OSS or ALSA request for the audio buffer).
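To make the down-mixing part concrete, here is a toy sketch of the shared
ring buffer idea (illustrative only; a real daemon would also need format
conversion, read/write synchronisation and the actual /dev/dsp
interception):

#define RING_FRAMES 64              /* small ring, as proposed above */

typedef struct {
    float buf[RING_FRAMES];         /* shared playback ring */
    unsigned int write_pos;         /* where the hardware reads next */
} ring_t;

/* Each client app writes its block into the shared ring; the daemon just
   sums (down-mixes) whatever all clients contributed before handing the
   frames to the sound card. */
static void mix_into_ring(ring_t *ring, const float *in, unsigned int n)
{
    unsigned int i;
    for (i = 0; i < n; ++i) {
        unsigned int pos = (ring->write_pos + i) % RING_FRAMES;
        ring->buf[pos] += in[i];    /* software down-mix: just add */
    }
}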
Now, someone please tell me why this is not doable? The sound daemon must
be, at least in my mind, compatible with all software, and the only way
it can be that is by making itself transparent. Therefore there would be
no need for JACK-ifying or ARTSD-ing an app. It would simply work (a
concept that we definitely need more of in the Linux world).
I am sure that with the above description I have covered in a nutshell
both JACK and ARTSD to a certain extent, but the fact remains that both
solutions require the application to be aware of them if any serious work
is to be done. And as such, there is only a VERY limited pool of
applications that can be used in combination with either of these.
Any comments and thoughts would be appreciated! Sincerely,
Ico