I believe the C++ standard specifies that vector<> uses contiguous memory and that &v[0] returns a valid pointer to the first element, which you can then treat as a C array.
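For example, something along these lines (just an untested sketch; c_process() is a made-up stand-in for whatever C routine you would actually be calling) should be well-defined on any conforming compiler:

#include <cstddef>
#include <vector>

/* Stand-in for an existing C-style routine that expects a raw buffer. */
static void c_process(float *buf, std::size_t len)
{
    for (std::size_t i = 0; i < len; ++i)
        buf[i] *= 0.5f;
}

int main()
{
    std::vector<float> v(1024, 1.0f);

    /* The storage is contiguous, so the address of the first element
       can be passed anywhere a plain float array of v.size() elements
       is expected. */
    if (!v.empty())
        c_process(&v[0], v.size());

    return 0;
}

The only catch is that the pointer is invalidated whenever the vector reallocates (push_back, resize, etc.).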
Taybin
-----Original Message-----
From: Chris Cannam <cannam(a)all-day-breakfast.com>
Sent: Jun 8, 2005 4:41 PM
To: linux-audio-dev(a)music.columbia.edu
Cc: Jussi Laako <jussi.laako(a)pp.inet.fi>
Subject: Re: [linux-audio-dev] [ot] [rant] gcc, you let me down one time too many
On Wednesday 08 Jun 2005 21:35, Jussi Laako wrote:
> You can derive a new class from the template and overload the []
> operator to perform exactly same as in C. After compilation the
> result is the same no matter if the template or C array is used.
Are you sure this is still true in the gcc world, after they changed
vector from an array to a real class in gcc 3.3 or whenever it was?
Chris
On Wed, 08 Jun 2005 22:19 , Jussi Laako <jussi.laako(a)pp.inet.fi> sent:
>On Wed, 2005-06-08 at 05:47 -0700, eviltwin69(a)cableone.net wrote:
>
>> I'm working with multibeam sonar, airborne topographic and hydrographic
>> LIDAR, and airborne hyperspectral imagery data.
>
>Sonar and radar applications are very familiar to me. There is no reason
>why those couldn't be very efficient, yet written in C++. My open-source
>HASAS passive sonar signal analysis suite, and the libDSP signal
>processing library it uses, have been written in C++ and asm. I think
>it's still very efficient.
>
>And in these kinds of applications, for example, the most valuable qualities
>are reliability and how well it performs its tasks. Execution speed is
>secondary. You can buy more CPU power if required. Most large array
>beamformers are heavy SMP systems anyway.
>
We're talking apples and oranges here. You're talking about the real-time
collection of multibeam. That is a piece of cake: not in terms of the actual
beamforming, which is really complicated, but in terms of the amount of data per
second you have to deal with. I'm talking about post-processing the
data from 7 ships with dual multibeams plus 8 or so hydro survey launches running
dual head multibeams plus a significant number of high-end sidescan systems plus
a plane running hydro and topo LIDAR and hyperspectral. The ships collect data
24/7, the HSLs 10 hours per day, all of them approximately 300 days per year.
The plane collects about 150 days per year, 5 to 6 hours per day. We're
post-processing for hydrographic (and other) types of use. When you're dealing
with tens to hundreds of billions of data points execution speed makes a huge
difference.
>In the C/C++ case, performance is more about how you write the code than
>about the language you use.
>
Yes, part of it is about how you write the code but if someone dumps a few
billion data points on you and says "here, check these" then processing speed
becomes extremely important. There have been some very good points made in this
discussion and I will definitely investigate some of them. My problem here is
that I've heard the same type of thing from companies and universities for way
too long. Something along the lines of - "Really, you don't need to write your
own processing code. We collected sonar data for a whole week and we didn't have
any problem processing it with (fill in your favorite GIS or sonar processing
package here)".
Although this thread has about run its course, I would like to continue the
discussion on sonar systems with you off-line if you are interested.
Jan
Greetings:
While waiting for another box I decided to pull the RAM and test each
stick (256 MB each). The problem occurred with either stick. I'm able to
log in, work for a few minutes, then the box just freezes. I can hear
the disk drive make a little activity noise first, then everything's
just gone. Btw, it'll die in X or at the console, it's not an X problem
(I think). And since the problem occurs whether I'm on /dev/hda or
/dev/hdb I doubt if both disk drives are bad.
Anyway, Ivy's bringing over the other machine in another day or so,
I'll switch drives into that machine and see what happens next.
Best,
dp
> Message: 10
> Date: Wed, 08 Jun 2005 15:57:23 +0200
> From: Olivier Guilyardi <ml(a)xung.org>
> Subject: Re: [linux-audio-dev] Re: Software controller for homemade
> edrums
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <42A6F943.4080606(a)xung.org>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi Florian,
>
> Florian Schmidt wrote:
> >For quick tests without libDSP you can tweak the jack_convolve Makefile
> >a little:
> >
> >uncomment the
> >
> >#COMPILE_FLAGS += -DC_CMUL
> >
> >line and remove "-ldsp" from the LINK_FLAGS line. This will use an
> >unoptimized C complex multiplication implementation. jack_convolve will
> >use around 10-20% more cpu than with the libDSP implementation.
> >
> >
>
> It did compile. I also needed to remove the #include <dsp/dspop.h> in
> convolve.c
>
> Okay, so I now have an idea of what convolution is. Your little piece of
> software is very nice, very easy to understand. I used three samples : a
> bassdrum, a snare drum, and a short guitar chord. I plugged the output
> of one of my pads into jack_convolve's input and its output into the
> alsa_pcm playback.
>
> Both the bassdrum and the chord sounded quite nice. But the snare drum
> sounded like it was very far away. I guess this comes from the silence at the
> end of this sample.
>
> What exactly happens with these "response files" ? Should I use very
> simple samples, like a sine wave with no silence ? Shouldn't convolving
> be coupled with triggering? I mean: hitting the pad would start the
> sample playback, and the convolving engine would use both this sample
> playback and the pad signal to produce its output. In this case,
> jack_convolve would then need one output and two inputs :
> - one for the pad signal,
> - and one for the "response signal", that is : the sample playback that
> started right when the pad got hit
>
> Is this possible, or do I misunderstand convolution here ?
Olivier, I think the next step is to run jack_convolve, and connect the
soundcard input directly into jack_convolve's input using qjackctl, then
connect the output of jack_convolve to the soundcard's outputs.
(jack_convolve may already do this for you, I can't check just this moment)
Now when you hit a pad, it should be "convolved" with the response signal
giving you something different from the simple "click" of the input signal.
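In case it helps to picture what "convolved" means here, the textbook time-domain version looks roughly like the sketch below (convolve() is just illustrative, not jack_convolve's actual code; a real-time engine does this block-wise in the frequency domain, which is where the complex multiplies Florian mentioned come in):

#include <cstddef>
#include <vector>

/* Direct (time-domain) convolution: every input sample triggers a copy
   of the whole response, scaled by that sample's amplitude, and all the
   copies are summed into the output. */
std::vector<float> convolve(const std::vector<float> &input,
                            const std::vector<float> &response)
{
    if (input.empty() || response.empty())
        return std::vector<float>();

    std::vector<float> out(input.size() + response.size() - 1, 0.0f);

    for (std::size_t n = 0; n < input.size(); ++n)
        for (std::size_t k = 0; k < response.size(); ++k)
            out[n + k] += input[n] * response[k];

    return out;
}

Since a short, hard pad hit is close to an impulse, the output is close to the response sample itself, scaled and shaped by whatever nuance is in the hit.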
I'm very interested to see how this works out (I suggested this path to
Olivier, I hope it's not totally worthless :) ). It could be a way to build
a cheap, responsive electronic drum kit that preserves the nuances of the
input signal. I wonder if this is how the Korg Wavedrum worked?
-Ben
On Wed, 08 Jun 2005 09:50 , Simon Jenkins <sjenkins(a)blueyonder.co.uk> sent:
>On Tue, 2005-06-07 at 19:34 -0500, Jan Depner wrote:
>> On Tue, 2005-06-07 at 19:20, Dave Robillard wrote:
>> > "Premature optimization is the root of all evil". Using C arrays and
>> > strings for no reason when a much more robust higher level type would
>> > suffice is /just as stupid/ as always using slow high-level operations
>> > in time critical code.
>> >
>> > It's like arguing about, say, assembly vs. perl. Anyone who says one
>> > side is (always) "better" is automatically wrong. :)
>> >
>> True. I usually try to use the right tool for the job.
>> Unfortunately, with the data I work with, the right tool is almost
>> always the fastest tool.
>>
>You must be working with relatively large amounts of relatively simple
>data then.
>
I'm working with multibeam sonar, airborne topographic and hydrographic
LIDAR, and airborne hyperspectral imagery data.
>Suppose I sum a vector of 5 million integers and it takes 6 seconds. And
>assume - (generously![1]) - that I switch to using an array and now it
>only takes 1 second. Hmmm... a 6x speedup! So I look to see where else
>my code could benefit from this super performance boost.
>
>Aha! Here's a vector of 5,000 oscillator structures, and it takes 5
>seconds to initialise them all. Switch to using an array and... erm...
>now it only takes 4.995 seconds to initialise them all.
>
>I'm sure my users will notice and appreciate the 5ms saving, but I
>suspect I could have served them better by looking at where my code was
>actually spending most of its time before trying to "optimise" it.
>
As far as data volumes go, for your 5 million integers, you're off by about 5
orders of magnitude ;-) So, now that 5ms just became 500 seconds. Yes, my users
do notice and appreciate that time savings ;-)
Jan
>Cheers
>
>Simon
>
>[1] It really was a generous assumption: I've assumed that arrays are
>
Apologies for cross-postings.
Dear Digital Music Research Network,
[Please forward to relevant researchers in your group]
Following a number of requests, the deadline for abstract submission to
the
DMRN Summer Conference has been extended to
Monday 13 June 2005
Submissions (abstracts) should be emailed to
dmrn05submissions(a)elec.gla.ac.uk
This conference will be an excellent opportunity, particularly for young
researchers, to present their work in a friendly environment and
interact with others in the field.
A specific aim of the conference is to promote collaboration between
those in computational music analysis and musicians. A concert will be
held during the conference to allow those working directly in the music
side to present their work.
Attendance for young researchers is free and UK based students may apply
for travel and accommodation directly through DMRN (so there is no
excuse for not going!).
See below for the call for papers. Further information can be found at
http://www-sigproc.eng.cam.ac.uk/~mps37/DMRNSummerConference05/
Best wishes,
Mark Plumbley
---
Dr Mark D Plumbley
Centre for Digital Music
Department of Electronic Engineering
Queen Mary University of London
Mile End Road, London E1 4NS, UK
Tel: +44 (0)20 7882 7518
Fax: +44 (0)20 7882 7997
Email: mark.plumbley(a)elec.qmul.ac.uk
****************************************************************
CALL FOR PAPERS
DMRN SUMMER CONFERENCE 2005
23 - 25 JULY 2005
Glasgow, Scotland
http://www-sigproc.eng.cam.ac.uk/~mps37/DMRNSummerConference05/
****************************************************************
*** EXTENDED SUBMISSION DEADLINE: 13 June 2005 ***
The Digital Music Research Network invites the submission of papers for
the DMRN Summer Conference 2005.
Submissions will initially take the form of an extended abstract of
between 500 and 750 words giving an overview of the intended content. In
addition to this, primary authors are asked to indicate their status as
one of the following:
* Young researcher (year of PhD study)
* Research assistant (year PhD obtained)
* Academic
* Other (please specify)
Although paper acceptance will depend on relevance to the conference and
academic merit, preference will be given to those submissions where the
primary author is a young researcher.
Authors will be invited to present their work either by oral
presentation (20 minutes) or as a poster. This allocation will be
decided by the conference committee; however, authors may specify a
preference when submitting their abstract.
The DMRN Summer Conference 2005 solicits contributions from the
following topics:
* Digital music content generation
* Sound synthesis, virtual instruments
* Audio analysis: machine recognition of music
* Signal processing tools for music analysis
* Music data structures and representations
* Music notation, musicology
* Psychoacoustics, perception, cognition of music and audio
* Human computer interfacing for music and audio
* Non-western musicology, analysis and performance
Abstracts should be emailed to dmrn05submissions(a)elec.gla.ac.uk no later
than Monday 13th June 2005. Authors will receive email confirmation of
whether their proposal has been successful by Monday 20th June 2005.
Authors will then be asked to submit a camera ready version of their
paper (in pdf format) by Friday 15th July 2005. Latex and word templates
are available at
http://www-sigproc.eng.cam.ac.uk/~mps37/DMRNSummerConference05/CallForPapers/
Final pdf submissions should not exceed 4 pages.
DEADLINES
13th June 2005: Deadline for submissions of paper abstracts
20th June 2005: Notification of acceptance of papers
15th July 2005: Deadline for submission of camera ready pdf versions of
full papers
Further information can be found at
http://www-sigproc.eng.cam.ac.uk/~mps37/DMRNSummerConference05/
********************************************************************
> Assembling piezo microphones, cardboard, foam, wood and a cymbal stand,
> I have just made my first DIY electronic pads. Actually, they're electronic
> percussion pads, because I will play them mostly with my hands.
>
> I've found a few sites about DIY "edrums" (1), as well as some detailed
> documentation about how to build a trigger-to-midi hardware controller
> (2). But since I started this little project, I've been thinking about
> plugging the piezo mikes directly into my soundcard inputs. My first
> tests are very good: the signal is clean, and it faithfully indicates how
> hard the pads get hit.
Hi,
There will be a lot of development in smack on modeling hand drums in the coming months, and I think you might get some good results with your triggers and smack. You can also try plugging the audio out of your triggers in as the excitation signal for the physical modeling drums in smack. That would give you a level of control that is very difficult to achieve with MIDI/OSC, since it directly uses the attack, release, velocity etc. of your triggers in the sound.
Give me a yell if you want a hand with using them like that, as I'd love the testing.
Loki
>As far as data volumes go, for your 5 million integers, you're off by about 5
>orders of magnitude ;-) So, now that 5ms just became 500 seconds. Yes, my
>users do notice and appreciate that time savings ;-)
>
>Jan
Sooo..... if you stored this stuff on punched paper tape it would be long
enough to stretch something like 10 times around the planet - unless those
were binary orders of magnitude, in which case it would be a mere 1600 km
long :). Heh, I *said* you must be working with relatively large amounts of
relatively simple data, but I had no idea. (Don't tell me its relatively
complex data... if it is then a 500 second saving would become insignificant
next to the hours, days, weeks or whatever that your code would be spending
"inside" the data rather than "in-between" it).
Cheers
Simon
Hi.
I took many photos of the building, including some
nice panoramas. The pics are at 1024x768 or 1600x1200
resolution, and if you want I can stitch some
panoramas together.
Paul
>Greetings:
> I've prepared a brief report on LAC 2005 for the Linux Journal, it's
>ready for submission but I need an outside photo of ZKM + the Kubus. Did
>anyone take a nice shot of the buildings that they'd like to see in LJ?
>If so, let me know asap. A TIFF is preferred, but high-resolution JPG
>will probably do. TIA!
>
>Best,
>dp