Hi,
I normally have no problem with C++ programming and am really ashamed to
post this here, but I just can't find the bug. I've spent the whole day
trying to find it and I'm really fed up.
These are the error messages I get:
../src/Gui.cpp: In member function 'Text* Gui::createText(std::string,
std::string, int, int, int, int, std::string, std::string)':
../src/Gui.cpp:65: error: no matching function for call to
'std::list<Text, std::allocator<Text> >::push_back(Text (&)(std::string,
std::string, int, int, int, int, std::string, std::string))'
/usr/lib/gcc/i686-pc-linux-gnu/4.3.2/include/g++-v4/bits/stl_list.h:875:
note: candidates are: void std::list<_Tp, _Alloc>::push_back(const _Tp&)
[with _Tp = Text, _Alloc = std::allocator<Text>]
You can either get the source code from:
http://cvs.savannah.gnu.org/viewvc/ksseq2008/ksinternalgui/?root=ksseq
Or use this:
/***************************************************************************
* Gui.h
*
* Nov 10, 2008 6:11:08 PM CET 2008
* Copyright 2008 Christian Loehnert
* krampenschiesser(a)freenet.de
****************************************************************************/
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301, USA.
*/
#ifndef GUI_H_
#define GUI_H_
#include <lo/lo.h>
#include <string>
#include <list>
#include "Box.h"
#include "Menu.h"
#include "Control.h"
#include "Text.h"
#include "Button.h"
using namespace std;
class Gui
{
public:
lo_address address;
string sIp;
string sPort;
string sTextInputElement;
string sTitle;//replaces applicationtab
string sFocusBoxId;
void init( list<Gui> *_guis );
/**
* creates a new box and adds it to the list boxes
* @param _name the name of the box
* @param _superBox the box it corresponds to, empty if there's none
* @param _x the x position
* @param _y the y position
* @param _w the width
* @param _h the height
* @param _shortCut its shortcut
*/
Box* createBox( string _name, string _superBox, int _x, int _y, int _w,
int _h, string _shortCut );
/**
* creates a new button and adds it to the list buttons.
* @param _box the name of the box; the box must exist, there's no check
* @param _caption the caption of the button
*/
Button* createButton( string _name, string _box, int _x, int _y, int _w,
int _h, string _shortCut, string _caption );
/**
* creates a new menu and adds it to the list menus
* @param _caption the caption of the first menuitem
* @param _value the value of the first menuitem
*/
Menu* createMenu( string _name, string _box, int _x, int _y, int _w,
int _h, string _caption, int _value );
/**
* creates a new control and adds it to the list controls
* @param _min the minimum value of this controller
* @param _max the maximum value of this controller
* @param _increment the amount of steps which shall be incremented
* @param _value the initialization value
*/
Control* createControl( string _name, string _box, int _x, int _y, int
_min,
int _max, int _increment, int _value );
/**
* creates a text and adds it to the list texts
*/
Text* createText( string _name, string _box, int _x, int _y, int _w,
int _h,
string _shortCut, string _caption );
/**
* Returns a pointer to this box.
* @param _name the name of the box
* @param _superBox its super box if exists
* @return a pointer to the box
*/
list<Box>::iterator getBox( string _name, string _superBox );
/**
* Returns a pointer to this button.
* @param _name the name of this button
* @param _box the box this button corresponds to
* @return a pointer to the button
*/
list<Button>::iterator getButton( string _name, string _box );
/**
* same as getButton
*/
list<Menu>::iterator getMenu( string _name, string _box );
/**
* same as getButton
*/
list<Control>::iterator getControl( string _name, string _box );
/**
* same as getButton
*/
list<Text>::iterator getText( string _name, string _box );
/**
* Removes the given box
* @param _name its name
* @param _superBox its super box
*/
void removeBox( string _name, string _superBox );
/**
* removes a button
* @param _name
* @param _box
*/
void removeButton( string _name, string _box );
/**
* removes a menu
* @param _name
* @param _box
*/
void removeMenu( string _name, string _box );
/**
* removes a control
* @param _name
* @param _box
*/
void removeControl( string _name, string _box );
/**
* removes a text
* @param _name
* @param _box
*/
void removeText( string _name, string _box );
private:
list<Gui> *guis;
list<Button> buttons;
list<Control> controls;
list<Menu> menus;
list<Box> boxes;
list<Text> texts;
};
#endif /* GUI_H_ */
/***************************************************************************
* Gui.cpp
*
* Nov 13, 2008 10:09:18 PM CET 2008
* Copyright 2008 Christian Loehnert
* krampenschiesser(a)freenet.de
****************************************************************************/
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301, USA.
*/
#include "../incl/Gui.h"
Text* Gui::createText( string _name, string _box, int _x, int _y, int _w,
int _h, string _shortCut, string _caption )
{
Text text( string _name, string _box, int _x, int _y, int _w, int _h,
string _shortCut, string _caption );
texts.push_back( text );
return &texts.back();
}
Control* Gui::createControl( string _name, string _box, int _x, int _y,
int _min, int _max, int _increment, int _value )
{
Control control( _name, _box, _x, _y, _min, _max, _increment, _value );
controls.push_back( control );
return &controls.back();
}
I don't get it, please help me!
Christian
On Thursday 20 November 2008 13:45:41 Jens M Andreasen wrote:
> On Thu, 2008-11-20 at 09:00 +1300, Eliot Blennerhassett wrote:
> > Hmmm. Sorry I can't help further, I only know how to set it up when it
> > does work ;)
>
> Your code actually works just fine when used directly at the C-level.
> Any chance of an annotated version - that reads a little like a brief
> tutorial - of the 4 channel merger?
I added a section here.
http://alsa.opensrc.org/index.php/.asoundrc#Joining_devices_to_make_multich…
My code is derived from the documentation of multi plugin found here
http://www.alsa-project.org/alsa-doc/alsa-lib/pcm_plugins.html
except that instead of defining separate route and multi plugins, I nest all
the plugin defs so that only one set of $CARD,$DEV,$SUBDEV parameters is
required. As you can see in the above example, if you have fixed devices the
setup is much less verbose.
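The nested form looks roughly like this — a sketch only; the two hw device names and the slave names a/b are examples, not the exact wiki text:

```
# Four-channel device built from two stereo subdevices, with the
# multi plugin nested inside the route plugin's slave definition,
# so the card/device numbers appear only once.
pcm.quad {
    type route
    ttable.0.0 1
    ttable.1.1 1
    ttable.2.2 1
    ttable.3.3 1
    slave {
        channels 4
        pcm {
            type multi
            slaves.a.pcm "hw:0,0"
            slaves.a.channels 2
            slaves.b.pcm "hw:0,1"
            slaves.b.channels 2
            bindings.0.slave a
            bindings.0.channel 0
            bindings.1.slave a
            bindings.1.channel 1
            bindings.2.slave b
            bindings.2.channel 0
            bindings.3.slave b
            bindings.3.channel 1
        }
    }
}
```

The route plugin's ttable then maps the four input channels straight through to the four channels of the nested multi slave.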
regards
--
Eliot Blennerhassett
www.audioscience.com
Dan Mills wrote:
> On Mon, 2008-11-17 at 14:54 +0200, Hannu Savolainen wrote:
>
>
>> Audio (or MIDI) applications produce and consume audio streams. All they
>> need to do is to tell what kind of stream (rate, format, etc) they want
>> to play/record. They can also adjust their input/output volume or select
>> the recording source if necessary. In this way the 'system tools' (or
>> any application dedicated for these purposes) can be used to route and
>> mix the streams.
>>
>
> That is a very simplistic view of audio applications, and while it fits
> simple media players and even simple audio editors, it fails horribly in
> a large and complex environment. Even something like a jack client
> (which is superficially your produce and consume audio streams example),
> often needs significant information about timing, DMA sample position
> and the like to be able to do its thing.
>
Getting the DMA sample position fits in the picture: you get it from the
'audio stream'. You don't need to bypass the 'system' and talk
directly to the device.
> Clock source is certainly something that some subset of applications
> should be legitimately concerned with, as are things like head amp gain,
> phantom power switching and so on. By no means every app needs to care
> about this stuff, but that is policy and should not be the business of
> an API to define.
>
>
The point is that these kinds of details should be handled exclusively by
programs dedicated to these purposes.
You may also need to go deep inside the hardware clock/timing if you are
doing very special things like correlating measured brain/EEG signals
with acoustic stimulus played through the sound card.
>> However if the application also tries to interact with the hardware (by
>> opening /dev/mixer in OSS or by doing similar things with ALSA) then
>> shit will happen. This kind of interaction with hardware may mean that
>> the application refuses to work with a pseudo/loopback device that hands
>> the signal directly to Icecast server.
>>
>
> That is what jackd or similar (that happen at a higher level then the
> driver) are for, trying to do this at too low a level is always going to
> cause pain and compatibility issues as the low level stuff needs to
> support all the warts the hardware has.
>
> Consider also that not everything output from a soundcard is
> automatically PCM audio, both AES3 and spdif have non audio modes and
> both are useful upon occasion.
>
These too are attributes of the audio stream. The application simply
needs to say that it has an AC3 stream.
>> All this just because the
>> developer of the application wanted to add nice features to wrong place.
>>
>
> Or because the developer NEEDED the audio interface in a particular mode
> for anything to work (A digital surround encoder would be a reasonable
> example - it needs to set the 'non audio' flag in the serial data stream
> as otherwise the data makes no sense.
>
It's the responsibility of the system to ensure that the integrity of the
stream is preserved (once told it carries a digital bitstream). However, the
application should not try to access the actual device to turn the
audio/data bit on or off, or to reconfigure the signal path to unity gain.
>
>>>> Equally well an audio player application should just open a connection
>>>> to the audio device, set the rate/format and then just start to play.
>>>> They should not try to do things like automatic unmuting the sound card.
>>>>
>
> That is a policy issue and is probably untrue for at least some
> applications and use cases: A softphone that 'just works' for the common
> cases is impossible without the ability to set microphone routing and
> gain (Otherwise you end up with a lot of faffing about to get the thing
> to work).
>
There is nothing wrong with this. For example, the _audio_ API of OSS has
ioctl calls for selecting the recording source and level. In addition,
the softphone may need to open the microphone (audio) device directly so
that the virtual mixer is bypassed (also for security reasons).
> Now I may take the view that ideally the softphone should stay out of
> setting that sort of thing, but if I was distributing one, I would like
> to have the thing work out of the box in the common cases (it cuts down
> on support calls).
>
Not at all. The application just needs to use the right API (subset)
instead of trying to peek/poke the hardware in some random way.
OTOH a softphone application that always insists on switching the
input/output to the microphone/headset may be a pain in the ass.
>
> The thing is, in professional audio (I do theatre sound for a living),
> the sort of thing fons in doing is just not that uncommon, lots of
> speakers (on delay lines), lots of dynamically changing routing, a feed
> to the OB truck (at a different sample rate), positioning audio with
> dynamically changing delays and levels, hundreds of cues (that can
> dynamically reconfigure routing and desk setup).... Being able to
> control the interfaces at a low level (and on the fly from automation
> scripts) is **IMPORTANT**.
>
IMHO it's important that applications developed for given tasks stay
focused on that particular task only. An application that takes care
of automation can trigger other applications to do the right things at
the right moment. For example, it can send an SMS to your wife just before
the final curtain (so that she can turn the coffee machine on just before
you arrive). It can trigger sound/light effects or run scripts that
reconfigure this and that between acts. It can send MIDI messages to
move the sliders on the mixing console. However, it's not a good idea to
stuff this application with all possible functionality such as audio
players/recorders, EQs, limiters/compressors, audio analyzers and so on.
If they are separate programs you can use the best available program
for each purpose instead of being stuck with the mediocre secondary tools
of an otherwise fine automation program.
Best regards,
Hannu
Fred Gleason wrote:
> On Sunday 16 November 2008 04:16:19 pm you wrote:
>
>> Programmers are not stupid. However, the way typical sound
>> applications are implemented is wrong.
>>
>
> So this is a reason to cripple the API -- because 'typical'
> application programmers don't know what they're doing? What about
> those who *do*?
>
>
Programmers who know what they are doing don't do stupid things. All the
problems are caused by programmers who *think* they know what they are
doing. They are so clever that they don't need to open any manuals, etc.
>> Network or disk performance analyzers/monitors/optimizers are good
>> tools. They are not "ordinary" applications but system tools.
>>
>
> Whatever the semantic difference between 'applications' and 'system
> tools' might be, *both* end up having to interact with the hardware
> via some sort of API, so I'm not sure that the distinction is
> particularly meaningful in this context.
>
Audio (or MIDI) applications produce and consume audio streams. All they
need to do is to tell what kind of stream (rate, format, etc) they want
to play/record. They can also adjust their input/output volume or select
the recording source if necessary. In this way the 'system tools' (or
any application dedicated for these purposes) can be used to route and
mix the streams.
However if the application also tries to interact with the hardware (by
opening /dev/mixer in OSS or by doing similar things with ALSA) then
shit will happen. This kind of interaction with hardware may mean that
the application refuses to work with a pseudo/loopback device that hands
the signal directly to an Icecast server. All this just because the
developer of the application wanted to add nice features in the wrong place.
>
>> However it's wrong if programs like Mozilla try to do this kind of
>> functions.
>> All a web browser does is open a TCP/IP socket to the
>> http/ftp/whatever server, send the request and wait for the response.
>> Equally well an audio player application should just open a connection
>> to the audio device, set the rate/format and then just start to play.
>> They should not try to do things like automatic unmuting the sound card.
>>
>
> Right, but I think you're kind of missing the point. We're not
> talking about garden variety 'audio player' applications here. The
> world of audio -- especially professional audio -- is a much larger
> place. This doesn't make such applications 'system tools', merely
> applications that work outside of the simple assumptions adequate when
> designing garden-variety 'audio players'. To hardwire those simple
> assumptions into the driver system is IMO a design error, one that
> imposes serious limits on the usefulness of the overall system.
> Effectively, it's dictating policy in a layer that should be primarily
> concerned with mechanism.
>
Professional and professional. One definition of 'professional' is
available on Wikipedia (http://en.wikipedia.org/wiki/Professional).
An audio professional is a professional specialized in audio; 'professional
audio' is what an audio professional does for a living. Professionals use
hardware/software with many more features than ordinary users do.
Professionals indeed use HW/SW that has "professional features". However,
they will not buy a (say) sampler that tries to reset the master mixing
console every time it's powered on.
On the other hand, 'professional audio' is also a marketing term that
originates from popular _consumer_ sound cards such as the Sound Blaster
Pro, Pro Audio Spectrum and many others. In this context "pro" means
that the device has all the bells and whistles. Usually there are also
loads of more or less useless bundled software included in the package.
So what kind of professional do you mean?
Best regards,
Hannu
Hello all,
Reading the file produced by 'alsactl store', I learn
that my sound hardware has a number of control parameters
that have names, types, values, ranges, etc. etc.
I now want to write some hopefully not too convoluted
C or C++ code to read and write these parameters.
Is there, after X years of ALSA, any documentation that
explains the basic concepts and tells me how to do this?
If such a thing exists I can't find it.
The Doxygen info on the ALSA site is completely useless
for the purpose of learning to understand and use the
control interface.
The textual information there offers *nothing* that
can't be read from the C types, structs or functions it
is supposed to document. It just repeats the jargon used
in the code, and is at least 99.9% redundant.
What these things actually mean, how they fit together
and what the big picture is, is AFAIK nowhere
explained. Which is strange, because if you design a
system such as this, that would be absolutely the first
thing you need to define. No doubt the designers have it
in their heads. No doubt it's well structured and also
abstracted to almost absurd levels. But it remains a
complete mystery unless you have the time and energy,
and someone is paying you, to spend at least half a year
reverse-engineering the so-called docs. If ever there was
an example of Doxygen or a similar system being no more
than a pretext to keep the quality department happy,
ALSA is the best one I know of.
Now if someone can point me to some existing docs that
explain how I can e.g. set the sample clock source on an
RME MADI card in less than ten lines of C code (knowing
the parameter names, ranges, etc. - no need to find them
out dynamically, I can read them from asound.state) then I'll
eat my hat. It shouldn't be difficult. On some competing
systems all it takes is one ioctl().
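For comparison, the best I can piece together from the alsa-lib headers looks something like the following — untested, the card and element names below are placeholders, and I have no idea whether this is the intended way:

```cpp
#include <alsa/asoundlib.h>

// Set an enumerated control element by name - e.g. a clock source
// selector. "hw:0" and the element name are placeholders; the real
// names would come from asound.state.
int set_enum_control( const char *card, const char *elem_name,
                      unsigned int item )
{
    snd_ctl_t *ctl;
    snd_ctl_elem_value_t *value;
    int err;

    if (( err = snd_ctl_open( &ctl, card, 0 )) < 0)
        return err;

    snd_ctl_elem_value_alloca( &value );
    snd_ctl_elem_value_set_interface( value, SND_CTL_ELEM_IFACE_MIXER );
    snd_ctl_elem_value_set_name( value, elem_name );
    snd_ctl_elem_value_set_enumerated( value, 0, item );

    err = snd_ctl_elem_write( ctl, value );
    snd_ctl_close( ctl );
    return err;
}

// e.g. set_enum_control( "hw:0", "Sample Clock Source", 1 );
```

Whether the interface should be MIXER, PCM or CARD for a given element is exactly the kind of thing the docs never explain.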
Ciao,
--
FA
Laboratorio di Acustica ed Elettroacustica
Parma, Italia
Lascia la spina, cogli la rosa.
Hi,
I can't find anything online that gives me a way to run /sbin/mkdosfs as
a normal user.
Is it just that I need to add the user to the mkdosfs group or something
similar?
Cheers.
--
Patrick Shirkey
Boost Hardware Ltd
Hello folks!
One question, I hope it's not too dumb. :-(
If you have your average patchbay, how does it know, when new MIDI/audio
ports/clients come to live or die? And how does it know, that some connection
was killed by some other application.
Does it simply query it all the time? I wouldn't think so... But perhaps I'm
wrong...
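For JACK at least, the headers suggest the answer is callbacks rather than polling: a client can ask to be notified when ports appear/disappear or get (dis)connected. A rough, untested sketch (function names taken from jack/jack.h; it needs a running JACK server):

```cpp
#include <jack/jack.h>
#include <cstdio>
#include <unistd.h>

// Called by JACK whenever any port is registered or unregistered.
static void port_registration( jack_port_id_t port, int registered, void *arg )
{
    printf( "port %u %s\n", port,
            registered ? "registered" : "unregistered" );
}

// Called whenever any connection is made or broken, by any client.
static void port_connect( jack_port_id_t a, jack_port_id_t b,
                          int connect, void *arg )
{
    printf( "ports %u/%u %s\n", a, b,
            connect ? "connected" : "disconnected" );
}

int main()
{
    jack_client_t *client = jack_client_open( "patchbay-sketch",
                                              JackNullOption, NULL );
    if (!client)
        return 1;
    jack_set_port_registration_callback( client, port_registration, NULL );
    jack_set_port_connect_callback( client, port_connect, NULL );
    jack_activate( client );
    sleep( 30 );  // a real patchbay would run its GUI main loop here
    jack_client_close( client );
    return 0;
}
```

For ALSA sequencer clients there is reportedly a similar mechanism (an announce port that delivers client/port events), but I haven't tried it.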
Thanks for hints on this.
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
Ok, things have settled down, and I've tweaked a little here and there.
Seems to be running nicely now, and fairly stable.
A screenshot of a generic setup:
http://shup.com/Shup/81262/patchage3.png
Alex.
lpatchage
jackdbus
rosegarden
linuxsampler
ardour2
jconv
On Tue, Nov 11, 2008 at 10:35 PM, alex stone <compose59(a)gmail.com> wrote:
> Nedko,
>
> This is what i get when i try, in the messages window of lpatchage, when i
> try to connect linuxsampler audio out:
>
> [JACKDBUS] ConnectPortsByName() failed.
>
> jackdbus log is attached. (I've renamed a copy for your perusal)
>
> Alex.
>
>
>
>
> On Tue, Nov 11, 2008 at 8:55 PM, Nedko Arnaudov <nedko(a)arnaudov.name>wrote:
>
>> "alex stone" <compose59(a)gmail.com> writes:
>>
>> > But i'm still at a loss as to why i can't connect LS audio out, to
>> Ardour
>> > audio in, in lpatchage, visibly.
>> > It works in Qjackctl, but stubbornly refuses to connect in lpatchage,
>> even
>> > though the actual connections are made in Ardour, and most importantly,
>> > work.
>>
>> Do you get any errors in jackdbus log file when you are trying to
>> connect using lpatchage?
>>
>> --
>> Nedko Arnaudov <GnuPG KeyID: DE1716B0>
>>
>
>
Release candidate 2 has some important fixes:
* Fix for #46 - on first save of newly appeared clients, their state
was not correctly recorded as saved, and thus was not restored on
later project loads.
* Memory corruption fixes for a bug in the stdout/stderr handling
code, triggered when a LASH client outputs a lot of data to stdout or
stderr.
* Improved handling of repeated lines sent to stdout/stderr.
I would like to ask LASH believers and other interested parties to test
the 0.6.0 release candidate. Juuso Alasuutari and I have been making
some major changes to the LASH code. We have done a lot of work, we've
fixed several big implementation issues, and we need a stable point before
doing more changes (0.6.1 and 1.0 milestones).
In the tarball there is a simple lash_control script. One can also control
LASH through patchage-0.4.2 and through lpatchage (available through
git).
User visible changes since 0.5.4:
* Use the JACK D-Bus interface instead of libjack; enabled by default,
can be disabled. Ticket #1
* Allow controlling LASH through D-Bus. Ticket #2
* Use D-Bus autolaunching instead of old mechanism. Ticket #3
* Log file (~/.log/lash/lash.log) for LASH daemon. Ticket #4
* Client stdout/stderr are logged to lash.log, when clients are
launched by LASH daemon (project restore). Ticket #5
* Improved handling of misbehaved clients. Ticket #45
* Projects can now have a comment and notes associated with them. Ticket #13
Download:
http://download.savannah.gnu.org/releases/lash/lash-0.6.0~rc2.tar.bz2
http://download.savannah.gnu.org/releases/lash/lash-0.6.0~rc2.tar.bz2.sig
--
Nedko Arnaudov <GnuPG KeyID: DE1716B0>