Hi LADs!
a new piece of free software is born: IceStream -
https://sourceforge.net/projects/icestream/
it's a beta version; we are still writing up the known bugs and the todo
list, and some parts are in French, but it is already possible to use it
to mix audio streams, to start an Icecast server from the GUI and then to
send an audio stream to it or elsewhere. Thanks to mplayer, ices, icecast,
vorbis and jack.
It works with the jackd server and uses only Ogg for the moment (we don't
use MP3 here anyway).
Of course, we are looking for beta-testers, and more people are also
welcome to help, support, develop and more if you want!
A mailing list is on its way and the forum is already in place:
https://sourceforge.net/apps/phpbb/icestream/
A bug tracker is to come ;-)
cheers
Julien
--
APO33
space of research and experimentation
http://www.apo33.org
info(a)apo33.org
Hello,
I can't compile the latest SVN revision (280) of LV2 core due to an error
when I run "waf configure":
blablack@igor:~/src/Launchpad/lv2core$ waf configure
Setting top to : /home/blablack/src/Launchpad/lv2core
Setting out to : /home/blablack/src/Launchpad/lv2core/build
Global Configuration
* Install prefix : /usr/local
* Debuggable build : False
* Strict compiler flags : False
* Build documentation : False
Traceback (most recent call last):
  File "/home/blablack/src/Launchpad/lv2core/.waf-1.6.6-158eec7a0749a003782d4a9a502e3d08/waflib/Scripting.py", line 94, in waf_entry_point
    run_commands()
  File "/home/blablack/src/Launchpad/lv2core/.waf-1.6.6-158eec7a0749a003782d4a9a502e3d08/waflib/Scripting.py", line 146, in run_commands
    run_command(cmd_name)
  File "/home/blablack/src/Launchpad/lv2core/.waf-1.6.6-158eec7a0749a003782d4a9a502e3d08/waflib/Scripting.py", line 139, in run_command
    ctx.execute()
  File "/home/blablack/src/Launchpad/lv2core/.waf-1.6.6-158eec7a0749a003782d4a9a502e3d08/waflib/Configure.py", line 127, in execute
    super(ConfigurationContext,self).execute()
  File "/home/blablack/src/Launchpad/lv2core/.waf-1.6.6-158eec7a0749a003782d4a9a502e3d08/waflib/Context.py", line 87, in execute
    self.recurse([os.path.dirname(g_module.root_path)])
  File "/home/blablack/src/Launchpad/lv2core/.waf-1.6.6-158eec7a0749a003782d4a9a502e3d08/waflib/Context.py", line 127, in recurse
    user_function(self)
  File "/home/blablack/src/Launchpad/lv2core/wscript", line 36, in configure
    autowaf.define(conf, 'LV2CORE_PATH_SEP', lv2core_path_sep)
  File "/home/blablack/src/Launchpad/lv2core/.waf-1.6.6-158eec7a0749a003782d4a9a502e3d08/waflib/extras/autowaf.py", line 63, in define
    conf.define(var_name,value)
AttributeError: 'ConfigurationContext' object has no attribute 'define'
Is anybody else facing the same issue?
Thanks in advance,
Aurélien
>> instead of
>> waf configure
>> run:
>> ./waf configure
>> from root directory of the zip (trunk), not from core.lv2 directory.
>
>This would certainly do it. That you even have a system-installed waf
>is very odd...
>
>The build system is entirely self-contained, you only need Python.
>
>Please let me know if this was the problem. You do need to use the
>included ./waf
Using ./waf didn't solve the problem.
I'm doing it from the core.lv2 directory because I'm creating packages
for Ubuntu.
If I go back to SVN revision 273, it works fine.
Since revision 274, I can't do "./waf configure" from the core.lv2 directory
(doing it from the trunk directory works, though...).
Hope that helps,
Aurélien
Hi everyone!
I have been using AlsaModularSynth for the past several years and I just
love the sound of its internal modules... But AlsaModularSynth is a bit
old-looking, and it doesn't support LV2...
On the other hand, Ingen looks brilliant, but I have never been able to
recreate what I could do with AMS... I always ran into weird issues
setting up even a simple VCO/VCA/VCF patch...
I see two solutions to that problem:
- integrate LV2 into AMS...
- extract the AMS internal modules and create LV2 plugins from them (a
rough sketch of the boilerplate that would involve is below)...
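To make that second option concrete, here is a very rough sketch of the LV2
boilerplate such a wrapper would need. Everything specific in it (the URI,
the two-port layout, the stand-in sawtooth) is invented for illustration and
not taken from AMS; only the LV2_Descriptor / lv2_descriptor entry points
come from the LV2 core header.

/* Very rough sketch: the URI, port layout and the stand-in sawtooth
 * below are invented for illustration, not taken from AMS. */
#include <stdint.h>
#include <stdlib.h>
#include "lv2.h"                 /* LV2 core header from core.lv2 */

#define VCO_URI "http://example.org/plugins/ams-vco"  /* placeholder */

typedef struct {
    const float *freq;   /* port 0: frequency control input */
    float       *out;    /* port 1: audio output            */
    double       rate;
    double       phase;
} Vco;

static LV2_Handle
instantiate(const LV2_Descriptor *d, double rate,
            const char *bundle, const LV2_Feature *const *features)
{
    Vco *self = (Vco *)calloc(1, sizeof(Vco));
    self->rate = rate;
    return (LV2_Handle)self;
}

static void
connect_port(LV2_Handle h, uint32_t port, void *data)
{
    Vco *self = (Vco *)h;
    if (port == 0)      self->freq = (const float *)data;
    else if (port == 1) self->out  = (float *)data;
}

static void
run(LV2_Handle h, uint32_t nframes)
{
    Vco *self = (Vco *)h;
    uint32_t i;
    for (i = 0; i < nframes; ++i) {
        /* This is where a module's original DSP routine would go;
         * a naive sawtooth stands in for it here. */
        self->out[i] = (float)(2.0 * self->phase - 1.0);
        self->phase += *self->freq / self->rate;
        if (self->phase >= 1.0)
            self->phase -= 1.0;
    }
}

static void
cleanup(LV2_Handle h)
{
    free(h);
}

static const LV2_Descriptor descriptor = {
    VCO_URI,
    instantiate,
    connect_port,
    NULL,            /* activate   */
    run,
    NULL,            /* deactivate */
    cleanup,
    NULL             /* extension_data */
};

const LV2_Descriptor *
lv2_descriptor(uint32_t index)
{
    return index == 0 ? &descriptor : NULL;
}

On top of that, each plugin needs the usual Turtle (.ttl) metadata describing
its ports; I would guess the real work is untangling each module's DSP code
from the AMS GUI and engine classes.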
From the quick study I have done of the AMS source code, that second
solution seems "quite simple" (as quite simple as these things can be!), but
I have a few questions before I start on this quest:
- first of all, would porting the AMS modules to LV2 change the sound?
- is it actually possible to do it? Or did I miss something?
- what is the future of Ingen? Although it has been around for a while, it is
still not available in the official repositories of Ubuntu, for example... Is
Ingen meant more for testing LV2 plugins, or can it be used as a proper
modular synth? I believe only Ingen and AMS offer this modular approach to
synthesis?
- hmm, anything else I should consider?
Thanks in advance for the help,
Aurélien
Hi all,
I've just released version 1.0.25. Main thing is a fix for Secunia
Advisory SA45125, a heap overflow in the PAF file parser. Since the
heap was getting overwritten with zeroes, there is little that an
attacker can achieve other than causing a program that uses
libsndfile to segfault.
Secunia suggest remote system access is possible:
http://www.securelist.com/en/advisories/45125
but I call bullshit.
Secunia also join my shit list for going public with this a week
earlier than they originally stated, meaning I had to rush this
release out. Because of the rush, the Windows builds have
not been tested as thoroughly as I would have liked.
As usual, it's available from:
http://www.mega-nerd.com/libsndfile/#Download
Cheers,
Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/
2011/7/11 Renato <rennabh(a)gmail.com>:
> On Sun, 10 Jul 2011 19:43:53 +0200 rosea grammostola wrote:
>> On 07/10/2011 06:33 PM, Emanuel Rumpf wrote:
>>>
>>> http://wiki.linuxaudio.org/wiki/user/emrum/jack_session_2_draft
>>>
>> Good job. I added some comments.
Thank you.
>>
>
> just wanted to note that one of the "additional ideas" asks for
> support for multiple sessions, while one of Emanuel's
> "conclusions" (first bulleted list, third item) says that this is now
> possible but actually should not be, as it could be a source of errors.
>
I am in favor of multi-session, IF it is completely reliable, works without
hassle (including user handling) and doesn't complicate the
implementation too much. That might be possible. Eventually.
The current behavior, to me, feels strange and unexpected, though:
opening a running session (a second time) opens another set of windows.
What is that? Do I have a "doubled" session now?
It is very opaque and unclear which window belongs to which session.
Is that how it should be? Is that what we'd want?
I'm adding a new section to the wiki page, "(Multi-) Session Handling".
--
E.R.
Linux Audio Developer,
May I make a feature request here for your Linuxaudio application(s)?
Could you please add JackSession support? It makes working with JACK
standalone applications a lot more user-friendly. There are some apps
which already support it and work fine, such as Yoshimi, Qtractor,
Pianoteq, Ghostess, Guitarix, Jack-Rack, Ardour3, Bristol, Seq24, Jalv,
Ingen, Connie, Specimen and probably more.
It is possible to use applications without JackSession support in a
session (via so-called infra clients): the session manager starts the
applications and makes the connections, but doesn't save their state. So
obviously it would be far more useful if those applications got
JackSession support as well.
Qjackctl is able to work as a Session Manager, and so is Pyjacksm (and
likely Patchage in the future).
According to comments on IRC by Paul Davis, it's very easy to add
JackSession support to your application.
"Its really easy, just handle 1 more callback from the server. Torben's
walkthrough shows what is necessary."
Torben's walkthrough:
http://trac.jackaudio.org/wiki/WalkThrough/Dev/JackSession
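To give an impression of how little is involved, here is a rough sketch
pieced together from jack/session.h and the walkthrough above. It is
untested, and "myapp", its "-U"/"-f" arguments and the state file name are
placeholders, not a real application:

/* Sketch only: no error handling; "myapp", "-U", "-f" and "state.cfg"
 * are placeholders, not a real application. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <jack/jack.h>
#include <jack/session.h>

static jack_client_t *client;
static volatile int   done = 0;

static void session_cb(jack_session_event_t *ev, void *arg)
{
    char cmd[512];
    (void)arg;

    /* 1. Save the application state into ev->session_dir, e.g. state.cfg. */

    /* 2. Tell the session manager how to restart us with that state. */
    snprintf(cmd, sizeof(cmd), "myapp -U %s -f %sstate.cfg",
             ev->client_uuid, ev->session_dir);
    ev->command_line = strdup(cmd);

    jack_session_reply(client, ev);

    if (ev->type == JackSessionSaveAndQuit)
        done = 1;

    jack_session_event_free(ev);
}

int main(void)
{
    client = jack_client_open("myapp", JackNullOption, NULL);
    jack_set_session_callback(client, session_cb, NULL); /* before activate */
    jack_activate(client);
    while (!done)
        sleep(1);
    jack_client_close(client);
    return 0;
}

The callback is registered before jack_activate(); in it the application
saves its state under event->session_dir and replies with the command line a
session manager should use to restart it.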
Thanks in advance,
\r
Hi,
It is very promising that devs like Torben, Paul Davis, Rui and David
Robillard (to name a few) are backing Jack Session and that the
Jack Session API is part of the Jack API. This gives the community a very
good chance that many apps will get JackSession support sooner or later.
However, it's still reasonable to expect that not all LAD applications
are going to be patched with JackSession support.
In other words, there are and will be apps which might be useful (for
one or more of us) to use in a session but which won't have JackSession
(JS) support. From a user's perspective, it would be very useful to be
able to use that application (without JS support) in a session in some
way nevertheless.
At the moment one Session Manager (SM), Pyjacksm (Qjackctl will follow
soon, and also Patchage, I expect), makes this possible by manually adding
'infra clients' to a configuration file, .pyjacksmrc. See the example below.
Infra clients are designed for applications without a state, like a2j,
but it is also possible to use apps without JS support as infra clients.
Amsynth is an application without JS support, and in this way I am able
to load amsynth with project A. The SM makes sure that amsynth is
started and that the Jack connections are restored (that's the only
thing the SM can do for you for apps without JS support). But I don't
always want to use amsynth with project A (Session 1). I might be
working on a totally different project and want to make a session for
that as well (Session 2). This time I want to load amsynth as: amsynth -b
/home/user/projectB.amSynth.presets (I don't use Sessions 1 and 2
together in this example).
To be able to load Session 2, I have to edit my .pyjacksmrc file or make
symlinks.
*Feature request*: It would be nice if the SM provided a way to
load a different configuration file.
For example: JackSessionManagerX --load configurationfileSession2
Thanks in advance,
\r
.pyjacksmrc:
[DEFAULT]
sessiondir = ~/linuxaudio/JackSession
[infra]
a2j = a2jmidid -e
amsynth = amsynth -b /home/user/projectA.amSynth.presets
configurationfileSession2:
[DEFAULT]
sessiondir = ~/linuxaudio/JackSession
[infra]
a2j = a2jmidid -e
amsynth = amsynth -b /home/user/projectB.amSynth.presets
guitarix/gx_head is a simple mono guitar tube-amplifier simulation.
please refer to our project page for more information:
http://guitarix.sourceforge.net/
new features in short:
* fixed jack session support
* added amp model (push/pull)
* added amp model (feedback)
* fixed a build/runtime issue on OS X
* reformatted the source to follow the Google C++ Style Guide conventions
* some minor fixes and maybe new bugs
have fun
_________________________________________________________________________
guitarix is licensed under the GPL.
screen-shots and sound examples:
http://guitarix.sourceforge.net/
direct download:
http://sourceforge.net/projects/guitarix/files/guitarix/guitarix2-0.17.0.ta…
download site:
http://sourceforge.net/projects/guitarix/
please report bugs and suggestions in our forum:
http://sourceforge.net/apps/phpbb/guitarix/
________________________________________________________________________
For extra impulse responses, gx_head uses the
zita-convolver library, and
for resampling we use zita-resampler,
both written by Fons Adriaensen.
http://kokkinizita.linuxaudio.org/linuxaudio/index.html
We use the marvellous Faust compiler to build the amp and effects, and we
would like to say thanks to
: Julius Smith
http://ccrma.stanford.edu/realsimple/faust/
: Albert Graef
http://q-lang.sourceforge.net/examples.html#Faust
: Yann Orlarey
http://faust.grame.fr/
________________________________________________________________________
For Faust users:
All the Faust dsp files used are included in /gx_head/src/faust,
and the resulting .cc files are in /gx_head/src/faust-generated.
The tools we use to convert (post-process and plot)
the resulting Faust cpp files to the needed include format
are in the /gx_head/tools directory.
________________________________________________________________________
regards
guitarix development team
Pierre, I suspect this was intended to be sent to the list?
-------- Forwarded Message --------
From: pierre jocelyn andre
To: Ralf Mardorf
Subject: Re: [LAD] audio format abadie.jo
Date: Thu, 7 Jul 2011 08:06:36 +0200
Yes, that's exactly it: "He's looking for help with the C language,
testing, and creating a new generation of audio card", except that I am
not asking for help, I am providing a base to build development on.
Everyone is free to modify, keep or give away their sources. The only
limit is the license I have set.
German or Dutch would be compatible languages, but it has been far too
long since I last spoke them.
Indeed, with 10 bytes ("10 octets") I can reproduce a human voice.
I hope this audio concept will become the one Linux uses for the coming
generations, because it performs much better, uses fewer resources and
less energy, and has its own technology.
Kind regards
2011/7/7 Ralf Mardorf <ralf.mardorf(a)alice-dsl.net>
On Wed, 2011-07-06 at 19:11 -0400, Tim E. Real wrote:
> On July 6, 2011 05:33:44 am Florian Paul Schmidt wrote:
> > > Http://www.letime.net/legere/index.html
Scrolling down, at the bottom of the webpage there's Google translate.
The English translation seems to be better than the German translation.
> > Hi,
> >
> > I think you'll get more responses if you state your text in English.
> > Good luck,
> > Flo
>
> Hi Florian. We meet again, on LAD!
>
> I think he says he has created a new audio format for Linux.
> He's looking for help with the C language, testing, and creating a new
> generation of audio card.
> The next part I'm not sure about. I think it says he has created an audio
> file with human voices using only ten octaves? Don't know what 'Ko' is.
>
> I'm English Canadian with some French knowledge.
> We were all taught French in school, long ago.
> Tim.