Not sure if this will help, but I did notice with pd-l2ork on a
laptop with an AMD video card (fglrx driver) that when the TearFree
(vsync) option is enabled, pd-l2ork's GUI gets very sluggish. Disabling
the vsync option makes it fine.
On Feb 24, 2013 3:54 PM, "Fons Adriaensen" <fons(a)linuxaudio.org> wrote:
Hello all,
I'm seeing a strange problem with Pd. I'm testing a very simple patch
(total CPU load < 1.5% with DSP enabled). There are a few message boxes
used to set an hslider to some preset values. With DSP enabled they take
something like a second to respond to a click. What's wrong here?
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev(a)lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev
A couple of days ago, I tried out non-session-manager, and thought,
this is really nice. It really works in a practical, easy way.
Unfortunately the number of apps supporting non-session is quite small
(as is the case for *all* session management systems, AFAIK). There may
be "a complete audio studio" supporting non-session, from one choice of
sequencer to one synth to one sampler, but people like to use their
favorite software. You can't just say, "if you want to use a sequencer,
use X because it supports non-session." So its usefulness is limited.
However, many apps support one session management framework or the
other. So the obvious thing to do if you want to give people more
choices would be to create some kind of interoperability layer between
session management systems.
What do you think about this? Is there an effort for something like
this already underway? I personally think a good first step might be to
create some compatibility between non-session (because I like it) and
jack-session (because most people are using jack).
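An interoperability layer would mostly be protocol translation. NSM speaks plain OSC, so one half of a bridge could be a small process that performs the NSM handshake on behalf of a jack-session-only client. As a rough illustration (not an existing tool), here is a minimal sketch of building the NSM announce message by hand, following the OSC 1.0 encoding rules; the announce argument list shown is my reading of the NSM API and should be checked against its documentation:

```python
import struct

def _osc_string(s):
    """Encode an OSC string: ASCII bytes, NUL-terminated, padded to 4 bytes."""
    b = s.encode("ascii") + b"\0"
    return b + b"\0" * (-len(b) % 4)

def osc_message(address, *args):
    """Build a raw OSC message supporting int32 ('i') and string ('s') args."""
    typetags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)
        elif isinstance(a, str):
            typetags += "s"
            payload += _osc_string(a)
        else:
            raise TypeError("unsupported OSC argument type")
    return _osc_string(address) + _osc_string(typetags) + payload

# Hypothetical announce a bridge might send on behalf of a wrapped client:
# app name, capabilities, executable, API major/minor, pid.
msg = osc_message("/nsm/server/announce",
                  "bridged-app", "", "bridged-app", 1, 0, 1234)

assert len(msg) % 4 == 0                       # OSC data is 4-byte aligned
assert msg.startswith(b"/nsm/server/announce\0")
```

The other half of the bridge would register a jack-session callback and translate save/restore requests in both directions; that part depends on the JACK C API and is omitted here.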
Hello both lists,
I see the reason for a non-discussion announcement list, LAA, but is it really necessary to have two discussion lists?
I know several people who subscribed to only one list, but I don't know whether that was intentional.
Why not merge both? The immediate effect would be an end to cross-posting just to reach the whole community.
The mailing lists are run by the same service, and I think in the end it would be more convenient and better for everyone to have everything in one place.
Yes, I know some of you (mostly people on the dev list who do not read LAU) fear that you will get unwanted mail. But as experience shows, topics like "why linux audio sucks", "water as fuel" and other mega/nonsense/off-topic threads get cross-posted anyway.
And I know some of you think devs and users should be kept separated. But be honest: who, subscribed to a Linux mailing list, could not stand the occasional developer topic? If you want beginner-level support, you most likely don't know how to use mailing lists anyway :)
Just to be clear:
I am a friend of diversity and of seemingly "redundant" applications and projects. There cannot be enough sequencers, samplers, synthesizers, notation programs, etc.
But when it comes to infrastructure and core building blocks I see no sense in separation. This is our strong point compared to other operating systems and ecosystems. We may compete in a friendly way on a musical or feature basis, but there is no need to wall off a program just to make it harder for users to use the program of the "enemy" (from a Windows/OSX POV).
In a Linux-audio world of diversity and individuality, it is good to have central places and instances. We can do whatever we want, but unlike the closed-source world we have no need to create factions and rival standards, opening artificial gaps.
The most successful of these instances is JACK itself. A centralized audio server, hailed and praised by everyone.
And since I am writing this mail already, my personal wish list:
Please join the two blog/RSS planets as well (http://www.planet.linuxmusicians.com/, http://linuxaudio.org/planet/ )
Merge Yoshimi and ZynAddSubFX, the JACK versions and experimental forks, the session managers/protocols, and a few audio distributions that don't have enough manpower and that could be used to boost the other distributions. "Joining/merging" can also mean for one side to step down honorably and retire its project.
Nils
http://www.nilsgey.de
MFP -- Music For Programmers
Release 0.01, "Mining For Participants"
MFP is an environment for visually composing computer programs, with
an emphasis on music and real-time audio synthesis and analysis. It's
very much inspired by Miller Puckette's Pure Data (pd) and Max/MSP,
with a bit of LabView and TouchOSC for good measure. It is targeted
at musicians, recording engineers, and software developers who like
the "patching" dataflow metaphor for constructing audio synthesis,
processing, and analysis networks.
MFP is a completely new code base, written in Python and C, with a
Clutter UI. It has been under development by a solo developer (me!),
as a spare-time project for several years.
Compared to Pure Data, its nearest relative, MFP is superficially
pretty similar but differs in a few key ways:
* MFP uses Python data natively. Any literal data entered in the
UI is parsed by the Python evaluator, and any Python value is a
legitimate "message" on the dataflow network
* MFP provides fairly raw access to Python constructs if desired.
For example, the built-in read-eval-print console allows live
coding of Python functions as patch elements at runtime.
* Name resolution and namespacing are addressed more robustly,
with explicit support for lexical scoping
* The editing UI is largely keyboard-driven, with a modal input system
that feels a bit like vim. The graphical presentation is a
single-window style with layers rather than multiple windows.
* There is fairly deep integration of Open Sound Control (OSC), with
every patch element having an OSC address and the ability to learn
any other desired address.
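The first point, Python data as native messages, can be illustrated with a small sketch (hypothetical, not MFP's actual code): handing box contents to Python's own parser means any literal or container typed into the UI becomes a first-class message value. Something like `ast.literal_eval` with a string fallback would behave this way:

```python
import ast

def parse_box_text(text):
    """Turn text typed into a (hypothetical) message box into a Python value.

    Plain literals and containers parse directly; anything else is kept
    as the raw string, as a conservative fallback.
    """
    try:
        return ast.literal_eval(text)
    except (ValueError, SyntaxError):
        return text

print(parse_box_text("440.0"))          # a float message
print(parse_box_text("[1, 2, 3]"))      # a list message
print(parse_box_text("{'freq': 440}"))  # a dict message
print(parse_box_text("bang"))           # falls back to the string 'bang'
```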
The code is still in early days, but has reached a point in its
lifecycle where at least some interesting workflows are operational
and it can be used for a good number of things. I think MFP is now
ripe for those with an experimental streak and/or development skills
to grab it, use it, and contribute to its design and development.
The code and issue tracker are hosted on GitHub:
https://github.com/bgribble/mfp
You can find an introductory paper (submitted to LAC-2013) and
accompanying screenshots, some sample patches, and a few other bits of
documentation in the doc directory of the GitHub repo. The README
at the top level of the source tree contains dependency, build,
and getting-started information.
Thanks,
Bill Gribble
Hi,
I compiled Aliki on a new machine and I cannot capture impulses because
Aliki crashes when I try to load a sweep file in the capture dialog.
I can create a sweep file, and I tried creating different sweep files,
but none of these files can be loaded in the capture dialog.
When Aliki crashes there is a backtrace in the console, which I pasted here:
http://hastebin.com/pobileyiru.avrasm
Using:
Ubuntu 12.10 Quantal
kernel: 3.5.0-23-generic
x86_64
Thanks in advance,
federico lopez
PS: I used Aliki on another 32-bit machine without problems.
Message: 7
Date: Sun, 17 Feb 2013 17:10:11 -0500
From: Paul Coccoli <pcoccoli(a)gmail.com>
>You're effectively serializing your objects and passing them over the
ringbuffer. If you do it this way, you should at least consider explicitly
embedding the type and length as the first members of EventBase (and have
each derived class set its own type and length in its constructor), and
reading those before touching the object you read.
Very good, that's what I do. The event is serialised to the ringbuffer one
member at a time (so I don't rely on a particular memory layout or padding).
The message type and the length are the first two values. (This example
sends some text to the GUI to display a message box.)
my_output_stream& strm = MessageQueToGui();
strm << id_to_long("mbox");                       // message type
strm << (int) (sizeof(wchar_t) * msg.size()
               + sizeof(int) + sizeof(icontype)); // message length
strm << msg;                                      // payload
strm << icontype;
Jeff
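For readers following along, the framing Jeff describes (a type id, then a byte length, then the payload) can be sketched language-neutrally. This is an illustrative Python version, not Jeff's actual C++ code, with made-up helper names:

```python
import struct

def id_to_long(tag):
    """Pack a 4-character tag (like 'mbox') into a 32-bit integer id."""
    return struct.unpack(">I", tag.encode("ascii"))[0]

def frame_message(tag, payload):
    """Serialise one event: type id, payload length, then the payload bytes."""
    return struct.pack(">II", id_to_long(tag), len(payload)) + payload

def read_message(buf):
    """Read the fixed-size header first, then exactly `length` payload bytes,
    mirroring the 'read type and length before touching the object' advice."""
    type_id, length = struct.unpack(">II", buf[:8])
    return type_id, buf[8:8 + length]

data = "hello".encode("utf-32-be")   # stand-in for the wchar_t text
framed = frame_message("mbox", data)
type_id, payload = read_message(framed)

assert type_id == id_to_long("mbox")
assert payload.decode("utf-32-be") == "hello"
```

The key property is that the reader never has to guess how many bytes belong to an event: the header alone tells it how much to pull from the ring buffer before interpreting anything.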