Hello Developers,
I have written a short blog post about what I think is the ideal license for open source sampled instrument libraries. It is in fact either a CC license or the GPL; I am not so naive as to think I could write an entire license myself, but I have added the necessary additions and exceptions.
I am not a lawyer and have no education or official background in law and rights. This is a suggestion based on my experience with existing licenses and the requirements of sample-based virtual instrument libraries, a.k.a. "samples".
http://www.nilsgey.de/2013/02/28/License_Proposal_for_Sample_Instrument_Lib…
I welcome any comments here or on the blog itself, since I know that licenses are a serious matter and I don't want to make a fool of myself. I remember some weird licenses, even in the Linux audio community, and I don't want to create another one of those :)
Greetings,
Nils
Ignorant here, trying to scrounge around and make something work for demo
purposes.
In Python I am trying to build this pipeline:
import pygst
pygst.require('0.10')
import gst

# jackaudiosrc -> level (posts peak/RMS messages) -> jackaudiosink
pipeline_txt = (
    'jackaudiosrc ! '
    'level name=level interval=1000000000 ! '
    'jackaudiosink')
pipeline = gst.parse_launch(pipeline_txt)
I have been trying that a number of ways.
So, I basically watch the bus for level info.
In a subroutine, I can print the peak info to the terminal.
I can't seem to figure out how to pass this info back to the rest of the
program so that I can hook it up to a graphical meter.
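What I imagine is roughly the sketch below (untested, assuming pygst 0.10; update_meter is just a placeholder for whatever the real meter widget needs, and message=true is what makes the level element post to the bus):

import gobject
import pygst
pygst.require('0.10')
import gst

gobject.threads_init()

def update_meter(peak_db):
    # placeholder: replace with the actual GUI meter update
    print('peak: %.1f dB' % peak_db)

def on_message(bus, message):
    # the level element posts element messages named 'level';
    # 'peak' is a list with one dB value per channel
    if (message.type == gst.MESSAGE_ELEMENT
            and message.structure.get_name() == 'level'):
        update_meter(max(message.structure['peak']))

pipeline = gst.parse_launch(
    'jackaudiosrc ! '
    'level name=level message=true interval=1000000000 ! '
    'jackaudiosink')
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect('message', on_message)
pipeline.set_state(gst.STATE_PLAYING)
gobject.MainLoop().run()

Since the signal watch delivers messages in the GLib main loop, a GTK widget could presumably be updated straight from the callback; with another toolkit the value would have to be handed over to the GUI thread some other way.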
Can anyone point me to some simple code doing something like this? Or give me
some clues that might help someone who has been feeling very dense for days
now?
all the best,
drew
Hello all,
we are working on developing some live audio/video web applications at
the moment. We are facing a little problem that maybe some people here
have already come across. Some people advised us to use Red5, an
audio/video streaming and multi-user solution, to take sound from the
mic / sound card input plus a webcam and stream it online via the
browser. The biggest problem is that it is based on Adobe's closed
Flash system and it only uses proprietary formats/codecs such as MP3,
FLV and H.264 :-(
We are looking for a way to do the same with FLOSS technologies such
as Ogg, Theora, Icecast or others...
The idea is that the end user streams his sound and video from his
browser, and everyone else can see/hear him and do the same from
theirs.
We are currently testing some solutions with GStreamer, but we still
need an external client, although we are thinking about running a
Python script on the server to handle the GStreamer live stream...
we are not sure yet...
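For reference, the kind of server-side Python/GStreamer sketch we have been playing with looks roughly like this (untested; the Icecast address, port, password and mount point are just placeholders):

import gobject
import pygst
pygst.require('0.10')
import gst

gobject.threads_init()

# webcam -> Theora, sound card -> Vorbis, muxed into Ogg and pushed
# to an Icecast mount point via shout2send
pipeline = gst.parse_launch(
    'oggmux name=mux ! shout2send ip=127.0.0.1 port=8000 '
    'password=hackme mount=/live.ogg '
    'v4l2src ! ffmpegcolorspace ! theoraenc ! queue ! mux. '
    'alsasrc ! audioconvert ! vorbisenc ! queue ! mux.')

pipeline.set_state(gst.STATE_PLAYING)
gobject.MainLoop().run()

Browsers that can play Ogg/Theora natively could then simply point at the Icecast mount; capturing from the user's own browser is the part we are still unsure about.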
Anyway, if any of you have any hints or FLOSS solutions for that, it
would be highly appreciated :-)
thank you
cheers
Julien
--
APO33
space of research and experimentation
http://www.apo33.org
info(a)apo33.org
Not sure if this is going to help, but I did notice with pd-l2ork on a
laptop with an AMD video card (fglrx driver) that when I have the tearfree
(vsync) option enabled, pd-l2ork's GUI gets very sluggish. Disabling the
vsync option makes it fine.
On Feb 24, 2013 3:54 PM, "Fons Adriaensen" <fons(a)linuxaudio.org> wrote:
Hello all,
I'm seeing a strange problem with Pd. I'm testing a very simple patch
(total CPU load < 1.5% with DSP enabled). There are a few message boxes
used to set an hslider to some preset values. With DSP enabled they take
something like a second to respond to a click. What's wrong here?
Ciao,
--
FA
A world of exhaustive, reliable metadata would be a utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
A couple of days ago I tried out non-session-manager and thought: this is
really nice. It really works in a practical, easy way.
Unfortunately the number of apps supporting non-session is quite small
(as is the case for *all* session management systems, AFAIK). There may
be "a complete audio studio" supporting non-session, from one choice of
sequencer to one synth to one sampler, but people like to use their
favorite software. You can't just say, "if you want to use a sequencer,
use X because it supports non-session". So its usefulness is limited.
However, many apps support one session management framework or the
other. So the obvious thing to do if you want to give people more
choices would be to create some kind of interoperability layer between
session management systems.
What do you think about this? Is there an effort for something like
this already underway? I personally think a good first step might be to
create some compatibility between non-session (because I like it) and
jack-session (because most people are using jack).
Hello both lists,
I see the reason for a non-discussion announcement list, LAA, but is it really necessary to have two discussion lists?
I know there are several people who only subscribed to one list, but I don't know if this is intentional or not.
Why not merge both? The immediate effect would be an end to the cross-posting done just to reach the whole community.
The mailing lists are already hosted by the same service, but I think in the end it would be more convenient and better for everyone to have it all in one place.
Yes, I know some of you (mostly people on the dev list who do not read LAU) fear that you will get unwanted mails. But as experience shows, topics like "why linux audio sucks", "water as fuel" and other mega/nonsense/off-topic threads get cross-posted anyway.
And I know some of you think devs and users should be kept separated. But be honest: who in the world is subscribed to a Linux mailing list and could not stand the occasional developer topic? If you want beginner-level support, you most likely don't know how to use mailing lists anyway :)
Just to be clear:
I am a friend of diversity and of seemingly "redundant" applications and projects. There cannot be enough sequencers, samplers, synthesizers, notation programs etc.
But when it comes to infrastructure and core building blocks I see no sense in separation. This is our strong point compared to other operating systems and ecosystems. We may compete in a friendly way on a musical or feature basis, but there is no need to wall off a program just to make it harder for users to use the "enemy's" program (a Windows/OSX way of thinking).
In a Linux audio world of diversity and individuality, it is good to have central places and institutions. We can do whatever we want, but unlike the closed-source world we have no need to create rival factions and standards that open up artificial gaps.
The most successful of these is JACK itself: a centralized audio server, hailed and praised by everyone.
And since I am writing this mail already, my personal wish list:
Please join the two blog/RSS planets as well (http://www.planet.linuxmusicians.com/, http://linuxaudio.org/planet/ )
Merge Yoshimi and ZynAddSubFX, the JACK versions and experimental forks, the session managers/protocols, and a few audio distributions which do not have enough manpower on their own and whose efforts could boost the other distributions. "Joining/merging" can also mean that one side steps down honorably and retires its project.
Nils
http://www.nilsgey.de
MFP -- Music For Programmers
Release 0.01, "Mining For Participants"
MFP is an environment for visually composing computer programs, with
an emphasis on music and real-time audio synthesis and analysis. It's
very much inspired by Miller Puckette's Pure Data (pd) and Max/MSP,
with a bit of LabView and TouchOSC for good measure. It is targeted
at musicians, recording engineers, and software developers who like
the "patching" dataflow metaphor for constructing audio synthesis,
processing, and analysis networks.
MFP is a completely new code base, written in Python and C, with a
Clutter UI. It has been under development by a solo developer (me!),
as a spare-time project for several years.
Compared to Pure Data, its nearest relative, MFP is superficially
pretty similar but differs in a few key ways:
* MFP uses Python data natively. Any literal data entered in the
  UI is parsed by the Python evaluator, and any Python value is a
  legitimate "message" on the dataflow network.
* MFP provides fairly raw access to Python constructs if desired.
For example, the built-in read-eval-print console allows live
coding of Python functions as patch elements at runtime.
* Name resolution and namespacing are addressed more robustly,
  with explicit support for lexical scoping.
* The editing UI is largely keyboard-driven, with a modal input system
that feels a bit like vim. The graphical presentation is a
single-window style with layers rather than multiple windows.
* There is fairly deep integration of Open Sound Control (OSC), with
every patch element having an OSC address and the ability to learn
any other desired address.
The code is still in early days, but has reached a point in its
lifecycle where at least some interesting workflows are operational
and it can be used for a good number of things. I think MFP is now
ripe for those with an experimental streak and/or development skills
to grab it, use it, and contribute to its design and development.
The code and issue tracker are hosted on GitHub:
https://github.com/bgribble/mfp
You can find an introductory paper (submitted to LAC-2013) and
accompanying screenshots, some sample patches, and a few other bits of
documentation in the doc directory of the GitHub repo. The README
at the top level of the source tree contains dependency, build,
and getting-started information.
Thanks,
Bill Gribble