Hi all,
I'm glad to announce the release of NASPRO 0.4.0.
NASPRO (http://naspro.atheme.org/) is meant to be a cross-platform
sound processing software architecture built around the LV2 plugin
standard (http://lv2plug.in/).
The goal of the project is to develop a series of tools to make it
easy and convenient to use LV2 for sound processing on any (relevant)
platform and for everybody: end users, host developers, plugin
developers, distributors and scientists/researchers.
This is a huge release! The main news is the introduction of the
ino and gino libraries and of the FreeADSP application, the addition
of threading and unnamed semaphore APIs to NASPRO core, as well as
some API breakage there w.r.t. UTF-16 string encoding/decoding, fixes
to preset data generation in NASPRO Bridge it, and various
cosmetic changes here and there. You can find detailed ChangeLogs in
the tarballs.
It includes:
- NASPRO core: the portable runtime library at the bottom of the architecture;
- NASPRO Bridge it: a little helper library to develop
insert-your-API-here to LV2 bridges;
- NASPRO bridges: a collection of bridges to LV2 which, once
installed, allow you to use plugins developed for other plugin
standards in LV2 hosts;
- LV2proc: a simple command line effect processor using LV2 plugins;
- ino and ino/JavaScriptCore: a minimalist C API to execute JavaScript
code and to expose native methods to JavaScript execution contexts, plus
JavaScriptCoreGTK+ 2/3 based implementations;
- gino and gino/WebKitGTK+: a minimalist C API to create GUIs using
HTML/CSS/JavaScript and to interface them with C code, plus a
WebKitGTK+ 2 implementation;
- FreeADSP: a MIDI-controlled real-time stereo effect rack using LV2 plugins.
In particular, the NASPRO bridges collection includes two bridges: a
LADSPA (http://www.ladspa.org/) 1.1 and a DSSI
(http://dssi.sourceforge.net/) 1.0.0/1.1.0 bridge.
*BEWARE*: most of the new stuff is in early stages of development!
NASPRO core, NASPRO Bridge it and the NASPRO bridges are released under
the LGPL 2.1; LV2proc and FreeADSP are released under the GPL 3; ino,
ino/JavaScriptCore, gino and gino/WebKitGTK+ are released under an
ISC-style license.
Enjoy!
Hi all,
I'm developing a small app which uses aubio to extract the fundamental
frequency of a monophonic guitar signal to drive a set of wavetable
synths. It works quite well and I'll put it up on the net when I have the
time.
But I would like to extract the fundamentals of a polyphonic signal.
Does anyone know of a library which does this in real time? Or at least a
state-of-the-art paper I could implement?
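For what it's worth, the monophonic case can be sketched with a naive autocorrelation peak search. This is only an illustration of the idea, not aubio's method (aubio uses more robust algorithms such as YIN), and `estimate_f0` and its parameters are hypothetical names, not aubio's API:

```python
import math

def estimate_f0(signal, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency of a monophonic signal by
    searching for the lag with the highest autocorrelation.
    A conceptual sketch only -- not aubio's implementation."""
    n = len(signal)
    lag_min = int(sample_rate / fmax)          # shortest period considered
    lag_max = min(int(sample_rate / fmin), n - 1)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, lag_max + 1):
        # Correlate the signal with a delayed copy of itself.
        corr = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0
```

A real implementation would add windowing, interpolation around the peak, and an octave-error check; for polyphonic signals this approach breaks down, which is why multi-pitch estimation needs the more elaborate methods discussed in the literature.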
thanx,
Gerald
Hello all.
Some updates available on
<http://kokkinizita.linuxaudio.org/linuxaudio/downloads>
Zita-resampler is a C++ library for sample rate conversion of
audio signals. Full documentation is available in HTML format,
see the 'docs' directory.
Release 1.1.0 (26/01/2012)
---------------------------
* VResampler class added - provides arbitrary and variable
resampling ratio, see docs.
* This release is NOT binary compatible with previous ones
(0.x.x) and requires recompilation of applications using it.
* This release is API compatible with the previous one. But if
  you are using the now deprecated filtlen() function, please
  replace it with inpsize(), which provides the same information.
* The inpdist() function has been added, see docs.
* The ratio_a() and ratio_b() calls have been removed, if this
is a problem (I'd be surprised) they can be added again.
* The include files are now in $PREFIX/include/zita-resampler/.
Please DO remove any old ones manually after installing this
version. Compiling using the old includes and linking with
the new library will create havoc.
* #defines and static functions are added for compile time and
run time version checking, see resampler-table.h.
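Conceptually, an arbitrary-ratio resampler maps each output sample to a fractional position in the input stream. The sketch below uses plain linear interpolation to illustrate that idea only; zita-resampler's VResampler uses proper polyphase filtering, and `resample_linear` is a hypothetical name, not part of its API:

```python
def resample_linear(signal, ratio):
    """Resample by an arbitrary ratio (output_rate / input_rate) using
    linear interpolation between neighbouring input samples.
    A crude conceptual stand-in for a variable-ratio resampler."""
    out = []
    pos = 0.0              # fractional read position in the input
    step = 1.0 / ratio     # input samples consumed per output sample
    while pos < len(signal) - 1:
        i = int(pos)
        frac = pos - i
        out.append((1.0 - frac) * signal[i] + frac * signal[i + 1])
        pos += step
    return out
```

Linear interpolation like this causes audible aliasing and treble loss; the point of a library such as zita-resampler is to do this mapping with band-limited filters instead.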
Zita-alsa-pcmi is the successor of clalsadrv. It provides easy access
to ALSA PCM devices, taking care of the many functions required to
open, initialise and use a hw: device in mmap mode, and providing
floating point audio data.
Release 0.1.1 (26/01/2012)
---------------------------
* This release is almost API compatible with clalsadrv-2.x.x.
The only changes your source code will need are:
- Change the include file.
- Change the type of any objects defined by the library.
- Replace calls to stat() by state().
- If you want error reporting on stderr, add an optional
parameter to the constructor. See include file for details.
* Added support for big-endian PCM formats.
* Added support for reading and writing interleaved user buffers.
* Error messages on stderr can be selectively enabled. If an app
  is compiled without them, they can be re-enabled at runtime by
  defining the environment variable ZITA_ALSA_PCMI_DEBUG, so they
  are now off by default. See the source code for details.
* Two simple demo programs are provided, one of them is the ALSA
version of jack_delay. Complete documentation will follow later.
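As an illustration of the kind of conversion such a layer performs, the sketch below splits interleaved signed 16-bit samples into per-channel lists of floats. `deinterleave_to_float` is a hypothetical helper, not zita-alsa-pcmi's API (the real library is C++ and handles many PCM formats and endiannesses):

```python
def deinterleave_to_float(frames, nchan):
    """Split interleaved signed 16-bit samples into per-channel lists
    of floats in [-1.0, 1.0) -- the kind of conversion an ALSA access
    layer does between a hw: device and float audio buffers."""
    chans = [[] for _ in range(nchan)]
    for i, s in enumerate(frames):
        # Sample i belongs to channel i modulo the channel count.
        chans[i % nchan].append(s / 32768.0)
    return chans
```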
The clalsadrv lib will remain available for some time, but any
new releases of JAAA and JAPA will switch to the new one. Patches
for AMS are being prepared.
Both libraries have been updated mainly to provide the necessary
functionality for two new apps: zita-a2j and zita-j2a. These allow
you to add ALSA hw: devices as Jack clients, much like the alsa-in and
alsa-out clients that come with Jack. To see why I wanted something
to replace those, have a look at
<http://kokkinizita.linuxaudio.org/linuxaudio/resample.html>
Both apps still need some cosmetic work but they are in a working state.
I'd like some more testing before they are released. If interested,
drop me a line off-list.
Ciao,
--
FA
Before us lies a wide valley, the sun shines - a glittering ray.
Hello!
LMMS's Triple Oscillator is one of my favourite instruments on Linux, but
LMMS isn't my favourite production tool. I've looked around and
noticed that it's only available internally in LMMS, which poses some
difficulties since LMMS (to my knowledge) cannot be synced to the JACK
clock and thus to my favourite MIDI tools. This is a problem for automating
filters, other parameters, etc.
My question is, quite simply: since LMMS is FLOSS, would making
Triple Oscillator, and perhaps other LMMS instruments, into plugins
(LV2/Linux VST) be a realistic possibility? Bear with me, I'm no developer
but merely a musician, so I really don't know what I'm asking/talking
about. But I firmly believe that a plugin version of these instruments,
usable for example in Ardour 3 when that arrives, would make a great
addition to the Linux musician community.
(P.S. Does anything similar already exist, perhaps?)
I'll give it a shot,
Regards,
Gerald
On Tue, 2012-01-24 at 23:07 +0100, andersvi(a)notam02.no wrote:
> >>>>> "G" == Gerald Mwangi <gerald.mwangi(a)gmx.de> writes:
>
> G> Hi, has someone got PolyPitch compiled on Linux? It complains that
> G> SCWorld_Allocator is missing.
>
> Yes, i compiled PolyPitch and got it running (very cpu-heavy) some time
> ago. FC14, 2.6.35 kernel and SC3-dev sources pulled from git january
> 18th. Only some minor tweaks to some header-files iirc to build it.
>
> The missing SCWorld_Allocator might suggest that your version of the
> sc3 sources is not recent enough? Perhaps try with a sc3 source tree
> from more recent git.
>
>
> _______________________________________________
> Linux-audio-user mailing list
> Linux-audio-user(a)lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-user
Hi, has someone got PolyPitch compiled on Linux? It complains that
SCWorld_Allocator is missing. I've installed SuperCollider 3.4 on Ubuntu
Oneiric.
Greets,
Gerald
Looks interesting. I'll take a look at it.
Thanx,
Gerald
On Mon, 2012-01-23 at 12:57 +0000, Dan S wrote:
> Hi Gerald,
>
> I don't know the exact state of the art, but Nick Collins recently
> used "Anssi Klapuri's great 2008 paper 'Multipitch analysis of
> polyphonic music and speech signals using an auditory model'" to build
> a polyphonic pitch tracker for SuperCollider.
>
> Since Nicks' code is GPL, maybe you can even re-use it. See this
> email: https://www.listarc.bham.ac.uk/lists/sc-users/msg10760.html
>
> Dan
>
>
> 2012/1/23 Gerald Mwangi <gerald.mwangi(a)gmx.de>:
> > Hi all,
> > I'm developing a small app which uses aubio to extract the fundamental
> > Frequency of a monophonic guitarsignal to drive a set of wavetable synths.
> > It works quite well and I'll put up on the net when I have the time.
> > But I would like to extract the fundamentals of a polyphonic signal. Does
> > anyone know of a lib which does this in realtime? Or at least a state of the
> > art paper I could implement?
> > thanx,
> > Gerald
> >
Hello list!
For those who have PyQt4 for Python3 installed:
I have a piece of software, a music notation editor, that can be started with a one-liner, and I need to find a bug that only occurs on some systems.
git clone git://github.com/nilsgey/Laborejo.git && cd Laborejo && ./laborejo-qt.sh
This will download and run my software Laborejo as a normal user without installing anything*. The only dependencies are PyQt and git (to download it).
You will see five lines and a symbol. The symbol must be perfectly aligned within the five lines (one pixel above can be tolerated). It should look like this: http://www.wargsang.de/pyqt-bug-report.jpg
Do you see the symbol shifted up or down, or is it correct?
Could you please answer with the following information attached: your graphics driver (ati, nvidia, intel, etc., and whether it is closed or open source) and your desktop environment/window manager (GNOME, KDE, Xfce, i3, etc.). If you want to add more information, like your Qt version or X server, that would be nice as well. Everything display-related helps.
I believe closed nvidia drivers will shift the symbol. I tested it myself on ati and intel graphics, both 32 and 64 bit, and it looked good, both on Linux and Windows. Other users with ati and intel GPUs had no problem. But two people with an nvidia card got the wrong display.
It would be very nice to hear from you!
Nils
http://www.laborejo.org
*The only modifications to your system are a new directory, .laborejo, in your home directory and the files downloaded via git.
Greetings,
I've just received a notice from the Linux Journal that they will no
longer be running my monthly articles. I know that some people on this
list have enjoyed reading them, but alas, all things must end. I've been
invited to contribute full-length articles to the digital edition, which
I will do, but those articles will be available only to subscribers.
I have given some thought to collecting all my LJ articles to date (12+
years' worth) and posting updated versions at linux-sound.org. However,
the work is non-trivial - it takes considerable time to research and
write those articles - and I have bills to pay. Other work for hire will
necessarily take precedence, meaning I'll probably teach more and
perform more often.
So, I hope you've enjoyed my work for LJ. Thanks for the reads, I've
enjoyed the writing.
Best,
dp