Hello.
I updated the nord2pd project at
ftp://ftp.funet.fi/pub/sci/audio/devel/nordmodular/
The NOTES file describes the algorithms of the modules. The explanations
are neither complete nor entirely correct. It includes references to
existing software and papers.
I made screenshots of the PD files I have written so far. The screenshots
are for people who do not use PD, for example.
I included the parser code from the NMEdit project.
OK. Only 14 of the 109 modules have some PD code. Only 3 modules
are complete; 10 modules are missing small things. Since I started
this project 25 days ago, I expect to have finished it by November,
with all 109 modules having some code but only 23 modules
completely finished. Then eight more years and I give up. Clavia
should not be worried about this project at all, I'm sure.
Here are some thoughts the discussion has raised:
(1) Volunteers should now read the NOTES file. The information
I need is how you would implement the modules in PD or in any
other system. Modulars: csound, alsamodular, supercollider, galan,
ssm, beast, reaktor etc. Non-modulars: ladspa, vst, music-dsp code
archive, etc.
(2) The PD files currently include only trivial material. It should
be easy to write them in the other systems. Please do it.
(3) A free clone of the Nord Modular is not important. I'm aiming
at a minimal system which makes NM patches run in other systems.
The GUI for building the patches is not part of my project.
The UI or GUI for controlling the patch parameters is part of
my project. OSC control could be the simplest way to add that
control to the patches.
(4) Only NM patches, not NM G2 patches.
Tasks we have:
(1) Write the NM file parser, which only maps the patch data to
C data structures. For example:
module[4].name
module[4].col
module[4].row
module[4].p[2]
module[4].im[1]
module[4].ih[1]
module[4].ic[1]
That could be simpler than expected. NMEdit already has parser
code in "nmedit/libs/libnmpatch". I included a copy in my
project package, so there is no need to download NMEdit.
Someone who knows the code should study it and write a standalone
program for us. (A rough sketch of the target structures follows
this task list.)
(2) Finish the PD modules, or modules in any other system.
(3) Update the parser with the patch conversion code.
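To make task (1) concrete, here is a rough sketch of what the C data
structures could look like. Array sizes, types, and the meaning of the
im/ih/ic fields are my guesses, not the actual NM patch format:

#define NM_MAX_MODULES 128
#define NM_MAX_SLOTS    32

struct nm_module {
    char name[32];          /* module type name                           */
    int  col, row;          /* position in the patch window               */
    int  p[NM_MAX_SLOTS];   /* parameter (knob) values                    */
    int  im[NM_MAX_SLOTS];  /* input connections: source module (a guess) */
    int  ih[NM_MAX_SLOTS];  /* input connections: source outlet (a guess) */
    int  ic[NM_MAX_SLOTS];  /* input connections: cable info (a guess)    */
};

struct nm_patch {
    int num_modules;
    struct nm_module module[NM_MAX_MODULES];
};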
Juhana
Hi all,
As some of you may know, Linux.conf.au is on in Canberra, Australia
at the moment, and today we had an audio mini-conf followed by
a performance night.
The standout tonight was a duo called Deprogram:
http://www.deprogram.net/
who performed some great electronica with live keys and vocals over
prerecorded tracks. The backing tracks had been recorded with, and were
being played back using, Ardour running on a Linux laptop.
The performance was simply stunning. The fact that Ardour, Jack
and other Linux software was used in its production and performance
is a *huge* validation of what we have been doing all these years.
Congratulations to everyone involved in making audio on Linux happen
and thanks to Nick and Mary of Deprogram for showing us how good
this stuff actually is.
Cheers,
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo nospam(a)mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
Saying Python is easier than C++ is like saying that turning a
light switch on or off is easier than operating a nuclear reactor.
Hi.
Yesterday I released ZynAddSubFX 2.2.1.
News:
- made to work with mxml-2.2 (will NOT work on
older versions of mxml)
- it is possible to completely remove the
graphical user interface (e.g. it can run
without X). For this you need to modify the
DISABLE_GUI option in the Makefile.inc
- added a command-line option -L which loads an
instrument (.xiz) - for now it only loads into
part 0 (you can use this option together with -l to load a master
file and after this the option -L
to replace the part)
You can find it at the usual place:
http://zynaddsubfx.sourceforge.net
Paul
Mx44, previously known as Mx4, got itself a brand new set of
features:
Wave Shaping:
This works kind of like a VCF but the other way around,
creating resonant sweeps out of sine waves. Turns noise
into chaos. Controlled by envelope & friends (keybias,
velocity). (A rough sketch of the idea follows this list.)
Low Pass Filter:
This is just a fairly simple ~2dB/octave filter. Works
great with the wave shaper and also balances the weight
between high and low keys into something fairly enjoyable.
Sounds a bit "wet" and very DX-ish ...
LFO:
This is an LFO with dual speed/intensity. Controlled by
wheel or its own time parameters. Works in sync (all
oscillators moving up / down together) or "spread out".
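Purely as an illustration of the general waveshaping idea mentioned
above (this is a generic sketch, not Mx44's actual code): pushing a
sine through a nonlinear transfer function adds harmonics, and sweeping
the drive amount with an envelope gives the sweep-like motion.

/* Generic waveshaper sketch, not Mx44 code: y = sin(drive * x).
   More drive folds the wave further and adds harmonics; sweeping
   'drive' with an envelope produces the sweep-like effect. */
#include <math.h>

float waveshape(float x, float drive)
{
    return sinf(drive * x);
}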
Get it here:
http://hem.passagen.se/ja_linux/
Please remember to report bugs. I don't have a /dev/midi
these days, so some wild guesswork is indeed going on :)
--
(
)
c[] // Jens M Andreasen
Hi,
Just noticed this recent release of JUCE 1.10:
(http://www.rawmaterialsoftware.com/juce/)
which I think is the first one ready to build on Linux. Indeed, I've just
tested the "jucedemo" program and it seems to be running quite fine.
Impressive, I may say.
As expected, only the GUI code is working for now, and I'm almost sure it is
working pretty well, at least on my X.org boxes. I'm quite excited to know
that the JUCE Linux native port has evolved and is materializing.
However, the Audio and MIDI abstraction framework is still to be filled in.
And this is exactly the purpose of this very post - yet again, if you
remember, since last January's exchange on this :)
You might remember that I brought this very subject to the attention of
the Linux Audio Developers and Users (LAD/LAU) mailing lists, and to put a
long story short: the call for help is hereby reiterated to write the
native Linux Audio and MIDI implementations of the JUCE C++ framework,
which among other things may bring a native Tracktion Linux port into
light :P
Given that JUCE is being released under the GPL, and if some of the LADs are
willing to help (me included), things could just happen sooner rather than
later ;)
That said, I'm all for taking explicit directions for a JACK
implementation of JUCE's Audio interface, and ALSA sequencer for MIDI. Is
there something already in the works, or is it something that some of us
(the LADs) can step in and give a hand with?
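For reference, and only as a rough sketch of where such an implementation
would start (the JUCE-side glue is omitted, and the client/port names are
made up), a minimal JACK client in plain C looks roughly like this:

/* Minimal JACK client skeleton (standard JACK C API; not JUCE code). */
#include <jack/jack.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *out_port;

/* Called by JACK from its realtime thread once per period. */
static int process(jack_nframes_t nframes, void *arg)
{
    float *out = jack_port_get_buffer(out_port, nframes);
    memset(out, 0, nframes * sizeof(float)); /* silence; real rendering goes here */
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("juce_sketch", JackNullOption, NULL);
    if (client == NULL)
        return 1;
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    for (;;)          /* keep running; a real app would have a proper exit path */
        sleep(1);
    return 0;
}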
And what about public hosting of the JUCE project, given that it's being
released under an open-source license (GPL)? Yet again, this is just a
humble suggestion of mine. To give you a hint, sourceforge.net's project
name "juce" is still available for registration, or so I believe.
I'll be very happy to hear some comments on this, especially from the
LADs. Thanks anyway.
Cheers.
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org
>Hi Majik,
>
>majik wrote:
>
>>---Here is an email I sent a little while ago to Kai Vehmanen:
>>
>>> I have been looking through the Soundtracker 0.6.7 code as I have been
>>> wanting to improve the jack output. It would be excellent if Soundtracker
>>> could output each of its channels to a Jack port, instead of (as well as?)
>>> outputting the mix of the two mono channels. Unfortunately, I'm only a
>>> beginner when it comes to audio coding and was wondering if someone could
>>> hack it together for me?
>>
>>
>>---And his response:
>>
>>that would be a cool feature to have, but unfortunately not a trivial
>>thing to add (though not impossible either). You could try to ask about
>>this on the soundtracker mailing list (or possibly on linux-audio-dev)...
>>maybe someone else is also interested and has the time to help.
>>Unfortunately I don't have much time for FOSS development atm, so I'll
>>have to pass. :(
>>
>>---Will anyone help with this? I believe the problem is that the mixer
>>code works in a monolithic way, and thus needs a rewrite.
>>
>This issue's been discussed on the soundtracker mailing list, months ago.
>
>Here's a quote from Yury Aliaev, a Soundtracker contributor :
>
><quote>
>In the current state of ST such a thing (multichannel output) is almost
>impossible because of the monolithic structure of the mixer code. The
>optimal way to solve this is (I mean) rewriting the whole mixer in a
>modular way. This will also make ST more flexible and universal and, in
>particular, will make adding new effects (including LADSPA processing)
>easier.
>
>Currently I have some ideas how to do this (but still have no free time
>for this :( ), but there is another way: libremix written by Conrad
>Parker (see remix.sf.net) seems to be good for this purpose.
></quote>
>
>
>A question I asked, a few weeks later :
>
><quote>
>
>> About per-track JACK outputs: in your answer to Emiliano Grilli, on
>> June 1st, you explain that this needs a big rewrite. But do you
>> believe a sort of hack is possible? Like a small patch that lies
>> around for those (I like JACK :) who need this feature before the
>> soundtracker engine gets rewritten... If yes, any advice? Does it
>> imply playing with the mixer assembly routines?
>
></quote>
>
>And Yury's answer :
>
><quote>
>Unfortunately, it will be a heavy hack of the assembler routines :( Because
>they mix sounds from the different channels directly after resampling,
>rather than writing to separate buffers and then mixing them. This is why
>I decided to rewrite the mixer entirely rather than inventing kludges...
></quote>
>
>
>Hope it helps... Yury may have started to rewrite the mixer.
>
>Regards
>
>--
> og
Yes, I did see this; I just wanted to re-illuminate the topic and appeal
through LAD to see if anyone has some free time to develop this, as I
feel it would be such a great feature.
Matthew Carey
Dear sir,
The HTPC market is growing and Linux has a strong presence here via
software offerings like MythTV (http://www.mythtv.org/). A significant
portion of HTPC users are interested in quality audio, such as that
offered by professional sound cards.
Simultaneously there is a drive to eliminate all sources of latency in
the Linux kernel (http://lwn.net/Articles/120797/) to accommodate
professional audio needs. Large professional Linux audio communities
already exist (http://jackit.sourceforge.net/,
http://www.linuxdj.com/audio/lad/) and the level of interest is high
enough to be seen by some as driving the next major release of the Linux
kernel
(http://www.computerworld.com.au/index.php/id;669959914;fp;16;fpid;0 last
§).
Current Linux sound card support is good for mass-market products and
high-end (RME, M-Audio...) ones, but no one has filled the intermediate
audiophile/semi-pro/home-studio niche yet. The E-MU 1212m
http://www.emu.com/products/product.asp?maincategory=754&category=754&produ…
would seem to be ideally positioned to respond to this demand on the
hardware side, except that it has no Linux driver support right now. The
ALSA project (http://www.alsa-project.org/) has however expressed interest
in working on the problem, provided they get a hardware sample.
Since ALSA already develops drivers for other Creative sound cards, with
Creative cooperating, it would seem natural to expand the current
partnership to the E-MU sound card family. I hope I have made the case for
such a partnership clear (I'm not a native English speaker) and I'll soon
be a happy Linux E-MU customer.
Kind regards,
--
Nicolas Mailhot
Hi,
we're currently four people for the Linux audio booth. This is
still too few, because each of us wants to have some leisure
time to visit other projects and talks. I think that six
people would be great, so that at least three of us can always be at
the booth. Even if you do not plan to be at Linuxtag over all
the days, your help is really welcome.
So we're still looking for people who want to join us,
mainly to be at the booth, answering questions and demoing
software.
Information about the Linuxtag can be found on
http://www.linuxtag.org/2005/en/home.html
If you're interested, please contact me via personal mail.
Thanks & best regards
ce
---Here is an email I sent a little while ago to Kai Vehmanen:
> I have been looking through the Soundtracker 0.6.7 code as I have been
> wanting to improve the jack output. It would be excellent if Soundtracker
> could output each of its channels to a Jack port, instead of (as well as?)
> outputting the mix of the two mono channels. Unfortunately, I'm only a
> beginner when it comes to audio coding and was wondering if someone could
> hack it together for me?
---And his response:
that would be a cool feature to have, but unfortunately not a trivial
thing to add (though not impossible either). You could try to ask about
this on the soundtracker mailing list (or possibly on linux-audio-dev)...
maybe someone else is also interested and has the time to help.
Unfortunately I don't have much time for FOSS development atm, so I'll
have to pass. :(
---Will anyone help with this? I believe the problem is that the mixer
code works in a monolithic way, and thus needs a rewrite.
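As a purely hypothetical illustration of the idea (this is not
Soundtracker code, and all names are made up): instead of summing every
channel into one stereo mix right after resampling, each channel would be
rendered into its own buffer, which could then be handed to its own JACK
port or mixed down as before.

/* Hypothetical per-channel rendering sketch; not Soundtracker's mixer. */
#define NUM_CHANNELS      8
#define FRAMES_PER_BLOCK  256

static float chan_buf[NUM_CHANNELS][FRAMES_PER_BLOCK];

void render_block(void)
{
    for (int ch = 0; ch < NUM_CHANNELS; ch++) {
        /* resample and render channel 'ch' into chan_buf[ch] here */
    }
    /* each chan_buf[ch] can now be copied to its own JACK output port,
       or summed into the stereo mix exactly as before */
}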
>From: Andy Wingo <wingo(a)pobox.com>
>
>socketpair(2) will do, either polling or reading in the low priority
>thread will sleep until the high priority thread writes a byte.
I hope we get all the possible methods listed now.
I have used signals for years in my alsashmrec. One process
reads the A/D into a ring buffer and another process empties the ring
buffer to disk. The problem was how to make the disk
process (a non-RT process) wait.
When the disk process has no data in the ring buffer, it goes to
sleep:
kill((pid_t)diskpid,SIGSTOP);
When the A/D process has written enough data to the ring buffer,
it wakes up the disk process:
kill((pid_t)diskpid,SIGCONT);
Because the A/D process uses a smaller buffer size, it
is executed multiple times before SIGCONT is sent.
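For comparison, here is a minimal sketch of the socketpair(2) approach
quoted above (hypothetical code, not from alsashmrec): the low-priority
side simply blocks in read() until the high-priority side writes a byte.

/* Minimal socketpair(2) wake-up sketch (hypothetical, not alsashmrec). */
#include <sys/socket.h>
#include <unistd.h>

static int fds[2];  /* fds[0]: low-priority reader, fds[1]: high-priority writer */

void setup(void)
{
    socketpair(AF_UNIX, SOCK_STREAM, 0, fds);
}

/* high-priority (A/D) side: wake the disk process after filling the buffer */
void wake_disk(void)
{
    char c = 1;
    write(fds[1], &c, 1);
}

/* low-priority (disk) side: sleeps here until a byte arrives */
void wait_for_data(void)
{
    char c;
    read(fds[0], &c, 1);
}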
Juhana