Hi.
I released ZynAddSubFX 1.4.0, which contains many new
features:
- added instrument effects (effects that
are loaded/saved with the instrument)
- FreeMode Envelopes: all Envelopes can have any
shape (not only ADSR)
- added instrument kits: it is possible to use
more than one instrument in one part (used for
layered synths or drum kits)
- Amplitude envelopes can be linear or
logarithmic
- added interpolation on the Resonance user
interface
- user interface improvements and cleanups of
its code
- initiated a mailing list to allow users to
share patches for ZynAddSubFX. Please share your
ZynAddSubFX patches; look at
http://lists.sourceforge.net/mailman/listinfo/zynaddsubfx-user
for more information about the mailing list.
For those of you who don't know about it, ZynAddSubFX is a
powerful software synthesizer for Linux and Windows.
It is open-source software, licensed under the GNU
GPL 2.
The homepage is:
http://zynaddsubfx.sourceforge.net/
Paul.
On Monday 14 April 2003 06:26 am, you wrote:
> the ALSA sequencer does not do this. it could probably be coerced into
> doing so, but it wouldn't work correctly on kernels pre 2.5.6X or
> so. the "scheduling" requirements for delivering MTC are impossible to
> satisfy in earlier kernels without patches (and not the low-latency
> patch, but others).
So I am working on a new composition that is ready for some computer
assistance. The way I choose to work, I need a sequencer application like
Muse or Rosegarden to sync either to my ADAT or to Ardour. Both options
would be best, but one or the other would get me going. Because this is a
priority for me, I am interested enough in making this happen that I will
hack on code for a while instead of composing.
May I have some guidance from the LAD wizards about what is the most realistic
way for this to happen?
1) Which tools, hardware or software, have the cleanest timing designs ready
for a satisfying sync implementation between a sequencer and a recorder?
2) In those codebases, which part(s) need the work, and what is the most
satisfying way to go about it?
3) Are there any new pieces of independent software like a driver module that
would be convenient to have as part of a good, clean, sync solution?
For composing my last record, I used my ancient black face ADAT with a
Steinberg MTC generator, a Motu MIDI Express XT for getting the MTC to the
computer, and an ancient version of Cakewalk in 'doze that slaved to the
ADAT. Although using 'doze and Cakewalk was extremely painful and was
generally far from what I really wanted, the synchronization seemed to work
OK. My demands were not particularly high, though, because none of the
sequencer/ADAT work was used for the album other than as a scratch track.
(All the released tracks were 100% analog, the way I like it.) I'm hoping
that soon I can put together a basic set of Linux tools that does everything as well
or better, without the pain of windoze.
Thanks for any advice,
John
Hello Developers.
I have my project going. It is hosted on Source Forge.
http://www.sourceforge.net/projects/audiostar
http://audiostar.sourceforge.net
You can download the latest ALPHA release and listen to a sample of some
funky beats.
The project aims to produce real-time music creation software. I am
now developing exclusively on Linux. I use KDevelop because it's
very quick and easy, but yes, I can hack away at the command prompt. The
only thing I don't understand yet is autoconf files.
Tekno Composer is a studio for making tekno/dance/hip-hop/break-beats
and any other form of electronic music. It is modelled after very famous
instruments/machines without breaking any copyright laws. There is no
copyright on how these machines were programmed.
I use FLTK 1.1.3 and Port Audio. There will be Jack connectivity in the
future.
The project needs a developer or two to help get a stable release out
so Linux DJs can start using it. I can do it all myself, it just takes
time, and I'd like to have a stable release in a few months. If you'd
like to join me, send me an e-mail. This is not an ego project; I
really don't care if you are better than me or want to help steer the
direction of the project or even help run it.
Peace
--
Nick <nicktsocanos(a)charter.net>
>and as i will discuss at the LAD meeting at ZKM in 2 weeks, writing
>audio software like this has the easy-to-fall-into trap that arecord
>demonstrates: a basic design that falls apart as soon as a few basic
>assumptions turn out to be false.
Will notes or slides from the talk be available afterwards for those of us who can't go?
Taybin
Hi Nick,
>For something different I read this:
>
>(this list).. " Home page for the low-latency hard-realtime audio application gurus. The LAD group also develops API standards to promote interoperability between audio application .. "
>
>oops, I thought you were a bunch of hobbyist like me. I didn't realize
>this was for such serious talk.
>
>I'll leave this list be from here out, but thanks for your pointers and
>helping me get started.
>
No, no, no, no! Nick, I'm pretty sure you are taking the quote waaaay too
seriously.
If you are making sound applications you definitely belong on this list.
To be truthful I've not programmed any sound applications at all
(not open source anyway), but I still think I belong here just because
it is a very dear interest of mine.
There are about 600 subscribers to this list, and I'm sure some of them
actually live up to the quote above (and I'm grateful), but most
people don't have a clue. And I actually think that is the way it should be.
At least to me, this list definitely isn't an elite, invitation-only
meeting place for the initiated. Rather it's a meeting place for friendly
people with a common interest, mainly sound apps and Linux.
As for some encouragement to you, Nick: I've always admired people
who are able to keep on working on a project of this magnitude until it
actually is _something_, just because they want to! I saw your
announcement about your recent audio application project, and I am very
impressed with the scope of the project. I haven't tried it myself, so I
don't know how much you've got in place, but it seems like it will be
one impressive piece of software (if it isn't already). (With that in mind it's
even harder for me to imagine that you don't belong here; you
certainly surpass my own accomplishments towards infinity.)
So.... If you have unsubscribed already, resubscribe! And keep up the
good work! :-)
Regards,
Robert
I'm going to Bath this afternoon for a friend's 30th birthday party, so if
anyone is in the Bath area and wants to meet up tomorrow, that would be
cool.
Phone me on 07970 557047
- Steve
Greetings:
Once again I've updated the Linux soundapps sites. All sites are
current and can be accessed via these URLs :
http://linux-sound.org (USA)
http://www.linuxsound.at (Europe)
http://linuxsound.jp/ (Japan)
Many thanks to Frank Barknecht for his assistance with linuxsound.at.
Many thanks also to all my site providers: the mirrors have been donated
by their respective owners as a service to the community, for which I am
most grateful.
Enjoy !
Best regards,
== Dave Phillips
The Book Of Linux Music & Sound at http://www.nostarch.com/lms.htm
The Linux Soundapps Site at http://linux-sound.org
Thanks a lot, Martin and Paul, for the useful hints.
My questions:
Is the impossibility of doing sync-start on multiple Delta 1010 cards
basically because the hardware lacks the ability to do so?
The M-Audio site/manual talks about
"sample accurate sync between multiple cards", but I guess this means
only that once started the frequencies do not drift, though there could be
a small offset between the individual channels.
Assuming there is a small offset when starting the cards, one could
also assume that (if you run the audio init code SCHED_FIFO) it would
be quite small (on the order of a few samples).
This means that if you use this approach:
/* single thread, interleaved blocking writes to both cards;
   pcm_card_1 / pcm_card_2 are the two opened PCM handles */
while (1) {
        snd_pcm_writei(pcm_card_1, buf, period_frames);
        snd_pcm_writei(pcm_card_2, buf, period_frames);
}
the small start offset would mean that one of the two cards' audio buffers
is a bit less filled than the other one, and as Martin has said,
as long as you do not need a sample-accurate sync start it would work OK.
Paul: sorry I did not know that ALSA allows you to treat two cards as
a single logical card.
As I said, it has been about two years since I last used the ALSA API, and many
things have changed since then.
Are there some online resources available that describe how to do this
card linking? Are the cards then started in sync (or nearly in sync, for
cards that do not support sync start)?
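My guess (untested, and the details may well be wrong) is that the linking is
done with the ALSA "multi" PCM plugin in ~/.asoundrc, something like this for
cards hw:0 and hw:1 (using only 2 channels per card to keep the example short):

# untested sketch of an ALSA "multi" PCM combining two cards into one device
pcm.two_deltas {
    type multi
    slaves.a.pcm "hw:0,0"
    slaves.a.channels 2
    slaves.b.pcm "hw:1,0"
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}
ctl.two_deltas {
    type hw
    card 0
}

At least that is my reading of the alsa-lib docs; I have no idea whether the
slaves are then started atomically or just one after the other, so corrections
are welcome.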
BTW: what kind of cards support multi-card sync start? RME?
Regarding the S/PDIF cable between the two cards: can I use a common
cinch (RCA) mono cable, or must an S/PDIF cable be shielded and/or have a
specific impedance?
Note: In my case I do not need sample-accurate sync, but I was just curious
whether it would be possible to do that and/or whether a single-threaded audio
app could experience problems in the presence of playback start offsets.
I searched the net for postings or notes about sync start
with the delta 1010 but I haven't found any except this
(article from year 2000)
".... Midiman UK told me that the current drivers can already keep four cards in
perfect sync, but there are some fixed offsets between them; this will be
overcome in a future driver release. ...."
see here:
http://www.sospubs.co.uk/sos/jan00/articles/midiman1010.htm
So I was wondering what "overcome" would mean in this case: starting
the two cards as close as possible in order to minimize offsets or
using some adaptive algorithm that (assuming the hardware allows it)
measures the DMA pointer offsets and adjusts for them accordingly.
Again, thanks for your useful info.
cheers,
Benno
Hi,
If someone would like to take a quad chorus I made and make it a LADSPA
plugin, feel free.
The source code is at www.sourceforge.net/projects/audiostar
I had an implementation of a Stereophonic-Quadraphonic Matrix processor,
but I don't think the phase is correct, and it isn't 4-channel either.
If you had four channels you would calculate the left rear channel as
90 degrees phase-shifted, and then sum it into the formula.
The quadraphonic processor still does make an interesting surround sound
effect even though it is not technically correct.
The algorithm is from an electronics book; the formulas were given as
part of the circuit design. If someone is interested I can post those
formulas to you for implementation, but really
quadraphonicprocessor.cpp already does it: the left rear channel is just
phase-shifted 90 degrees and mixed into the stereophonic stream. Since
it is operating on only stereo channels, I keep a copy of the left
channel about n samples behind, and mix it in slightly to achieve
something like a phase effect. It still actually sounds like surround
sound.
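In plain C the core of it is something like this (a sketch from memory; the
delay length, mix amounts, and signs are just illustrative):

/* sketch: delay a copy of the left channel by N samples and mix it in
   slightly, to fake a phase-shifted rear channel from a stereo stream */
#define DELAY_SAMPLES 256              /* the "n samples behind" part */

static float delay_line[DELAY_SAMPLES];
static int   pos = 0;

void pseudo_quad(float *left, float *right, int frames)
{
    for (int i = 0; i < frames; i++) {
        float delayed = delay_line[pos];    /* left channel, N samples old */
        delay_line[pos] = left[i];
        pos = (pos + 1) % DELAY_SAMPLES;

        /* mix the delayed copy in; opposite signs widen the image */
        left[i]  += 0.3f * delayed;
        right[i] -= 0.3f * delayed;
    }
}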
The Quad Chorus is just a stereo chorus, but it has two additional taps.
This is the ultimate in fattening up analog waveforms, but the
implementation suffers from swishing if you set the LFO rate too high.
It also does a deep flange if you turn the rate all the way down and turn the
feedback up. I had it go through the quadraphonic processor, but the way I do
stereo on my new synth it didn't sound proper, so I removed it.
I'd make it a LADSPA plugin but I don't know yet how to make them.
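From skimming ladspa.h, my (untested) understanding is that the skeleton looks
roughly like this; the plugin here just copies input to output, and the chorus
code would go in run():

/* untested sketch of a minimal LADSPA plugin (mono pass-through) */
#include <stdlib.h>
#include <string.h>
#include <ladspa.h>

#define PORT_INPUT  0
#define PORT_OUTPUT 1

typedef struct {
    LADSPA_Data *input;
    LADSPA_Data *output;
} Passthrough;

static LADSPA_Handle instantiate(const LADSPA_Descriptor *desc,
                                 unsigned long sample_rate)
{
    return calloc(1, sizeof(Passthrough));
}

static void connect_port(LADSPA_Handle h, unsigned long port, LADSPA_Data *data)
{
    Passthrough *p = (Passthrough *)h;
    if (port == PORT_INPUT)  p->input  = data;
    if (port == PORT_OUTPUT) p->output = data;
}

static void run(LADSPA_Handle h, unsigned long sample_count)
{
    Passthrough *p = (Passthrough *)h;
    memcpy(p->output, p->input, sample_count * sizeof(LADSPA_Data));
}

static void cleanup(LADSPA_Handle h) { free(h); }

static const char * const port_names[] = { "Input", "Output" };
static const LADSPA_PortDescriptor port_descs[] = {
    LADSPA_PORT_INPUT  | LADSPA_PORT_AUDIO,
    LADSPA_PORT_OUTPUT | LADSPA_PORT_AUDIO
};
static const LADSPA_PortRangeHint port_hints[2];   /* no range hints needed */

static const LADSPA_Descriptor descriptor = {
    .UniqueID        = 9999,        /* placeholder; real IDs get registered */
    .Label           = "passthrough",
    .Properties      = LADSPA_PROPERTY_HARD_RT_CAPABLE,
    .Name            = "Pass-through example",
    .Maker           = "Nick",
    .Copyright       = "GPL",
    .PortCount       = 2,
    .PortDescriptors = port_descs,
    .PortNames       = port_names,
    .PortRangeHints  = port_hints,
    .instantiate     = instantiate,
    .connect_port    = connect_port,
    .run             = run,
    .cleanup         = cleanup
};

/* the entry point hosts use to discover the plugins in the .so */
const LADSPA_Descriptor *ladspa_descriptor(unsigned long index)
{
    return (index == 0) ? &descriptor : NULL;
}

If that is roughly right, porting the chorus should mostly be a matter of adding
control ports for rate/depth/feedback and filling in run().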
I have another chorus design but it is not real time. It is a six tap
chorus, passed through a phaser, and then put through an all pass with a
variable delay length setting. This can make echo effects. It was
actually really cool sounding.
I also had an FFT chorus that mixed pitch-shifted copies of the signal
into the stream, as suggested in The Computer Music Tutorial. I am
working on getting it working with FFTW.
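The part I have so far is just the round trip (this assumes FFTW 3, and the
pitch-shifting and overlap details are still up in the air):

/* untested sketch: forward FFT, (pitch-shifted copies would be summed into
   the spectrum here), inverse FFT, renormalize */
#include <fftw3.h>

#define N 1024                           /* frame size -- illustrative */

void fft_roundtrip(double *frame)        /* N samples, processed in place */
{
    static fftw_complex spectrum[N / 2 + 1];

    /* FFTW_ESTIMATE so planning does not clobber the input data */
    fftw_plan fwd = fftw_plan_dft_r2c_1d(N, frame, spectrum, FFTW_ESTIMATE);
    fftw_plan inv = fftw_plan_dft_c2r_1d(N, spectrum, frame, FFTW_ESTIMATE);

    fftw_execute(fwd);
    /* bin remapping / pitch-shifted copies go here -- not written yet */
    fftw_execute(inv);

    for (int i = 0; i < N; i++)          /* FFTW's c2r output is unnormalized */
        frame[i] /= N;

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(inv);
}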
Hrmm, oh, I also had one funny chorus that also did not do well in
real time. I took the idea of the QuadraFuzz pedal, split the stream into
four (or more) bandpass filters (with resonance), and then put them through a
chorus effect. Then I remixed them through comb filters to get a reverb
feel to it. That also sounded pretty cool, but I had too hard a time
getting it to work in real time.
If someone could take those designs and make them real-time, that would be
great: they were REALLY funky and people would like them.
Just some ideas you can experiment with.
--
Nick <nicktsocanos(a)charter.net>