Hi
I found this interesting project:
http://cofundos.org/
It seems suited for
- developers looking for a paid open source task
- users who want to sponsor / support open software development
- projects that require sponsorship
It's still new, but I think it looks promising and could be successful.
It deserves to become more widely known.
Does anyone know of any comparable projects?
Hi, computer-created images:
http://www.complexification.net/gallery/
Very nice, if you ask me; some actually look painted. The source code is
available and, as far as I can see, it's Processing.
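For anyone curious how small these algorithms can be, here's a hypothetical toy in Python (nothing to do with the actual Complexification/Processing source, just the same spirit): random walkers depositing paint on a canvas, written out as a plain PGM image.

```python
import random

# Toy generative-art sketch: random walkers darken the pixels they tread on,
# then the canvas is written as an ASCII PGM (P2) image file.
# Purely illustrative -- not the gallery's actual code.

W, H = 200, 200
canvas = [[255] * W for _ in range(H)]          # white canvas, one grey per pixel

random.seed(1)
for _ in range(50):                             # 50 independent walkers
    x, y = random.randrange(W), random.randrange(H)
    for _ in range(2000):                       # each wanders 2000 steps
        canvas[y][x] = max(0, canvas[y][x] - 40)   # deposit "paint"
        x = (x + random.choice((-1, 0, 1))) % W    # wrap at the edges
        y = (y + random.choice((-1, 0, 1))) % H

with open("walkers.pgm", "w") as f:             # viewable in most image viewers
    f.write(f"P2\n{W} {H}\n255\n")
    f.write("\n".join(" ".join(map(str, row)) for row in canvas))
```

The "painted" feel in the gallery pieces mostly comes from layering thousands of such near-transparent strokes.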
Why doesn't algorithmic music give such pleasing results? Or have I just not
listened to the right things? Maybe I'm just much more critical about
music.
cheers
renato
I have an old Alesis QSR which has an optical digital output
that is in a "proprietary Alesis format". The manual says:
> The digital connector follows a proprietary Alesis format
> that carries all four audio outputs of the QSR (Main and
> Aux, Left and Right) on a single fiber optic cable.
> Either pair of outputs can be converted into standard AES/EBU
> or S/PDIF stereo digital audio format by using the Alesis AI-1
> interface.
So, short of buying a used Alesis AI-1 (for maybe $100), is there
any way I can convert the output of the Alesis to something
that can be read by an S/PDIF input?
Thanks.
If I already have an ASUS motherboard with a coaxial S/PDIF
port, is there any reason to get the Delta 66 over the Delta 44?
Delta 66: 6-in/6-out, S/PDIF digital I/O with SCMS control
Delta 44: 4-in/4-out, no digital port.
The only difference between the two cards (besides the number
of in/outs) is the addition of the S/PDIF port on the Delta 66.
I know the audio on motherboards is inferior to a good
sound card, but is there any functional difference between
S/PDIF on an ASUS motherboard and an M-Audio card?
Thanks.
Hi,
I want to ask what behaviour users expect to hear regarding the voice
handling of monophonic and polyphonic synths/samplers. I need to get
a good understanding, and my limited experience with real
synths/samplers isn't helping much when it comes to getting the
concepts/behaviours straight enough in my head to get down to coding them.
---
1) Monophonic
In the event that a note is already playing and a new note is played, would
you expect
a) the envelope release stage of the old note to continue while the
new note plays?
b) the old note to cut off and the new note to play?
2) Polyphonic - This is in terms of a sampler whose playback mode is
'Singleshot' where the sample is played back in full regardless of
note-off events. Would you expect the retriggering of the same note
within time < sample-duration to cause:
a) a second instance of the sample to play back simultaneously (albeit
beginning and ending later in time) without affecting the first?
b) playback of the first instance to stop before being retriggered as
the second (if we're going to talk about first and second)?
---
I would expect to be able to choose between a and b in both
questions 1 and 2 above. But do general users? Should I expect them to
go through the same confusion I did to learn the subtle differences
(subtle to untrained ears, at least)?
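To make sure I'm describing the same thing everyone else hears, here's how I'd sketch the four behaviours (1a/1b, 2a/2b) as a toy voice allocator in Python. All names are hypothetical, nothing here is Petri-Foo or Specimen code:

```python
# Toy voice allocator pinning down the semantics of the four behaviours.
# A "voice" is just a sounding note; real code would track envelopes etc.

class Voice:
    def __init__(self, note):
        self.note = note
        self.releasing = False   # 1a: an old voice keeps ringing in its release

def note_on(voices, note, mode):
    """Start `note` on the active-voice list; `mode` is '1a', '1b', '2a' or '2b'."""
    if mode == '1a':                      # mono: old note's release continues
        for v in voices:
            v.releasing = True            # enters release but keeps sounding
        voices.append(Voice(note))
    elif mode == '1b':                    # mono: hard cut
        voices.clear()                    # old note silenced instantly
        voices.append(Voice(note))
    elif mode == '2a':                    # poly singleshot: overlap allowed
        voices.append(Voice(note))        # second instance alongside the first
    elif mode == '2b':                    # poly: same-note retrigger steals
        voices[:] = [v for v in voices if v.note != note]
        voices.append(Voice(note))
    return voices
```

So with 2a, retriggering note 60 leaves two voices sounding; with 2b, only the new instance survives.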
The program in question is Petri-Foo. I made comparisons between it
and Specimen and found differences, which I'm trying to settle. I
arrived at the conclusion that there could be a fourth voice mode, as the
polyphony behaviour in Petri-Foo/Specimen is that of 2a above, but 2b
makes more sense in terms of real instruments.
I looked also at Phasex which has a multitude of monophonic and
polyphonic modes (of which MonoMulti is particularly interesting) but
I was unable to identify differences between some of them.
Thanks,
James.
Hi All,
I'm having a very strange problem setting up Jack.
I have a card which aplay -l lists as
card 0: SB [HDA ATI SB], device 0: ALC892 Analog [ALC892 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: SB [HDA ATI SB], device 1: ALC892 Digital [ALC892 Digital]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
I'm only interested in the analog outputs.
When I start up jackd (from qjackctl), I am offered three different options
for my sound interface:
hw:0 HDA ATI SB
hw:0,0 ALC892 Analog
hw:0,1 ALC892 Digital
(input devices are similar)
However, when I start up jackd, I get a constant stream of xruns. This is
true regardless of what combination of input and output hardware device I
choose, as well as real-time settings and period size (I obviously haven't
tested everything, but a large representative swath).
On the other hand, if I choose plughw:0, jackd gives me a warning, but I can
get zero xruns and no artifacts for as low as 128 frames/period.
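For reference, the plughw configuration that works can be reproduced outside qjackctl with something like this (the rate/period values are just the ones I happened to use):

```shell
# start jackd against the ALSA plug layer; adjust rate/period to taste
jackd -R -d alsa -d plughw:0 -r 48000 -p 128 -n 2
```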
Similarly, if I start jack with the dummy driver, and then run alsa_in, I
can get down to 32 frames/period with no artifacts nor xruns.
What could be going on?
Thanks,
Jeremy
Hi,
I'm a keyboard player; tired of LASH, and waiting for LADISH and JACK
Session, I developed some scripts for using Linux synths in a live
context. In case it helps someone, here is the link to the "tutorial" that
explains how to do it (sorry, only in Italian for the moment):
http://www.eclepticbox.altervista.org/index.php?option=com_content&view=sec…
If anyone uses a similar (or not so similar...) setup for playing in a
live show, please send me any comments or suggestions.
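In case it helps the discussion before you read the tutorial, the skeleton of the approach looks roughly like this; the synth choice, soundfont path and parameter values are just examples, not my actual scripts:

```shell
#!/bin/sh
# Rough skeleton of a live-startup script (example values throughout)
jackd -R -d alsa -d hw:0 -r 48000 -p 128 -n 2 &
sleep 2                       # give jackd time to come up
# Any JACK synth works here; fluidsynth's -j flag auto-connects its outputs
fluidsynth -i -a jack -j /usr/share/sounds/sf2/FluidR3_GM.sf2 &
```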
Thanks
Alessandro Filippo
I'm sure this is an easy one but I thought I'd ask here first.....
I've got a bunch of 24-bit FLAC files that I'd like to convert to 16-bit in
order to play them on my Android phone.
I'm not totally sure how they were encoded to 24 bits (I must have done that
at some point, but I don't remember when/how).
Also, I've noticed that a bunch of them are "multi-channel".
How in the world did they get that way? I've never recorded anything to
more than 2 channels, so the fact that I have a few FLAC files with "8
channels" seems really weird.
So, really, I'd like to knock the bit depth down from 24 to 16 and
also mix these down to stereo.
Command line and/or gui is fine.
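For the record, this is the sort of command I'm imagining (assuming SoX is installed; the file names are placeholders):

```shell
# 24-bit multi-channel FLAC -> 16-bit stereo FLAC; SoX mixes the channels
# down and applies dither automatically when reducing bit depth
sox input24.flac -b 16 -c 2 output16.flac

# check what came out (channels, sample rate, bit depth)
soxi output16.flac
```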
Any input is much appreciated.
Thanks.
-Aaron