ce wrote:
>there was a talk at LAC 2005; AFAIR the latency was 50msecs this time.
>I don't know if it is less these days.
I believe I am at 10 msec. now - have not pushed it
further yet.
>are they already working on a true ALSA driver?
I believe they are. There is a dev roadmap on the SourceForge
page.
Regards,
Brad Hare
It's my first post here. I'm developing an audio player which has a
fade up/down facility. This works fine except for 22050 Hz mono samples.
I'm getting patchy loud noises (intermittent bursts of white noise?)
while fading the audio data up and down. The original audio can also be
heard between the noise.
The basic equation I'm doing is (Excuse the Pascal syntax):
Data := Data * CurrVol / 32767;
Data is always 2 bytes of audio data (16-bit samples). CurrVol goes
from 0 to 32767, or vice versa, over several bytes. If I remove this
line there is no noise, but obviously no volume change either.
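For reference, that line in C might look like the sketch below (the helper name and buffer layout are made up here, not from the player). Note the cast to 32 bits before the multiply: Data * CurrVol can reach roughly 2^30, which would wrap in a 16-bit intermediate.

```c
#include <stdint.h>

/* Hypothetical fade helper: buf holds 16-bit PCM samples.
 * Casting to int32_t before the multiply keeps the intermediate
 * product (up to 32767 * 32767, about 2^30) from wrapping. */
static void apply_fade(int16_t *buf, int nsamples,
                       int32_t vol_start, int32_t vol_end)
{
    for (int i = 0; i < nsamples; i++) {
        /* interpolate the volume per 16-bit sample, never per byte */
        int32_t vol = (nsamples > 1)
            ? vol_start + (int32_t)((int64_t)(vol_end - vol_start) * i
                                    / (nsamples - 1))
            : vol_end;
        buf[i] = (int16_t)((int32_t)buf[i] * vol / 32767);
    }
}
```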
The problem only occurs with mono 22050 Hz samples. Stereo 22050 Hz
samples are fine, and so are mono 44100 Hz samples.
I don't understand why this is happening. Any ideas?
Thanks,
Ross.
The Sineshaper is a monophonic DSSI synth. This is the first release.
Source tarball, screenshot and Vorbis demo are available here:
http://ll-plugins.sf.net. The knob graphics are created by Thorsten
Wilms and Peter Shorthose.
The Sineshaper synth has two sine oscillators and two waveshapers.
The sound from the two oscillators is mixed and passed through the
waveshapers, first through the first waveshaper and then the second.
You can control the tuning of both oscillators as well as their
relative loudness, and the total amount of shaping and the fraction of
that amount that each shaper applies. Both waveshapers use a sine
function for shaping the sound, but for the second shaper you can shift
the sine function (with maximal shift it becomes a cosine function) to
produce a different sound.
You can also add vibrato and tremolo, and change the ADSR envelope
that controls the amplitude and shape amount (as well as setting the
envelope sensitivity for both the amplifier and the shapers). There
is also a "Drive" control that adds distortion, and a feedback delay
with controllable delay time and feedback amount. All control parameters
can be changed using MIDI.
The Sineshaper synth comes with some presets that you can play or use
as starting points for your own synth settings. You cannot change
these "factory presets", but you can create and save your own presets.
They are written to the file .sineshaperpresets in your home directory.
If you make any nice presets I would really like to hear them.
--
Lars Luthman
PGP key: http://www.d.kth.se/~d00-llu/pgp_key.php
Fingerprint: FCA7 C790 19B9 322D EB7A E1B3 4371 4650 04C7 7E2E
> Are you using a customized jackd? What version? What command line? Do
> you have any evidence that anyone has ever made this work?
>
Oops, sorry for skipping the obviously needed details. I was really upset.
I tried freebob + jackd from freebob.sf.net.
libavc from svn, libiec61883 1.0, libraw1394 1.2
cmdline: jackd -d iec61883 -o osc.udp://localhost:31000
FreeBoB wiki's list of working setups contains FA-101 + gentoo (my distro).
When run for the first time, jackd starts, but there is no sound, and
it seems the process callbacks are not being called (no interrupts?).
Dmitry.
Hi!
I have been thinking about combining sequencing, (live) looping
and sampling.
I call the concept I arrived at Transport Regions for now (might
call it Areas, Scopes, Frames ... a native speaker's take on this?).
A Transport Region groups n tracks, defining a common playback
position and transport state (playing, paused, reverse ...).
Loops can be defined with Markers.
A 'classic' sequencer/DAW project would use only one Region, but
having several could allow SooperLooper-like action, with the
advantage of simple extension, moving on from jamming to
production.
Transport commands to specific Regions could be recorded and
played back themselves.
Instrument type samples could be treated the same, with the
known markers, just for rather short loops, provided transport
actions could be mapped to midi/notes.
Multisampling would require means to map (midi) parameters
to track level changes and/or soloing/muting.
Patterns could actually be Transport Regions triggered from a
track, from which they would need to 'inherit' tempo for normal
operation.
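To make the idea a bit more concrete, a Transport Region might boil down to something like this sketch in C (all names are just my guesses at the concept):

```c
/* One possible shape for the concept (illustrative names only) */
typedef enum { TR_STOPPED, TR_PLAYING, TR_PAUSED, TR_REVERSE } tr_state_t;

typedef struct {
    tr_state_t state;      /* common transport state for the group */
    double     position;   /* shared playback position, in seconds */
    double     loop_start; /* loop defined by markers */
    double     loop_end;
    int        n_tracks;   /* the n grouped tracks */
} transport_region_t;

/* advance the shared position, wrapping at the loop end marker */
static double tr_advance(double pos, double dt,
                         double loop_start, double loop_end)
{
    pos += dt;
    if (pos >= loop_end)
        pos = loop_start + (pos - loop_end);
    return pos;
}
```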
Well ... just food for thought!
---
Thorsten Wilms
On Fri, 2005-10-21 at 14:48 -0700, Mark Knecht wrote:
> > I think this would be a better question for the Freebob list, and cc:
> > the jackit-devel list, as you're using a version of JACK that the
> > Freebob people have customized. I've never heard anyone on LAD or LAU
> > report that this works.
> >
> > First and foremost, we need to get the iec61883 driver into JACK CVS, so
> > that Paul Davis and the other JACK experts can help you.
> >
> > Lee
>
> In case some folks don't know this stuff iec61883 is part of the 1394
> stack. Why would it go into Jack CVS?
Sorry, I mean "the iec61883 backend". JACK used to call these "drivers"
but as you can see it's confusing. JACK does not need to include the
iec61883 stack but it does need to know how to talk to it, just like
with ALSA, OSS, etc.
Look at his command line:
jackd -d iec61883 -o osc.udp://localhost:31000
If I run this I get:
rlrevell@mindpipe:~/kernel-source/linux-2.6.13$ jackd -d iec61883
jackd: unknown driver 'iec61883'
So he must be using a third party patch to jackd from the Freebob people
that implements the 'iec61883' backend.
Lee
Hi all,
just a note: I got it more or less working.
This morning I installed the vanilla-2.6.14-rc4 and I could start
jackd in realtime with -n6 -p256 and run ardour and
qsampler/linuxsampler in parallel without an xrun during normal
operation.
jackd just crashed when I used -n6 -p128, but even after that it
didn't ruin the sound driver, so I could start jackd again.
That really takes a weight off my mind, as I have to do some
serious music with the laptop this weekend and I feared I would
have to mess around with my old one...
So thanks for your patience,
Arnold
--
visit http://dillenburg.dyndns.org/~arnold/
---
If pirated copies could really prevent bands like Brosis or Britney
Spears, I would go out today and buy a stack of burners and a sack
of blank discs.
I am in the early planning stage of an audio processing application and I
have come to the point of making the choice between floating point or fixed
point (signed 2's complement) processing.
What do you think is better, and why? Why does JACK use floating
point? Why does AES use fixed point in most of its standards?
I understand the technical difference between the formats, but I
can't come to a conclusion.
The advantages of floating point, as I see them, are:
- easier processing, especially amplification and mixing (multiplying
by fractions)
- more SSE support, more packed-array arithmetic
- the relationship to voltage and SPL is easier to understand (1.0f
vs. 2^31-1 as maximum values)
And its disadvantages compared to fixed point would be:
- generally slower, especially things like addition
- 30 bits of resolution for 32 bits of data in IEEE single precision
(exponents larger than 0 are not used for audio, nor are denormalized
numbers, redundant zeroes, infinities, etc.), compared to the full
32 bits of resolution of fixed point
- non-linear, sawtooth-like precision (precision gets higher as the
mantissa gets larger, then falls abruptly at the point where the
exponent is incremented)
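As a concrete comparison, here is a gain stage in both styles (a sketch; the Q1.31 convention and the names are one common choice of mine, not from any standard mentioned here):

```c
#include <stdint.h>

/* Fixed point, Q1.31: the gain is a 32-bit fraction of full scale,
 * so the product needs a 64-bit intermediate and a shift back down. */
static int32_t gain_q31(int32_t sample, int32_t gain)
{
    return (int32_t)(((int64_t)sample * gain) >> 31);
}

/* Floating point: the multiply is the whole story, and 1.0f is full
 * scale, which is the "easier to understand" point above. */
static float gain_f32(float sample, float gain)
{
    return sample * gain;
}
```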
I am sure I am failing to see the more important points and would be
thankful for any comments.
If this has been discussed on the list before, I am sorry to post it
again; I couldn't find it.
Greetings, Dimitri