Hi,
I wrote a small plugin for spreading mono content to stereo and I would
like to receive some feedback on it from people more knowledgeable in
DSP matters than me.
It has one nice property: it spreads the signal over the spectrum by
way of a conjugate pair of random-phase all-pass filters, and since it
is a conjugate pair it sums back to unity when down-mixing to mono. So,
no ugly comb-filter effects like when down-mixing a Haas-expanded
signal.
https://github.com/fps/stereospread.lv2
The page has an example sound.
How it works:
It's actually rather simple:
1. Create a vector of random phases (Matlab notation):
filter_length = 1000;
spread = pi;
% random phases, uniform in [-spread/2, spread/2]
hfft1 = exp(-1i*(spread*rand(filter_length,1) - spread/2));
And for the second filter just take the complex conjugate:
hfft2 = conj(hfft1);
This ensures that what is a phase theta in the first filter becomes a
phase of -theta in the second filter, so the sum has a phase of 0.
2. Then assemble the coefficients such that they correspond to the FFT
of a real signal and do the inverse FFT (possibly I have a small error
here which I needed to fix with the 'symmetric' flag in Matlab; the
likely culprit is that for an even-length spectrum the Nyquist bin has
to be real, which hfft1(end) is not, and the flag forces that):
f1 = ifft([1; hfft1; conj(hfft1((end-1):-1:1))], 'symmetric');
f2 = ifft([1; hfft2; conj(hfft2((end-1):-1:1))], 'symmetric');
3. The two IRs f1 and f2 implement the pair of filters and can be
applied via convolution (which the above plugin does).
It seems possible, with little ill effect, to reduce the length of the
filter down to about 50 samples by just truncating it before the
convolution.
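For illustration, here is a minimal direct-convolution sketch in C++
(my own sketch, not the plugin's actual code; it assumes the IRs f1 and
f2 have been exported as float vectors):

#include <cstddef>
#include <vector>

// Direct FIR convolution: out[n+k] accumulates in[n] * ir[k].
// Fine for short IRs (~50 taps); use FFT convolution for long ones.
static std::vector<float> convolve(const std::vector<float>& in,
                                   const std::vector<float>& ir)
{
    std::vector<float> out(in.size() + ir.size() - 1, 0.0f);
    for (std::size_t n = 0; n < in.size(); ++n)
        for (std::size_t k = 0; k < ir.size(); ++k)
            out[n + k] += in[n] * ir[k];
    return out;
}

// Mono in, stereo out: one filter of the conjugate pair per channel.
void spread(const std::vector<float>& mono,
            const std::vector<float>& f1,
            const std::vector<float>& f2,
            std::vector<float>& left,
            std::vector<float>& right)
{
    left  = convolve(mono, f1);
    right = convolve(mono, f2);
}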
What do you think?
Kind regards,
FPS
--
https://dfdx.eu
Hello all,
I've been contemplating trying out Pipewire as a replacement
for Jack. What is holding me back is what seems to be a
serious lack of information. I'm not prepared to spend a lot
of time and risk breaking a perfectly working system just to
find out that it was a bad idea from the start. So I have a
lot of questions which maybe some of you reading this can
answer. Thanks in advance for all useful information.
A first thing to consider is that I actually *like* the
separation of the 'desktop' and 'pro audio' worlds that
using Jack provides. I don't want the former to interfere
(or just be able to do so) with the latter. Even so, it may
be useful in some cases to route e.g. browser audio or a
video conference to the Jack world. So the ideal solution
for me would be to have Pipewire as a Jack client.
So first question:
Q1. Is that still possible ? If not, why not ?
If the answer is no, then all of the following become
relevant.
Q2. Does Pipewire as a Jack replacement work, in a reliable
way [1], in real-life conditions, with tens of clients,
each maybe having up to a hundred ports ?
Q3. What overhead (memory, CPU) is incurred for such large
systems, compared to plain old Jack ?
A key feature of Jack is that all clients share a common idea
of what a 'period' is, including its timing. In particular
the information provided by jack_get_cycle_times(), which is
basically the state of the DLL and identical for all clients
in any particular period. Now if Pipewire allows (non-Jack)
clients with arbitrary periods (and even sample rates):
Q4. Where is the DLL and what does it lock to when Pipewire
is emulating Jack ?
Q5. Do all Jack clients see the same (and correct) info
regarding the state of the DLL in all cases ?
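(For reference, the per-cycle info I mean; a minimal sketch against the
JACK C API, with my own client name, and the printf only for
illustration since it is not RT-safe:)

#include <jack/jack.h>
#include <chrono>
#include <cstdio>
#include <thread>

// Called once per period; every client's callback should see the same
// DLL state for that period.
int process(jack_nframes_t /*nframes*/, void* arg)
{
    jack_client_t* client = static_cast<jack_client_t*>(arg);
    jack_nframes_t current_frames;
    jack_time_t current_usecs, next_usecs;
    float period_usecs;

    if (jack_get_cycle_times(client, &current_frames, &current_usecs,
                             &next_usecs, &period_usecs) == 0)
    {
        // current_frames: frame time at the start of this period
        // current_usecs / next_usecs: DLL estimates of this and the
        // next period start, in microseconds
        std::printf("frames %u, period %.1f us\n",
                    current_frames, period_usecs);
    }
    return 0;
}

int main()
{
    jack_status_t status;
    jack_client_t* client =
        jack_client_open("cycle_times_demo", JackNullOption, &status);
    if (!client) return 1;
    jack_set_process_callback(client, process, client);
    jack_activate(client);
    std::this_thread::sleep_for(std::chrono::seconds(2));
    jack_client_close(client);
    return 0;
}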
The only way I can see this being OK would be that the Jack
emulation is not just a collection of Pipewire clients which
happen to have compatible parameters, but actually a dedicated
subsystem that operates almost independently of what the rest
of Pipewire is up to. Which in turn means that having Pipewire
as a Jack client would be the simpler (and hence preferred)
solution.
[1] which means I won't fall flat on my face in front of
a customer or a concert audience because of some software
hiccup.
Ciao,
--
FA
Hi all,
I have just released SoundTracker v1.0.4-pre1. I've decided to make a
feature freeze at this point, which means that 1.0.4 will have no new
features compared to this pre-release, only fixes and small
improvements if something is found to be inconvenient. So I ask
everyone who has an interest in SoundTracker to test this release. Here
are the main features of the 1.0.4-pre1 release compared to 1.0.3:
* General:
- Full-screen mode
- Volume of all samples of an instrument / all samples in the module
can be set to an absolute value, not only scaled. The panning of all
samples can also be adjusted. These functions can also modify the
explicitly given volume / panning values of notes in patterns
- Improved compatibility with FastTracker; MOD files are also played
more correctly
- Unused patterns are listed in the Pattern Sequence Editor
- Sampling rate can be specified while saving a sample as WAV
- New module optimizer with many control parameters
* Track Editor:
- Moving notes up / down is implemented
* Sample Editor:
- External programs can be used for sample processing. The interaction
with such a program is described using an XML spec; no recompilation of
SoundTracker is required
- The whole sample is drawn after recording
- Exponential / reverse exponential volume fade transients
* Instrument Editor:
- Envelope inversion and shifting is implemented
SoundTracker download page:
https://sourceforge.net/projects/soundtracker/files/
Regards,
Yury.
Hey hey,
I want to record MIDI into a sequencer application, using RtMidi. My
understanding is:
inside that application's tracks, events are stamped directly with
delta times, measured in ticks since the last event in that track.
Internally the sequencer runs a clock at a high rate (480 PPQN or
higher).
It's probably best to "quantise" events to that internal clock while
they are recorded. So the track object, with its connected MIDI input
device, must be linked to the clock, which wakes up whenever it is time
for a tick. Linked meaning there must be some way to exchange relevant
info (one of them owning a reference, a callback, shared data, ...).
If that basic (and simplified) principle is sound: which solutions have
proven feasible to achieve it? For example: make every active track a
thread and have a global mutex? Have each track register a callback
function with the clock to store events? ...?
RtMidi's input object (RtMidiIn) offers to register a callback function
that is called with a received MIDI message when input is available.
Such messages could be stored in a container and delivered when it's
tick time. -- So again, a track might own a reference to the clock
thread object, query the current tick count since the last record-start
command, and use that information to calculate the correct delta. I'm
not sure, though, whether that might raise issues with execution times.
Using an atomic variable as a tick counter will at least avoid having
to use mutex locks.
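To make that concrete, here is a minimal sketch of the atomic-counter
variant (the names Event and track and the 120 BPM figure are my own
placeholders; error handling and a clean shutdown are omitted):

#include <atomic>
#include <chrono>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>
#include "RtMidi.h"

std::atomic<uint64_t> tick_count{0};   // advanced by the clock thread

struct Event {
    uint64_t tick;                     // tick at which the message arrived
    std::vector<unsigned char> bytes;  // raw MIDI message
};

std::mutex track_mutex;                // guards the recorded track
std::vector<Event> track;

// RtMidi calls this from its own thread whenever input is available.
void midi_callback(double /*deltatime*/,
                   std::vector<unsigned char>* message, void* /*user*/)
{
    Event e{tick_count.load(std::memory_order_relaxed), *message};
    std::lock_guard<std::mutex> lock(track_mutex);
    track.push_back(std::move(e));
}

int main()
{
    RtMidiIn midi_in;
    midi_in.openPort(0);
    midi_in.setCallback(&midi_callback);

    // Clock thread: 480 PPQN at 120 BPM is 960 ticks per second.
    std::thread clock_thread([] {
        const auto period = std::chrono::microseconds(1042);  // ~1/960 s
        auto next = std::chrono::steady_clock::now() + period;
        for (;;) {
            std::this_thread::sleep_until(next);  // absolute deadline
            next += period;
            tick_count.fetch_add(1, std::memory_order_relaxed);
        }
    });
    clock_thread.join();  // records until the process is killed
}

Deltas would then fall out by subtracting the tick of the previous
event in the track once recording stops.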
Is there any practical advice? Something based on experience? A good
read?
Best wishes and thanks,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
But now I'm stronger than yesterday <3
(Britney Spears)
Hey hey,
how is a logarithmic curve usually programmed in a DAW or sequencer? Do
you scale the values of log(1) to log(2) to the desired range and
stretch it over time? Do you adjust the steepness by either using more
or less of the log function or by changing both values, like log(20) to
log(21)?
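For what it's worth, one pattern along those lines (my own sketch, not
any particular DAW's code; the 'shape' parameter is my own name) scales
a normalized log segment to the target range:

#include <cmath>

// Normalized log curve: maps t in [0,1] to [0,1].
// shape > 1 controls steepness: values near 1 approach a straight line
// (the "log(20) to log(21)" idea), larger values bend harder (more of
// the log function is used, as with log(1) to log(2)).
double log_curve(double t, double shape)
{
    return std::log1p(t * (shape - 1.0)) / std::log(shape);
}

// Stretch the normalized curve over an arbitrary value range [lo, hi].
double automation_value(double t, double lo, double hi, double shape)
{
    return lo + (hi - lo) * log_curve(t, shape);
}

Sweeping shape then covers the whole family from nearly linear to
strongly curved with a single parameter.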
I'm sure there are many ways to do it, but I'd assume that there are a few
"classic" ways to approach this with anything from controller "automation" to
envelope curve settings to ...
I'd appreciate any hints or simple examples from software practise.
Best wishes and thanks,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Baby, take the time to realize
I'm not the kind to sacrifice <3
(Britney Spears)
Hello all,
What exactly is meant by a linear or exponential tempo change ?
Is the tempo a lin/exp function of time, or of score position ?
A bit of algebra leads to this:
Let
t = time (seconds)
p = score position (beats)
v = tempo (beats / second) [1]
We have
v = dp / dt # by definition
If v is a linear function of t, then
v (p) = square root of a linear function of p
If v is an exponential function of t, then
v (p) = a linear function of p
If v is a linear function of p, then
v (t) = an exponential function of t
If v is an exponential function of p, then
v (t) = inverse of a linear function of t
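To spell out the first case as a quick check (in LaTeX; v_0 is the
initial tempo, a the constant acceleration):

v(t) = v_0 + a t
\quad\Rightarrow\quad
p(t) = \int_0^t v \, d\tau = v_0 t + \tfrac{1}{2} a t^2

Eliminating t gives t = \bigl( \sqrt{v_0^2 + 2 a p} - v_0 \bigr) / a,
hence

v(p) = \sqrt{v_0^2 + 2 a p}

which is indeed the square root of a linear function of p. The other
cases work out the same way.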
So there is plenty of room for confusion...
[1] Using SI units :-)
Ciao,
--
FA
Hey hey,
I'm experimenting with C++, trying to program something nice for MIDI.
I'm now playing with clocks, using both the standard C++ chrono library
and the Boost chrono library. My example program sets a desired delta
between ticks of 250 ms. I see that there is a difference, since Boost
chrono can also use pthread thread parameters for realtime scheduling
and priorities.
With the standard chrono and thread library my ticks are usually out by
anything between 150,000 and 290,000 nanoseconds. Using Boost, a tick
is out by anything between 110,000 and 124,000 nanoseconds. Yes, much
better. But, assuming that I correct the drift, does it make a
difference for MIDI instruments?
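(The drift correction I mean looks roughly like this: a sketch with
standard chrono, using absolute deadlines so that late wakeups don't
accumulate from tick to tick:)

#include <chrono>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    constexpr auto period = std::chrono::milliseconds(250);

    auto next = clock::now() + period;
    for (int tick = 0; tick < 100; ++tick) {
        std::this_thread::sleep_until(next);  // may wake late, never early
        next += period;  // the deadline advances by exactly one period
        // send the MIDI clock message / process the tick here
    }
}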
I know one of my MIDI instruments with a clock-syncable delay can be
rather touchy with MIDI clock, but are there good guidelines or
experience values for how precise ticks should be?
Best wishes and thanks for any help,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Top down, on the strip
Lookin' in the mirror
I'm checkin' out my lipstick <3
(Britney Spears)
Hello all,
Just hours after the release of zita-resampler-1.10.0 yesterday,
a new bug was reported, not in the library but in the zresample
application.
This is now fixed in 1.10.1.
Ciao,
--
FA