Hi all,
Over the last few days I have been looking into phase rotation: the
components of a signal are delayed by different amounts depending on
their frequency.
A good write-up on the subject can be found at [1], and a commercial
tool is available from [2].
The interesting aspect is that phase rotation alters neither the sound
of the signal nor its loudness. However, changing the phase vs.
frequency relationship between lower and upper harmonics changes the
waveform and can affect where the digital peak occurs.
For this reason, phase rotation is commonly used by radio stations to
reduce signal peaks and make the signal more symmetrical. Phase
rotation circuits are also used during mastering to increase headroom.
To implement this, the following operations are performed (a minimal
sketch follows the list):
* FFT
* rotate phase, retain magnitude
* inverse FFT
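In rough C, one block could look like the sketch below (using FFTW3
with a constant rotation angle, no windowing or overlap-add; link
with -lfftw3 -lm):

#include <fftw3.h>
#include <math.h>

/* Rotate the phase of every bin of one block of n samples by `angle`
 * radians, keeping the magnitudes. Bin 0 (DC) and the Nyquist bin are
 * real-valued and left untouched (assuming even n) -- which is exactly
 * where the problem described below shows up. */
static void rotate_block(double *x, int n, double angle)
{
    fftw_complex *spec = fftw_malloc(sizeof(fftw_complex) * (n / 2 + 1));
    fftw_plan fwd = fftw_plan_dft_r2c_1d(n, x, spec, FFTW_ESTIMATE);
    fftw_plan inv = fftw_plan_dft_c2r_1d(n, spec, x, FFTW_ESTIMATE);

    fftw_execute(fwd);

    for (int k = 1; k < n / 2; ++k) {       /* skip DC and Nyquist */
        double re = spec[k][0], im = spec[k][1];
        double mag = hypot(re, im);
        double ph  = atan2(im, re) + angle;
        spec[k][0] = mag * cos(ph);
        spec[k][1] = mag * sin(ph);
    }

    fftw_execute(inv);
    for (int i = 0; i < n; ++i)             /* FFTW's c2r is unnormalized */
        x[i] /= n;

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(inv);
    fftw_free(spec);
}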
This works well, except for the first FFT bin: 0 Hz, the DC offset. If
the phase shift changes the average DC level of the signal, there is a
discontinuity between consecutive blocks.
I could use some of the collective expertise of this list.
Has anyone looked into this before?
Is it even possible with piecewise block processing?
Is there a way to calculate the DC offset?
Since the overall magnitude does not change, simply summing up the bins
results in the same 0th bin. To mitigate the issue I considered
low-pass filtering the DC offset, or simply removing the DC offset with
a high-pass filter before the FFT. But either approach introduces
different artifacts. If I understand correctly, a Hilbert transform, as
mentioned at [1], has the same issue.
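(For reference, the simplest form of such a high-pass would be a
first-order DC blocker, something like this sketch:)

/* First-order DC blocker: y[n] = x[n] - x[n-1] + R * y[n-1].
 * R close to 1.0 (e.g. 0.995) keeps the cutoff low. State is passed
 * in via *x1 / *y1 so it can run block by block. */
static void dc_block(float *buf, int n, float R, float *x1, float *y1)
{
    for (int i = 0; i < n; ++i) {
        float y = buf[i] - *x1 + R * *y1;
        *x1 = buf[i];
        *y1 = y;
        buf[i] = y;
    }
}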
I've condensed the DSP into a simple test tool to toy around with the
issue: https://gist.github.com/17fd61b04d4c4939727dfdccd79f53a5
With low frequencies, the differences are obvious:
* https://i2.paste.pics/cc32f6c5d622e31ab0e830ab1ce205e9.png (2.5 *
FFT's base-freq)
* https://i2.paste.pics/3e6eb827148b90800569843c11da2e48.png (0.37 *
FFT's base-freq)
I'd appreciate any insights.
--
robin
[1]
https://forum.juce.com/t/how-to-rotate-the-phase-of-an-audio-signal/39072/10
[2] https://www.izotope.com/en/products/rx/features/phase.html
Hi,
the new version of B.Oops is out now, with a LOT of new features:
* New: Slot shape mode: Controlled by a user-defined shape instead of a
pattern
* New: Slot keys mode: Controlled by user-defined MIDI events instead of
a pattern
* New: Pattern randomization
* Fx
  * New: Banger
  * New: EQ
  * Tremolo: Waveform option
  * Oops: New sample
* Default optimization flags `-O3 -ffast-math` for compiling DSP
* Improved binary compatibility / portability using static libs
* User-friendly hiding of patterns for inactive slots
* New presets
* New: Provide binary packages
* Bugfixes
  * Fix pattern Y flip glitches
  * Correctly X flip merged pads
  * Fix pasting merged pads causing overlaps
  * Fix removing slots possibly causing a segfault
  * Fix clicks on decay
  * Fix clicked handles if the shape changed
Thanks to the community for the ideas and suggestions. AFAIK, B.Oops
now has more features than any commercial, closed-source effect
sequencer! And you can contribute further to this project.
Github: https://github.com/sjaehn/BOops
Download: https://github.com/sjaehn/BOops/releases/tag/1.8.0
Preview: https://www.youtube.com/watch?v=nHJlSlvxit8
Regards,
Sven
Same context as my previous mail …
The RME HDSPe cards have a large number of inputs and outputs for which the standard ALSA channel names and mappings make little sense.
Names reflect the hardware interfaces, e.g. Analog.L, Analog.R, AES.1/1 … AES.1/8, ADAT.1 … ADAT.8 etc., and not their function.
Right now, the hdspm driver provides an ad-hoc virtual file in /proc/asound/card<n> containing these names.
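(For illustration, user space currently has to read that file directly; the file name "channel_names" in this sketch is hypothetical, as is the format of its lines.)

#include <stdio.h>

int main(void)
{
    /* Hypothetical file name; the real one is driver-specific. */
    FILE *f = fopen("/proc/asound/card0/channel_names", "r");
    char line[256];

    if (!f)
        return 1;
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);    /* e.g. "1 Analog.L", "2 Analog.R", ... */
    fclose(f);
    return 0;
}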
I am looking for a more generic driver <-> user space API for communicating PCM channel names. Has anyone been facing the same or similar issues? How have you solved them? Are there any ideas, suggestions or preferences concerning this topic?
Again looking forward to your feedback.
Best regards and thanks in advance,
Philippe.
Dear all,
I am working on a new driver for the RME HDSPe cards, which eventually could replace the hdspm driver.
These cards have a hardware mixer / matrix router, freely mixing tens of hardware inputs and software playbacks into tens of outputs (8192 mixer controls on the HDSPe MADI).
Right now, the mixer state (cached by the driver) can be read efficiently in one ad-hoc ioctl call, and individual channels can be modified through a HWDEP ALSA control element with 3 parameters (input/playback index, output index, gain value).
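(For reference, writing one gain via that 3-parameter control looks roughly like the alsa-lib sketch below; the element name "Mixer" and the card string are assumptions. Build with -lasound.)

#include <alsa/asoundlib.h>

/* Set one crosspoint gain through the 3-value HWDEP control element. */
static int set_mixer_gain(const char *card, long in, long out, long gain)
{
    snd_ctl_t *ctl;
    snd_ctl_elem_value_t *val;
    int err;

    if ((err = snd_ctl_open(&ctl, card, 0)) < 0)
        return err;

    snd_ctl_elem_value_alloca(&val);
    snd_ctl_elem_value_set_interface(val, SND_CTL_ELEM_IFACE_HWDEP);
    snd_ctl_elem_value_set_name(val, "Mixer");      /* assumed name */
    snd_ctl_elem_value_set_integer(val, 0, in);     /* input/playback index */
    snd_ctl_elem_value_set_integer(val, 1, out);    /* output index */
    snd_ctl_elem_value_set_integer(val, 2, gain);   /* gain value */

    err = snd_ctl_elem_write(ctl, val);
    snd_ctl_close(ctl);
    return err;
}

/* e.g. set_mixer_gain("hw:0", 0, 0, 32768); */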
I understand there is a desire to get rid of ad-hoc ioctls and am therefore looking for a more generic driver <-> user space API to read and write such a huge mixer state.
Does such a generic huge-mixer interface already exist? Has anyone in this community been facing this or similar issues? If not, are there any ideas, suggestions or preferences on how it should look?
Looking forward to your feedback.
Best regards and thanks in advance,
Philippe.
Hello all,
I need some advice from a web protocols expert...
I want to record the _audio_ part of
<https://www.skylinewebcams.com/en/webcam/ellada/crete/rethymno/karavostasi-…>
for a long time without wasting bandwidth on the video part,
while avoiding being 'timed out' or blocked by 'please whitelist
our ads' popups.
Does anyone have an idea how to do this?
TIA,
--
FA