My name is Roman, I'm studying Computer Science and I'd like to participate in the GSoC this year.
Also I'd like to do this in a linux audio related project if possible, because I want to help improve the Linux Audio world!
So if anyone here is part of or knows an organisation that would like to mentor a GSoC student and have them work on one of their projects, please let me know!
I'm a Master's student in CS and my focus so far has centered on operating systems (incl. kernel development), security, concurrency and (hard) real-time.
At university I also took a few signals, systems and DSP courses, so I know what an LTI system is, how digital filters work and what a Hilbert transform does.
To pay for rent and food I work part-time as a repair technician for electronic musical instruments and equipment, so I have a background in electronics as well.
I'm also a passionate hobby musician and live mixing technician.
I know how to write C code that doesn't blow up. I'm familiar enough with C++ to get around comfortably.
Recently I started writing an 8-bit microcontroller emulator as a University project in Rust and so far I really like the language.
Python is also a very nice language in my opinion.
Audio-related things I've written include Python bindings for the JACK D-Bus interface and a JACK application management tool to start/stop/mute applications via hardware buttons; I've also used mididings to map MIDI CC to SysEx for my hardware synths.
I've also written a bit of FAUST code to create a number of effects I want to use.
A few ideas and fields I can imagine working on (non-exhaustive, no particular order):
- Mididings backend for embedded devices
- Polyrhythmic sequencing
- Sysex integration in sequencers
- Linux kernel work
- Emulation of analog hardware (setBfree still needs a nice overdrive afaik ;) )
- jack and/or pipewire
Something I'd really like to see someday: being able to sit down at (or stand up with) my instrument and just jam, and when I've played something I like, go back to, say, 28 s ago, extract a few bars and build a song from there.
Also I'd really like to hear your ideas and suggestions! :)
I'm really looking forward to your responses and hopefully a great collaboration as a result!
Feel free to ask any questions, as will I! ;)
Roman / etXzat
The FFADO project is pleased to release FFADO version 2.4.5. This is a bug-fix
release to address issues encountered since version 2.4.4.
This is a source-only release. It can be downloaded from
A release announcement can be found at
Changes since FFADO 2.4.4:
* Spelling, capitalisation and quoting fixes in documentation and source
* Use a dark theme by default in ffado-mixer.
* Remember the user’s ffado-mixer theme choice.
* The ffado-mixer desktop file includes a Dutch translation.
* Saffire mixers gain additional tool tips.
* Correctly report the Saffire Pro 24 and Pro 56 in messages from the
driver when these interfaces are in use.
* Reduce the need for scrolling of the Saffire Pro 24 panel in ffado-mixer.
* Build correctly under scons 3.0.5 and above.
* Address type-related issues encountered in ffado-mixer when using python
* Correct ffado-mixer routing assignments for the Profire-2626 device.
Thanks to those who have helped with this release, including Pander, Filippo
Bardelli, Nils Philippsen, Takashi Sakamoto and Daniel Baeuerlein.
(on behalf of ffado.org)
Inspired by the "A History of Audio on Linux somewhere?" thread, I took a look at my
very old Linux audio projects.
I have one which amazingly still compiles, namely FeigenSound from the Linux Magazin 03/2001,
however I can't hear the output, even when the pcspkr module is loaded.
I assume that today's laptops just don't have a PC speaker anymore.
There I was using the KbdCtl.bell_* fields to generate tones with a given frequency and duration:
// switch the keyboard bell to a new state: volume, pitch (Hz) and duration (ms)
XKeyboardControl KbdCtl;
KbdCtl.bell_percent  = 100;
KbdCtl.bell_pitch    = Frequency;
KbdCtl.bell_duration = Duration;
XChangeKeyboardControl(pDisplay, KBBellPercent | KBBellPitch | KBBellDuration, &KbdCtl);
XBell(pDisplay, 100);  // ring the bell at the new pitch
Is there still such a simple method available to generate a tone with a given duration and frequency?
Of course I could generate a corresponding sample array and send it to jackd; I'm just looking for something simple.
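The closest modern equivalents I've found so far (assuming alsa-utils and sox are installed; both commands and flags are straight from their man pages, though I haven't checked they ship on every distro):

```shell
# 440 Hz sine on the default ALSA device, one loop then exit
speaker-test -t sine -f 440 -l 1

# sox's player: synthesize 2 seconds of a 440 Hz sine and play it
play -n synth 2 sine 440
```

Neither is as terse as re-tuning the keyboard bell, but both give you frequency and duration in one line.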
Looking at the Jack Transport state machine on
<https://jackaudio.org/api/transport-design.html>, there is only
one 'reposition' transition, and it goes to the 'Starting' state
which then sooner or later will go to 'Rolling'.
Q1: Does this mean it is impossible to reposition without starting?
Or is there just a transition missing in the diagram from 'Stopped'
to itself?
Q2: Is there any way to find out, while 'Stopped', if all clients
are ready to start immediately without actually starting ?
I'd say at least one more state would be required.
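For reference, here is the kind of probe I have in mind, as a minimal sketch against the standard JACK transport API (jack_transport_locate() requests a reposition, jack_transport_query() reports the current state; whether locating while stopped stays in 'Stopped' is exactly what's unclear from the diagram):

```c
#include <stdio.h>
#include <jack/jack.h>
#include <jack/transport.h>

int main(void)
{
    jack_client_t *client = jack_client_open("xport-probe", JackNullOption, NULL);
    if (!client)
        return 1;

    /* Request a reposition to frame 0 while (presumably) stopped. */
    jack_transport_locate(client, 0);

    /* Query the resulting state: Stopped, Rolling, or Starting
       (Starting = slow-sync clients are still catching up). */
    jack_position_t pos;
    jack_transport_state_t state = jack_transport_query(client, &pos);
    printf("state=%d frame=%u\n", (int)state, pos.frame);

    jack_client_close(client);
    return 0;
}
```

This needs a running jackd, so treat it as a sketch rather than a test case.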
I am just a regular user of Linux audio, but I am interested in the
history of how the software was developed and what problems it was meant
to solve on Linux, e.g. OSS, ALSA, JACK etc. and more recently PipeWire.
Is there such a documented history already in existence on the web
somewhere (i.e. NOT a HOWTO) that would be intelligible to non-audio
I am interested in learning and understanding more about audio and
perhaps making better use of my system (Fedora 34 + Wayland soon to be
updated to 35).
PO Box 896
Cowra NSW 2794
My QJackCtl Patchbay doesn't work any more and it's obvious there are
new ways to get similar functionality with WirePlumber, but a little
example would help. I seem to want to pipe the output of pw-link -l
somewhere (pw-link -l | wireplumber --make_it_so).
Need to always connect jack-play this way:
$ pw-link -l
It gets harder to learn new things as we get old.
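In case it helps to state what I've pieced together so far: pw-link can also make connections, not just list them. The port names below are examples from one machine and are surely different elsewhere; `pw-link -o` and `pw-link -i` print the real ones:

```shell
# list output ports and input ports to find the exact names
pw-link -o
pw-link -i

# connect jack-play's outputs to the card's playback ports
# (names are illustrative; substitute what the listings show)
pw-link "jack-play:out_000" "alsa_output.pci-0000_00_1f.3.analog-stereo:playback_FL"
pw-link "jack-play:out_001" "alsa_output.pci-0000_00_1f.3.analog-stereo:playback_FR"
```

What I still don't know is how to make WirePlumber re-create these links automatically, Patchbay-style.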
Mididings may be the only JACK-api MIDI router/filter tool around; if
there is another I'd love to know about it. On the other hand it was
working just dandy until, I think, some changes to both Python3 and
legacy support for Python2. I was using its Python3 AUR adaptation
(mididings-git) under Manjaro, but then after package upgrades, it began
to fail at 'from mididings import *' (see below). I found similar
failures in Ubuntu 20, and found different failures in both when I tried
setting up to use Python2 (which obviously we would rather not be
doing). The failure is similar to a number of failures reported
starting with Python 3.10, so this may be related, but I was not able to
figure out how to apply those workarounds to the mididings source, I
tried several different variations, including a number of the different
forks in github.
Thoughts, anyone? I found Pigiron and Jamrouter and puredata, but none
of these appear to do JACK. I'm using pipewire now, so I could
theoretically revise the whole rig around ALSA MIDI, but all of the apps
needing MIDI use JACK, and mididings is so elegant...
[jeb@newbnr ~]$ python
Python 3.10.1 (main, Dec 18 2021, 23:53:45) [GCC 11.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from mididings import *
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "...linux-x86_64.egg/mididings/__init__.py", line 15, in <module>
    from mididings.engine import run, process_file
  File "...", line 15, in <module>
    import mididings.patch as _patch
  File "...linux-x86_64.egg/mididings/patch.py", line 15, in <module>
    import mididings.units as _units
  File "...", line 14, in <module>
    from mididings.units.engine import *
  File "...", line 55, in <module>
    def SceneSwitch(number=_constants.EVENT_PROGRAM):
  File "...", line 46, in composed
    return arguments.accept(*constraints, **kwargs)
  File "...", line 49, in __init__
    self.constraints = [_make_constraint(c) for c in
  File "...", line 49, in <listcomp>
    self.constraints = [_make_constraint(c) for c in
  File "...", line 160, in _make_constraint
    elif isinstance(c, collections.Callable):
AttributeError: module 'collections' has no attribute 'Callable'
>>>
I'm trying to help someone (an OSX user trying out Linux) use Csound
with JACK. What I'd need to know is which Csound command-line options
to use to
* run Csound with Jack,
* using Ninp input ports and Nout output ports,
* not autoconnecting any ports,
if that is possible at all...
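For context, this is as far as I've got (hedged: as I read the Csound manual, -+rtaudio=jack selects the JACK backend and -odac/-iadc enable real-time output/input, while the port counts come from nchnls/nchnls_i in the orchestra rather than from any flag; I have not found an option that disables autoconnection, so corrections welcome):

```shell
# JACK backend; channel/port counts follow nchnls and nchnls_i in piece.csd
csound -+rtaudio=jack -odac -iadc piece.csd
```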