I'm working on a project using NTK as the gui toolkit, and am trying to
decide if I need to distribute NTK with it or not.
Does anyone have any suggestions on this front? Do most distributions
have NTK packages by now, or do most not?
Thanks for any pointers,
Nick
Hi all,
In an attempt to give more personality to the modular synth modules
of the avw.lv2 set, I have been looking at clipping.
I would like to discuss a few assumptions here, the fruit of my
research, and get some feedback/comments to decide which direction
to take.
- in some cases (or let's say, in some modules of a synth), clipping is
implemented more to copy what an analogue system would do than as a
mandatory part of the algorithm... Let's take an example: two sine waves
of amplitude -1/1 mixed together will have an amplitude of -2/2
(as long as they are in phase)... A digital mixer without clipping
would be able to cope with that, but an analogue one wouldn't... and
that's why the analogue system would clip the signal......right?
- The method of clipping used will give a "personality" to the
module: hard clipping, soft clipping, the particular method used for
soft clipping, etc. (see the sketch after this list)...right?
- Hard clipping is something of the digital world - it doesn't exist
in the analogue world... right?
- Soft clipping will deform any wave of amplitude -1/1 even if it
doesn't exceed the accepted threshold, because just before reaching
the threshold the algorithm takes over, softly brings the signal up
to the maximum amplitude, and keeps it there until the original
signal goes back under the threshold.....right?
- Is there a preferred stage for clipping? In the case of a filter,
should we clip before filtering, after, or both? Or are all these
options valid, and is that what will give an additional personality
to the filter?
Thanks in advance for any comments!
Aurélien
Hi
Does anyone here know if the GPL v2.0/v3.0 is compatible with the CC-BY
v3.0 (unported)?
http://creativecommons.org/licenses/by/3.0/
I only found here
http://wiki.debian.org/DFSGLicenses#Creative_Commons_Attribution_Share-Alik…
that the CC-BY-SA v3.0 is compatible, but no mention of the CC-BY v3.0
My understanding is that the CC-BY v3.0 has fewer restrictions than the
CC-BY-SA version, but I'm a bit unsure.
Background: I would like to include some work which is under the CC-BY
v3.0 in my project, which is under the GPL v2.0 (or later). I don't want
to violate the DFSG, so I would like to make sure there is no issue at
all when I do so.
The author of the CC-BY v3.0 files is fine with my wishes.
Any hints?
hermann
Going to finally build a new machine. It's going to be Intel this time -
I've been on AMD for 15 years or so. Can anyone here give some advice as
to how many cores are optimal given current kernel (>3.8) performance?
Any install/operational issues? Any pitfalls?
Any advice very welcome.
cheers
g.
Hello all,
I wonder if any other users have experienced this problem and
how they handled it.
This has occurred three times when doing a fresh Archlinux install
on a system using the RME MADI cards.
There seems to be something in the combination of recent versions
of the driver and alsactl that leads to alsactl freezing when the
configured (external) clock source for the card is not available.
The 'freeze' seems to be quite deep: it's impossible to kill the
process (even while that process is still a child of e.g. the
xterm from which it was launched, and not of PID 1). Any other
process trying to access the sound card (e.g. jackd) hangs in
the same way. This also means that when doing a poweroff or reboot
systemd will hang on the 'alsactl store' service, and the only
option is a power cycle.
An added difficulty when trying to resolve this (things will be
OK once you have the correct /var/lib/alsa/asound.state) is that
recent systemd doesn't allow you to disable or enable the alsa store/
restore services easily (why not?); you have to manually edit
some symlinks in order to do that.
Note: if this happens to be a driver problem, please do NOT revert
to the ancient behaviour of silently changing the clock source to
'internal' when the external clock is not available. I DO still
expect to see opening the device fail if the external clock isn't
present, as has been the case for some time. The thing that shouldn't
happen is that alsactl chokes on this condition - it didn't before
so it shouldn't have to.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
Hi,
I started working with Linux audio very recently. Primarily, I wanted to
understand the Linux audio stack and audio processing flow. As a simple
exercise,
- I wrote a simple capture/playback application for the PCM interface
that stores the captured audio data in a .wav or .raw file (a minimal
sketch of the capture loop follows this list)
- The device parameters I played with are: hw:2,0/plughw:2,0 (for the USB
headset), 1/2 channels, 44100 sample rate, SND_PCM_ACCESS_RW_INTERLEAVED
mode, period size 32, S16_LE format.
- I use Ubuntu, kernel 3.9.4, and a Logitech USB headset for
development/testing.
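For reference, here is a minimal sketch of the kind of capture loop I
mean. I'm using snd_pcm_set_params() here instead of the full hw_params
setup, so the period size is left to ALSA; the device name and format
are just my test values, and error handling is kept to a minimum:

#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    snd_pcm_t *pcm;
    short buf[441 * 2];                 /* 441 frames, 2 channels, S16_LE */
    int err, i;

    /* Open the capture side of the USB headset (hw:2,0 on my machine). */
    err = snd_pcm_open(&pcm, "plughw:2,0", SND_PCM_STREAM_CAPTURE, 0);
    if (err < 0) {
        fprintf(stderr, "open: %s\n", snd_strerror(err));
        return 1;
    }

    /* Interleaved S16_LE, 2 channels, 44100 Hz, 0.5 s maximum latency. */
    err = snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                             SND_PCM_ACCESS_RW_INTERLEAVED,
                             2, 44100, 1, 500000);
    if (err < 0) {
        fprintf(stderr, "set_params: %s\n", snd_strerror(err));
        return 1;
    }

    /* Read ~1 second of audio and dump the raw frames to stdout. */
    for (i = 0; i < 100; i++) {
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 441);
        if (n < 0)
            n = snd_pcm_recover(pcm, n, 0);
        if (n > 0)
            fwrite(buf, 2 * sizeof(short), n, stdout);
    }

    snd_pcm_close(pcm);
    return 0;
}

Redirecting stdout to a .raw file and importing it as signed 16-bit
little-endian, 2 channels, 44100 Hz is how I inspect the result.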
To understand the control flow, I inspected the ALSA driver source code and
got an understanding of the flow in kernel space. However, I am kind of lost
in the user-space ALSA library.
- What happens after ALSA hands over the data, i.e. after the audio data
from the mic is copied from the USB transfer buffer to the userspace
buffer? What are the data processing steps?
- Where does PulseAudio fit in here?
- Is there any software mixing happening?
- In the USB sound driver, I changed the data in urb->transfer_buffer to a
pattern like 'abcdefghijklmnop', but when I store the audio data in a .raw
file in the capture application, I don't get the data back completely.
If I set 1 channel in the hw params, I get almost the 'abcdefghijklmnop'
pattern, but sometimes I get 'abcdefghijklm', i.e. the pattern is
incomplete and another pattern starts over. For 2 channels, the data
interleaves in a pattern like 'ababcdcd...', but there are also some
incomplete patterns and I also see some unexpected characters.
I know it's a long post, but even if you can only help with a part of it,
I'd greatly appreciate it. Thanks!
Hello all,
Returning home late, on the way from the car parking to my
door I was greeted by a nebula of hundreds of fireflies doing
their social thing.
A lovely thing to see, but it also reminded me that I should
really post the following:
These last months I've been receiving lots of invitations to join
Circles, Friends, Contacts etc. etc. on Google+, Facebook,
LinkedIn etc. etc., many of them from members of this list.
While I do appreciate the motivation behind such requests,
I will never accept them, and from now on I will also stop
responding to any such invitations. If you want to discuss
anything (Linux) audio you're welcome to get in contact via
private email or the LAU or LAD mailing lists.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
Hi!
Just a very brief update for those who care about AVB on Linux:
We've recently discussed the subject at LAC 2013 and agreed to use the
open-avb-devel mailing list for "our" internal communication.
Last Friday, we came closer to a working solution. I've already shared
the status quo in a private e-mail with involved parties, one of them
being Intel.
Long story short: the follow-up to this e-mail will be crossposted to
open-avb-devel, which means discussion is open to all. So if you care
about AVB on Linux, you are more than welcome to subscribe to
https://lists.sourceforge.net/lists/listinfo/open-avb-devel
and share your views.
Note that all this is very early development and highly technical,
especially since there is no definite kernel<->userspace API yet.
Cheers
Dear Free Audio Tool Lovers,
I am very pleased to announce the first official release of FLAC, the Free
Lossless Audio Codec, in over 6 years. FLAC is not dead! It is, however, a
mature software product that is now being maintained by a team working
under the auspices of the Xiph.Org Foundation.
The executive summary of changes in this new version:
* Nothing major.
* Source tree is now hosted in Xiph.org git: git clone git://git.xiph.org/flac.git
* Read and write appropriate channel masks for 6.1 and 7.1 surround input WAV files.
* Added support for encoding from and decoding to the RF64 format.
* Lots of build system fixes for your building enjoyment.
The full changelog is here: https://www.xiph.org/flac/changelog.html
Happy lossless encoding and decoding.
Cheers,
The FLAC project contributors
Lacking access to the full MIDI specs document, I don't know
if this question is addressed there. I've looked at manuals for
products which support 14-bit controllers and searched the web,
but I don't see a clear answer to my question:
Is it safe to assume that a product or app which allows
binding a *single* HW or GUI control to either a 14-bit CC
or a 14-bit (N)RPN would *always* send the value LSB when the
control moves, even if the LSB did not change but the MSB did?
Do the midi specs address this?
Or do you know of examples of such LSB optimizing-out?
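To illustrate what I mean by "optimizing out", here is a sketch of the
sender side I have in mind. The controller numbering follows the usual
convention that CC 0-31 carry the MSB and CC 32-63 the matching LSB; the
"skip unchanged LSB" branch is the hypothetical behaviour I'm asking
about, not something I've confirmed in any product:

#include <stdio.h>
#include <stdbool.h>

/* Stand-in for whatever actually emits a MIDI CC message. */
static void send_cc(int channel, int cc, int value7)
{
    printf("CC ch%d #%d = %d\n", channel, cc, value7 & 0x7f);
}

/* Hypothetical sender of a 14-bit CC: MSB on cc, LSB on cc + 32. */
static void send_cc14(int channel, int cc, int value14, bool optimize_lsb)
{
    static int last_lsb = -1;
    int msb = (value14 >> 7) & 0x7f;
    int lsb = value14 & 0x7f;

    send_cc(channel, cc, msb);          /* the MSB is always sent */

    /* The behaviour in question: does a real sender skip the LSB
     * message when only the MSB changed since the last update? */
    if (optimize_lsb && lsb == last_lsb)
        return;

    send_cc(channel, cc + 32, lsb);
    last_lsb = lsb;
}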
Thanks.
Tim.