> Well, this has been discussed to death on the jack-devel lists. I can
> see that from an audio developer's point of view, it would be nice to
> have video within the same server as audio. However, there are
> fundamental differences between video and audio, which make this in
> my mind impractical.
> Firstly, there is the problem of latency - for audio, the aim is
> generally a latency < 4 ms. For video - since we are dealing with
> much larger chunks of data - a latency an order of magnitude greater
> than this is usually acceptable.
> Second, the timing is different. For audio you generally have a
> 1024-sample buffer and a rate of 44.1 kHz or 48 kHz. Video usually
> requires something like 25 fps. So you can either have two clocks, or
> you can split the video into chunks (ugh); both solutions have
> problems.
> If you can solve these problems, then there is absolutely nothing
> stopping you running video and audio in the same server (video simply
> adds a new port type).
> Regards, Salsaman.
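The buffer and rate figures quoted above make the clock mismatch concrete. A quick back-of-the-envelope check (plain Python; the numbers are the ones from the post, the variable names are mine):

```python
# Why one JACK-style clock is awkward for audio + video:
# compare the callback periods implied by the figures above.

audio_buffer = 1024          # samples per process cycle
audio_rate = 48000           # Hz
video_fps = 25               # frames per second

audio_period_ms = 1000.0 * audio_buffer / audio_rate   # ~21.33 ms
video_period_ms = 1000.0 / video_fps                   # 40.00 ms

# One video frame spans a non-integer number of audio cycles,
# so neither clock can simply be derived from the other:
cycles_per_frame = video_period_ms / audio_period_ms   # 1.875

print(audio_period_ms, video_period_ms, cycles_per_frame)
```

The 1.875 ratio is exactly the "two clocks, or split the video into chunks" dilemma: 25 fps video never lands on an audio cycle boundary at 48 kHz with 1024-sample buffers.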
Video in jack1 won't happen, for reasons that can be explained again: we
want to fix and release jack1 soon, and video in jack is too big a
change to be integrated in the current state of the proposed patch.
The future of jack is now jack2, based on the new jackdmp
implementation (http://www.grame.fr/~letz/jackdmp.html). A lot of work
has already been done in this code base, which is now API equivalent to
jack1. New features are already being worked on, like the DBUS-based
control (developed in the "control" branch) and the NetJack rework
(developed in the "network" branch).
I think a combined "video + audio in a unique server" approach is
perfectly possible: this would require having two separate graphs for
audio and video, each running at its own rate. Video and audio would be
handled in different callbacks and thus in different threads (probably
running at two different priorities, so that audio can "interrupt"
video). Obviously doing that the right way would require a bit of work,
but it is probably much easier to design and implement in the jackd2
codebase.
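The "two graphs at their own rates" idea can be sketched as a toy scheduling simulation, assuming nothing about the actual JACK API: two callback clocks feed one event queue, and on a tie the audio callback runs first, standing in for the audio thread's higher RT priority. All names here are hypothetical.

```python
# Toy model of two process graphs with independent rates, where
# audio preempts video on simultaneous deadlines.
import heapq

AUDIO_PERIOD = 1024 / 48000      # ~21.33 ms per audio process cycle
VIDEO_PERIOD = 1 / 25            # 40 ms per video frame

def schedule(duration):
    """Return the (time_ms, name) callback order over `duration` seconds."""
    # (time, priority, name): lower priority number wins a tie,
    # mimicking the higher-priority audio thread.
    queue = [(0.0, 0, "audio"), (0.0, 1, "video")]
    order = []
    while queue:
        t, prio, name = heapq.heappop(queue)
        if t >= duration:
            continue
        order.append((round(t * 1000, 2), name))
        period = AUDIO_PERIOD if name == "audio" else VIDEO_PERIOD
        heapq.heappush(queue, (t + period, prio, name))
    return order

print(schedule(0.05))
```

Over 50 ms the audio graph fires three times and the video graph twice, interleaved; a real implementation would run each graph's callbacks in its own SCHED_FIFO thread at a distinct priority rather than from one queue.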
Thus I think a better overall approach, to avoid a "video jack" fork, is
to work in this direction, possibly by implementing video jack with the
"separate server" idea first (since it is easier to implement). This
could be started right away in a jack2 branch.
What do you think?
Stephane
2008/5/4, Paul Davis <paul(a)linuxaudiosystems.com>:
> the thing to do is to
> fire up aplay with a long audio file, then put the laptop into suspend.
> if it comes back from suspend with working playback, it's a JACK issue.
> otherwise, it's an ALSA card-specific driver issue.
>
>
ok
now managed to play a wav through alsaplayer (alsa output driver).
result:
With the internal card (Intel hd_audio), resume IS working.
With the external USB card (UA-25), resume does NOT work.
No resume from hibernate-ram (playback stops; alsaplayer must be
restarted) with:
$ alsaplayer -r -oalsa -d default:CARD=UA25 ultra_test4.wav
Successful resume (playback continues) from hibernate-ram with:
$ alsaplayer -r -oalsa -d default:CARD=Intel ultra_test4.wav
The USB card is powered off during hibernation.
My guess: maybe it doesn't get enough time to power back on.
2008/5/4, Justin Smith <noisesmith(a)gmail.com>:
> Some sound cards have suspend reset issues, could this be part of the problem?
>
I'm using a USB card (Edirol UA-25) as the jack output, if that matters.
Setting the jack timeout to 5000 msec didn't help.
Jack is running in RT mode.
Make jack and jackified apps work with hibernate-ram.
Reasons:
- It's great to come back to my/your PC and have it ready in seconds
- saves energy
- saves money
- might help nature
- better system integration
- no need to close jack and all jack apps when hibernating
- improved acceptance as THE audio server
cheers
I'm sure the Linux audio community has something to contribute to
this initiative!
Begin forwarded message:
> From: "Graham Coleman" <gcoleman(a)iua.upf.edu>
> Date: 25. April 2008 18:01:28 GMT+02:00
> Subject: DAFxTRa 2008: announcement and call for participation
>
> Apologies for multiple postings. We will attempt a public evaluation
> of digital audio effects. We encourage your feedback and
> participation.
>
> --
> Salutations!
>
> We announce and call for your participation in a cross-community
> evaluation of audio effects.
>
> DAFxTRa 2008 (DAFx Transformation RAting) is a new initiative
> promoted by MTG-UPF and DAFx-08, aimed at evaluating and comparing
> algorithms for audio effects. Our goal is to have the main evaluation
> in September 2008 during DAFx-08
> (http://www.acoustics.hut.fi/dafx08/), but to make it happen we need
> the involvement of the audio effects research community.
>
> We do not know of any major initiative to compare audio effects
> algorithms, which might be related to the difficulty of the
> evaluation task. It is hard because it requires standardized
> procedures for carefully controlled subjective experiments. But we
> believe that now is the time to try it, so we can all learn from the
> process and in turn improve our audio effects algorithms.
>
> Inspired by the success of MIREX in the evaluation of Music
> Information Retrieval algorithms, and having acquired some experience
> by organizing the audio description contest at ISMIR 2004-Barcelona,
> we want to promote a similar initiative for the digital audio
> effects community.
>
> However, most audio effects tasks do not afford an objective measure
> of ground truth, as in MIR, and thus we have to define specific
> evaluation strategies. We want to do this by involving the developers
> of algorithms interested in participating and by specifying the
> evaluation process with them. The participants should learn from the
> process, and the whole DAFx community should benefit from it.
>
> Our initial aims are the following:
> 1. Propose several audio effects categories. Initial proposal:
> time-scaling, pitch-shifting, source separation, morphing,
> distortion effects and deconstruction.
> 2. Select the categories for which there is a sufficient number of
> participants.
> 3. Select sounds from Freesound (http://freesound.iua.upf.edu) to be
> used as test sounds for each category.
> 4. Define the evaluation procedure for each category.
> 5. Ask the participants to submit the transformed sounds (not the
> algorithms).
> 6. Perform the evaluation both live at DAFx-08 and online in
> Freesound.
> 7. Publish the results of the evaluation.
>
> Our proposed time-line is:
> - 1st August 2008: Finalize categories and evaluation procedures
> - 30th August 2008: Submit transformed sounds
> - 1st-4th September 2008: Run live evaluations at the DAFx-08
> Conference
> - 10th-30th September 2008: Run on-line evaluations on the Freesound
> site
> - 15th October 2008: Publish the results
>
> If you are interested in participating or in getting involved in the
> process, join the DAFxTRa mailing list
> (http://iua-mail.upf.es/mailman/listinfo/dafx-eval) for an open
> discussion. Results of the discussion and organizational details of
> the evaluation will be posted on a wiki
> (http://smcnetwork.org/wiki/DafxTRa2008).
>
> Your input and ideas will be most welcome.
>
> Graham Coleman (MTG-UPF) (contact person)
> Jordi Bonada (MTG-UPF)
> Perfecto Herrera (MTG-UPF)
> Xavier Serra (MTG-UPF)
Greetings,
After another quarantine period, I am pleased to announce (yet) another
maintenance release of my flag-ship toy, Qtractor, an Audio/MIDI
multi-track "bedroom" sequencer for the techno-boy (and girl:).
Probably the major feature highlight for this release is the new
optional support for in-place audio clip pitch-shifting through Chris
Cannam's Rubber Band Audio Time Stretcher library. This one alone just
closes the gap on the techno-boy/girl bedroom-studio prospects, so let's
move along, nothing really new to see here :) However, given there were
many internal changes in the audio rendering engine, everything might
just sound a lot less glitchy than in previous releases. Therefore,
everybody is welcome to upgrade. And please, don't be shy ;)
Qtractor 0.1.3 (frugal damsel) has been released!
Grab it while visiting the project pages:
http://qtractor.sourceforge.net
http://sourceforge.net/projects/qtractor
Here are some direct links to the most wanted pieces:
http://downloads.sourceforge.net/qtractor/qtractor-0.1.3.tar.gz
http://downloads.sourceforge.net/qtractor/qtractor-0.1.3-user-manual.pdf
And don't (ever) forget to drop by, over the upstream :)
http://www.rncbc.org
As usual, the complete change log is worth a look too, for the record:
- As one may find convenient sometimes, the global time display
format (frames, time or BBT) may now be changed on the main
transport time spin-box context menu.
- Left-clicking on the track list number column now toggles all
track content clip selection.
- Prevent audio-buffer initialization mashups when editing short
audio clips while playback is rolling and within clip region.
- Audio peak files get a bit simplified, dropping the peak frame
count from their header; peak waveform graphics are now rendered
as straight lines when past the end of the audio file.
- The drop-span option (View/Options.../Drop multiple audio files
into the same track) now also applies when importing tracks (as
in Track/Import Tracks/Audio...) to concatenate multiple audio
clips into one and the same new track.
- Audio and MIDI meter level colors are now user configurable (as
global configuration options, View/Options.../Display/Meters)
- First attempt for Qt4.4 build support, regarding the bundled
atomic primitives, which have changed upstream as advertised
(thanks to Paul Thomas, for spotting this one first time).
- Record monitor switch is now an accessible button option on all
track mixer strips; for visual consistency, the old bus "thru"
switch button has been renamed to "monitor".
- Force track-view position reset to origin on session close.
- Fixed segfault on inserting an external file into files widget.
- Mixer splitter sizes are now better saved/restored when closed.
- Track record monitoring is now a state option, being toggled
from the Track/State/Monitor menu; applies both to audio and
MIDI tracks: when set, all input will be passed through to the
currently assigned output bus, including the track plug-in chain.
- Session dialog gets split in its own tab components, between
descriptive, time and view configuration ones.
- Drift correction between the audio and MIDI engines is now back,
but avoided while recording (or should it be while looping?).
(EXPERIMENTAL REGRESSION)
- Time-stretching percent value gets its semantics inverted, as
thought consistent with one's general sense of relative
stretching, i.e., lower to shrink and higher to make longer.
This is a major upside-down change and should affect all
sessions saved with time-stretched audio clips.
- Slack space in the main tracks and MIDI clip editor views is now
proportional to the viewport width, leaving enough room for
dragging and moving content past the current session length,
especially at the lower zoom levels.
- Clip end time is now also shown on tool-tip.
- When armed for recording, MIDI tracks are now monitored and
filtered through their own output bus, thus having the same
behavior as audio tracks; this also implies that all record
armed tracks won't playback their current content material
when recording is engaged and rolling; track mute and solo
states are now honored on record monitoring.
- Audio clip pitch-shifting makes its first appearance, with
the optional help from Chris Cannam's RubberBand library.
- A new MIDI editor tool is available: note/pitch randomize.
- Avoid (re)setting the default session directory if a session
cannot be open or loaded for some reason.
- Another nastiness bites the dust: a subtle but progressive
drifting has been swept away from the audio buffer looping;
zero buffer flushing is now also taken into account, which
was the cause for serious drifting on time-stretched clips.
- A major digital audio processing bug was tamed: audio clip
fade-in/outs are now linearly piece-wise applied, even at
the clip edges, giving a much smoother rendering and thus
mitigating the nasty click-and-pop artifacts that were in
fact due to some early design optimization with a poor and
sloppy implementation.
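For illustration, a piecewise-linear fade-in of the kind the last entry describes might look like this. This is a toy sketch in Python, not Qtractor code; the function name is my own.

```python
# Apply a linear gain ramp over the first fade_len samples, so the
# clip edge starts at exactly zero gain and leaves no step
# discontinuity (click) at the boundary.

def apply_fade_in(samples, fade_len):
    """Linearly ramp gain from 0 to 1 over the first fade_len samples."""
    out = list(samples)
    for i in range(min(fade_len, len(out))):
        out[i] *= i / fade_len      # gain 0.0 at the edge, -> 1.0 at fade end
    return out

print(apply_fade_in([1.0] * 6, 4))   # [0.0, 0.25, 0.5, 0.75, 1.0, 1.0]
```

The point of applying the ramp right up to the edge is exactly the bug described above: if the first sample keeps unity gain, the jump from silence produces the click-and-pop artifacts.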
Cheers && Enjoy
--
rncbc aka Rui Nuno Capela
Is it possible to configure a system (including changes to programs) so
that nothing can interfere with a few RT threads?
I have an audio and video thread in a program, and they are running as
SCHED_FIFO, but so many things can interfere with those two threads,
such as USB plugs/unplugs, cron log rotations, disk access (PIO flash
disk), ssh logins, etc. I really would like to shore up performance
here.
I am not using Ingo's RT patches, but I still see people talking about
turning off things like cron and server processes to prevent xruns when
using Ingo's RT patches. While I have turned off cron, turning off sshd
and the web server aren't options. I want to instead somehow make sure
that neither can ever interfere with the AV threads, even if it means
that SSH and web traffic become extremely slow.
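Without the RT patches this can't be made fully bullet-proof, but the usual recipe is: give the AV threads SCHED_FIFO priority, lock their memory so paging can never stall them, and keep ordinary server traffic off their CPU. A minimal best-effort sketch (Python on Linux; the function name and priority value are my own choices, and the calls degrade gracefully when run without rtprio privileges):

```python
# Best-effort RT setup for the calling thread: SCHED_FIFO scheduling
# plus mlockall(), the same two steps a C program would do with
# sched_setscheduler(2) and mlockall(2).
import ctypes
import os

def try_make_realtime(priority=70):
    """Request SCHED_FIFO and lock memory; return a status report."""
    status = []
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        status.append("SCHED_FIFO prio %d" % priority)
    except PermissionError:
        status.append("SCHED_FIFO denied (need rtprio privilege)")
    # mlockall(MCL_CURRENT | MCL_FUTURE) via libc; the constants are
    # the Linux values. This keeps all pages resident so a page fault
    # can never block the RT thread.
    libc = ctypes.CDLL(None, use_errno=True)
    MCL_CURRENT, MCL_FUTURE = 1, 2
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) == 0:
        status.append("memory locked")
    else:
        status.append("mlockall denied")
    return status

print(try_make_realtime())
```

Beyond this, pinning the AV threads with os.sched_setaffinity() to a core reserved via the isolcpus= kernel parameter keeps sshd and the web server off that CPU entirely, at the cost of slower SSH/web handling, which is the trade-off you said you could accept. USB hotplug and disk interrupts are harder; without the RT patches, IRQ handlers can still preempt even SCHED_FIFO threads.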
Version 1.2 of the free open source Linux software synthesizer
Minicomputer is out. This audio synthesizer creates complex waveforms
and shapes them in a three-stage formant filter. More at
http://minicomputer.sourceforge.net/
or directly at
http://sourceforge.net/project/showfiles.php?group_id=203751
Changes:
1.2, 1 May 2008
- new: installer/deinstaller
- new: installer script for presets
- new: unified behaviour; the editor is now called minicomputer and,
when started by the user, launches the core and shuts it down too
- fix: improved MIDI handling while using fewer CPU cycles
- fix: backup of memory files should work now
------------------------------------------------------------------------
Industrial synthesizer for Industrial people