Xjadeo is a video player that displays a video clip synchronized to
an external time source (MTC, LTC, JACK transport).
http://xjadeo.sf.net/
-=-
Greetings Soundtrack Designers and fellow Multimedia Artists,
Xjadeo version 0.8.0 just came out and brings a lot of significant
changes. Most notably:
* openGL display
* video-frame indexing
* built-in UI / context menu
With openGL, video scaling is now performed in hardware, and playback is
synchronized to the screen's vertical refresh (if the hardware permits
that; most graphics cards do). This is the new default display and
supersedes the prior platform-specific video outputs (XVideo, X11/imlib2,
SDL, quartz), which are still available via the --vo option and are also
used as fallbacks.
Video files are now scanned and indexed on load, which provides
reliable, frame-accurate seeking for a wide variety of codecs where
such seeking was not possible with earlier versions of xjadeo.
This also acts as a guard to detect and refuse broken video files early on.
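The general idea behind such an index can be sketched as follows (a simplified illustration, not xjadeo's actual code): scan the file once, recording each frame's timestamp and whether it is a keyframe, then seek by jumping to the nearest preceding keyframe and decoding forward to the target frame.

```python
from bisect import bisect_right

# A minimal frame index: (timestamp, is_keyframe) per frame, in order.
# In a real player these entries would come from demuxing the file once.
index = [(0, True), (40, False), (80, False), (120, True),
         (160, False), (200, False), (240, True)]

timestamps = [ts for ts, _ in index]

def seek_plan(target_ts):
    """Return (keyframe_ts, frames_to_decode) to reach target_ts exactly."""
    # Position of the last frame at or before the target.
    pos = bisect_right(timestamps, target_ts) - 1
    if pos < 0:
        raise ValueError("target before first frame")
    # Walk back to the nearest preceding keyframe.
    key = pos
    while not index[key][1]:
        key -= 1
    return timestamps[key], pos - key

# Seeking to t=200 means: jump to the keyframe at t=120, decode 2 frames.
```

Without such an index, a player must trust the container's seek tables, which is exactly what fails on broken or sparsely indexed files.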
User interaction has been overhauled, most notably by adding a menu that
facilitates discovering key-bindings. This deprecates the external
control application qjadeo which previously came with xjadeo.
There have been over 200 changes since the last release; the complete
changelog is available at https://github.com/x42/xjadeo
Other highlights include:
* separate On-Screen-Display for Sync-Source and Video Timecode
* self-documenting OSC API
* disable screensaver
* 64 bit timeline
* new website
Note that various command-line options have changed. The seek-related
-K and -k parameters are no longer needed due to the change to indexing.
Letterboxing is enabled by default, and it is now also possible to start
xjadeo without an initial file. In short, a lot of defaults have been
updated to bring xjadeo up to date (despite the fact that the menu for
the X11 variant is plain old toolkit-less Xlib :)
Statically linked binaries are available for GNU/Linux, OSX and Windows
from http://xjadeo.sourceforge.net/download.html as is the source code,
under the terms of the GPLv2.
xjadeo is developed and has been tested for accuracy with ffmpeg-2.2.5.
It may or may not work properly with different versions, but it compiles
with any version of ffmpeg >= 1.0 to date.
Many thanks to Chris Goddard who provided valuable feedback and spent
several weeks on quality assurance and polishing user interaction. We're
far from done on the quest to 1.0, yet 0.8.0 marks a major milestone in
the life of xjadeo.
Cheers!
robin
Hi Silvain
Thanks for trying it out!
On 8/21/14, F. Silvain <silvain(a)freeshell.de> wrote:
> Hey egor,
> thanks for the new tool. I just installed and ran into this error:
> *** cut ***
> Traceback (most recent call last):
> File "/usr/local/bin/hrec", line 19, in <module>
> from pyeca import *
> File "/usr/lib/python3.1/lib-dynload/pyeca.py", line 46, in <module>
> from ecacontrol import *
> File "/usr/lib/python3.1/lib-dynload/ecacontrol.py", line 77
> print 'c=' + I._cmd
> ^
> SyntaxError: invalid syntax
> *** end ***
>
> I just updated ecasound from git and made sure it compiled with python3.1 .
Yeah, at the moment I can only confirm that hrec works when run with
Python 2 explicitly. As I mentioned, I had problems running it with
Python 3. As far as I know, the pyeca Python module, which I use as a
bridge between hrec and ecasound, doesn't play well with Python 3.
Try installing hrec with python2 explicitly; on Arch Linux that would
look like this:
$ python2 setup.py install
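For context, the traceback above fails because ecacontrol.py uses Python 2 print statements, which Python 3 rejects as a syntax error. The standard 2to3 tool rewrites these automatically; here's a toy illustration of that particular rewrite (a simplified sketch, not the real 2to3 and not a pyeca fix):

```python
import re

# Rewrite a bare Python 2 print statement into a Python 3 call.
# Handles only the simple "print <expr>" form, as in the traceback above.
PRINT_STMT = re.compile(r"^(\s*)print\s+(.+)$")

def fix_print(line):
    m = PRINT_STMT.match(line)
    if m:
        indent, expr = m.groups()
        return "%sprint(%s)" % (indent, expr)
    return line

# The offending line from ecacontrol.py becomes valid Python 3:
fixed = fix_print("print 'c=' + I._cmd")
```

In practice running `2to3` over the module (or a maintained Python 3 port of pyeca) is the proper route; this sketch just shows why the interpreter chokes where it does.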
Hi folks,
I've been working on and off for the last little while on a command
line recording utility. It's really basic and the code is pretty
ugly. It exists partly because in my lazy search I couldn't find
anything that satisfied my particular need, and also because it was
interesting. I just made a first release, so you could try it if you
like.
The program is called hrec; it's basically a curses front end to a
very limited subset of ecasound's functions.
Code is here:
https://sourceforge.net/projects/hrec/
AUR package is here:
https://aur.archlinux.org/packages/hrec/
I'm planning to change it soon, but the basics will remain the
same. Mostly I plan to remove the playback functionality, as it's
almost useless -- plenty of other software does it much better. Also
I need to migrate the code to Python 3, but I'm getting stuck on the
Python ecasound bindings. In the couple of (admittedly not very
thorough) attempts I've made, I ran into problems with the current pyeca
module. Any advice would be appreciated. Anyway, I thought I might as
well release it now and get some feedback.
well release it now and get some feedback.
I hope someone finds hrec useful!
Criticism and insults are also welcome.
Thanks!
Hey everyone!
It's our great pleasure to announce Libre Music Production (
http://www.libremusicproduction.com). Libre Music Production (LMP) is a web
portal and resource aimed at helping you make music using free and open
source software. While the portal was initially developed by a small group
of people, the aim and ambition is for LMP to become a community project
where we all help out in building a great resource for making music with
FLOSS. At launch we have over 10k words' worth of articles and
guides, as well as a little over 2 hours of recorded video material.
We've prepared a press release that details more about this project at:
http://libremusicproduction.com/pressrelease
We'd greatly appreciate any help in spreading the word about this resource,
as well as any help with contributing content and so on. Please feel free
to copy+paste the press release as you'd like, and thank you very much for
any help!
Any other feedback is also greatly appreciated. Please use the contact
forms on the website :)
Have a nice day!
Greetings Linux Audio Users and Developers !!!
I'm very happy to inform that we will launch the MOD Duo's Kickstarter
campaign in mid September.
The MOD Duo is our second model, and we've been putting a lot of engineering
into it based on the feedback we got from the MOD Quadra experience.
We deeply hope it becomes a device that empowers the Linux Audio community,
bringing together developers and musicians.
A pre-campaign site was created to warm up the communication engines:
http://stepontothefuture.com.
Hope you all enjoy and spread the word
Kind regards
Gianfranco Ceccolini
The MOD Team
Silvet is a Vamp plugin for note transcription in polyphonic music.
http://code.soundsoftware.ac.uk/projects/silvet
** What does it do?
Silvet listens to audio recordings of music and tries to work out what
notes are being played.
To use it, you need a Vamp plugin host (such as Sonic Visualiser).
How to use the plugin will depend on the host you use, but in the case
of Sonic Visualiser, you should load an audio file and then run Silvet
Note Transcription from the Transform menu. This will add a note
layer to your session with the transcription in it, which you can
listen to or export as a MIDI file.
** How good is it?
Silvet performs well for some recordings, but the range of music that
works well is quite limited at this stage. Generally it works best
with piano or acoustic instruments in solo or small-ensemble music.
Silvet does not transcribe percussion and has a limited range of
instrument support. It does not technically support vocals, although
it will sometimes transcribe them anyway.
You can usually expect the output to be reasonably informative and to
bear some audible relationship to the actual notes, but you shouldn't
expect to get something that can be directly converted to a readable
score. For much rock/pop music in particular the results will be, at
best, recognisable.
To summarise: try it and see.
** Can it be used live?
In theory it can, because the plugin is causal: it emits notes as it
hears the audio. But it has to operate on long blocks of audio with a
latency of many seconds, so although it will work with non-seekable
streams, it isn't in practice responsive enough to use live.
** How does it work?
Silvet uses the method described in "A Shift-Invariant Latent Variable
Model for Automatic Music Transcription" by Emmanouil Benetos and
Simon Dixon (Computer Music Journal, 2012).
It uses probabilistic latent-variable estimation to decompose a
Constant-Q time-frequency matrix into note activations, using a set of
spectral templates learned from recordings of solo instruments.
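The core idea of template-based decomposition (a minimal sketch, not Silvet's actual algorithm) is to estimate non-negative note activations h for a spectrum column v, given fixed per-note spectral templates W, here via the standard multiplicative updates for KL-divergence NMF, to which this family of latent-variable models is closely related:

```python
import numpy as np

def activations(v, W, iters=200):
    """Estimate note activations h >= 0 so that W @ h approximates v.

    v: (freq_bins,) magnitude spectrum column
    W: (freq_bins, notes) fixed spectral templates, columns sum to 1
    Uses multiplicative updates minimising KL divergence (NMF/PLCA-style).
    """
    h = np.full(W.shape[1], 1.0 / W.shape[1])
    for _ in range(iters):
        recon = W @ h + 1e-12
        h *= W.T @ (v / recon)  # columns of W sum to 1, so no extra norm
    return h

# Toy example: two "notes" with disjoint spectra.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
v = np.array([3.0, 1.0, 0.0])  # note 0 is three times stronger than note 1
h = activations(v, W)
```

With overlapping templates (as with real instruments, where harmonics of different notes share frequency bins) the iterations genuinely matter; the disjoint toy case converges immediately.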
For a formal evaluation, please refer to the 2012 edition of MIREX,
the Music Information Retrieval Evaluation Exchange, where the basic
method implemented in Silvet formed the BD1, BD2 and BD3 submissions
in the Multiple F0 Tracking task:
http://www.music-ir.org/mirex/wiki/2012:Multiple_Fundamental_Frequency_Esti…
Announcing a new C++ library and Vamp plugin implementing the Constant-Q
transform of a time-domain signal.
https://code.soundsoftware.ac.uk/projects/constant-q-cpp
The Constant-Q transform is a time-to-frequency-domain transform related
to the short-time Fourier transform, but with output bins spaced
logarithmically in frequency rather than linearly. The output bins are
therefore linearly spaced in terms of musical pitch. The Constant-Q
transform is useful as a preliminary step in various other methods, such
as note transcription and key estimation techniques.
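To illustrate the logarithmic spacing: the centre frequency of bin k is f_min * 2^(k/B) for B bins per octave, so the ratio between adjacent bins (and hence the Q factor) is constant. A sketch of the bin layout (not this library's API):

```python
# Centre frequencies of Constant-Q bins: f_k = f_min * 2**(k / B).
# With B = 12 bins per octave, bins land on equal-tempered semitones.
def cq_frequencies(f_min, f_max, bins_per_octave=12):
    freqs = []
    k = 0
    while True:
        f = f_min * 2 ** (k / bins_per_octave)
        if f > f_max:
            break
        freqs.append(f)
        k += 1
    return freqs

# 55 Hz (A1) up to 220 Hz (A3): two octaves, 25 bins inclusive.
freqs = cq_frequencies(55.0, 220.0, bins_per_octave=12)
```

Contrast this with the STFT, whose bins are a fixed number of Hz apart, so low octaves get far fewer bins than high ones.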
This library provides:
* Forward transform: time-domain to complex Constant-Q bins
* Forward spectrogram: time-domain to interpolated Constant-Q magnitude
spectrogram
* Inverse transform: complex Constant-Q bins to time domain
The Vamp plugin provides:
* Constant-Q magnitude spectrogram with high and low frequency extents
defined in Hz
* Constant-Q magnitude spectrogram with high and low frequency extents
defined as MIDI pitch values
* Pitch chromagram obtained by folding a Constant-Q spectrogram around
into a single-octave range
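The folding mentioned in the last point is simple to illustrate: with 12 bins per octave, every Constant-Q bin maps to a pitch class, and magnitudes are summed across octaves (a minimal sketch, not the plugin's implementation):

```python
# Fold a multi-octave Constant-Q magnitude column into a 12-bin chroma
# vector. Assumes 12 bins per octave and bin 0 aligned to pitch class 0.
def fold_to_chroma(cq_column, bins_per_octave=12):
    chroma = [0.0] * bins_per_octave
    for k, mag in enumerate(cq_column):
        chroma[k % bins_per_octave] += mag
    return chroma

# Two octaves of bins: energy in bins 0 and 12 lands on the same pitch class.
column = [1.0] + [0.0] * 11 + [2.0] + [0.0] * 11
chroma = fold_to_chroma(column)
```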
The code is provided with full source under a liberal licence, and
plugin binaries are provided for Windows, OS X, and Linux.
The method is drawn from Christian Schörkhuber and Anssi Klapuri,
"Constant-Q transform toolbox for music processing", SMC 2010. See the
file CITATION for details. If you use this code in research work, please
cite this paper.
Hi everyone,
I'm happy to announce the final release of Hydrogen 0.9.6. I'm pretty
sure that most users are already working with a 0.9.6-beta
version, but here again is a small list of the most important changes in
0.9.6:
* new build system (cmake)
* add undo for song/pattern editor
* jack-session support
* jack-midi support
* several bug fixes
* tabbed interface
* several small changes to the GUI
 * improved song export: adds use of timeline BPM, the Rubberband
   batch processor, and different types of resample
   interpolation
The release can be downloaded via github:
https://github.com/hydrogen-music/hydrogen/archive/0.9.6.tar.gz
At this time, this is a Linux-only release. Installers for Windows / OS
X may follow in the future.
Big thanks to all those wonderful people who made this release happen
and who are keeping this project alive!
Best regards,
Sebastian
amsynth 1.5.0 is now available, with a host of improvements and new
features:
- new filter modes: notch and bypass
- new filter controls for key-tracking and velocity sensitivity
- OSC2 octave range increased
- ring modulation can now be dialled in (instead of on/off)
- LFO can now be routed to OSC1, OSC2, or both
- fixes an audible click when using fast filter attack times
- note and controller events are now rendered sample-accurately when run
as a plugin or with JACK, fixing MIDI timing jitter
- DSSI & LV2 plug-in builds can now load any preset by right-clicking
the UI background
- bank & preset loading is now faster
Source code:
https://github.com/nixxcode/amsynth/releases/download/release-1.5.0/amsynth…
If you find any problems, please file a bug report @
http://code.google.com/p/amsynth/issues/list
Thanks to all who have helped by providing suggestions and feedback.
Nick