Hey folks, what is the easiest way to deal with midi input in a jack app?
I'm confused by the difference between JACK MIDI and ALSA MIDI, because I have
two MIDI inputs: one is a USB input, so it appears at a low level as an
ALSA device, but the other is the MIDI input on a FireWire unit, and it
appears as a JACK MIDI device. I'd like to make sure that whatever I do is
easy to port to other systems. Does it make sense to use portmidi or rtmidi
to get input or should I stick to the jack api entirely?
thanks
Iain
Hey all,
I've been writing a scope the last while, and I'm interested in how other
people have approached plotting the data.
Currently I'm taking every 50th sample and drawing a line from the
previous sample to it, and so on. Not particularly neat.
So is there some resampling, or smoothing of the samples? How does one plot
a "smooth" waveform the way Ardour / QTractor do?
My other question is about RMS: is calculating it in its literal sense
best? Or perhaps only taking every 5th sample? Or resampling the signal
from 44.1 or 48k down to, say, 11025? I'm not really sure
which way to go.
Cheers, -Harry
Hi there,
I could use some advice.
You may or may not have heard of replaygain. It's reasonably widely used in
consumer audio, but sometimes I wish it was available for video as well.
By this I mean I wish it was available for the audio part of the video.
Well, I need a programming project for a university course and this is
just one of my ideas that I want to propose to my teacher and
prospective teammates. In order to do this I'd like to narrow it down a
bit further and especially want to find out whether I have the right
idea of how it can be achieved.
Scanning/tagging
Since replaygain works on whole audio files I think I need to extract
the whole audio track from the container. How easily this can be
achieved I don't know. After that, the scanning process should work as
with any audio file. Afterwards the calculated replaygain values have to
be added to the metadata of the file. I have no idea how hard it is to
add new metadata fields to video formats.
Playback
Video players need to be aware of those tags, read the metadata and scale
the playback volume accordingly. This is probably not hard per se, but
there are many players out there. However, I plan to start with a single
player, even with a single file format, and go from there.
Question 1: Is there anything better than replaygain that should be used
instead?
Question 2: Which player would be easiest to hack to add such
functionality? Could it be a gstreamer plugin? mplayer?
Question 3: How much work would it be?
The project should be done in C++ if possible, otherwise C. Group size:
2-4 students, all rather new to C/C++ and rather inexperienced in
general.
Other ideas I have are in short:
- A CLI (using readline) connection manager for jack audio/midi and alsa
midi that can handle large numbers of ports. More detailed ideas exist
thanks to Julien Claasen.
- A simple but hopefully sane mplayer GUI
- A new GUI for ecasound
Another problem I might have is that most students in the course are
Windows users; I'm not sure whether I can go solo.
Thanks for any advice,
regards,
Philipp
Hi everyone. I'm starting to write a simple filter and I want to expose it
as an LV2 plugin.
My development environment is very simple right now: vim editor, gcc
compiler, package the lv2 manually (will write a script for that in a day
or two) and then load the plugin in ardour to test it.
I've found this setup to be a bit uncomfortable, because once I load the
plugin in Ardour I don't know how to get debug information from it (print
statements or breakpoints with gdb).
What does an LV2 development environment typically look like? What are you
guys using?
Thanks!
--
Rafael Vega
email.rafa(a)gmail.com
Apologies for cross-posting.
======================
We are pleased to announce the release of version 5.15. The sources
are on the standard Sourceforge location
(https://sourceforge.net/projects/csound/files/csound5/csound5.15/)
as both zip and tar.gz
Platform packages will follow shortly, and the manual on Friday.
==John ffitch
------------------------------------------------------------------------
Notes for 5.15
==============
The new parser has been subjected to a great deal of work. It now has
better checking of argument types and usage, better diagnostics and
increased functionality. We have only reached this stage in the last
few days, so we judge it prudent to leave the old parser as the standard.
We would be pleased if more users tried the new and gave the
developers feedback.
A major reorganisation means that there are many fewer plugins and
most opcodes are in the base (about 1250 of them). A side effect is
that leaving old plugins from an earlier release in place would be a
disaster, and so 5.15 will not load earlier plugins.
The multicore system is now safe (ie maintains semantics) when zak,
channels or table modifications are made.
New Opcodes:
ftab2tab transfers between ftables and t-variables
tab2pvs tsig - pvs conversion
pvs2tab pvs - tsig conversion
cpumeter-- not really new but now available in OSX
(EXPERIMENTAL) ftresize and ftresizei allow resizing of
existing tables. These will be permanent if the
community feel they are useful.
minmax opcodes
hrtfearly, hrtfreverb opcodes
New Gen and Macros:
Code to allow GEN49 to be deferred [NB does not seem to work]
Modified Opcodes and Gens:
socksend and sockrecv no longer use the MTU check and work on Windows
mpulse changed so if next event is at negative time use the absolute value
serial opcode now runs on Windows as well as Un*x
out, out2, outq, outh, outo, outx and out32 are now identical
opcodes and will take up to as many arguments as nchnls.
This replaces the previous remapping of opcodes
turnoff2 now polymorphic wrt S and k types (ie accepts instrument names)
Utilities
Bugs fixed:
GEN42 fixed
jacko: fixed a segfault removing the unused JackSessionID option
doppler memory leak fixed
transegr fixed in release mode when skipping most of envelope
FLPack now agrees with manual
max_k now agrees with manual
hrtfreverb fixed
atsa code now works on Windows in more cases
tabmorph bug fixed
fixed problem with user-defined opcodes having no outputs
Various fixes to /* ... */ comments
System Changes:
Various licence issues sorted
Loris is no longer part of the Csound tree
Memory leaks fixed
If no score is given a dummy that runs for over 100 years is
created
All score processing takes place in memory without temporary
files
String memory now expandable and no size limitation
#if #else #end now in new parser
Adjustments to MIDI file precision in output
On OSX move from Coreaudio to AuHAL
Multicore now safe for ZAK, Channels and modifying tables
New coremidi module
Virtual Keyboard improved:
1) Dropdown for choosing base octave (the one that
starts with the virtual key mapped to physical key
Z). Default value is 5 which is backwards compatible.
2) Shift-X mappings which add two octaves to X
mappings for a total of 4 octaves playable from the
physical keyboard (starting from selected base octave).
3) Control-N / Control-Shift-N mappings to increment
/ decrement slider for control N.
4) Mouse wheel now controls sliders.
tsig type for vectors
tsigs and fsigs allowed as arguments in UDOs
API:
Minor version upped
Internal:
Very, very, very many!
Dr Victor Lazzarini
Senior Lecturer
Dept. of Music
NUI Maynooth Ireland
tel.: +353 1 708 3545
Victor dot Lazzarini AT nuim dot ie
Hi, I'm sure others have tackled this and have some wisdom to share. My
project is principally a monosynth step sequencer. This is nice and simple
to do in real time because resolution is very limited and there can be only
one note per track. So step-sequenced note data is stored in simple
multi-dimensional arrays, making reading and writing very easy, and
messaging simple between audio and GUI threads.
However, I would like to add the ability for the user to send a message and
have it executed later, where "later" gets figured out by the engine (ie
at the top of the next 8-bar phrase). To do this, I need some way of
storing deferred events and having the engine check on each step whether
any deferred events are stored for "now". I can think of a few ways
to do this, and all of them raise red flags for a real-time thread.
- I could use a hash table, hashed by time, with a linked list of all the
events for that time. The engine looks up the current time and gets all the
events. I don't know much about hashing, so I'd probably just use Boost; is
that a bad idea?
- I could make a linked list of all deferred events and iterate through
them, checking whether each one's time is now. There wouldn't be any
hashing, but maybe this list would get really big.
Anyone have any suggestions for how to safely do the above or some better
alternative?
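One realtime-safe option that avoids both hashing and unbounded lists: a fixed-capacity pool allocated up front, where the UI thread claims free slots and the audio thread scans for events whose step matches "now". A sketch in C11 (single writer, single reader; all names and the capacity are mine):

```c
#include <stdatomic.h>

#define MAX_DEFERRED 64

typedef struct {
    atomic_int in_use;   /* 0 = free, 2 = being written, 1 = published */
    int        step;     /* step index at which the event should fire */
    int        payload;  /* e.g. a note number or message id */
} deferred_ev;

static deferred_ev queue[MAX_DEFERRED];

/* UI thread: claim a free slot.  Returns 0 on success, -1 if full.
 * No allocation, so safe to pair with a realtime reader. */
static int defer_event(int step, int payload)
{
    for (int i = 0; i < MAX_DEFERRED; i++) {
        int expected = 0;
        if (atomic_compare_exchange_strong(&queue[i].in_use, &expected, 2)) {
            queue[i].step = step;
            queue[i].payload = payload;
            atomic_store(&queue[i].in_use, 1); /* publish */
            return 0;
        }
    }
    return -1; /* pool full; caller can retry or report an error */
}

/* Audio thread: collect every event stored for `step`.  The scan is
 * bounded (MAX_DEFERRED iterations, no locks), so it is realtime-safe. */
static int collect_events(int step, int *out, int max_out)
{
    int n = 0;
    for (int i = 0; i < MAX_DEFERRED && n < max_out; i++) {
        if (atomic_load(&queue[i].in_use) == 1 && queue[i].step == step) {
            out[n++] = queue[i].payload;
            atomic_store(&queue[i].in_use, 0); /* free the slot */
        }
    }
    return n;
}
```

At step-sequencer scales the linear scan is trivially cheap, so a hash table buys nothing here; and Boost's containers allocate internally, which is exactly what the audio thread needs to avoid.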
thanks!
iain
Tristan Matthews <le.businessman(a)gmail.com> wrote:
> You might find some inspiration in sndpeek:
> http://soundlab.cs.princeton.edu/software/sndpeek/
Definitely! Thanks, got linux-alsa compiling, but linux-jack is segfaulting
in RtAudio::startStream()... debugging atm :)
Cheers, -Harry
Hi experts,
I want to get some information about my (mainly) sound files:
sample rate, bitrate, VBR, channels, bits, samples, tag info ...
and put it all into my C program's variables or structures.
It seems that MediaInfo supports many formats, and it is really good.
Can somebody give me examples of how to access sound/media files with the MediaInfo API?
Which exact #include's must be used?
How do I compile?
And which include and lib dirs should be specified on the GCC command line?
gcc -I... -L... -lmediainfo
What is the difference between
/usr/include/MediaInfo and
/usr/include/MediaInfoDLL ??
If I want to use the MediaInfo API, but not MediaInfo itself, is it enough to
emerge media-libs/libmediainfo,
or must I emerge media-libs/libmediainfo AND media-video/mediainfo?
eix mediainfo
[I] media-libs/libmediainfo
Available versions: 0.7.45 ~0.7.48-r1 ~0.7.49 ~0.7.50 {curl doc mms static-libs}
Installed versions: 0.7.45(12:05:57 AM 12/04/2011)(-curl -doc -mms -static-libs)
Homepage: http://mediainfo.sourceforge.net/
Description: MediaInfo libraries
[I] media-video/mediainfo
Available versions: 0.7.45 ~0.7.48 ~0.7.49 ~0.7.50 {curl mms wxwidgets}
Installed versions: 0.7.45(12:08:18 AM 12/04/2011)(-curl -mms -wxwidgets)
Homepage: http://mediainfo.sourceforge.net
Description: MediaInfo supplies technical and tag information about media files
I found this, but it does not help :(
http://mediainfo.sourceforge.net/de/Support/SDK/Quick_Start#Example
Thanks in advance to all
----
Hi all,
Just a friendly reminder that JANUARY 11 is the deadline for all submissions to the Linux Audio Conference (LAC 2012), which will take place at CCRMA (Stanford, California) in April 2012!
http://lac.linuxaudio.org/2012/
Santa LACus wishes a great paper-and-music-submitting holiday to all!
Ho, ho.
Bruno
- - - - - - - - -
LAC 2012: the Linux Audio Conference - Call for Participation
April 12-15, 2012 @ CCRMA, Stanford University
http://lac.linuxaudio.org/2012/
[Apologies for cross-postings] [Please distribute]
Online submission of papers, music, installations and workshops is now
open! On the website you will find up-to-date instructions, as well as
important information about deadlines, travel, lodging, and so on. Read
on for more details!
We invite submissions of papers addressing all areas of audio processing
based on Linux and open source software. Papers can focus on technical,
artistic or scientific issues and can target developers or users. We are
also looking for music that has been produced or composed entirely or
mostly using Linux and other Open Source music software.
The Deadline for all submissions is January 11th, 2012
The Linux Audio Conference (LAC) is an international conference that
brings together musicians, sound artists, software developers and
researchers, working with Linux as an open, stable, professional
platform for audio and media research and music production. LAC includes
paper sessions, workshops, and a diverse program of electronic music.
The upcoming 2012 conference will be hosted at CCRMA, Stanford
University, on April 12-15. The Center for Computer Research in Music
and Acoustics (CCRMA) at Stanford University is a multi-disciplinary
facility where composers and researchers work together using
computer-based technology both as an artistic medium and as a research
tool. CCRMA has been using and developing Linux as an audio platform
since 1997.
http://ccrma.stanford.edu
Stanford University is located in the heart of Silicon Valley, about one
hour south of San Francisco, California. This is the first time LAC will
take place in the United States.
http://www.stanford.edu
We look forward to seeing you at Stanford in April!
Sincerely,
The LAC 2012 Organizing Team
pd-faust is my latest stab at making the integration of Pd and Faust as
simple and painless as possible. For those of you who've used my
utilities for Faust and Pd before, pd-faust integrates the functionality
of faust2pd and pure-faust into a collection of Pd objects written in
the Pure programming language. It also sports the following major
improvements over faust2pd:
- Reload Faust modules at runtime and have the Pd GUI of the Faust dsp
regenerated automatically and instantly.
- The metadata in Faust programs is interpreted to adjust the GUI layout
in a faust2pd-compatible fashion.
- MIDI/OSC controller mappings are provided for the 'midi' and 'osc'
metadata tags in the Faust source.
- Built-in MIDI sequencer and OSC recorder which syncs MIDI and OSC
playback and provides an OSC-based controller automation facility for
all Faust dsps in a Pd patch.
So in other words it's the Swiss army knife for Faust development in Pd.
;-) If you're into Faust and Pd, I hope that you'll find it useful. Bug
reports and other feedback are appreciated.
A brief overview is available here:
http://code.google.com/p/pure-lang/wiki/Addons#pd-faust
The obligatory screenshot:
http://wiki.pure-lang.googlecode.com/hg/pd-faust.png
Detailed documentation (including installation information):
http://docs.pure-lang.googlecode.com/hg/pd-faust.html
pd-faust is compiled to a native Pd object library which can be loaded
with Pd's -lib option as usual. Note that besides Pd, Faust and pd-faust
itself you'll also need the Pure interpreter and a couple of Pure addon
packages to build and run this software. Please check the documentation
linked to above for details. All the Pure-related downloads can be found
on the Pure website:
http://pure-lang.googlecode.com
For your convenience, here are the direct download links for the
required packages from the Pure project (source tarballs):
http://pure-lang.googlecode.com/files/pure-0.50.tar.gz
http://pure-lang.googlecode.com/files/pd-faust-0.1.tar.gz
http://pure-lang.googlecode.com/files/pd-pure-0.15.tar.gz
http://pure-lang.googlecode.com/files/pure-faust-0.6.tar.gz
http://pure-lang.googlecode.com/files/pure-stldict-0.2.tar.gz
You'll also need a recent version of Pd (0.43 has been tested) and Faust
from git (0.9.45 and 2.0.a3 are both known to work fine).
Happy holidays,
Albert
P.S.: Sorry for the excessive cross-posting, but the nature of this
project which interfaces between three different environments, each with
their own communities, made this seem appropriate.
--
Dr. Albert Gräf
Dept. of Music-Informatics, University of Mainz, Germany
Email: Dr.Graef(a)t-online.de, ag(a)muwiinfa.geschichte.uni-mainz.de
WWW: http://www.musikinformatik.uni-mainz.de/ag