Hi,
a few years ago I made a drum machine with scrubbing, and showed it to
some people at the LAD booth at LinuxTag 2003.
Recently (at linux.conf.au 2006) I realized I'd been sitting on the code
without ever releasing it.
This is not a "release", but a challenge; here's some code, and some
instructions for building it:
http://trac.metadecks.org/wiki/BeatfishInstall
Beatfish is built on libremix (which is also not-quite-released; I'd
like to freeze the API sometime this year though ;-) and Evas (a very
high performance graphics canvas built for Enlightenment 17). The plan
is to make a way of developing cute music machines, using DSSI and all
that's good. For now, beatfish is a pretty basic Jack toy.
enjoy :)
Conrad.
hello,
yes, peak/overview files and metadata are two different
things. metadata should be stored in an open and
extensible format such as XML. the problem with peak
files is that different programmes rely on different
resolutions and representations. to give an example, in
eisenkraut i use four peak files in four decimation
scales ; only the highest resolution (1:256 (?)) is
saved permanently, the others are created on the fly.
since i want to support floating point files, i
decided to store the peak waveform in float32 format,
which is rather big ; other sound editors will decide
not to do this. also i store both peak and RMS
information, others wish to store only peak, or peak
and spectral focus or whatever. so unless two
programmes are rather similar, i wouldn't be too
optimistic about using a uniform peak display.
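just to spell out what one such decimation stage does, a rough sketch in C
(purely illustrative -- eisenkraut itself is java and differs in the details ;
real code would handle multiple channels, remainders and cascading) :

  #include <math.h>
  #include <stddef.h>

  #define DECIM 256   /* 1:256, the highest-resolution stage */

  typedef struct { float peak; float rms; } OverviewFrame;

  /* reduce DECIM input samples to one overview frame (peak + RMS) each */
  size_t decimate(const float *in, size_t nframes, OverviewFrame *out)
  {
      size_t nblocks = nframes / DECIM, b, i;

      for (b = 0; b < nblocks; b++) {
          const float *blk = in + b * DECIM;
          float peak = 0.0f;
          double sumsq = 0.0;

          for (i = 0; i < DECIM; i++) {
              float a = fabsf(blk[i]);
              if (a > peak) peak = a;
              sumsq += (double) blk[i] * blk[i];
          }
          out[b].peak = peak;
          out[b].rms  = (float) sqrt(sumsq / DECIM);
      }
      return nblocks;
  }

the lower-resolution scales can then be derived from these frames (maximum of
the peaks, mean of the squared rms values plus a final square root), which is
why only the 1:256 stage needs to be written to disk.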
also i wouldn't deal with commercial software,
honestly. if two software companies, say bias (for
peak) and emagic/apple (for logic), are too dumb to
agree on one format, i see no reason why open source
software should go and beg those companies to share
their format. rather, if there are a few de facto
standard open source programmes, it would make sense
for them to define a standard that they share.
for eisenkraut i use normal AIFF files as overviews,
i think that's a pretty straightforward format, and since most
programmes will store some form of PCM data, it
shouldn't be too difficult to sync the two -- if they work
in a similar resolution and representation.
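reading another programme's overview then really is no more than opening the
AIFF and pulling in the frames, e.g. with libsndfile (a sketch ; the file name
and the meaning of the channels are invented for illustration) :

  #include <stdio.h>
  #include <sndfile.h>

  int main(void)
  {
      SF_INFO info = { 0 };
      SNDFILE *sf = sf_open("mysound.aif.ovw", SFM_READ, &info);
      float buf[1024];
      sf_count_t n;

      if (!sf) {
          fprintf(stderr, "cannot open overview: %s\n", sf_strerror(NULL));
          return 1;
      }
      printf("overview: %d channel(s), %ld frames\n",
             info.channels, (long) info.frames);

      while ((n = sf_read_float(sf, buf, 1024)) > 0) {
          /* hand the decimated frames to the waveform display ... */
      }
      sf_close(sf);
      return 0;
  }

the hard part is of course agreeing on what the channels mean (peak only ?
peak + RMS ?) and on the decimation ratio, which is the point above.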
there are some standards already, like SDIF for
spectral information for example.
best, -sciss-
Chris Cannam wrote:
>On Tuesday 24 Jan 2006 13:17, Emanuel Rumpf wrote:
>
>
>>The problem occurs on all files that have DSSI plugins assigned.
>>Here is a simple test file with a track "bass" that has the "Less
>>trivial synth" applied.
>>If I open and play it, there is no sound.
>>
>>
>
>I can't reproduce this at all. The example file works fine for me. That's
>rather troubling.
>
>Did you build this from CVS yourself? Are you in a position to build it again
>with some debug output enabled? If so, I'd like to see what is printed by
>the sequencer process when the two lines
>
>#define DEBUG_DSSI 1
>#define DEBUG_DSSI_PROCESS 1
>
>at the top of sound/DSSIPluginInstance.cpp are uncommented.
>
>
>Chris
Did the output I've sent help somehow?
Has anyone else reported a similar problem?
If this is related to my system only, do you have any idea why/how this
could be?
DSSI is working on my system, also in Rosegarden; it's only when
loading a file that the tracks with DSSI plugins end up muted
somehow... then I have to re-assign the plugins for each track for
the sound to come back.
Emanuel
I mailed Paul the link to fetch the whole LAD web tree and files, about
5.7 GB (the content that was on www.linuxdj.com/audio).
The next step should be deciding whether to put the content on a nicer
domain (and having linuxdj.com/audio redirect to that domain/site, so
that search engines and existing links end up at the correct place).
I'll wait for Paul's reply on how to proceed (redirecting linuxdj.com etc.).
cheers,
Benno
Paul Davis wrote:
>On Mon, 2006-01-30 at 17:44 -0500, Paul Davis wrote:
>
>
>>On Tue, 2006-01-31 at 01:10 +0100, Esben Stien wrote:
>>
>>
>>>Benno Senoner <sbenno(a)gardena.net> writes:
>>>
>>>
>>>
>>>>consumes so much bandwidth
>>>>
>>>>
>>>Why don't we put these videos on archive.org?
>>>
>>>
>>I currently have 1870GB/month bandwidth, and it climbs by 16GB/week
>>right now. Last month, I used 0.1% of my bandwidth. It's not an issue.
>>
>>
>
>let's get this transfer started ...
Hi LADers,
During the last few months the LAD website
( http://www.linuxdj.com/audio/lad ) was hosted on the lionstracs.com server.
Domenico from Lionstracs told me that he does not want to host the LAD
site anymore since it consumes so much bandwidth -- 200 GB in January,
see here: http://www.linuxdj.com/webstat1/
It's probably due to the audio/video material from the conferences
(100 MB a pop).
Anyone willing to host the site? As in past years, I'll pay the
linuxdj.com domain fees.
cheers,
Benno
Hi.
Is there anything planned for the DSSI standard which would allow
DSSI hosts to launch GUIs just by sending an OSC message to some kind
of GUI launcher? I am asking because I'd like to write a whysynth
frontend in SuperCollider Language. That should all be doable, since
the whole GUI->DSP->GUI process happens over OSC. Now there is
only the session initiation left. I know it's pretty trivial to
write a wrapper WhySynth_osc which just sends off its argv to
SCLang via OSC, but this feels kind of clumsy to me.
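such a wrapper really is only a handful of lines, e.g. with liblo (a sketch ;
the /whysynth/gui path and port 57120 are just what I'd point SCLang at,
nothing standardised) :

  #include <stdio.h>
  #include <lo/lo.h>

  /* forward the four strings the DSSI host hands a UI process
     (host OSC URL, plugin dso, label, friendly name) to SCLang */
  int main(int argc, char **argv)
  {
      if (argc < 5) {
          fprintf(stderr, "usage: %s <osc_url> <dso> <label> <name>\n", argv[0]);
          return 1;
      }
      /* sclang listens on UDP 57120 by default */
      lo_address sclang = lo_address_new("localhost", "57120");
      if (lo_send(sclang, "/whysynth/gui", "ssss",
                  argv[1], argv[2], argv[3], argv[4]) < 0) {
          fprintf(stderr, "OSC send failed: %s\n", lo_address_errstr(sclang));
          return 1;
      }
      return 0;
  }

it works, but it means installing one dummy binary (or script) per plugin UI,
which is exactly what feels clumsy.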
In fact, for what I'd like to do, it would be quite nice if jack-dssi-host
just had an additional command-line argument which would tell its startGUI
routine not to call exec, but rather to send the 4 strings to the specified
OSC URL.
$ jack-dssi-host -o osc.udp://localhost:57110/dssRequestedI/ whysynth.so
How does this sound to you, worth a patch?
Or should this really be done with wrappers, ugly as they may be?
--
CYa,
Mario
On 1/30/06 Paul Davis wrote:
>i have the space and capacity to host this at dreamhost under my
>account. start the ball rolling.
I think this would be the best solution - it will keep Paul with us for a long time to come. :)
>btw, i happen to think that the current website is ugly and a mess. it
>would be nice if some people could spare several hours and clean it up to
>make a really useful resource for people interested in linux audio
>development.
Agreed.
-Maluvia
i know design by committee can be horrible, but the formats involved here are usually vastly similar yet incompatible, so it's sort of biting off something small, i hope.. :)
(1) Peak Files
some of my favorite wav files have 10 metafiles each: peak files generated by peak, spark, wavelab, soundforge, cubase, ableton live, samplitude, rezound, sweep, ardour, plus a dir for "Apple Loops" data etc.
it would be great if, each time an audio file enters a new app, the user wasn't greeted with a 30 second burst of disk activity as peak files were generated yet again... what exactly is needed? here are some thoughts
- average amplitude per time-slice to generate the waveform overview
* what granularity is useful? peak files seem to run a few % of filesize..
- spectral centroid for comparisonics/freesound style colorization
- annotations (OPML etc)
- timing (tempo, cue points, beat markers)
rather than invent some new arbitrary plaintext (or XML) format, i'm interested in using OSC (as described at http://www.cnmat.berkeley.edu/OpenSoundControl/OSC-spec-examples.html ) to encapsulate this data, at which point this is simply an exercise in selecting a schema/namespace...
besides faster load times (e.g. one could pregenerate this data before a performance or composition session via a recursive shell command and 'sox'), a commonly understood format would enable easier sharing of CC-licensed material among a variety of users and apps without useful metadata being 'lost in translation'. additionally, web and other interfaces could be developed using the metadata hints (see archive.org, NI's KORE)
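to make the schema question concrete, the metadata for one file might boil down to a handful of messages along these lines (addresses, typetags and values completely made up, just to show the shape of it):

  /meta/overview/format    ,sii   "peak+rms" 256 2    <- encoding, samples per slice, values per slice
  /meta/overview/data      ,b     <float32 blob, one peak+rms pair per slice>
  /meta/spectral/centroid  ,b     <float32 blob, one value per slice>
  /meta/annotation         ,fs    12.5 "verse 2"
  /meta/tempo              ,f     126.0
  /meta/cue                ,sf    "drop" 64.0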
(2) Instruments
compatibility and reuse of sample-based instruments between chionic, specimen, DSSI samplers, PD samplers, LinuxSampler, and separation between editor and engine allowing more highly specialized apps - the 'nix way.
- get rid of arbitrary region / bank / instrument boundaries which seem derived from MIDI (the number of times you see 1-16 and 0-127 in modern software instruments is appalling)
- sample regions pointing to audio files or groups
- grouping (nesting / tags)
- volume / filter / lfo stuff
once again i am thinking OSC could be suited to this..
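an instrument could likewise just be a bundle of messages, something like (again purely invented, only to show that regions, grouping and the volume/filter/lfo stuff map naturally onto an address space):

  /instr/name              ,s      "soft kit"
  /instr/region/add        ,ss     "kick" "samples/kick_v3.wav"
  /instr/region/tag        ,ss     "kick" "drums/kicks"
  /instr/region/gain       ,sf     "kick" -3.0
  /instr/region/env/adsr   ,sffff  "kick" 0.001 0.05 0.8 0.3
  /instr/lfo/rate          ,sf     "lfo1" 0.25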
(3) 'project' components
monolithic binary files still seem to be the norm, eliminating all hope of reuse without tedious exporting of settings or components; things like:
- pointers to regions (audio files, control streams, other projects)
- note and controller-data streams
- instrument / filter settings
i've thought about this a bit and am leaning towards using a directory on disk, with a format for each of the above, which would enable revision tracking via SVN or darcs. note/control data will likely be OSC (and MIDI encapsulated in OSC where necessary), instrument/filter settings would be closely aligned with what is fed to LADSPA/DSSI modules, and a 'glue' file would link the others together, assigning filter and routing data to tracks, channels and regions.
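concretely, i'm picturing a project directory shaped roughly like this (names invented, just to illustrate):

  myproject/
    project.osc          <- the 'glue' file: tracks, routing, region placement
    regions/
      guitar_take3.wav   <- or a pointer to audio living elsewhere on disk
      drums.osc          <- note/controller stream (MIDI wrapped in OSC)
    instruments/
      softkit.osc        <- instrument definition as sketched above
    filters/
      bass_eq.osc        <- LADSPA/DSSI port settings

everything is a small, mostly-text file, so SVN or darcs can diff and merge it sensibly.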
Has anyone managed to successfully get jackd talking to dmix?
My dmix setup works fine (i run VoIP stuff, artsd, and other assorted junk
through it daily).
jackd works great when connected to hw:0,0 or to plughw:0,0 (with the warning),
but if I point it at my dmix'd "default" it sits there with 100% CPU usage
and doesn't otherwise operate.
Does anyone have this combination of jack and dmix working?
(i'm running alsa 1.0.11rc3 on 2.6.15.1, i've tried alsa 1.0.10 and jack
0.100.0 too)
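For reference, the dmix "default" is nothing exotic, roughly this shape (a
paraphrase rather than my literal .asoundrc; the card and buffer sizes are
placeholders):

  pcm.dmixer {
      type dmix
      ipc_key 1024
      slave {
          pcm "hw:0,0"
          rate 48000
          period_size 1024
          buffer_size 8192
      }
  }
  pcm.!default {
      type plug
      slave.pcm "dmixer"
  }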
>jackd -v -d alsa -d default -r 48000 -S -n 8 -p 1024 -P
jackd 0.100.7
Copyright 2001-2005 Paul Davis and others.
jackd comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details
JACK compiled with System V SHM support.
server `default' registered
loading driver ..
registered builtin port type 32 bit float mono audio
new client: alsa_pcm, id = 1 type 1 @ 0x8056a48 fd = -1
apparent rate = 48000
creating alsa driver ... default|-|1024|8|48000|0|0|nomon|swmeter|-|16bit
configuring for 48000Hz, period = 1024 frames, buffer = 8 periods
You appear to be using the ALSA software "plug" layer, probably
a result of using the "default" ALSA device. This is less
efficient than it could be. Consider using a hardware device
instead rather than using the plug layer. Usually the name of the
hardware device that corresponds to the first soun
nperiods = 8 for playback
new buffer size 1024
registered port alsa_pcm:playback_1, offset = 0
registered port alsa_pcm:playback_2, offset = 0
++ jack_rechain_graph():
client alsa_pcm: internal client, execution_order=0.
-- jack_rechain_graph()
7129 waiting for signals
load = 1.1672 max usecs: 498.000, spare = 20835.000
load = 1.3102 max usecs: 310.000, spare = 21023.000
load = 1.3793 max usecs: 309.000, spare = 21024.000
load = 1.4162 max usecs: 310.000, spare = 21023.000
....
etc
....
Hi all !
I have installed an RME DIGI9652 on Debian Sarge and it seems to work great
with Ardour ! However my two expansion boards AEB8-I [1] and AEB8-O [2], plugged
respectively into the "CD IN" input and "ADAT 1" output of the main board - as
mentioned in the documentation - don't get any signal, even when I toggle the
existing switches in kmix, gamix, etc. I would like to ask anybody who knows
the chipset/driver whether there are special routing switches that need to be
set with alsactl or something...
Thanks a lot for your help and useful work !
p--g
Parisson, Paris
(sorry for the wrong 1st Re: mail)