On Thursday 24 February 2011 12:09:19 David Robillard wrote:
> This is, of course, a big problem in terms of our greater mission
> to provide software that caters to the needs of precisely nobody while
> irritating everybody else.
>
> To resolve this situation, we now have an exciting new Clippy inspired
> assistant that hops around your screen begging you to add MIDI tracks
> constantly.
>
> If, after 20 minutes, you have still not created a MIDI track, Ardour
> will overwrite every .wav file found in your home directory with a 4/4
> electronic kick drum loop, then shut down.
>
> -dr
Dude!
This is just what I have been wanting, but I'm too much of a newb to have
even thought of undertaking such a complex project on my own!
Hence my starting my soundwall project, which I newbily thought was going
to be a simple little app.
I have some code inspired by some "anon" silence that could perhaps be
included in this project. wwnnsnmsnm.
cage-ily,
drew
On Tue, Feb 22, 2011 at 3:49 PM, Philipp Überbacher
<hollunder(a)lavabit.com> wrote:
> The rest sounds nice, and it might well be that X has become old, but I
> don't see the big improvement coming up. Windows are called surfaces
> now, can have different shapes and are more flexible, compositing,
> transformations, I got that bit, but I don't see the UI improvement.
> I've seen the demos with shapes flying around the desktop, I've seen
> the conventional compositing window managers and wayland will probably
> do all that and more, but I don't see the improvement in User
> Interfaces.
what it's going to do, i think, is two-fold:
1) promote more and more toolkit design that makes everything just a
compositing stack. GTK has already moved significantly in this
direction, but could go a lot further. Qt is in a similar position.
the more this happens, the easier it is to reason about and create new
GUI widgets that do cool things, easily and simply, because it's all
part of a very simple model: you draw to your surface, and it will be
composited onto the screen in ways that you don't have to worry about.
sounds a bit like X ... except that X is explicitly *not* a
compositing model. for a simpler explanation of the kind of thing i
mean, consider the difference in ardour between the main "tracks" area
of the editing window and all the widgets around it. it's fundamentally
impossible to implement the tracks with widgets - it uses a "canvas"
object instead, which embodies ideas like z-axis stacking, transparency
and so forth. but likewise, at present it would be a lot of work to
implement all the widgets as canvas "items". now fast forward a few
years, to a point where the drawing model for the canvas, the
button widgets, the tree/listviews - for everything *inside* the
program - is the same as the model for everything *outside* the
program. drawing a particular "thing" on any other thing becomes
identical, whether the other thing is a "window", a "button", a cell
of a listview, etc, etc. (see the first sketch below.)
2) enable more and more apps to take advantage of v-blank sync to reduce
computational load due to unnecessary redraws. instead, the whole
system will be a lot like a video-framebuffer version of JACK: the
vblank interrupt arrives, everything with a surface gets a chance to
redraw if it needs to, the surfaces are composited together, and boom,
it's on the display. no more guessing how often to redraw stuff, no
more weird-ass hacks to get smooth animation, etc. if you think this
sounds like special effects, i suggest a few minutes playing with a
relevant iPod/iPhone/iPad app where smooth transformations of what is
on the screen are a central metaphor in how the UIs work. (see the
second sketch below.)
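a rough sketch of the first point, in c++ (an invented interface, not
any real toolkit API): everything - window, button, canvas item - is
just something that draws into a surface handed to it, and the
compositing stack below worries about getting the pixels onto the
screen.

    struct Surface { /* an off-screen buffer owned by the compositor */ };

    struct Drawable {
        virtual void draw(Surface& s) = 0;   // render yourself into s
        virtual ~Drawable() {}
    };

    // a "window", a "button" and a canvas "item" all look identical:
    struct Window     : Drawable { void draw(Surface& s) { /* ... */ } };
    struct Button     : Drawable { void draw(Surface& s) { /* ... */ } };
    struct CanvasItem : Drawable { void draw(Surface& s) { /* ... */ } };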
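and a sketch of the second point (invented names, nothing to do with
the actual wayland API): the vblank tick drives everything, exactly
like the process callback does in jack.

    #include <cstddef>
    #include <vector>

    struct Client {
        bool damaged;
        bool needs_redraw() const { return damaged; }
        void redraw() { damaged = false; /* draw into own surface */ }
    };

    struct Compositor {
        void composite() { /* blend all client surfaces */ }
        void flip()      { /* present the result on screen */ }
    };

    // called once per vblank - the analogue of jack's process callback:
    void on_vblank(std::vector<Client>& clients, Compositor& comp)
    {
        for (std::size_t i = 0; i < clients.size(); ++i)
            if (clients[i].needs_redraw())
                clients[i].redraw();   // only damaged surfaces repaint
        comp.composite();
        comp.flip();
    }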
Oh, well, since GUIs are this week's flavor on the list, I might as well
throw FLAM's alpha into the mix.
I am preparing FLAM's (Front-ends for Linux Audio Modules) first
release. Among other things, FLAM intends to allow programmers and
non-programmers alike to create their own (external) GUIs for audio
plugins. At this moment only Rosegarden as a host and LADSPA as plugin
type are supported, but this is hopefully just a first step.
Project page:
http://vagar.org/code/projects/flam
Tutorial with a few screenshots:
http://vagar.org/asciidoc/flam/primer/primer.html
Source code repository:
git clone http://vagar.org/git/flam
I'd welcome any feedback, here or in the FLAM forums (registration
required):
http://vagar.org/code/projects/flam/boards
Thanks!
Luis
do any of the jack client examples show playing a file from disk?
if so, which?
if not, any links to simple code that does this?
c++ or c?
thanks in advance for any pointers.
drew
--
http://freemusicpush.blogspot.com/
Hi all,
This meditation isn't about gray hair, sagging flesh
or receding libido. :-)
In debugging Nama's new audio editing functions,
I'm noticing that I have more and more code dealing
with time. I am concerned to see a multiplicity of
method names like adjusted_region_start_time
or unadjusted_mark_time.
I'm wondering if perhaps I can centralize
or at least systematize this functionality.
Nama deals with several kinds of time:
Ecasound time. Positions in seconds or samples from
the perspective of the Ecasound audio engine.
WAV time. Displacements within audio files.
Track / Region time. Positions in a track or region.
Mark time. Nama currently has only one type of mark:
marks anchored to an absolute project timeline.
I think it also needs marks for positions in a track's WAV files,
which the user may trim or offset.
Edit / offset-run time. This is what started the entire
issue. To record a fix for a note at time T in a WAV file W,
I use Ecasound's select object to offset all the WAV files
in a project to start at time T. The fix, W', then gets
placed at T using playat. (A schematic example follows this list.)
MIDI time. There are plenty of references on this,
and it's a subject of its own. Nama needs at least to
know enough to work across the various systems for bridging
between ALSA, JACK and MIDI.
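Schematically, the two Ecasound objects in the offset-run look
something like this (I'm quoting the syntax from memory, so treat the
argument order as approximate):

    -i select,T,length,W     # play W starting from its offset T
    -i playat,T,W'           # place W' at position T on the timeline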
So do I need some Big Abstraction(tm), or shall I just
continue to work incrementally?
How do you think about time?
I don't expect a simple answer, or any answer at all,
but it can help to formulate the question.
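To make the question concrete, here is the kind of centralization I
have in mind, sketched in C++ for brevity (Nama itself is Perl, and
every name below is invented):

    // keep every position as samples plus a tag for its frame of
    // reference, and convert at the boundaries, instead of spreading
    // adjusted_*/unadjusted_* methods all over the code.
    enum TimeRef { ENGINE, WAV, REGION, MARK };

    struct Time {
        long long samples;   // position in samples
        TimeRef   ref;       // which timeline it is relative to
    };

    // WAV-relative position -> engine time, given the track's
    // trim/offset in samples
    Time wav_to_engine(Time t, long long track_offset)
    {
        Time r = { t.samples + track_offset, ENGINE };
        return r;
    }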
Regards,
Joel
--
Joel Roth
hi all,
for some time, i have been working on nova-simd, a cross-platform SIMD
library written in c++. it abstracts the platform-specific APIs and provides a
`vec' class, which maps to a SIMD floating-point vector on the target machine
and can be used to formulate more complex algorithms. for some commonly
used vector functionality, helper functions are provided, which are built
using the vec class.
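a rough usage sketch (from memory - check the headers for the exact
names):

    // scale a buffer by a constant gain; the same source compiles to
    // sse, avx, altivec or neon code depending on the target.
    // assumption: n is a multiple of vec::size, pointers are aligned.
    #include "vec.hpp"

    void scale(float* out, const float* in, float gain, unsigned n)
    {
        typedef nova::vec<float> vec;
        for (unsigned i = 0; i != n; i += vec::size) {
            vec v;
            v.load_aligned(in + i);    // one SIMD vector's worth
            v = v * vec(gain);         // element-wise multiply
            v.store_aligned(out + i);
        }
    }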
features:
- same interface for different SIMD architectures
- supported backends: sse-family, avx, altivec, arm/neon
- some backends include vectorized implementations of libm functions
- extensively used in supercollider
- header-only c++ (no runtime dependencies, composable)
caveats:
- little documentation, few examples
- no release, no tarball, just a git repository [1] and a web interface [2]
- header-only c++ (no c support)
maybe it is useful for other people as well...
cheers, tim
[1] git://tim.klingt.org/nova-simd.git
[2] http://tim.klingt.org/git?p=nova-simd.git;a=summary
--
tim(a)klingt.org
http://tim.klingt.org
Art is either a complaint or do something else
John Cage quoting Jasper Johns
> From: Fons Adriaensen <fons(a)linuxaudio.org>
> On Fri, Feb 18, 2011 at 07:36:44AM +1300, Jeff McClintock wrote:
>
> > With an RMS VU meter you measure a 1 kHz tone as a reference.
>
> A contradiction... A VU does not measure RMS, whatever does measure
> RMS is not a VU.
Isn't a VU meter a standard root-mean-square function followed by a 300 ms
integration to give it some 'weight', calibrated against a 1 kHz tone?
Hi Experts,
I want to normalize my sound stream by loudness (energy / pressure /
intensity), not by peaks.
How do I do it?
Is there a JACK plugin available for something like that?
What is it that we hear as "loudness"?
Is it RMS, an average, or something else?
Are there examples available somewhere of how to calculate RMS?
Is it done simply like this:
 #include <math.h>
 #include <stdio.h>

 int main(void)
 {
     double sample[10] = { /* ... sample values ... */ 0 };
     double sums = 0.0, rms;
     int i, n = 10;
     for (i = 0; i < n; i++)
         sums += sample[i] * sample[i];   /* sum of squared samples */
     rms = sqrt(sums / n);
     printf("rms = %12.12f\n\n", rms);
 }
Is such a simple algorithm good enough for frequencies > 10 kHz?
How do I calculate RMS with high precision for frequencies > 10 kHz?
With interpolation or something like that?
What is the reference (0 dB) RMS, for example for a 16-bit PCM signal of
1024 samples?
  sqrt( 1024 * 0x7FFF^2 / 1024 )  ==  sqrt( 0x7FFF^2 )  ==  0x7FFF
Is it simply 0x7FFF (32767 dec)?
How do I calculate RMS for a stereo signal?
Like this, or somehow else?
for (i = 0; i < n; i++)
 {  ....
    sums += sampleL/2 + sampleR/2;   /* average of the two channels? */
 }
When operating in floating point, which should be a bit
faster: / 2 or * 0.5?
How do I get the RMS value in dB?
   20 * log10(Measured / Reference0_dB)
or 10 * log10(Measured / Reference0_dB) ??
I'm just a physics student and I'm new to DSP,
so please don't be angry about simple or stupid questions.
Any examples and hints are welcome.
Many thanks in advance.
Alf
----
Paul Davis:
>
> On Thu, Feb 17, 2011 at 4:53 PM, Robin Gareus <robin(a)gareus.org> wrote:
>
>>> Aside of that, what about locks? I've many times been told that one mustn't do
>>> anything that could block in a realtime thread. What are the consequences of
>>> that? Could a malicious app freeze the system by blocking in a realtime thread?
>
> it poses no risks to anything except itself if it does that. blocking
> in an RT thread matters to the thread, not to anything else.
>
> to demote RT threads that are doing too much you'd need a user-space
> watchdog like das_watchdog
>
Actually, das_watchdog is not very useful anymore, since the kernel
developers implemented a scheme to prevent a process from taking over
the machine (the realtime throttling that appeared around 2.6.25).
This built-in scheme is also a watchdog, but much more fine-grained
than das_watchdog. And it is also (more often than not) useless, so
one has to press the reboot button anyway.
The sad thing is that this built-in watchdog in newer kernels fools
das_watchdog into thinking that the system is operational.
(@#$@#%%!#$$!!!)
I should look into it, though; it might not be impossible to
tune das_watchdog to work again.
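For reference, the knobs for that built-in scheme live in /proc. A
minimal sketch that prints them (the paths are the standard realtime
throttling ones; the defaults quoted in the comments are from memory):

    #include <fstream>
    #include <iostream>

    // by default RT tasks get sched_rt_runtime_us (950000) out of
    // every sched_rt_period_us (1000000) microseconds, so a runaway
    // SCHED_FIFO thread leaves ~5% of cpu time for everything else.
    int main()
    {
        const char* knobs[] = { "/proc/sys/kernel/sched_rt_period_us",
                                "/proc/sys/kernel/sched_rt_runtime_us" };
        for (int i = 0; i < 2; ++i) {
            std::ifstream f(knobs[i]);
            long value = -1;
            f >> value;
            std::cout << knobs[i] << " = " << value << "\n";
        }
    }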