Hi,
I'm wondering if there exists any scripts for generating menus for
audio users such as ourselves :-)
Of course, this question only enters my mind after I've set out to write
my own Bash (4) script.
To begin with my script will generate a menu for Fluxbox.
The file ~/.fluxbox/init should be edited to specify ~/.fluxbox/my-menu
as the session.menuFile.
The script will generate a basic my-menu consisting of items such as
xterm, firefox, an audio sub-menu, htop, nedit, alsamixer, a system
sub-menu, and logout and shutdown entries.
The audio sub-menu is a separate file, ~/.fluxbox/my-audio-menu,
generated by searching a list of audio applications and adding those
found on the system running the script. Other sub-menus should be easy
to add (but are hard-coded for now).
The system sub-menu is a separate file, ~/.fluxbox/menu, generated
(currently) by mmaker if it exists, or by fluxbox-generate_menu as a
fallback.
The list of audio applications currently only covers those I use^d^d^d
intend to use^d^d^d^d^d^d^d have installed/compiled with the intention
of using.
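To give a rough idea, the audio sub-menu generation boils down to
something like this (the application list here is just an illustration,
not the real one):

    #!/bin/bash
    # rough sketch only: emit a Fluxbox sub-menu entry for each application
    # from a candidate list that is actually installed on this system.
    apps=(ardour2 hydrogen qtractor yoshimi audacity jack_mixer)
    {
        echo "[submenu] (audio)"
        for app in "${apps[@]}"; do
            # only add an entry if the application is found in $PATH
            command -v "$app" > /dev/null 2>&1 && echo "  [exec] ($app) {$app}"
        done
        echo "[end]"
    } > ~/.fluxbox/my-audio-menu

my-menu then just pulls that file in with an [include] line.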
I'd imagine there's already a script out there to do this, possibly one
that works with a number of WMs/DEs?
But I also imagine it would be quite difficult to make it work easily
with all of them. I think the *box WMs and Xfce should be manageable.
Any opinions/thoughts/suggestions/etc?
Cheers,
James.
--
_
: http://jwm-art.net/
-audio/image/text/code/
Hi,
I tried to install the LinuxDSP Pro Channel processors and use them in
Ardour (2.8.11). I put them in the LV2 folder, which is in /usr/lib64 on my
machine. I looked in the Ardour plugin manager, but there's no LV2 at all,
only LADSPA. I'm confused; shouldn't I see them? What did I do wrong?
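Maybe it's a search-path thing? As far as I understand, LV2 plugins are
*.lv2 bundle directories and the scanner follows the LV2_PATH environment
variable, so I guess I could try something like (paths are just what I'd
expect on my machine):

    ls /usr/lib64/lv2/      # the plugins should show up as *.lv2 directories
    export LV2_PATH="$HOME/.lv2:/usr/lib64/lv2:/usr/lib/lv2"
    ardour2

Or does "no LV2 at all" in the plugin manager mean my Ardour build simply
has no LV2 support compiled in?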
Hello Quirq!
Well, if you go on writing like this, just wasting lines on diminishment, I
might just plonk you. :-)
It's a lovely piece. I love the intensity of it, there's drama in there (four
stage theatrics perhaps :-) ). I love the dense composition of the "sound
cloud", complemented by the tinkly windchimes, harp and synth FX; it makes a
gorgeous whole.
Keep them coming, the "insubstantial bits" and parts of ideas. I'm waiting
for them all to finally fit into the bigger pictures. :-)
TTFN
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
Hello everyone
I must admit, I hesitate to post this as it's really all a bit of
something and nothing, with the emphasis on the nothing.
The basic idea came to me last year when I was noodling around on a new
bit of gear and I thought that with a little work it would make a nice,
short, insubstantial bridging section between two major parts of a piece
I'm working on.
I've now got around to that "bit of work" so in some ways it's an epic
fail, because it sucked up a substantial amount of time and effort to
get right :-)
On the other hand, I still think it's not really much of anything, which
is why I hesitated about posting it. But as it can almost stand on its
own outside of the context it was intended for, what the hell... publish
and be damned, or ignored, or something.
I do prog rock, so I suppose that's what this is, but it's also quite
synthy and a little bit atmospheric and definitely part of the IDK
genre, because I don't know what the hell it is :-D
It's called Vale Fanfare and is a minuscule 1:55 short:
FLAC (11.3 MB): http://www.quirq.ukfsn.org/Quirq_Vale-Fanfare_17-01-11.flac
OGG 10 (5.0 MB):
http://www.quirq.ukfsn.org/Quirq_Vale-Fanfare_17-01-11.ogg
MP3 320 kbps (4.4 MB):
http://www.quirq.ukfsn.org/Quirq_Vale-Fanfare_17-01-11.mp3
I just did a bit of light limiting on the mixdown to raise the overall
level, but other than that there's no post-processing/"mastering".
All recorded and mixed in Ardour with lots of lovely LV2 and LADSPA
plugins, a tiny bit of MIDI programming in Rosegarden, Wine/wineasio
for non-native VSTis, a few bits of hardware, and Fantasia/JSampler for
divers alarums and excursions.
Cheers
Q
PS And thanks to an anonymous friend for comments and a couple of little
ideas which made a difference.
--
A musical collaborator: "Lethargy, hm really? Then it will be time for
me to get over there and just accidental idea the $h!t out of you"
Greetings,
I've kept the Audio Plugins page alive at linux-sound.org. It aims to be
a complete list of Linux audio/MIDI plugins, and I've updated it again
recently. Please advise if there are any bad links or other errors.
Also, please advise if there are other plugins that should be on the
list. I *think* I know where to find them all, but you never know who
has an unknown project going on somewhere.
http://linux-sound.org/plugins.html
Best,
dp
>
> <rosea.grammostola(a)gmail.com> wrote:
>> On 01/21/2011 02:09 PM, Sampo Savolainen wrote:
>>>
>>> On Fri, Jan 21, 2011 at 1:24 PM, rosea.grammostola
>>> <rosea.grammostola(a)gmail.com> wrote:
>>>>
>>>> Thanks Sampo!
>>>>
>>>> With the discussion about plugins in mind, how did you manage to build
>>>> JACK
>>>> clients for Linux, Windows and OSX and a LV2 plugin?
>>>
>>> Sorry, I haven't really followed that discussion. Writing a plugin
>>> isn't that difficult. The real work for this release has been the OS X
>>> packaging (done by Robin) and the badly behaving mingw compiler.
>>
>> Hmm, others seem to have more difficulties making such a cross-platform
>> plugin / JACK client.
>
> The problem with cross-platform JACK is that there's little
> infrastructure for JACK on Windows and OS X. The current release for
> Windows has a version of qjackctl which makes setting up JACK very
> difficult. And I haven't been able to figure out how to select the
> MIDI driver...
Add -X winmme to the QjackCtl server command line.
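With that, the server command in QjackCtl's Setup dialog ends up looking
something like (exact driver options will depend on your setup):

    jackd -X winmme -d portaudio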
>
> That goes for the OS X version as well: the JackPilot GUI doesn't let the
> user choose coremidi (the only sane MIDI driver) for JACK. This
> means that it's pretty easy to get the cross-platform client going, but
> you can't really play it.
I'll add that in the next JackPilot version...
Stéphane
When I connect the output of Audacity to Timemachine _by hand_ using
Patchage, there is a delay in the recorded stereo file between the start
of audio in ch1 and ch2, equal to the time between physically making the
two connections to Timemachine.
In other words:
- I create a 10 sec file of 440Hz sine in Audacity
- play it back on loop
- go to Patchage and connect ch1 from PortAudio (which is actually
Audacity) to Timemachine, wait 10 seconds, then connect ch2 to the other
input of Timemachine
- hit record on Timemachine and wait a couple of seconds
- stop record on Timemachine
- stop playback of 440 sine in Audacity
- open the sound file recorded by Timemachine in Audacity
- look at the start of the sound file
- there is a delay between the channels in the sound file equal to the
time I waited between connecting ch1 and ch2
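I suppose a workaround would be to make both connections back-to-back
from a small script instead of by hand, something like this (the exact
port names are a guess; jack_lsp shows the real ones):

    jack_lsp                                        # list the actual port names
    jack_connect PortAudio:out_0 TimeMachine:in_1
    jack_connect PortAudio:out_1 TimeMachine:in_2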
Side question: why do the outputs of Audacity appear as PortAudio in JACK?
Is it a reference to virtual ports being instantiated when playback
occurs in Audacity?
hi all,
i'm looking for an application which will let me capture sysex dumps
from my hardware synths, so the correct patch names show up in
rosegarden, amongst other things. can anyone suggest anything? i tried
searching the ubuntu repositories, but nothing came up.
back when i was in windows land there were loads of these - i wonder
if i am somehow asking the wrong question
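worst case i guess i could grab the raw dumps with amidi from alsa-utils
and keep the files around, something like this (the hw:1,0 port is just a
guess - amidi -l shows the real ones):

    amidi -l                            # list the raw MIDI ports
    amidi -p hw:1,0 -r synth-dump.syx   # record incoming sysex (ctrl-c to stop)
    amidi -p hw:1,0 -s synth-dump.syx   # send it back to the synth later

but that doesn't get the patch names into rosegarden by itself, so
pointers to something smarter are still welcome.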
cheers
--
robin
http://tangleball.org.nz/ - Auckland's Creative Space
http://bumblepuppy.org/blog/
Hi,
I am not able to save and reload the 'state' of yoshimi 0.58 apparently.
Let me ask first how I should save the state and how to reload it.
Thanx
\r
Excerpts from S. Massy's message of 2011-01-19 20:23:40 +0100:
> On Wed, Jan 19, 2011 at 10:52:55AM +0100, Philipp Überbacher wrote:
> > Excerpts from S. Massy's message of 2011-01-18 21:05:40 +0100:
> > > On Tue, Jan 18, 2011 at 07:40:50PM +0100, Philipp Überbacher wrote:
> > > > So I guess what it really takes is a host that provides a good CLI-UI.
> > > Yes, or perhaps a host allowing interaction with the plugin through a
> > > TCP port like LinuxSampler does.
> > So that one can write a CLI-UI as well as another UI to control the
> > host? Similar to ecasounds net-eci?
> > I'm not sure that would work well for GUIs because some plugins give
> > fancy feedback in realtime which would need tighter coupling of backend
> > and GUI than TCP can provide, I think. In my opinion it's wiser to focus
> > on what's needed.
> Ah, yes, I had overlooked that.
> [...]
If we manage to write a really good host, then a second version or a fork
with a GUI could still be written, but I don't expect anything like
that. The host will need at the very least an rt-thread and a UI-thread
anyway, so some separation will be there in any case.
> > I've never seen a braille display, let alone used one, so it's hard for
> > me to imagine the braille-specific problems. I'll definitely look at
> Think of one line of (often) 40 characters which you move around on the
> screen. One of the most important factors is focus-tracking, i.e. whether
> the braille terminal application (brltty on Linux) can do a good job of
> following the cursor around.
I've read that braille displays have between 18 and 84 characters; is 40
the most common variant?
> > bristol. Can you provide me with examples of CLI-driven programs that
> > get it wrong and why they get it wrong?
> Well, as far as getting it dead wrong... I was recently interested in
> trying a mail client called "sup", but either there is something faulty
> with its use of the curses library or something is wrong with the Ruby
> implementation of curses; my display just won't track the
> highlighted message, making it next to unusable.
Funnily enough, this is the mail client I use :)
It certainly doesn't limit line length, but I think braille displays
have a way of dealing with this problem. I have no idea why it doesn't
work with sup, but sup has a bunch of problems anyway...
> But, coming back to audio software, I think it's more a matter of
> convenience/inconvenience, rather than right/wrong. Taking the example
> of Nama (which BTW is an absolutely awesome godsend to the text-based
> audio world), it uses a readline, command-driven interface, which works
> great for things like adding tracks, creating and assigning busses,
> adjusting pan and volume, and so forth. However, when it comes to
> fine-tuning the parameters for a compressor, this same approach can be
> tedious, because usually, even if one has a good idea of what the
> starting parameters should be, one still needs to fine-tune things by
> ear, and writing things like
> mfx GH 3 + 5
> mfx GH 5 - 0.2
> ...then realising you overcompensated and doing
> mfx GH 3 - 1.5
> ...can be tedious, compared to, say, using up/down arrow and then shift
> up/down to fine-tune parameter values. Which is why a vi-like interface
> allowing both a visual and a command mode is an excellent way of having
> one's cake and eating it. :)
I see your point. I know Nama, but I haven't really used the text
frontend, and the GUI isn't nearly as powerful.
> On the other hand, I remember, many years ago, someone was trying to
> develop a text front-end for ardour, but it was entirely keystroke-based
> and I found it very difficult to use compared to ecasound's interactive
> mode...
Also good to know, but I doubt it even builds these days.
> > What I'd be interested in is a reliable host that can be controlled
> > by using the keyboard. Fancy GUIs can't be controlled with the
> > keyboard in a comfortable way, CLI-programs can. If the program is
> > comfortable to use for the kings of CLI it's as good as it can get :)
> >
> > Maybe we can write it together, but mind you, while Rui calls himself
> > the uber-procrastinator I AM the uber-procrastinator (Julien, I'll sing
> > 'Enemy Aprils fool', hopefully before April). I'm also only a novice
> > programmer. However, I think we could do it if we take one step at a
> > time. You have the UI-expertise, we all have a little programming
> > expertise and I think we could ask more experienced programmers if we're
> > really stuck. So we have expertise and a goal, what remains is getting
> > it off the ground, effort and time.
> Sounds good to me. The fact that UI and DSP are indeed separate means
> that it's at least likely to be simpler than I feared it might be.
UI and DSP need to be at least in separate threads. I was about to
create a wiki for this project to collect ideas, when I realised that a
wiki is probably not very comfortable for blind users (probably better
than a forum), but I guess keeping it to email is better for now.
So, what I have so far:
The DSP part (the backend) needs to be rt-capable, use JACK for audio,
maybe MIDI at some point, and of course be capable of hosting LV2
plugins, including the extensions that make sense. The language will
need to be C or maybe C++.
The frontend could use ncurses, which is a C library but has bindings
for many languages. I have no idea yet whether it really is hard to
write UIs in C, but in general I'd prefer the native language over
bindings.
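Just to make the thread separation concrete, here is a bare-bones sketch
of what I mean: a JACK process callback as the rt-thread and an ncurses
loop as the UI-thread, sharing a single "gain" value. No LV2 yet, and a
real host would use a proper lock-free parameter queue instead of a
volatile float, so take this as a rough illustration only:

    /* Rough sketch: JACK passthrough (rt-thread) plus an ncurses UI loop.
     * Build roughly: cc sketch.c -o sketch -ljack -lncurses
     */
    #include <jack/jack.h>
    #include <ncurses.h>

    static jack_port_t *in_port, *out_port;
    static volatile float gain = 1.0f;  /* written by the UI, read by the rt-thread */

    /* rt-thread: runs in JACK's realtime context; must never block,
     * allocate memory or touch ncurses. */
    static int process(jack_nframes_t nframes, void *arg)
    {
        (void)arg;
        jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port,  nframes);
        jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
        float g = gain;
        for (jack_nframes_t i = 0; i < nframes; i++)
            out[i] = g * in[i];
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("cli-host-sketch", JackNullOption, NULL);
        if (client == NULL)
            return 1;

        in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput,  0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
        jack_set_process_callback(client, process, NULL);
        jack_activate(client);

        /* UI-thread: ordinary blocking ncurses loop, never touches audio buffers. */
        initscr();
        cbreak();
        noecho();
        keypad(stdscr, TRUE);

        int ch = 0;
        while (ch != 'q') {
            mvprintw(0, 0, "gain: %.2f   (up/down to change, q to quit)", (double)gain);
            clrtoeol();
            refresh();
            ch = getch();
            if (ch == KEY_UP)   gain += 0.05f;
            if (ch == KEY_DOWN) gain -= 0.05f;
        }

        endwin();
        jack_client_close(client);
        return 0;
    }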
Those are all just programming aspects, and I'm not familiar with any of
that, so it would take me quite some time to get anywhere at all.
The part that is probably even harder is figuring out what the program
should do and how the user should be able to control it. A program that
isn't a pleasure to use won't be used. I only have a few vague ideas so
far, and I won't share them just yet so your imagination can roam freely.
Regards,
Philipp