Hello,
Does anyone know of a good plugin that will generate subharmonics?
I would like to put a little more low frequency "oomph" into my bass
track. Preferably LADSPA, but VST would work, too.
Thanks for any help!
-TimH
Thank you, Mr. Hawaii, I forgot I had installed that, but looking at its man page,
it makes references to non-destructive editing. If I understand the concept, I
would think editing out portions of sound would certainly be destroying an
original. Meanwhile, if I run nama -t and a file name, I get the following
error:
Found config file: /home/chime/.namarc
YAML::Tiny found bad indenting in line ' consumer:' at
/usr/share/perl5/Audio/Nama/Assign.pm line 283.
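(A general note, not from the thread: YAML::Tiny is strict about indentation. Keys at the same level must share the same number of leading spaces, and tabs are not allowed anywhere in the indentation. A hypothetical sketch of a correctly indented block around a 'consumer:' key follows; the surrounding key names are illustrative assumptions, not taken from a real .namarc.)

```yaml
# hypothetical .namarc fragment; 'consumer' is the key from the error,
# the sibling keys are made up for illustration
alsa:
  consumer: default      # indented with spaces only, same depth as siblings
  frequency: 44100
```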
So please, how do I fix this? And would Nama be an interactive editor? Thanks in
advance
Hart
Sorry, I goofed this the first time.
Hi folks,
> there is a Yamaha model of which I am thinking, but I know it is not the only
> one.
> Thoughts about the efficiency of using such in Linux?
> since many connect via USB for example, does it make transfer of files
> easier?
> Any disadvantages?
> thanks,
> Karen
Research tells me that QSynth seems to be the only currently
available/usable GUI for FluidSynth, but I get big xruns whenever I try to
use it. FluidSynth itself doesn't cause me problems (I know because I'm able
to use the FluidSynth-DSSI plugin fine in Rosegarden etc). The problem is
that I want to use FluidSynth with Ardour3, but Ardour3 doesn't support DSSI
plugins yet. So the only solution I have is to find a standalone interface
for FluidSynth and then to link up using Jack. I looked at the old GUI
'FluidGUI' but it seems to be so old that it won't properly install on
recent versions of Ubuntu.
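(For what it's worth, one possible workaround: FluidSynth's own command-line front end can run standalone against Jack, no GUI needed. A sketch, where the soundfont path is a placeholder based on a common Ubuntu location:)

```shell
# start FluidSynth standalone with the Jack audio driver and an ALSA
# sequencer MIDI input; substitute your own soundfont path
fluidsynth -a jack -m alsa_seq /usr/share/sounds/sf2/FluidR3_GM.sf2
```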
So does anyone know of:
1) A GUI for FluidSynth other than QSynth and FluidGUI?... or
2) An application other than the above 2 which would allow me to load
soundfonts?
Thanks in advance.
- Dan
I don't think I've ever posted this on here.
Inspired and conceived in the early 1960s by the sight of a highly detailed
model sailing ship made entirely from clear drawn and blown glass.
Composed in the 1980s when I first had access to a half-decent keyboard
First recorded in the 1990s using an Acorn Archimedes and a Yamaha SY22
Transferred in the 2000s to Rosegarden with SY35, ZynAddSubFX, Hydrogen
Re-imaged a couple of years ago with Rosegarden and Yoshimi only
Recently added an extra counter melody and a bit of polish
http://www.musically.me.uk/music/The_Crystal_Ship.ogg
Enjoy :)
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hello dear list,
Here is a little tune written and rendered with MuseScore for the piano.
It is called Novembre (i.e. November in English) and has a simple
crescendo progression with a few harmonic changes. Enjoy at:
http://brouits.free.fr/music/various/Novembre.ogg
Have a good and peaceful weekend,
- Benoît
Hello Linux Audio Users,
I'm working on a program to drive LED lights based on music playback on a
Linux system. My application creates Jack input ports for frequency
analysis which I'm currently connecting to the Jack monitor ports, this
frequency analysis info is then used for controlling some RGB LED lights.
The application I'm currently using for music playback is Clementine, which
has a Jack sink option.
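(The frequency-analysis-to-RGB step described above can be sketched in pure Python. This is my own naive DFT band-energy function, not Element Green's code; a real implementation would run an FFT per Jack buffer, but the mapping idea is the same.)

```python
import math

def band_energy(samples, rate, f_lo, f_hi):
    """Sum of DFT bin magnitudes whose centre frequency lies in [f_lo, f_hi).

    Naive O(n^2) DFT for clarity; a real analyser would use an FFT.
    """
    n = len(samples)
    total = 0.0
    for k in range(n // 2 + 1):
        freq = k * rate / n
        if f_lo <= freq < f_hi:
            re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += math.hypot(re, im)
    return total

def rgb_from_block(samples, rate):
    """Map bass/mid/high band energies to an (R, G, B) triple, 0..255."""
    r = band_energy(samples, rate, 20, 250)       # bass -> red
    g = band_energy(samples, rate, 250, 2000)     # mids -> green
    b = band_energy(samples, rate, 2000, rate / 2)  # highs -> blue
    peak = max(r, g, b, 1e-9)
    return tuple(int(255 * v / peak) for v in (r, g, b))
```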
All this works fine. However, the frequency analysis of the input audio
incurs unwanted latency, so there is a slight delay between changes in the
music and the lighting changes. What I want to do is delay the audio playback
to match the full system latency, so that the lighting updates track the
music as closely to real time as possible.
My current thinking is to route the incoming Jack audio in my application
back to the Jack playback ports, with a fixed delay buffer.
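(The fixed-delay buffer mentioned above can be sketched in a few lines of pure Python; the class name and per-block handling here are my own, not from any existing Jack client. A real client would do this inside the process callback on float sample buffers.)

```python
from collections import deque

class FixedDelay:
    """Delay a stream of samples by a fixed number of frames.

    Output at time t is the input from t - delay_frames; the first
    delay_frames outputs are silence (zeros).
    """
    def __init__(self, delay_frames):
        # pre-fill with zeros so early reads return silence
        self._buf = deque([0.0] * delay_frames)

    def process(self, samples):
        out = []
        for s in samples:
            self._buf.append(s)      # newest sample in at the back
            out.append(self._buf.popleft())  # oldest sample out at the front
        return out
```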
The problem is getting applications, such as Clementine, to connect to my
application's ports instead of the physical output ports. My understanding
of Clementine is that it is using gstreamer, and thus the jackaudiosink
plugin. This plugin appears to have some properties for changing the
default ports it connects to ("connect" and "port-pattern"). Clementine
does not seem to offer a way to specify what ports it connects to though.
What complicates it more is that the ports are disconnected/recreated for
each song.
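(For reference, the effect of those two jackaudiosink properties can be seen directly with gst-launch. This is a sketch of the general idea only, assuming a running Jack server and a hypothetical client whose port names contain "ledapp":)

```shell
# ask jackaudiosink to auto-connect to ports matching "ledapp"
# instead of the physical outputs
gst-launch-1.0 audiotestsrc ! audioconvert ! \
    jackaudiosink connect=auto port-pattern=ledapp
```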
I suppose I could look into modifying the Clementine code to allow for a
value for the "port-pattern" property to be specified, but I thought
someone on this list might have some better ideas.
Some other thoughts I had:
Is it possible to make an application's ports the default system-wide Jack
playback ports? From what I have seen of code that auto-connects to the
playback ports, it looks for the first input ports marked as Physical.
Is it possible to change the default system-wide "port-pattern" setting for
jackaudiosink?
Is it possible to modify jackaudiosink settings on a per-application basis
(say with an environment variable or config file) without having to modify
the client program's source?
Thanks in advance for any help on this and cheers!
Element Green
Hello,
This one dates from 2006. The date is easy to remember since it is in
Wikipedia. September 13th 2006. The Dawson College shooting in
Montreal. It could have been any other shooting, at any other place. That
evening I made this and almost uploaded it to the killer's web page,
which was still active at that time, to share the feelings with anyone
touched by this. It is about sadness, expressed against a busy
backdrop of events.
This is entirely done using the ZynAddSubFX synthesizer. The sequences
are made of 1 short clip each copy/pasted for the duration of the 1:48
piece. The resonance line and chords were played over. What was done
this month was to restore the sequence of clips from an old Ardour
session that would not load as it was, by copy/pasting and aligning
each clip carefully one after the other. Then Robin Gareus' 4-band EQ
was applied to all tracks including master. A touch of reverb was
added, as well as echo in one place. The original piece carried on
unchanged from the start. This new version has a break near the
end that surfaces what would be the expression of sad emptiness before
the 'business' restarts. Automation was used on the EQ and echo, as well
as here and there for slight volume control.
https://soundcloud.com/nominal6/killvidegill
I have a PC with an AMD APU chip, and I'm trying to send sound to my
Audiolab 8200AP processor over HDMI.
The system is 'Ubuntu Studio 15.10'.
I believe the hardware works - I can boot into Windows 10 and send audio
from jriver.
With Linux, however:
- aplay -L does list the hdmi:Generic device, and pulse is using it
- Pulse Audio volume control does detect whether the HDMI cable is
plugged in at both ends
- speaker-test -c 8 -D pulse loops continuously and thinks it's playing
- Pulse Audio volume control shows the ALSA plug-in [speaker-test] stream
However, the Audiolab stubbornly reports the channel as 'silent' on its
display.
Then I used 'Settings/Display' to select the 'IAG 6"' display, chose
'use this display', and mirrored the displays - the computer is actually
displaying on an attached VGA monitor.
And that made things start to work.
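(For reference, the equivalent of that Settings/Display step can usually be done from a shell with xrandr. The output names vary by driver, so HDMI-0 and VGA-0 below are assumptions; check the actual names with `xrandr -q` first:)

```shell
# list the connected outputs and their names
xrandr -q
# enable the HDMI output, mirrored with the VGA display
xrandr --output HDMI-0 --auto --same-as VGA-0
```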
Ideally I would only access the system over xrdp - but you can't enable
the extra monitor without being on the console.
Quite possibly I would not have had the issue if either the display was
running through HDMI or the VGA monitor was not plugged in - but it is
worth bearing in mind if you are having issues.
I guess it would be handy if the display on the volume control could
indicate 'plugged in but output is inactive' - or (better still) if the
HDMI output were active whenever either the display or the sound system
wants to use it, with the display just an unchanged or black background.
Hi all
Advanced Gtk+ Sequencer release 0.6.23 is definitely not to be
missed. Fixes done so far:
* fixed allocation of AgsDial
* fixed focus in ags_dial.c
* fixed _File mnemonic in menubar
* fixed recover of GSequencer project as doing properties
* fixed SIGINT while reading XML files including AgsRecallLadspa
See the ChangeLog for an in-depth view. Also available on http://gsequencer.org is an
empty project with one drum and two synths connected by a mixer with an
output panel. So you just have to fill in the gaps.
Furthermore, the mixer uses LADSPA CAPS plugins:
* drum -> 10 band equaliser
* matrix0 -> Mono Phaser
* matrix1 -> Noise Gate
Would be great to hear from you ...
$ wget -c http://gsequencer.org/ags_drum_and_synth.xml
$ gsequencer --filename ags_drum_and_synth.xml
cheers,
Joël