Greetings,
This piece is a longer one that develops a bit slowly. I made up a
tag for it: AcousticWaveRock. The instruments are acoustic guitar
and acoustic bass guitar, to which a few synth sounds were added, as
well as drums and percussion sampled from a Korg Microstation
JazzBrush kit.
What's new in this project, as I progressively learn, is the use of
what is called 'multing' tracks, i.e. taking bits of a track and
copying/moving them to another track. This was used to move some
guitar parts to another track where a delay, for instance, was
applied. It could have been done with automation, although using
another track is simpler IMHO, especially when a different EQ is
chosen: no automation curves to draw.
I also recently got a headphone amp (Behringer HA400) that's connected
to JACK's playback outputs 3 and 4 (1 and 2 remain connected to
the M-Audio Studiophile speakers), the audio card being a 1010LT. It
is now possible to listen in stereo! Previously the headphone jack
on the M-Audio speakers was used, and it did not give much of a
stereo effect. The switch from one set of playback devices to the
other is done using Ardour's monitor output choices.
Also, two mics are now used for each of the acoustic instruments,
each mic recording to its own track: an AT2020 in omni mode, no cut,
and an M-Audio Pulsar (not the Pulsar II).
This piece 'goes forward' in the sense that themes only occur
once. When they occur they are repeated, but they do not come back
later. The supporting structure stays the same, though.
Cheers.
https://soundcloud.com/nominal6/c2015-19a
Hello,
I have written a generative composition with a Chomsky grammar parsed
by an *old* program called gramophone2. Unfortunately, my test
compositions are multitrack, but gramophone2 generates MIDI type 0
files (1 merged track).
So all the MIDI sequencers I tried only import track #1, while there
are 3 or 4 tracks in the MIDI file with different speeds (I'm playing
with yet another phasing music). It may be a gramophone2 bug, but
before diving into the code (if I find it), how could I convert the
generated MIDI type 0 into type 1 so that the track speeds can be
aligned?
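For what it's worth, the core of such a conversion is just regrouping the merged type 0 event stream by channel and recomputing per-track delta times. Below is a minimal Python sketch; the (time, channel, event) tuples are a simplification I made up for illustration, and real MIDI file I/O would need a library such as mido (whose MidiFile class can read and write both types):

```python
def split_type0(events):
    """Regroup a merged type 0 event stream into one track per channel.

    events: list of (absolute_time, channel, event) tuples, sorted by time,
            as they would appear in the single merged track of a type 0 file.
    Returns a dict {channel: [(delta_time, event), ...]}, i.e. one
    type 1 style track per channel with per-track delta times.
    """
    tracks = {}
    last_time = {}  # last absolute time seen per channel
    for abs_time, channel, event in events:
        # Delta is relative to the previous event on the SAME channel,
        # which is what makes each resulting track self-contained.
        delta = abs_time - last_time.get(channel, 0)
        last_time[channel] = abs_time
        tracks.setdefault(channel, []).append((delta, event))
    return tracks
```

With a real library the same idea applies: iterate the merged track once, route each message to a per-channel track, and fix up the delta times as above.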
I hope the question is well asked and not off topic.
- Benoît
On Mon, Feb 15, 2016 at 5:52 PM, Takashi Sakamoto <o-takashi(a)sakamocchi.jp>
wrote:
> Why did you add additional comments to the fixed-released bug? It's not
> within my intension to mention about the bug. I think your behaviour is
> unwelcome to developers. You should have used button of 'This bug affects
> you' or something similar...
>
I'm not sure what the point of your lecture was. But thank you for
initially pointing me to the bug report where my "inappropriate"
comments got this bug correctly resolved. Furthermore, this hopefully
means people updating to 16.04 LTS in the near future will still have
use of their Envy24 sound cards. The end result is that the problem
was acknowledged, a new build was done, and the issue was resolved.
See
https://bugs.launchpad.net/ubuntu/xenial/+source/mudita24/+bug/1534647/comm…
Specifically, I can now confirm working ICE1712 cards, and associated
mudita24(1) application, on
4.4.0-6-generic #21-Ubuntu SMP Tue Feb 16 20:32:27 UTC (*).
Working Audio:
card 0: DMX6Fire [TerraTec DMX6Fire], device 0: ICE1712 multi [ICE1712 multi]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 3: M66 [M Audio Delta 66], device 0: ICE1712 multi [ICE1712 multi]
Subdevices: 1/1
Subdevice #0: subdevice #0
Working Midi:
16:0 TerraTec DMX6Fire MIDI-Front DMX6fire 0
16:32 TerraTec DMX6Fire Wavetable DMX6fire 0
(*): 4.4.0-6.21 kernel retrieved and manually installed onto Ubuntu
14.04LTS Skylake-based system:
https://launchpad.net/ubuntu/xenial/amd64/linux-headers-4.4.0-6/4.4.0-6.21
https://launchpad.net/ubuntu/xenial/amd64/linux-headers-4.4.0-6-generic/4.4…
https://launchpad.net/ubuntu/xenial/amd64/linux-image-4.4.0-6-generic/4.4.0…
https://launchpad.net/ubuntu/xenial/amd64/linux-image-extra-4.4.0-6-generic…
> Well, if you improve something you use, it's better to follow each rules of
> development.
>
> In your case, at first, you should watch release schedule of Ubuntu 16.04.
> https://wiki.ubuntu.com/XenialXerus/ReleaseSchedule
>
> You can see date of KernelFreeze is Apr. 7th 2016. In my opinion, till
> then, what you're expected is:
> - Read wiki page about Ubuntu kernel development cycle and understand it
> - https://wiki.ubuntu.com/Kernel
> - Test with daily build image
> - http://cdimages.ubuntu.com/daily-live/current/
> - When you still find your bug, seek it in launchpad.net
> - https://launchpad.net/bugs
> - When you find similar bug, watch it by subscibing or something like it.
> - When you cannot find similar bugs, register it as new one.
> - but you should keep more days to investigate duplicated bug
>
> At least, additional comments without enough consideration seems not to be
> helpful to yourself, against your expectation. And you should not test with
> packages in PPA. PPA is just Private Package Archive. The packages in PPA
> do not always go for official release.
>
> Although I said some lectures, I wish this bug will be fixed in Ubuntu
> 16.04 release.
>
Instead of wishing, I'd rather make things happen and get things fixed.
--Niels.
http://www.nielsmayer.com
Hi,
after an off-list correspondence with Hermann, I noticed something
that might explain the difficulties those new to jconvolver have had.
[rocketmouse@archlinux ~]$ pacman -Ql jconvolver
jconvolver /usr/
jconvolver /usr/bin/
jconvolver /usr/bin/fconvolver
jconvolver /usr/bin/jconvolver
jconvolver /usr/bin/makemulti
"Ralf_Mardorf commented on 2016-02-20 16:22
Hi,
as it turns out from two LAU threads
http://lists.linuxaudio.org/pipermail/linux-audio-user/2016-February/104030…
http://lists.linuxaudio.org/pipermail/linux-audio-user/2016-February/104075…
some users experienced difficulties using jconvolver.
I couldn't understand why, since I've got examples of how to do this,
but then I noticed that the AUR package doesn't install the examples
from the tarball.
Please consider adding those examples.
Regards,
Ralf" - https://aur.archlinux.org/packages/jconvolver/#news
This might be an issue for other distros' packages too.
Regards,
Ralf
--
This message was sent from my Android mobile phone using WEB.DE Mail.
Hermann Meyer <brummer-(a)web.de> schrieb:
What you're talking about is so-called multi-file support, and true, that isn't implemented, but that doesn't mean it is not stereo. Just use a stereo IR file and you have stereo reverb. Note that the majority of reverb files come as stereo files.
Jc-gui may be useful for getting some working config files for jconvolver; they could give you a hint how to write them, and you could start by editing them to create your own, with multi-file support, when you need it.
Ralf Mardorf <ralf.mardorf(a)alice-dsl.net> schrieb:
On Sat, 20 Feb 2016 08:54:40 +0100, Hermann Meyer wrote:
>@david
>There is the old jc-gui, which generate config files for jconvolver.
>I didn't know if it still works, I ain't use it any-more, but, you
>could try it if you wish.
>
>https://github.com/zzzzrrr/jcgui
One issue I already know from at least the old guitarix (I haven't
tested whether it's still an issue in the new guitarix) also affects
$ Jc_Gui -v
Jc_Gui version 0.8
It's _not_ really stereo, since there seems to be no way to use a
left.wav and a right.wav. So assuming you'd like to use jconvolver as
a reverb, it will not sound as fantastic as it does when you write the
config with a text editor.
_____________________________________________
Linux-audio-user mailing list
Linux-audio-user(a)lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-user
Hi,
if I choose "Audio System: JACK" I can start a new session, but if I
choose "Audio System: ALSA", I get "Failed to open audio device".
Perhaps I'm making a mistake.
There's no special reason that I want to use ALSA instead of JACK; I
just want to test it. I never tried this before, with any other
version of Ardour.
$ pacman -Q ardour
ardour 4.7-1
$ grep jack /var/cache/aur/current/ardour-4.7-1-PKGBUILD
--with-backends="jack,alsa" \
--libjack=weak \
$ amidi -l;aplay -l;arecord -l
Dir Device Name
IO hw:0,0 HDSPMx579bcc MIDI 1
**** List of PLAYBACK Hardware Devices ****
card 0: HDSPMx579bcc [RME AIO_579bcc], device 0: RME AIO [RME AIO]
Subdevices: 1/1
Subdevice #0: subdevice #0
**** List of CAPTURE Hardware Devices ****
card 0: HDSPMx579bcc [RME AIO_579bcc], device 0: RME AIO [RME AIO]
Subdevices: 1/1
Subdevice #0: subdevice #0
Regards,
Ralf
Hello,
I'm new to all of this.
I'm trying to do FIR convolution on a Raspberry Pi 2.
I have succeeded in getting Jack2 working (playback only, no full duplex).
My ALSA player can connect to Jack2, and Jack2 to the hw I2S sound card.
Fine.
Now, this is what I'm trying to achieve:
Start Jack2 (with jconvolver as input and the I2S sound card as output)
Start jconvolver and connect it to Jack2
Start Alsaplayer and connect it to jconvolver as the audio input
Here is my asound.conf
# convert alsa API to jack API
# use it with:
# % aplay foo.wav
# pcm type jack
pcm.rawjack {
    type jack
    playback_ports {
        0 system:jconv_1
        1 system:jconv_2
    }
    capture_ports {
        0 system:capture_1
        1 system:capture_2
    }
}
# jackplug
pcm.jack {
    type plug
    slave { pcm "rawjack" }
    hint {
        description "JACK Audio Connection Kit"
    }
}
# use the following peripheral by default with alsa:
pcm.!default {
    type plug
    slave { pcm "rawjack" }
}
Here is my jconvolver config file: jconvolver filter-44100.conf
# Replace by whatever required...
#
/cd /root/folve/filter
#
#
#               in  out  partition  maxsize  density
# --------------------------------------------------------
/convolver/new   2    2       256    204800      0.5
#
#
#             num  port name    connect to
# -----------------------------------------------
/input/name    1   jconvolver
/input/name    2   jconvolver
#
/output/name   1   jconv_1
/output/name   2   jconv_2
#
#
#               in  out  partition  maxsize
# ---------------------------------------------------------------
/convolver/new   2    2      1024     65536
#              in  out  gain  delay  offset  length  chan  file
# --------------------------------------------------------------------------
/impulse/read   1    1  0.75      0       0       0     1  T-Monacor_SPH30X.wav
/impulse/read   2    2  0.75      0       0       0     1  T-Monacor_SPH30X.wav
When I start jconvolver, I get the following error:
Can't initialise convolution engine.
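In case it helps with debugging: if I read the jconvolver examples correctly, a config file normally defines a single /convolver/new section, followed by the /input/name, /output/name and /impulse/read lines that belong to it. A minimal stereo sketch (the path and IR file names below are placeholders, not real files) would look like:

```
# minimal stereo sketch -- path and IR file names are placeholders
/cd /path/to/impulses
#               in  out  partition  maxsize
/convolver/new   2    2       256    204800
#
/input/name    1   input.L
/input/name    2   input.R
/output/name   1   output.L
/output/name   2   output.R
#              in  out  gain  delay  offset  length  chan  file
/impulse/read   1    1   0.5      0       0       0     1  stereo-ir.wav
/impulse/read   2    2   0.5      0       0       0     2  stereo-ir.wav
```

Comparing your config against a structure like this might narrow down why the engine refuses to initialise.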
Thank you all for any tweak,
Jean
--
View this message in context: http://linux-audio.4202.n7.nabble.com/Jconvolver-on-Raspberry-Pi-tp98952.ht…
Sent from the linux-audio-user mailing list archive at Nabble.com.
Dear developers and users,
the "Open Source Audio Meeting Cologne" takes places monthly since June
2014 in Cologne, Germany.
It is a community meeting: linux and open source audio enthusiasts,
user, musicians and developers connect, share, discuss and help each other.
But the meeting is also a monthly "mini linux audio conference". While
not really a formal conference we have talks and demonstrations each
time that deal with musical, technical or scientific aspects of
everything music and audio with free and open source software and hardware.
Regular attendance is close to 10 people for regular meetings and more
for special meetings, for example when we have invited speakers and
"guest-stars".
I would like to invite any of you to be such a guest star. If you are
in the area (or willing to make a trip) it would be fantastic to have
you here and give us a talk or demonstration about YOUR topic. Be it
your music, your software or anything that you work with or on.
You can answer me in private or publicly, your choice.
If you are interested here is some more condensed information:
- The time frame for a talk or demo is 30 minutes to 2 hours.
- Dates for 2016 (all Wednesdays, all 7pm): March 16th, April 20th,
May 18th, June 15th, July 20th, August 17th, September 21st,
October 19th, November 16th
- The language can be English or German
- The place is Heliostrasse 6a, 50825 Cologne, Germany; it has very
good public transportation nearby and is therefore easy to get to.
- While we have no money to offer, you can get a place to sleep for
the night, and food.
- We provide the option to record your talk on video and upload it
(or simply send it to you, if you prefer)
Website: http://cologne.linuxaudio.org
It would be fantastic to hear from you!
Yours,
Nils Gey
I do not know much about SoundCloud. I know how to upload, and I used
to know how to enable download of files (I'll have to recheck what Set
and/or others have written here about this :)
For instance, I do not know the context much: how people scan for
files, how people try to sell stuff like services (I have the
impression that some of the attention a file may gather has nothing to
do with the music).
So on my latest upload someone has put a picture/comment on the track,
to the effect that 'this is a nice picture where I would like to be'.
When NOT logged into the account I clicked on this to see what it was
about, then I clicked on the picture itself. At that point Konqueror
froze completely and the desktop went a bit jerky, so I killed -9
konqueror. I did not pursue any investigation as to why it happened.
Just to let you know. It might turn out that everyone knows much more
about SoundCloud than I do and is aware of possible mishaps.
Cheers.
Hello,
Still learning more about Ardour, and about recording and mixing in
general...
Ardour has recently added the capability for the monitor section to
host plugins just like any regular track. What would be the use cases
for wanting processing that is specific to the monitor path?
Cheers.