Hey hey,
the classification as a ballad was probably unnecessary, given that title. Here
it is, with some (remarkable) remarks below:
http://juliencoder.de/nama/kisses_dont_lie.ogg
http://juliencoder.de/nama/kisses_dont_lie.mp3
This is only an instrumental; should you be female with a good, strong voice
and willing to give it a try, please get in touch.
This piece actually came to me in a dream. Well, I dreamt its chorus. It was
so strong in my mind when I awoke that I wanted to make it the perfect example
of a powerful love ballad. The dream title had been "Chocolate don't lie"; as
sweet as that might be, I decided the kisses would make more sense. :) The
rest of the song came easily, perhaps partly remembered from the dream, partly
obvious (to me), seeing that I wanted to create my personal cliché or essence
of that type of song.
Technically, this song uses Yoshimi and LinuxSampler with a self-compiled
acoustic drumkit built from parts of the Salamander kit, the AVLinux kit and
one or two extras. Also in LinuxSampler: a proprietary acoustic piano and SSO
strings.
This was further augmented by additional hardware synths of all kinds. The
electric piano is the real DX7 full-tines on a DX7. :)
Of course it needed tons of LADSPA and LV2 plugins, since everything was
recorded completely dry: Fons Adriaensen's G2Verb, the TAP plate, Invada and
Calf mixing tools, and Fons' great four-band parametric EQ.
Guitars were played and recorded by Joy Bausch on a non-Linux platform. Very
many thanks to him for such a wonderful and to-the-point interpretation of my
intentions.
If you have comments or feedback - particularly nice ones :) - please let me
know. But also don't hesitate to give the other kind of comment.
Best wishes and enjoy,
Jeanette
--------
* Website: http://juliencoder.de - for summer is a state of sound
* SoundCloud: https://soundcloud.com/jeanette_c
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* GitHub: https://github.com/jeanette-c
* Twitter: https://twitter.com/jeanette_c_s
I'm so curious, what do you think of me <3
(Britney Spears)
Hello.
MOTU has released new firmware for some AVB cards.
https://motu.com/proaudio/index.html#avb-download-additional-resources
Tested on a 624 AVB with GNU/Linux Debian 9.4.
The install went without trouble - the embedded web interface is great,
making it easy to control the card and run updates from GNU/Linux.
The previous firmware on the card was v1.3.1+81 (release date 2017-06-21);
I never installed v1.3.2+102 (release date 2017-12-13).
Some changes in the interface: a touch control menu has been added, which I
do not use.
So far, zero sound/Linux troubles here. Tested playback and recording from
the analog, guitar and mic inputs.
Changelog:
v1.3.4+202 (Release Date 2018-06-20)
Introduces new Touch Console™
Fixes an intermittent issue that could cause audio glitch on USB input
Improves audio startup timing
-
All the best.
Hello all,
The jacktools packages (to be presented at LAC2018) are available now at
<http://kokkinizita.linuxaudio.org/linuxaudio/downloads/index.html>
You will need (from the libraries section)
- zita-convolver
- zita-resampler
- zita-jclient
- zita-audiotools
- zita-jacktools
and install them in that order. You will of course also need python, numpy,
matplotlib, fftw3, ...
Comments and feedback on LAU or LAD.
Greetings from sunny Berlin,
--
FA
On Sun, June 17, 2018 3:16 am, Benny Alexandar wrote:
> The user who is listening to it should not notice the switching, and this
> switching happens when the quality of one audio is degraded compared to
> other.
Is this like the digital radio schemes where a digital program and an analog FM
signal are both broadcast, and if the reception changes such that the
digital signal cannot be received the audio is switched to the analog
signal?
> Yes delay estimation is required as the delay is not known upfront.
Is it acceptable to require user intervention, e.g. adjusting the delay until
it sounds correct, or are you looking for automatic delay estimation? If
you want automatic delay estimation it is unlikely you will find anything
off the shelf that does what you want. You would need to compute the
cross-correlation of the two signals as a function of delay and find the
delay at which they are most correlated.
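As a rough illustration (just a numpy sketch; the sample rate, signal length
and delay below are invented for the example, not taken from your setup), the
peak of the cross-correlation gives the delay estimate:

import numpy as np

def estimate_delay(ref, delayed, max_lag):
    """Return how many samples 'delayed' lags behind 'ref' (positive means
    later), taken from the peak of their cross-correlation, searched over
    lags -max_lag..+max_lag."""
    ref = ref - np.mean(ref)
    delayed = delayed - np.mean(delayed)
    corr = np.correlate(delayed, ref, mode="full")
    lags = np.arange(-(len(ref) - 1), len(delayed))
    keep = (lags >= -max_lag) & (lags <= max_lag)
    return lags[keep][np.argmax(corr[keep])]

# Self-test: 'b' is a copy of 'a' delayed by 123 samples.
rng = np.random.default_rng(0)
a = rng.standard_normal(8000)
b = np.concatenate([np.zeros(123), a])[:8000]
print(estimate_delay(a, b, max_lag=500))   # expect 123

For long recordings the direct form above is slow; an FFT-based correlation
(e.g. scipy.signal.correlate with method="fft") would be the practical
choice. Once the delay is known, aligning the streams is just a fixed sample
offset.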
> In addition to re-sampling stretching also required.
You have not adequately explained why either resampling or stretching
would be required. If the two streams are from the same source but one
path has a delay, then presumably a fixed delay would be all you need.
--
Chris Caudle
Hi all,
With xoscope, each time I add a trace, it disconnects its input ports
and reconnects to some default. The strange thing is that it seems to be
ALSA doing that. I start xoscope with its default input connection,
which is the ALSA default. As JACK is running all the time, I defined the
ALSA default to use the JACK ALSA plugin in my asoundrc:
pcm.rawjack {
    type jack
    playback_ports {
        0 system:playback_1
        1 system:playback_2
    }
    capture_ports {
        0 system:capture_1
        1 system:capture_2
    }
}
pcm.jack {
    type plug
    slave { pcm "rawjack" }
    hint {
        description "JACK Audio Connection Kit"
    }
}
pcm.!default {
    type plug
    slave { pcm "rawjack" }
}
ctl.!default {
    type plug
    slave { pcm "rawjack" }
}
pcm.dsp {
    type plug
    slave { pcm "rawjack" }
}
pcm.dsp0 {
    type plug
    slave { pcm "rawjack" }
}
pcm.dsp1 {
    type plug
    slave { pcm "rawjack" }
}
# for ameter:
pcm_scope.ameter {
    type ameter
}
pcm_scope_type.ameter {
    lib /usr/lib64/libameter.so.0.0.0
}
pcm.ameter {
    type meter
    slave.pcm 'hw:4,0' # can be hw or hw:0,1 etc...
    scopes.0 ameter
}
pcm.dsp4 ameter
##
At startup, xoscope gets connected to system:playback_1 and
system:playback_2.
If I disconnect it and reconnect it to some other audio source using
Catia, xoscope works fine, but as soon as I do various operations, including
adding a trace, it disconnects and reconnects as explained. Is there
something I can do to avoid that, so that xoscope stays connected the way
I connect it with Catia?
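One workaround I am considering (only a sketch, assuming the JACK-Client
Python module; the xoscope port names below are placeholders and would have
to be checked with jack_lsp) is a small watchdog that simply re-makes the
wanted connections whenever they go missing:

#!/usr/bin/env python3
# Watchdog sketch: keep xoscope's JACK inputs connected to the source I
# chose, re-connecting whenever the connection disappears (for example
# after adding a trace).  Port names are placeholders - check jack_lsp.
import time
import jack

WANTED = {
    "alsa-jack.xoscope:in_1": "system:capture_1",   # placeholder names
    "alsa-jack.xoscope:in_2": "system:capture_2",
}

client = jack.Client("xoscope_watchdog")
client.activate()

try:
    while True:
        for dst, src in WANTED.items():
            try:
                dst_port = client.get_port_by_name(dst)
            except jack.JackError:
                continue        # xoscope (or its jack plugin) not up yet
            already = [p.name for p in client.get_all_connections(dst_port)]
            if src not in already:
                try:
                    client.connect(src, dst)
                except jack.JackError:
                    pass        # source port not there yet, retry later
        time.sleep(1.0)
except KeyboardInterrupt:
    client.deactivate()
    client.close()

But I would still prefer xoscope (or the ALSA jack plugin) to just leave the
connections alone.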
Cheers,
Dominique
--
If you have a problem and you are not doing anything to fix it, you are
at the heart of the problem.
Hey hey,
I know this is off-topic, but from remarks made by several members over time,
I gather that quite a few people here enjoy music beyond the mainstream.
The offerings are so generous these days that it can be very hard to find
one's own new music. So I'm looking for a podcast or YouTube show that
recommends new synth-based music. What am I looking for, exactly?
I'm looking for something focused on songs - closed units of three to ten
minutes, with or without vocals, hopefully with nice, independent sound
design. I'm looking for melodic and/or harmonic music. So not your average
modular-synth stuff
nor typically highly experimental sound art, as our community would see it -
from time to time - from Csound and similar. Neither am I looking for
no-dynamics-at-all dance music. I like that too, but it's easy to find. :)
If I were to mention artists, I
could only think of Imogen Heap and this rather obscure piece by Rod Morris,
who himself appears to be a little obscure:
youtu.be/tJr_LXxxOVI
On the long'ish side, but it has interesting sounds and a strong atmosphere.
Any recommendations? Perhaps genre names to particularly take into account? Or
other search directions/starting points? I have tried but failed so far.
Best wishes and sorry for the OT,
Jeanette
--------
* Website: http://juliencoder.de - for summer is a state of sound
* SoundCloud: https://soundcloud.com/jeanette_c
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* GitHub: https://github.com/jeanette-c
* Twitter: https://twitter.com/jeanette_c_s
You should take me as I am
'Cause I can promise you
Baby, what you see is what you get <3
(Britney Spears)
Hey hey,
I have an issue - again with Yoshimi and the commandline MIDI sequencer
Midish. Yoshimi will not receive any notes from Midish, though the connection
is made, as seen by a message on the Yoshimi commandline.
Midish uses the ALSA Sequencer API, but only offers the port when it is
running.
Midish does work fine with LinuxSampler, hardware and setBfree. I don't know
which of the two, Yoshimi or Midish, does something special to prevent them
from working together.
Will, if you'd like to test it: I have seen that Midish is available as a
package in Debian and Ubuntu. On Arch it is in the AUR.
Here's the simplest test:
bash prompt # rmidish
[within midish]
dnew 0 "name of your keyboard under aconnect -li" ro
dnew 1 "yoshimi" wo
inew keyboard {0 0}
onew yosh {1 0}
tnew test_track
fnew test_filt
fmap {any keyboard} {any yosh}
tsetf test_filt
r
[play notes on your keyboard]
s
[stops recording]
p
[should play the notes back]
s
exit
[press enter twice, if Midish doesn't quit immediately]
Sorry, it's a long list of commands, but you should be able to copy them
verbatim, except for the ALSA sequencer name of your input device.
Any idea what might be happening? I can see the connects and disconnects, but
nothing more.
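To narrow it down, one check I can think of (a rough sketch, assuming the
mido and python-rtmidi packages; matching the port name on "yoshimi" is an
assumption, see mido.get_output_names() for the real names) is to send
Yoshimi a test note from a different ALSA sequencer client. If that note
sounds, Yoshimi accepts external sequencer events in general, and the
problem sits somewhere between Midish and Yoshimi:

import time
import mido   # python-rtmidi backend, i.e. the ALSA sequencer on Linux

# Find an output port whose name mentions Yoshimi (assumed naming).
names = [n for n in mido.get_output_names() if "yoshimi" in n.lower()]
if not names:
    raise SystemExit("No Yoshimi port found among: %s" % mido.get_output_names())

with mido.open_output(names[0]) as port:
    port.send(mido.Message('note_on', channel=0, note=60, velocity=100))
    time.sleep(1.0)
    port.send(mido.Message('note_off', channel=0, note=60, velocity=0))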
Best wishes,
Jeanette
--------
* Website: http://juliencoder.de - for summer is a state of sound
* SoundCloud: https://soundcloud.com/jeanette_c
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* GitHub: https://github.com/jeanette-c
* Twitter: https://twitter.com/jeanette_c_s
... About some useless information,
Supposed to fire my imagination <3
(Britney Spears)
Hello,
Here's a short piece, barely over 2 minutes, featuring a bunch of
rhinoceros hopping to safety, fleeing from a certain danger.
I think they made it.
https://soundcloud.com/nominal6/escape-of-the-hopping-rhinos
Comments welcome!
Cheers.