Does anybody know of a tool that I could feed audio into and have it output an "interpreted" video? Not necessarily a "waveform" of the audio, more something abstract, like an acid-tripped audio fractal or something.
Thanks!
I've just pushed my latest work to GitHub. It's a pure, nasty, growling
bass fuzz pedal with bold out-front presence and cutting articulation.
It's for those who love those good old transistor-based fuzz pedals.
https://github.com/brummer10/Rumor
rooowwwaaa a a a a a a a
Hey hey,
I had the great opportunity to remix a song by Oscillator (Staffan Melin).
Here's the LinuxMusicians thread with the original:
https://linuxmusicians.com/./viewtopic.php?f=9&t=23954
Here's my remix:
https://youtu.be/yPYko4vuvEk
and a direct OGG version:
https://www.dropbox.com/s/n05lejzwj0851t1/pandora_mix2.ogg
This song uses samples from Missa pro defunctis - 1. Requiem by permission (CC
BY 3.0), see
https://musopen.org/music/44142-missa-pro-defunctis/
Vocals performed by Kajsa Olsson
This remix uses a few of the arpeggio rhythm sounds from the original, plus
re-recordings of pads, bass and some arpeggio/sequence-style sounds to allow
for slightly adapted harmonies. :) The drums are completely new.
Besides some hardware for the monophonic sounds, there is a lot of Yoshimi for
the three pads and two choirs. There is also Aeolus. I always wanted a good
place for Aeolus in a popular production. None better!
The drums are a mix of samples loaded into LinuxSampler, partly sampled from
hardware, partly synthesized in Csound.
The processing is rather heavy duty, but quite conventional. Lots of use of
SWH (Barry's Satan Maximiser! :) ), CAPS, TAP, Calf, Invada and Fons' g2verb,
zita-reverb and parametric 4-band filter.
Enjoy and best wishes,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Don't, don't let me be the last to know
Don't hold back, just let it go <3
(Britney Spears)
Dear list,
I'm mainly using Pianoteq and setBfree for live performance. To quickly adjust
the volume of both softsynths together, I'm using a foot pedal to send MIDI
volume commands to both applications. Unfortunately that is not working very
well, because the two programs interpret the MIDI volume values (0-127)
differently. Pianoteq adjusts its output volume in a more logarithmic manner,
while setBfree seems to handle it linearly. Thus, if I adjust both volumes to a
similar SPL at MIDI value 127, the organ is way too loud at lower levels
compared to Pianoteq.
Meanwhile I have found the location in the source of setBfree where I could
make adjustments. I have measured the logarithmic dependency of Pianoteq's
MIDI-to-volume characteristic and found a formula to be used in setBfree to
get similar behaviour. It would be nice if this could find its way into the
original source.
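To illustrate what I mean, here is a rough Python sketch of a generic dB-based mapping (the 60 dB range is an assumed example value, not the exact formula I measured from Pianoteq) next to a plain linear mapping:

def cc_to_gain_log(value, db_range=60.0):
    # Map a MIDI volume value (0-127) onto a dB scale: 127 -> 0 dB (gain 1.0),
    # low values -> close to -db_range dB, 0 -> silence.
    # The 60 dB span is only an assumed example, not a measured figure.
    if value <= 0:
        return 0.0
    return 10.0 ** (((value / 127.0) - 1.0) * db_range / 20.0)

def cc_to_gain_linear(value):
    # Roughly how setBfree appears to handle it at the moment: plain scaling.
    return max(0, min(value, 127)) / 127.0

for v in (127, 64, 32, 1):
    print(v, round(cc_to_gain_log(v), 4), round(cc_to_gain_linear(v), 4))

With these example curves the two gains already differ by more than an order of magnitude at MIDI value 64, which illustrates the kind of mismatch I am describing.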
Now I have the following questions:
Does anyone of you know whether the original swell pedal of the B3 changes the
volume in a linear or a logarithmic way? If the original B3 uses a linear
characteristic, maybe we should not change setBfree's code.
The function I'm using mimics Pianoteq's characteristic. Maybe there is a more
general approach for changing from linear to logarithmic behaviour over MIDI
volume values? For me it is of course ideal when both synths have the same
characteristic.
Do you know a source where I can find more general info about lin <=> log
conversion of MIDI volume in software? How is it handled in other professional
programs? I know that every amplifier uses a logarithmic pot for volume
control, to better follow human loudness perception.
Do you think it's worth the effort to send a patch to the GitHub repository
(I've never done such a thing so far)?
Thank you for your time
Gerhard
Dear group,
I would like to use alsaplayer via JACK to play 3-channel audio files. These audio files will be routed into a real-time capable signal processing platform, the open Master Hearing Aid (openMHA). However, I cannot route the third channel to JACK. I try to start alsaplayer as a daemon with the following command:
alsaplayer -i daemon -s PHL -o jack -d MHA:in_1,MHA:in_2,MHA:in_3 -F 16000
However, alsaplayer returns an error saying
cannot connect output port 2 (MHA:in_2,MHA:in_3)
Actually, MHA:in_2 should be output port 2 and MHA:in_3 is meant to be output port 3.
I first start the openMHA software so that it establishes 4 input and 4 output connections via JACK with the sound hardware. I can see the connections in qjackctl's connections window.
I have a sound card that has 8 input and output channels, and openMHA is configured to expect 4 input channels and produce 2 output channels, as already mentioned. So the problem is neither on the MHA side nor in the hardware. I am afraid that I cannot configure alsaplayer correctly so that it exposes three (or more) output ports. I would appreciate any help from your side. Thank you in advance.
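In case it helps with diagnosis, this is a small Python sketch (assuming the JACK-Client module for Python; the probe client name is arbitrary, and qjackctl shows the same information) that could be used to confirm the exact port names openMHA registers before copying them into the -d argument:

import jack

# Throw-away probe client, used only to query the JACK graph.
probe = jack.Client('port_probe')

# List every audio input port whose name contains "MHA" so the exact
# spellings (MHA:in_1, MHA:in_2, MHA:in_3, ...) can be copied into -d.
for port in probe.get_ports('MHA', is_audio=True, is_input=True):
    print(port.name)

probe.close()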
Best regards,
Kamil
Hi all,
Industrializer generates synthesized percussion sounds using physical
modelling. The range of possible sounds includes, but is not limited to,
cymbal sounds, metallic noises, bubbly sounds, and chimes. After a sound
is rendered, it can be played and then saved to a .WAV file.
I think Power Station Industrializer v0.2.7 is mature enough to be
released :-) It contains only a few minor changes compared to the latest
pre-release, but is well tested.
You can download it here:
https://sourceforge.net/projects/industrializer/files/
The 0.2.7 release contains the following main new features (compared with
v0.2.6):
- Discretization rate selection for both playback and WAV export
- Improved accuracy of setting some parameters
- Rendering and playback can be interrupted. Playback can be retriggered
at any time
- Both actuation and sampling nodes are made selectable. This facility
allows you to vary the timbre of the sound somewhat and even create
stereo samples with some phasing effects. Although Industrializer cannot
directly deal with stereo samples, you can first render and save a
sample, then change the sampling node without touching the other
parameters and render another sample, and then use these two samples as
the left and right stereo channels (see the sketch after this list).
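If it is of any use, here is a rough sketch of that last step in Python, using only the standard wave module (the file names are just placeholders), which interleaves the two mono renders into one stereo WAV:

import wave

def combine_to_stereo(left_path, right_path, out_path):
    # Interleave two mono 16-bit WAV files (e.g. two Industrializer renders
    # made with different sampling nodes) into a single stereo WAV.
    with wave.open(left_path, 'rb') as wl, wave.open(right_path, 'rb') as wr:
        assert wl.getnchannels() == wr.getnchannels() == 1
        assert wl.getsampwidth() == wr.getsampwidth() == 2
        assert wl.getframerate() == wr.getframerate()
        rate = wl.getframerate()
        frames = min(wl.getnframes(), wr.getnframes())
        left = wl.readframes(frames)
        right = wr.readframes(frames)

    data = bytearray()
    for i in range(frames):
        data += left[2 * i:2 * i + 2]    # left sample
        data += right[2 * i:2 * i + 2]   # right sample

    with wave.open(out_path, 'wb') as out:
        out.setnchannels(2)
        out.setsampwidth(2)
        out.setframerate(rate)
        out.writeframes(bytes(data))

combine_to_stereo('node_a.wav', 'node_b.wav', 'stereo.wav')  # placeholder names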
Regards,
Yury.
I'm pleased to announce the release of guitarix2-0.43.1, a virtual guitar
amplifier for Linux running with JACK (Jack Audio Connection Kit), released
under the GNU General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version.
This is a quick bug fix release.
Changelog:
* Fix Install metainfo in prefix (by Hubert Figuière)
* Fix GxAmplifierX produces weird noise after buffer size changes
Release tarball:
https://github.com/brummer10/guitarix/releases/download/V0.43.1/guitarix2-0…
Project Page on github:
https://github.com/brummer10/guitarix
Project Page on SourceForge:
https://sourceforge.net/projects/guitarix/
Hi,
I am currently giving linux-show-player a try, but the following problem
might happen with other software as well.
In linux-show-player, audio files can be played back from a cue list.
Each cue is implemented as a separate JACK client. I am having the
problem that starting a cue while another one is already playing causes
an audible dropout in the audio processing. When running the software
under ALSA, this does not seem to happen. I suspect that creating and
connecting new JACK clients causes this. No xruns are reported, though.
I cannot reproduce it reliably, although it happens at about 30% of cue
starts. Increasing jackd's buffer size has not helped so far. How can I
go about debugging this further?
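One idea I had for narrowing it down is the following Python sketch (assuming the JACK-Client module; the client names are arbitrary), which repeatedly registers and destroys throw-away JACK clients while other material is playing, to check whether client registration alone correlates with the dropouts:

import time
import jack

def on_xrun(delayed_usecs):
    # jackd has reported no xruns so far, but log any that do appear.
    print('xrun, delayed %.0f us' % delayed_usecs)

monitor = jack.Client('dropout_monitor')
monitor.set_xrun_callback(on_xrun)
monitor.activate()

# Mimic what a cue start does: register, activate and destroy a new client
# while something else is already playing, and listen for dropouts.
for i in range(20):
    probe = jack.Client('probe_%d' % i)
    probe.activate()
    time.sleep(0.5)
    probe.deactivate()
    probe.close()

monitor.deactivate()
monitor.close()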
Thank you!
Peter