Hello,
I am trying to combine the audio from my microphone (either built in
microphone from my laptop or bluetooth headset) with music playing
from Rhythmbox Music Player running on Ubuntu 16.04 and pipe that as
the audio input to video conference services such as Jitsi.
I don't know much about Linux audio internals and wasn't sure how to
do this. Can I do this completely via the operating system using
PulseAudio? Do I need to do something with JACK? Or do I need
specialized software such as OBS, Ardour or Reaper?
Any help or suggestions on pointing me in the right direction or how
to get started would be greatly appreciated. I'm hoping that I can do
this completely through software without external hardware such as a
mixer or something.
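From skimming the PulseAudio documentation, I suspect something along
these lines might be a starting point, but I haven't tested it and the
device names below are just placeholders:

  pactl list short sources        # find the microphone source name
  pactl list short sinks          # find the sink Rhythmbox plays to
  pactl load-module module-null-sink sink_name=mixed \
        sink_properties=device.description=MicPlusMusic
  pactl load-module module-loopback source=<mic-source-name> sink=mixed
  pactl load-module module-loopback source=<music-sink-name>.monitor sink=mixed

and then pick "Monitor of MicPlusMusic" (mixed.monitor) as the
microphone in Jitsi. Is that roughly the right direction?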
Thanks in advance.
Samir
Greetings!
https://www.youtube.com/watch?v=BsIfrkBvRSM
Another VCV Rack drone piece, this one with a machine-improvised melody.
Someone called this one "spooky"; I suppose it is a bit dark.
Best regards,
Dave Phillips
Hi folks,
I have a Delta 1010 (not LT) that works on one computer, but not another.
Working means that when I use mudita24 (on LinuxMint 19.3) then all of the
line inputs work according to the meters, and follow the input signal.
Not working means that when I use envy24control (on Fedora 27) then all of
the meters are pegged at the top. envy24control is the predecessor of
mudita24. envy24control has worked for me in the past, but on a computer
that doesn't exist anymore.
Each computer has its own PCI card for the 1010, but the experiment was
conducted with the same 1010 taking turns on each computer.
Anyone got some debugging ideas for this? I suppose someone's going to tell
me to swap the cards between the computers. Yes, I'll do that. I was hoping
to find some setting that automatically works on Mint that's not set the
same on Fedora.
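In case it helps, my next step was going to be dumping and diffing the
full ALSA control state for the card on each machine, roughly like this
(the card number is a guess, I'd check /proc/asound/cards first):

  cat /proc/asound/cards                          # find the 1010's card number
  alsactl --file delta1010-mint.state store 1     # on the Mint box, assuming card 1
  alsactl --file delta1010-fedora.state store 1   # likewise on the Fedora box
  diff delta1010-mint.state delta1010-fedora.state

in the hope that the offending setting shows up in the diff.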
Thanks all!
I've been using TimeMachine to record late night music radio programmes.
I'm pretty sure I didn't do anything different, but one recording is
significantly quieter than the others, and the quiet one shows up as
type 'Unknown (model/x.stl-binary)' in the Nemo file manager, whereas
the good ones are of type 'Binary (application/octet-stream)'.
'file' says they are all Sony Wave64 RIFF data, WAVE 64 audio, stereo 48k
sox (-n stat) seems to show significant differences:
                     Good          Quiet
Samples read:        676507136     666574336
Length (seconds):    7046.949333   6943.482667
Scaled by:           2147483647.0  2147483647.0
Maximum amplitude:   0.999969      0.999969
Minimum amplitude:   -0.920116     -0.999969
Midline amplitude:   0.039927      0.000000
Mean norm:           0.154645      0.051673
Mean amplitude:      -0.000065     -0.000022
RMS amplitude:       0.199364      0.066460
Maximum delta:       1.421344      1.754726
Minimum delta:       0.000000      0.000000
Mean delta:          0.090457      0.034210
RMS delta:           0.134921      0.048614
Rough frequency:     5170          5588
Volume adjustment:   1.000         1.000
Any idea what could have gone wrong?
What would be the right way to calculate the value to use with sox -v
to get the volume level somewhere near that of the 'Good' recording?
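My own rough guess at the arithmetic, assuming the RMS amplitude is the
right figure to match, is simply the ratio of the two RMS values:

  0.199364 / 0.066460 ≈ 3.0
  sox -v 3.0 quiet.w64 boosted.w64

but since the quiet file's maximum amplitude is already 0.999969, I
suspect that would clip the peaks, so perhaps a smaller factor is safer.
Is that the right approach, or am I looking at the wrong numbers?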
--
Thanks, John.
On Tue, May 5, 2020 2:43 pm, Christoph Kuhr wrote:
> Well, actually using the same ISP in the same city does not
> make a difference. My mate and I both have
> Unitymedia/Vodafone cable network access. We had the same RTT
> as with the other one at another ISP with ADSL.
I have been looking for information on the latency associated with the
cable modem. So far this is the only information I found, but it is not
clear whether this refers just to the latency on the cable side, or also
includes the latency of converting from the Ethernet side to the cable
side:
"The average DOCSIS upstream latency for best effort traffic has been
measured to be 11-15 ms with the potential of a significantly higher
maximum latency up to 50 ms under medium to heavy channel utilization."
So there is already significant latency, and potentially high latency
variation, before the traffic even reaches the first router.
It is impressive that you have found a way to play live together with that
connection.
--
Chris Caudle
Hi,
What are the possibilities for me if I want to send MIDI data through a
crossover cable in a local network?
Play a synth from a MIDI sequencer: the sequencer running on host A, the
synth on host B.
Linux - Linux and maybe Linux - OSX.
I think I tried QmidiNet once, but it gave me quite a bit of latency, IIRC.
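For the Linux-to-Linux case I was also thinking about ALSA's aseqnet
from alsa-utils, roughly like this (untested, and the client:port
numbers are only examples):

  aseqnet               # on host B (synth side): start the network server
  aseqnet hostB         # on host A (sequencer side): connect to host B
  aconnect -l           # on each host, list clients/ports to find the numbers
  aconnect 129:0 128:0  # wire the sequencer/synth to the new network port

but I have no idea whether that would do any better latency-wise than
QmidiNet.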
Regards,
\r
Hey hey,
I'm not sure whether Tristate could be called industrial, but it certainly
has a very factory-and-machine feel to it:
https://youtu.be/lHMH28gicpc
I have wanted to do this for years, but either lacked the tools or had
intermittently forgotten about it. It's good that I waited so long. I'm quite
happy with the result.
On the Linux side this includes the usual suspects: Yoshimi for one bass and
chord stabs, LinuxSampler for the anvil and a few other percussive sounds and
of course Midish and Nama for sequencing, arranging and mixing the piece. If
any plugin springs to mind, it's the LADSPA transient mangler for the kick
drum; it shaped the sound beautifully.
For a while now I haven't been able to even attempt a computer graphic, due
to eye troubles; this is the first - very simple - step back into that.
I hope you enjoy it; feedback is welcome.
Best wishes,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* SoundCloud: https://soundcloud.com/jeanette_c
* Twitter: https://twitter.com/jeanette_c_s
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Skip on the drinks Head to the floor
Makin' my way Past the show
My body's taken over And I want some more <3
(Britney Spears)