I've been trying to get DOS MIDI applications to talk to hardware in
dosemu, and since I normally use native ALSA for everything, I'm not
that familiar with MIDI through the OSS emulation.
My USB MIDI interface has three subdevices (in/out pairs) which show up
in ALSA separately, but I have only one /dev/midi3 (the one that maps to
my device), and one /dev/snd/midiC3D0. Is there a way to access
individual subdevices through the OSS emulation?
--
- Brent Busby + ===============================================
+ "The introduction of a new kind of music must
-- Studio -- + be shunned as imperiling the whole state, for
-- Amadeus/ -- + styles of music are never disturbed without
-- Keycorner -- + affecting the most important political
-- Recording -- + institutions." --Plato, "Republic"
----------------+ ===============================================
Hello,
This is a simple, upbeat and straightforward track, largely based on
synths, with a touch of acoustic guitar. No angst in this one, or
maybe just a little touch of drama.
The synth solo was made using Repro-1, seconded by Monique. The piano
parts were Pianoteq, flanked by an electric piano from Discovery Pro.
The sequenced foundation was made using Bazille. The guitar part was
played on a Shiraki guitar, with a fair bit of treatment. As always,
created in Bitwig, mixed and mastered in Mixbus 32C. Enjoy. Comments
of all kinds welcome.
https://soundcloud.com/nominal6/spring-theme
Cheers.
I am looking for impulse response files for ir.lv2 - the only
convolution reverb I have found working within Ardour so far. I'm
looking for natural, long, lush reverbs, and thought I had found a good
starting point with openairlib.net.
However, all the big reverbs I've found so far produce a kind of
rhythmic clicking (audible transients) during playback, rendering them
unusable. The shorter ones partly work, but I just happen to be
looking for large spaces. Cathedrals and the like. Why not the whole
universe?
Any sources, or hints on what I should look out for? The wrong file
format? Something else I may be doing wrong? It does not seem to be a
CPU or load issue at all, and the settings I have been playing with
have so far not affected the issue notably.
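One thing that is easy to rule out is the file format itself. Here is a throwaway sketch using Python's stdlib wave module (my own helper, nothing ir.lv2-specific); note the module reads integer-PCM WAVs only, so a wave.Error on open already tells you the file isn't plain PCM:

```python
import wave

def ir_info(path):
    """Basic facts about an IR WAV (hypothetical helper, not part of
    ir.lv2). The stdlib 'wave' module handles integer-PCM files only,
    so a wave.Error here means the file isn't plain PCM (e.g. it may
    be 32-bit float, as many downloadable IRs are)."""
    with wave.open(path, "rb") as w:
        return {
            "channels": w.getnchannels(),
            "rate_hz": w.getframerate(),
            "bits": 8 * w.getsampwidth(),
            "seconds": w.getnframes() / w.getframerate(),
        }
```

A sample rate that doesn't match the JACK rate, or an unusual bit depth, would at least be a concrete lead.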
Any ideas?
Thanks
Hi all.
On Wednesday the 4th of April, the monthly Berlin meeting is taking place
at c-base. As usual, I'll be in the main hall from 20:00.
See you there! :-)
Cheers
/Daniel
While most issues with computers slowing down are well known and pretty
much a solved problem, there is one that is only slowly becoming
noticeable as systems get progressively faster.
This is the problem of memory slowing down as it ages. Every time a bit is
flipped the underlying structures are stressed and an infinitesimal change
takes place that makes it slightly more difficult for the next change. We are
of course talking incredibly tiny amounts here so it's hardly surprising that
it's not really been noticed up to now.
However, it is something that has interested our most {cough} mature {cough}
developer, Mary (she of ion trap fame). Once Mary gets interested in something
there is simply no stopping her, so we just left her to it, and concentrated
on our latest software build problem that only appears on the latest 'felt
cowboy hat' distribution.
Our Mary is a really methodical person, and she had this weird setup where she
was rapidly switching RAM in various bit patterns and then every few hours,
stopping, letting it rest then measuring the switching speed. This was fully
under software control of course in an Automated Recursive Sequencing
Environment.
Once she had several sticks showing measurable speed drops (takes several
months) she put half of them to one side as a control, then tried out every
idea she could think of to get the others to 'relax'.
The first was quite obviously thermal - the Applied Heat Neutralising
Optimiser - but it was inconclusive, and applied for long enough it
produced dead cells in the RAM.
Then it was mechanical - her Vibrating Axis Guided Uniform Exciter idea. Nope,
simply shook things apart.
A sort of combination of both of these was to attack the problem with a
Selective Acoustic Wavefront - quite ineffective unfortunately.
Of course, some electronics can be altered with light, so Mary tried using
Phased Optical Resonance Notching, although that raised a few eyebrows in the
workshop.
With no real breakthrough Mary moved on to trying software methods.
Unfortunately I'm not permitted to explain how these work, so you'll have to
try to work it out from their names.
The first was Branched Asynchronous Selective Insert Code.
Then there was the Binary Indexed Latency Extractor.
The last of these was Consolidated Ram Accretion Pruner.
Almost a year had passed now and Mary was showing visible signs of desperation
(something quite shocking to witness). Almost in panic she tried combining
hardware and software in just about every configuration.
This morning she suddenly got up and almost ran out of the workshop without a
word. She'd left her last scribbled note out though - it read:
V.A.G.U.E Hardware OK, A.H.N.O C.R.A.P software tests A.R.S.E.
We just hope she'll calm down by lunchtime.
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hello! I am a long time Linux user and I used to participate in the LAU
group many years ago. A while ago, I decided to do my music "out of the
box" with mostly analog gear.
I am returning once again to LAU. My goal here is to edit videos from
several cameras, shot from different angles, into one video, and use the
WAV generated from the F8 as the audio. Actually, I plan on having some
professional mastering done on the tracks from the F8, but I am doing
the video editing part myself.
I need to sync videos from multiple Android phone cameras that are
recording some "live" music instrument performances in my home studio. The
videos contain LTC audio on CH1 (sounds nasty!), and a scratch track on
CH2 - both coming from the Zoom F8's outputs. The phone camera inputs are from
an Irig DUO over USB. The F8 is recording the performances via its 8
inputs, to a BWF WAV file on a removable SD that contains all 8 tracks.
I can see the timecode metadata of the WAV using bwftool. I can see all the
tracks in Ardour when importing the WAV (latest release compiled from
source on openSUSE Leap 42.2), and also in Audacity.
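For reference, the timecode metadata that bwftool shows lives in the WAV's 'bext' chunk as a 64-bit TimeReference: a sample count since midnight, per the EBU Tech 3285 BWF spec. A minimal sketch to read it with Python's stdlib (my own throwaway helper, not part of bwftool):

```python
import struct

def bwf_time_reference(path):
    """Return the 64-bit TimeReference (sample count since midnight)
    from a BWF WAV's 'bext' chunk, or None if there is no such chunk.
    Offsets follow EBU Tech 3285: the field sits 338 bytes into the
    chunk, after the Description/Originator/date/time fields."""
    with open(path, "rb") as f:
        riff, _size, wave_id = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave_id != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"bext":
                body = f.read(chunk_size)
                low, high = struct.unpack_from("<II", body, 338)
                return (high << 32) | low
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned
```

Dividing the result by the file's sample rate gives the start time in seconds since midnight.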
About the LTC and timecode: Lots of people use "Plural Eyes" and closed
source software to sync audio and video. I believe LAU will understand why
that is not an option for me. Really though I have a personal interest in
timecode, and some equipment that uses it. I want to try it.
Nice cameras like RED can accept timecode on a BNC input, or generate it.
Phone cameras and even many Canon cameras AFAIK do not have that
capability. While not ideal, it is common I believe to record the timecode
as LTC audio on one of the stereo tracks on the camera. These performances
are mostly under 5 minutes and I don't expect drift to be an issue - if so
I will need a better camera later on.
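As a back-of-envelope check on that (the 50 ppm figure below is an assumed typical consumer clock tolerance, not a measurement of these phones):

```python
def drift_ms(clock_error_ppm, take_seconds):
    """Accumulated offset between two free-running clocks that differ
    by clock_error_ppm parts per million, over a take of given length."""
    return clock_error_ppm * 1e-6 * take_seconds * 1000.0

# Assuming a 50 ppm error over a 5-minute take: about 15 ms,
# under half a frame at 30 fps (33.3 ms per frame).
```

So for takes this short, a single sync point per take should indeed be enough.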
My idea was to start up Xjadeo with the "-l" (LTC) option, and have it
receive LTC via "mplayer -ao jack F8.WAV".
I have it backwards though: The MP4 loaded into Xjadeo has the LTC audio
track. The WAV has the timecode metadata. And I have several MP4 files with
LTC. Ardour doesn't read MP4 files. I am stuck on the next step here. My
video skills are a little weak; I am trying to use this project to learn
more.
Any advice? Well, besides skipping timecode ... I got that a lot already on
the "gearslutz" forum :-) .
I've recently pinned down a number of oddities concerning these, and thought
what I've learned would be interesting to anyone working on their own
instrument patches.
The first thing to keep in mind is that amplitude envelopes (particularly
release time) set the point at which a note ceases. Frequency/filter envelopes
can be shorter, so their effect stops part way, but if they are longer, the
last part will be ineffective.
Across all three engines, and kits (if kit mode is active), it is
whichever is the longest that sets the overall time of the note, and you
may well hear others stop if the times are sufficiently different.
Also, within AddSynth itself, it is whichever voice has the longest
envelope that sets the overall voice time, and if you set voices with
very different characteristics you can hear the shorter ones finish
before the overall sound stops. Bear in mind that each voice can also
have a start delay set, so you can get a late sound pickup that is then
the last bit you hear, even if it's quite short. However, if the start
time of one voice is after all the others have finished, it will never
sound.
This sort of idea works best with 'Forced Release' disabled.
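To make those voice-time rules concrete, here is a toy model (my reading of the behaviour described above, not Yoshimi source code):

```python
def voice_end_times(voices):
    """voices: list of (start_delay, amp_env_length) in seconds.
    Each voice ends at delay + envelope length; the maximum of these
    sets the overall note time."""
    return [delay + env for delay, env in voices]

def silent_voices(voices):
    """Indices of voices whose start delay falls after every *other*
    voice has already finished - per the rules above, these never
    sound at all."""
    ends = voice_end_times(voices)
    silent = []
    for i, (delay, _env) in enumerate(voices):
        others = [t for j, t in enumerate(ends) if j != i]
        if others and delay >= max(others):
            silent.append(i)
    return silent
```

For example, three voices (delay, length) of (0.0, 1.0), (0.25, 0.5) and (1.25, 0.25) end at 1.0 s, 0.75 s and 1.5 s; the third one starts after the first two have finished, so by the rule above it would never sound.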
An unexpected twist to this is that, taking the combined voice envelope
time against the main AddSynth envelope, although attack and decay times
follow the above pattern, it is whichever has the *shortest* release
that sets the AddSynth time as a whole. This can really catch you out!
With regard to the modulator amplitude envelopes: they don't change the
overall time, but if they are shorter than their voice length (or any
voice that the modulator is slaved to) the modulation may end a bit
strangely. If they are longer, then part of their action will be missed.
Finally, there is what I think is a bug (that goes back to Zyn 2.2.1). If
an AddSynth voice is enabled, its amplitude envelope time is active, even
if the envelope is apparently deactivated and not editable. Oh, and by
default all the voice times are quite long, so again you could be puzzled
as to why a sound is longer than you expected. This has always been
there, so I don't believe it should be changed - to do so would quite
likely alter many existing instrument patches - but do keep it in mind.
In the latest Yoshimi commit, there is a new instrument in my 'Companion' bank
called 'AddSynth Morph' that demonstrates some of these points - I think it
sounds nice too :)
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Has anyone got any experience of these?
Are they in fact any good?
I would imagine that running one at 96k, 16-bit should give good enough
bandwidth and resolution for checking most audio kit.
Which rather raises the question: why are almost all digital scopes only
8-bit... unless you spend a fortune on them?
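For the 16-bit versus 8-bit comparison, the standard ideal-ADC figure makes the point (a theoretical bound; real front ends do somewhat worse):

```python
def ideal_adc_snr_db(bits):
    """Theoretical SNR of an ideal ADC digitising a full-scale sine:
    6.02 * bits + 1.76 dB (quantisation noise only)."""
    return 6.02 * bits + 1.76

# A 16-bit interface: ~98 dB of theoretical SNR, with 48 kHz of
# Nyquist bandwidth at 96 kS/s.
# A typical 8-bit scope front end: only ~50 dB.
```

So for audio-band work, the audio interface wins on resolution by nearly 50 dB, at the cost of vastly lower bandwidth than any real scope.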
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hello,
For a university project we're building a custom audio system with our own
input and amplifier.
We will most likely use an FPGA that communicates sound data over SPI to a
Raspberry Pi.
On the Raspberry Pi the sound can be further processed by, for example,
Sonic Pi.
Sonic Pi uses SuperCollider which uses JACK which uses ALSA.
At some point in this chain we need to be able to interface with our FPGA.
Initially I thought it would be easy to write a JACK client, and it is.
The problem with that seems to be that JACK is in control of the sampling
rate.
So if I read data from the FPGA into a buffer and the clocks drift, I get
overruns or underruns.
I found a few potential solutions.
What alsa_in and alsa_out do is resample between the two clocks. Maybe a
bit of work, but definitely works.
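The idea behind alsa_in can be sketched roughly like this: a toy linear-interpolation resampler at a fixed ratio. The real alsa_in estimates the ratio continuously with a delay-locked loop (driven by the buffer fill level) and uses a proper resampler, so treat this as illustration only:

```python
def resample(block, ratio):
    """Read 'block' (samples captured at the device clock rate) at
    'ratio' input samples per output sample, linearly interpolating.
    ratio > 1 means the device clock runs fast relative to JACK's;
    ratio < 1 means it runs slow."""
    out, pos = [], 0.0
    while pos < len(block) - 1:
        i = int(pos)
        frac = pos - i
        out.append(block[i] * (1.0 - frac) + block[i + 1] * frac)
        pos += ratio
    return out
```

In a real client the ratio would be nudged every period so that, on average, the FPGA delivers exactly one JACK buffer's worth of samples per process callback.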
There is some business about clockmaster in JACK, which seems to be
something different, but maybe I don't understand it.
There is a freerunning mode, which makes it OK to do IO in the callback.
I'm not sure if this plays well with SuperCollider. It seems that in this
case the processing is directly driven by how fast I get data from the
FPGA, which is what I want.
If all of the above turns out to be bad ideas, I need to look at a
different location in the chain.
It would make sense to write an ALSA driver for what is pretty much a
custom sound card.
However, it seems that writing an ALSA driver is orders of magnitude more
complex than registering a callback with JACK.
Any ideas what would be the easiest way to get sound from our FPGA into
SuperCollider and back?
Regards,
Pepijn de Vos