Hi all,
(Sorry for the cross-posting.)
Here are some DANCE & DISCO patterns added to the TECHNO book:
http://philippe.hezaine.free.fr/spip.php?article50
And here is the first part of the ROCK book:
http://philippe.hezaine.free.fr/spip.php?article42
Compared with the previous releases you'll find:
- For the .ly files: bar numbers, which make reading easier.
- For the MIDI files: the instrument names are now printed in each track
of the sequencer.
Before a general upgrade of the existing work, if you have any remarks,
criticisms or suggestions, I will be happy to hear them.
Cheers.
--
Phil.
Superbonus-Project (main site) <http://superbonus.project.free.fr>
Superbonus-Project (exchange platform):
<http://philippe.hezaine.free.fr>
Hello Group -
I'm really confused about what's gone wrong with my sound: total
silence, even though it was working less than two weeks ago after an
upgrade to 8.10. User error, or maybe a bad patch? Can you help?
I'm running Ubuntu 8.10 on a 64-bit machine.
_______________________________________________________________________
I have done some research and have been following the steps I found at
this posting:
http://ubuntuforums.org/showthread.php?t=205449&highlight=comprehensive+sou…
First step, and the results appear normal:
reid@linux-rv:~$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: V8237 [VIA 8237], device 0: VIA 8237 [VIA 8237]
Subdevices: 4/4
Subdevice #0: subdevice #0
Subdevice #1: subdevice #1
Subdevice #2: subdevice #2
Subdevice #3: subdevice #3
card 0: V8237 [VIA 8237], device 1: VIA 8237 [VIA 8237]
Subdevices: 1/1
Subdevice #0: subdevice #0
reid@linux-rv:~$
_____________________________________________________________________________
Second step:
reid@linux-rv:~$ lspci -v
:
:
00:11.5 Multimedia audio controller: VIA Technologies, Inc.
VT8233/A/8235/8237 AC97 Audio Controller (rev 60)
Subsystem: ASUSTeK Computer Inc. Device 80b0
Flags: medium devsel, IRQ 22
I/O ports at c800 [size=256]
Capabilities: <access denied>
Kernel driver in use: VIA 82xx Audio
Kernel modules: snd-via82xx
Looks OK, I think.
______________________________________________________________________
Third step:
reid@linux-rv:~$ sudo modprobe snd-via-82xx
[sudo] password for reid:
FATAL: Module snd_via_82xx not found.
reid@linux-rv:~$
That's not good, or expected.
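One thing worth checking here (a hedged aside, not a definitive diagnosis): modprobe treats '-' and '_' as interchangeable, but the name typed above, snd-via-82xx, still normalises to snd_via_82xx, which is not the snd_via82xx that the lspci output in step two reports on its "Kernel modules:" line. A self-contained sketch of the comparison:

```shell
# modprobe folds '-' into '_' before looking a module up, so compare the
# normalised spellings of the typed name and the driver name from lspci:
typed=$(echo 'snd-via-82xx' | tr '-' '_')    # what step three asked for
driver=$(echo 'snd-via82xx' | tr '-' '_')    # "Kernel modules:" line, step two
echo "typed:  $typed"     # snd_via_82xx
echo "driver: $driver"    # snd_via82xx
[ "$typed" = "$driver" ] && echo "names match" || echo "names differ"
```

If the names differ, the FATAL error says nothing about the hardware; the lsmod output in step four is the better indicator of whether the driver actually loaded.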
Fourth step (sorry about the screwy formatting):
reid@linux-rv:~$ lsmod|grep snd
snd_via82xx 36904 3
snd_via82xx_modem 21900 0
gameport 21776 1 snd_via82xx
snd_seq_dummy 11524 0
snd_ac97_codec 133080 2 snd_via82xx,snd_via82xx_modem
ac97_bus 10368 1 snd_ac97_codec
snd_pcm_oss 52608 0
snd_mixer_oss 25088 1 snd_pcm_oss
snd_pcm 99208 4
snd_via82xx,snd_via82xx_modem,snd_ac97_codec,snd_pcm_oss
snd_seq_oss 42368 0
snd_mpu401_uart 16768 1 snd_via82xx
snd_seq_midi 15872 0
snd_rawmidi 34176 2 snd_mpu401_uart,snd_seq_midi
snd_seq_midi_event 16768 2 snd_seq_oss,snd_seq_midi
snd_seq 67168 6
snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_timer 34320 2 snd_pcm,snd_seq
snd_seq_device 16404 5
snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
snd 79432 18
snd_via82xx,snd_via82xx_modem,snd_ac97_codec,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_mpu401_uart,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
snd_page_alloc 17680 3 snd_via82xx,snd_via82xx_modem,snd_pcm
soundcore 16800 1 snd
So...I'm not sure what to look for here....
Any thoughts would be appreciated.
Reid
OK, this might be a bit of a curly question, and I don't know whether
what I'm after is even possible, or valid.
The subject is placement, and it pertains to orchestral recording. (My own
work, composed in the box with LinuxSampler, from MIDI in RG, and recorded
in Ardour.)
I'd like to place my instruments as close as possible to an orchestral
setup, in terms of recorded sound. That is, once I've recorded, I'd like to
use convolution and other tools to 'correctly' place instruments within the
overall soundscape.
Example:
With the listener sitting 10 metres back from the stage and facing the
conductor (central), my 1st violins are on the listener's left. Those first
violins occupy a portion of the overall soundscape from a point
approximately 2 metres to the left of the conductor to an outside-left
position approximately 10 metres from the conductor, with 8 desks (2
players per desk) about 4 metres deep at the section's deepest point, in
the shape of a wedge, more or less. That's the pan width of the section.
Now as I understand it, a metre represents approximately 3 ms, so taking
the leading edge of the section across the stage as 'zero', the first-violin
players furthest in depth from the front of the stage should, in theory,
play about 12 ms later than those at the front. (I know this is approximate
only; I sat as a player in orchestras for some years, and understand the
instinctive timing compensation that goes on.) Using the ears, and
experimenting, this actually translates to about 6 ms before the sound
becomes unrealistic, using layered violin samples, both small-section and
solo. (Highly subjective, I know, but I only have my own experience as a
player and composer to fall back on here.)
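The metre-to-milliseconds rule of thumb above can be checked with a quick calculation (assuming sound travels at roughly 343 m/s in air at room temperature; the depth figure is the 4-metre section depth from the example):

```shell
# Arrival-time offset for a player seated at the back of the section,
# relative to the leading edge of the stage ('zero' in the text):
speed=343       # m/s, approximate speed of sound in air
depth_m=4       # deepest desks of the 1st-violin wedge
awk -v v="$speed" -v d="$depth_m" 'BEGIN {
    printf "1 metre  ~ %.2f ms\n", 1000 / v          # ~2.92 ms per metre
    printf "%d metres ~ %.1f ms\n", d, d * 1000 / v  # ~11.7 ms, i.e. "about 12 ms"
}'
```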
A violin has its own unique characteristics in the distribution of sound
emanating from the instrument. The player sits facing the conductor, and the
bulk of the overall sound goes up, at an angle of more or less 30 degrees
towards the ceiling, to a 'point' equivalent to almost directly over the
listener's right shoulder. Naturally the listener 'hears' the direct sound
most prominently (both with his ears and with the 'visual perception' he
gains from listening with his eyes). Secondly, the violin also sounds, to a
lesser degree, downwards, and in varying proportions, in a reasonably
'spherical' sound-creation model, with the possible exception of the sound
hitting the player's body and those in his immediate vicinity (with other
objects, like stands, sheet music, etc., all playing a part too).
I've experimented with this quite a bit, and the best result seems to come
from a somewhat inadequate, but acceptable, computational model based on
using, you guessed it, the orchestral experience ears.
So I take one 'hall' impulse and apply it to varying degrees, mixed with as
precise a pan model as possible. (I use multiple desks to layer with, more
or less, so there's a reasonably accurate depiction of a pan-placed section,
instead of the usual pan-sample model of either shifting the section with a
stereo pan, or the inadequate right-channel-down, left-channel-up method.)
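The 'right channel down, left channel up' method described as inadequate above is linear panning; the usual alternative is a constant-power pan law, where the two gains are the cosine and sine of the pan angle, so the summed power (and hence perceived loudness) stays even across the arc. A small numeric sketch, assuming a simple sin/cos pan law:

```shell
# Constant-power pan: L = cos(theta), R = sin(theta), theta in [0, pi/2].
# The 'power' column shows L^2 + R^2 staying at 1.0 across the whole arc.
awk 'BEGIN {
    pi = 3.141592653589793
    for (p = 0; p <= 1.001; p += 0.25) {     # pan 0 = hard left, 1 = hard right
        theta = p * pi / 2
        printf "pan %.2f  L %.3f  R %.3f  power %.3f\n", \
               p, cos(theta), sin(theta), cos(theta)^2 + sin(theta)^2
    }
}'
```

With linear panning the centre position sums to less power than the edges, which is one reason a section panned that way can sound like it collapses toward the middle.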
To make this more complicated (not by intent, I assure you), I'm attempting
to add a degree of pseudo mike bleed from my 1st violins into the cellos
sitting deeper on the stage, and in reduced amounts into the violas and
second violins sitting on the other side of the digital stage.
All of this is with the intent of getting as lifelike a sound as possible
from my digital orchestra.
The questions:
In terms of convolution, can I 'split' a convolution impulse with some
sort of software device, so as to emulate the varying degrees of spherical
sound from instruments as described above?
So, one impulse (I use Jconv by default, as it does a great job, far better
than most GUI-bloated offerings in the commercial world) that can, by way
of sends and returns, be 'split' or manipulated not only in terms of length
of impulse, but fed as 'panned', so as to put more impulse 'up', less
impulse 'down', and just a twitch of impulse 'forward' of the player, with
near enough to none on the sound going back into the player.
I've written this rather clumsily, but I hope some of you experts may
understand what I'm trying to achieve here.
Can the impulse be split down its middle, separating left from right,
aurally speaking? And if this is possible, can I split the impulse into
'wedges' emulating that sphere I wrote of, more or less?
If there's a way to do this, then I'm all ears, as my mike-bleed experiments
suffer from a 'generic' impulse per section affecting everything to the same
degree, including the instruments bled in. I should note here, this is not
about gain, but about a wedge of impulse, cut out of the overall chunk, that
represents a 'window' or pan section of the whole.
This might seem somewhat pedantic in recording terms, but I'm trying to
build a model per 'hall' and create half a dozen templates to represent
different-size ensembles in different halls.
So my small-string-section hall template might be a small church or a
performance room like a library or dancing hall. I would then only need to
be creative with where I put my 'human' soloists, as emulators of slightly
different interpretations of where notes start and finish, velocity
variation, etc., and not have to reconstruct a convoluted model each
time for... my particular orchestra. (Tongue firmly in cheek here.)
Any help would be appreciated. I'm open to suggestions of placement,
convolution models, etc., but would prefer to use Jconv, as it works all day
every day, and would be a constant across all templates.
I suppose an analogy for the chunk-of-impulse idea would be to stretch a
ribbon across a stage and cut a metre out of the middle. That metre would
be the bit I'd use, as a portion of the whole, in an aural soundscape, to
manipulate, or place, instruments to a finer degree, in the attempt to
create a more realistic '3D' effect for the listener. That metre, along with
other cut-out sections of the impulse soundscape, could help me introduce a
more... 'human' element to a layered instrument section.
I still rue the day orchestral sample devs decided to record sections at a
time. This would have been so much easier if they'd recorded a number of
desks for each section instead.
Alex.
P.S. I'm having a modest degree of success using Ardour sends as a
mike-bleed template. A lot of work, and a lot of sends, but it's slowly
coming together, and has reached the stage where I no longer think it's
impossible...
lv2dynparam is an LV2 extension for dynamic parameters.
The extension consists of a header describing the extension interface,
plus two libraries, one for plugins and one for hosts, that expose the
functionality through an interface that is more usable from a programmer's
point of view.
Changes since version 1:
* host library: the API has been refactored; the new API is NOT
compatible with the version 1 API
* host library: support for dynparam automation
* host library: support for dynparam parameter save/restore
Project homepage:
http://home.gna.org/lv2dynparam/
Get tarball from here:
https://gna.org/files/?group=lv2dynparam
--
Nedko Arnaudov <GnuPG KeyID: DE1716B0>
Got a bit of a challenge that is sure to be user error somewhere.
I'm building Jconv from source, and get the following error when trying to
'make':
impdata.cc: In member function 'int Impdata::sf_open_read(const char*)':
impdata.cc:264: error: 'SFC_WAVEX_GET_AMBISONIC' was not declared in this
scope
impdata.cc:264: error: 'SF_AMBISONIC_B_FORMAT' was not declared in this
scope
impdata.cc: In member function 'int Impdata::sf_open_write(const char*)':
impdata.cc:306: error: 'SFC_WAVEX_SET_AMBISONIC' was not declared in this
scope
impdata.cc:306: error: 'SF_AMBISONIC_B_FORMAT' was not declared in this
scope
make: *** [impdata.o] Error 1
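A hedged guess, since the error names libsndfile symbols: SFC_WAVEX_GET_AMBISONIC and SF_AMBISONIC_B_FORMAT only exist in sufficiently recent libsndfile headers, so an installed sndfile.h that predates them would produce exactly these 'not declared in this scope' errors. A self-contained sketch of the check (the stand-in headers, their values, and the /tmp paths are illustrative; the real file is usually /usr/include/sndfile.h):

```shell
# Report whether a sndfile.h declares the ambisonic flags jconv expects.
check_header() {
    grep -q 'SFC_WAVEX_GET_AMBISONIC' "$1" && echo "$1: present" || echo "$1: missing"
}
# Stand-in headers to demonstrate both outcomes (contents are placeholders):
printf '#define SFC_WAVEX_GET_AMBISONIC 1\n' > /tmp/sndfile_new.h
printf '#define SFC_GET_LOG_INFO 1\n'        > /tmp/sndfile_old.h
check_header /tmp/sndfile_new.h   # present -> this libsndfile can build jconv
check_header /tmp/sndfile_old.h   # missing -> upgrade libsndfile first
```

If the symbol is missing from the real header, upgrading libsndfile (or pointing the build at a newer copy) is the usual fix.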
Any clues?
Alex.
2009/1/19 Tapani Sysimetsä <tapani.sysimetsa(a)rokki.net>:
>
> Here are some quick notes on Equal Dreams:
>
Thank you Tapani.
I've extended the page with the info you provided.
The wiki is public; in case you'd like to
add or change anything, press the edit button.
-
Emanuel
Hello all,
We're pleased to announce the release of a new version of jackctlmmc
and a new Qt based graphical version called QJackMMC. The main page
including download links and documentation is here:
http://jackctlmmc.sourceforge.net/
In brief, QJackMMC is a Qt-based program that can connect to a device
or program that emits MIDI Machine Control (MMC) messages and let it drive
the JACK transport, which in turn can control other programs. JackCtlMMC
is a slightly simpler command-line version of QJackMMC. You might need
such a tool if you have hard-disk recorders (HDRs) or other external
MIDI-compliant devices that are capable of sending out MMC to keep
other devices in sync. You might have a multi-track recorder and want
to be able to start, stop, or fast-forward JACK-based programs
such as Rosegarden, Hydrogen, and Ardour.
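For the curious, the MMC messages involved are short SysEx frames of the form F0 7F <device-id> 06 <command> F7, where command 01 is Stop, 02 is Play, and 04 is Fast Forward (per the MIDI Machine Control part of the MIDI specification; device ID 7F addresses all devices). A small sketch that prints the frames a transmitting device would emit for the transport actions mentioned above:

```shell
# Print the raw SysEx bytes (hex) for a few MMC transport commands.
mmc_frame() {   # $1 = MMC command byte, $2 = human-readable name
    printf 'F0 7F 7F 06 %s F7  # %s\n' "$1" "$2"
}
mmc_frame 01 Stop
mmc_frame 02 Play
mmc_frame 04 "Fast Forward"
```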
Enjoy,
-- Alex
Good news everyone...
how could I have been blind enough to ignore the fantastic, cool and
promising-a-great-future-for-free-FX-plugins Calf collection??
The Calf Vintage Delay completely qualifies as a perfect fulfillment
of all the wishes I posted here regarding a simple-to-work-with,
good-sounding, configurable delay with a BPM parameter.
Now I'm only searching for a way to send the promised bounty to
Krzysztof Foltman and Thor Harald Johansen. They have no donation button
on their site, and I am not sure about SourceForge's system, so I left a
message at SF...
If anybody from the Calf-people is tracking this list, please let me
know :-)
best regards
HZN
Sorry for the double post, but this question falls directly between the
two mailing lists, pd and linux audio users...
Has anyone been able to get seq24 to start from Pd? I've tried numerous
times to use seq24 for sequencing harmonies and whatnot, but it is
always a pain getting it to start at the same time that I want my
things to start in Pd. In seq24, the 'jack transport' and 'live mode'
settings are enabled. I then try to start the JACK transport from
the Pd external or in QjackCtl, and seq24 doesn't do anything.
On Sat, Sep 29, 2007 at 3:57 PM, Tim Blechmann <tim(a)klingt.org> wrote:
> i once wrote a small external to query jacktransport ... maybe it is
> useful for your purposes?
>
> it is in the pd cvs under externals/tb/jacktransport
>
> best, tim
>
> On Sat, 2007-09-29 at 13:20 +0200, Mysth-R wrote:
>> Hi,
>> I am new on this list. I tried to find some information about syncing
>> Pd and JACK transport/BPM in the list archive, but I am not
>> satisfied :D
>> I am making an audio sequencer with Pd, and I would like to use it
>> with other Linux software (seq24, hydrogen, freewheeling, ...).
>> So I need to synchronise Pd and JACK with MIDI or OSC.
>> I saw in the list archive some modules like midirealtimein, but it
>> doesn't seem to work for me. I only receive zeros.
>> Today I am trying to use jack.tools to synchronise with OSC
>> (jack.osc). I can send /start and /stop messages, but I don't understand
>> how I can send transport information, BPM...
>>
>> Can you help me / give me some information?
>> Thank you for your help
>> cheers,
>>
>> Erwan
>>
>> --
>> {^_^} Mysth-R {^_^}
>>
>> http://myspace.com/mysthr
>> http://myspace.com/aideauditive
>> _______________________________________________
>> PD-list(a)iem.at mailing list
>> UNSUBSCRIBE and account-management -> http://lists.puredata.info/listinfo/pd-list
> --
> tim(a)klingt.org ICQ: 96771783
> http://tim.klingt.org
>
> The aim of education is the knowledge, not of facts, but of values
> William S. Burroughs
>