The French version follows the English version
Call for participation in documenting Montreal's Soundscape
http://cessa.music.concordia.ca/soundmap
The Montreal Sound Map is ready for submissions. We are accepting all
(unprocessed) audio recordings (past and present) of Montreal's sound
environments. Our goal is to gather and display as many recordings as
possible from all over the island. As we gather submissions, we will
be placing them into a browseable tagging system based on criteria
including:
Sound Source (specific, characteristics)
Location (borough, neighbourhood, municipality)
Date (time of day, month, day of week, year, season)
Environment Type (park, metro, indoors, outdoors, etc.)
Equipment (recording device, microphone, etc.)
Files can be uploaded through the form on the website. For more
information, please visit the about page.
~
Participate in completing the Montreal Sound Map.
http://cessa.music.concordia.ca/soundmap/fr
You can now submit audio files (recent or older recordings, unedited)
to help complete the Montreal Sound Map.
The goal is to gather as many recordings as possible from all over the
island.
Submissions are incorporated into a database accessible to everyone.
The search criteria are:
Sound source (characteristics)
Recording location (borough, neighbourhood, municipality)
Recording date (time of day, month, day of week, year, season)
Type of recording environment (park, metro, indoors, outdoors, etc.)
Equipment used (recording device, microphones, etc.)
Files can be submitted through the online form. For more information,
click the "about" link.
Best,
Max
--
CESSA
www.cessa.ca
The Concordia Electroacoustic Studies Student Association (CESSA) was
established in the fall of 2007. Our focus is primarily on spreading
awareness of electroacoustic matters throughout Concordia and Montréal
via projects and events, with particular interest in health,
environmental, and social issues pertaining to sound.
Ok, things have settled down, and I've tweaked a little here and there.
It seems to be running nicely now, and fairly stable.
A screenshot of a generic setup:
http://shup.com/Shup/81262/patchage3.png
Alex.
lpatchage
jackdbus
rosegarden
linuxsampler
ardour2
jconv
On Tue, Nov 11, 2008 at 10:35 PM, alex stone <compose59(a)gmail.com> wrote:
> Nedko,
>
> This is what I get in the messages window of lpatchage when I try to
> connect linuxsampler audio out:
>
> [JACKDBUS] ConnectPortsByName() failed.
>
> jackdbus log is attached. (I've renamed a copy for your perusal)
>
> Alex.
>
>
>
>
> On Tue, Nov 11, 2008 at 8:55 PM, Nedko Arnaudov <nedko(a)arnaudov.name>wrote:
>
>> "alex stone" <compose59(a)gmail.com> writes:
>>
>> > But i'm still at a loss as to why i can't connect LS audio out, to
>> Ardour
>> > audio in, in lpatchage, visibly.
>> > It works in Qjackctl, but stubbornly refuses to connect in lpatchage,
>> even
>> > though the actual connections are made in Ardour, and most importantly,
>> > work.
>>
>> Do you get any errors in jackdbus log file when you are trying to
>> connect using lpatchage?
>>
>> --
>> Nedko Arnaudov <GnuPG KeyID: DE1716B0>
>>
>
>
Release candidate 2 has some important fixes:
* Fix for #46: on the first save of newly appeared clients, their state
was not correctly recorded as saved, and thus was not restored on
subsequent project loads.
* Memory corruption fixes for a bug in the stdout/stderr handling
code, triggered when a LASH client writes a lot of data to stdout or
stderr.
* Improved handling of repeated lines sent to stdout/stderr.
I would like to ask LASH believers and other interested parties to test
the 0.6.0 release candidate. Juuso Alasuutari and I have been making
some major changes to the LASH code. We have done a lot of work, we've
fixed several big implementation issues, and we need a stable point
before making more changes (0.6.1 and 1.0 milestones).
The tarball includes a simple lash_control script. One can also control
LASH through patchage-0.4.2 and through lpatchage (available through
git).
User visible changes since 0.5.4:
* Use jack D-Bus interface instead of libjack, enabled by default, can
be disabled. Ticket #1
* Allow controlling LASH through D-Bus. Ticket #2
* Use D-Bus autolaunching instead of old mechanism. Ticket #3
* Log file (~/.log/lash/lash.log) for LASH daemon. Ticket #4
* Client stdout/stderr are logged to lash.log, when clients are
launched by LASH daemon (project restore). Ticket #5
* Improved handling of misbehaved clients. Ticket #45
* Projects can now have comments and notes associated with them. Ticket #13
Download:
http://download.savannah.gnu.org/releases/lash/lash-0.6.0~rc2.tar.bz2
http://download.savannah.gnu.org/releases/lash/lash-0.6.0~rc2.tar.bz2.sig
--
Nedko Arnaudov <GnuPG KeyID: DE1716B0>
For a new audio application I need to code a JACK client in C++. So
far I have only done it in C, and I have a problem with passing a
pointer to the process callback function, which is now a method. So
what is the best performing solution? Is a delegate function a good
idea, being static and triggering the method on the object instance?
Cheers,
Malte
--
----
media art + development
http://www.block4.com
current events:
exhibition spame-moi La Motte-Servolex, France 17.10.-20.12.2008
Hi all,
I need to use a microphone input as a trigger. In other words, my idea is
to connect a switch to the microphone input, so that when the switch is
turned on it generates a spike in the captured track.
I would like to create a program that triggers an event for every spike
it receives.
I have succeeded in capturing the mic input through a simple program
that uses the ALSA driver, but I don't know how to "parse" the raw data
to search for the spikes. Any hints?
Second question: on a "full duplex" sound card, can I capture at 8-bit,
mono, 22,050 Hz, and at the same time play back at 16-bit, stereo,
44,100 Hz?
Thank you!
Lorenzo
I'm a home studio enthusiast and also a former Java programmer.
I'm looking at combining these interests and contributing to a
project, but most apps seem to be written in C++.
Any suggestions? Anyone involved in any good Java projects?
--
Cheers, Craig
http://craiglawton.info
http://romansandals.wordpress.com
Hi,
I have an M-Audio Fast Track Pro 4x4 sound card and I'm trying to use
four outputs at the same time, but with no success.
I tried configuring it with the tips from the alsa-project page, but the
instructions there don't work correctly.
Any suggestions?
Guilherme Bertissolo
Hi,
I wrote a little rant today, that I published on my blog:
http://propirate.net/oracle/archives/2008/11/05/alsa-headaches-erm-headphon…
I am sending it to the Linux Audio Developers list in the hope that it
will be read by the right people, or by someone who knows who to talk to.
I would also like to encourage a little discussion and brainstorming
about the source of, and a solution to, the problem I am describing:
------------------------------------------
I own a laptop that I run with Linux, using ALSA to control the
laptop's built-in sound card. The laptop comes with built-in
speakers and a microphone, as well as two plugs for headphones and a
microphone respectively. Those plugs would come in quite useful for
attaching a headset to talk on VoIP Internet telephony software.
However, there is a problem with this laptop's audio system.
What I would expect is that as soon as I push the plug into the
headphone jack, the built-in speakers go silent while the audio is
routed into my headphones. However, this does not happen. The tunes
from my music player software happily chug along on both the internal
speakers and the headphones. Does this behavior make any sense?
What if I'd like to use the headphones in an environment where I ought
to make no noise? Then the speakers continuing to transmit the audio
would be quite offensive.
Ok, so it does not work automatically. Every self-respecting Linux
distribution today comes with an extensive audio mixer control panel.
Turning my attention to that control panel, I tried to achieve the
conditions required to make use of my headphones, and was severely
disappointed. While there are controls labeled "headphones", they do
not act as expected; in fact they do nothing. Playing around with
other control items with cumbersome names, I was unable to resolve the
situation to my satisfaction.
So I ask myself why this happens, and turn my eyes to the available
ALSA support channels. What I find there suggests that other users
experience the exact same grief, unfortunately without resolution.
Being a techno-geek, of course, it is quite clear to me why this
happens. There are a couple of standardized audio chips used by
various manufacturers. ALSA recognizes those chips and exposes the
available mixer controls in its control panels. The manufacturers using
those chips in their hardware are likely not wiring them up in an
identical manner to other manufacturers. For example, if a laptop
that uses such a chip has no headphone plugs, the chip will still have
the mixer controls built in, yet there is no wire attached to those
chip pins. And careless manufacturers might even accidentally wire
them up wrongly, like connecting the headphone pins to the speaker
wires.
So the ALSA people are in some kind of predicament here: they cannot
know how the chip is wired up, and they quite obviously cannot own all
the available laptops and computers to test them in their individual
configurations.
So what could a solution to this situation look like?
I am not an ALSA programmer, so I don't know whether that is feasible,
but what I would suggest is the following:
Create a simple and friendly software tool that guides a
not-so-technical user through the process of identifying and testing
all the different mixer configurations, and asks the right questions
to test whether each of them works as intended. The tool would collect
that information, along with an identifier that allows the model of
the laptop to be recognized, and send that information back to the
ALSA developers. They could then integrate this information into the
project, and whenever a future user starts ALSA on an identical
machine, it would already know the perfect configuration for that
machine's individual mixer control setup.
What do you think?
Cheers
-Richard
Hey gang, is there any good documentation on how to use libsox
on the net? I've been googling, but nothing except the Ubuntu man
pages cropped up.
------- -.-
1/f ))) --.
------- ...
http://www.algomantra.com
* zynjacku is updated to the latest state of the LV2 art
* lv2rack, an LV2 effect rack (JACK), is created (reuses a lot of
zynjacku code).
* zynjacku development moved to git: http://repo.or.cz/w/zynjacku.git
Testers welcome ;)
For LV2 MIDI event port synths (the new ones),
you will need slv2 svn r1698 (at least).
Short term plans:
* Cooperate with Krzysztof Foltman on calfwidgets+lv2rack/zynjacku
* Ubuntu packages for lv2zynadd, zynjacku and lv2rack
* Listen to feedback :D
--
Nedko Arnaudov <GnuPG KeyID: DE1716B0>