Hello all,
I was a SmartMusic (http://www.smartmusic.com) user until that program
stopped working with Wine. For those of you who don't know it, it is a
program that basically downloads Finale files (score + accompaniment)
from a centralized server for musical instrument practice. When you
play along it uses either MIDI input or a microphone to rate your
performance (which notes you played in tune, and so on).
As far as I know, there is no native Linux or open source alternative.
My questions are: Do you think it would be hard to code a plugin for
musescore (http://www.musescore.org) to do the same pitch detection
and comparison with the score? Or would it be better to fork the
code and make a stand-alone app, keeping it as compatible as possible?
Any experience with pitch-to-MIDI? aubio (http://www.aubio.org) could
be an option, but I haven't tried it.
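In case anyone wants a concrete starting point, a pitch tracker built on
aubio's C API might look roughly like the sketch below (untested and from
memory, so the method name and function signatures should be checked
against the aubio headers):

    #include <aubio/aubio.h>
    #include <stdio.h>

    /* Track the pitch of successive hop_size blocks of mono float samples. */
    void track_pitch(const float *samples, unsigned int n_samples)
    {
        uint_t buf_size = 2048, hop_size = 512, samplerate = 44100;
        aubio_pitch_t *p = new_aubio_pitch("yin", buf_size, hop_size, samplerate);
        aubio_pitch_set_unit(p, "midi");        /* report MIDI note numbers */
        fvec_t *in  = new_fvec(hop_size);
        fvec_t *out = new_fvec(1);
        for (uint_t i = 0; i + hop_size <= n_samples; i += hop_size) {
            for (uint_t j = 0; j < hop_size; j++)
                fvec_set_sample(in, samples[i + j], j);
            aubio_pitch_do(p, in, out);
            printf("note: %.2f\n", fvec_get_sample(out, 0));
        }
        del_fvec(out);
        del_fvec(in);
        del_aubio_pitch(p);
    }

Comparing that stream of detected notes against the notes expected from
the score would then be the core of the rating feature.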
I think the music repository part could be addressed later...
I don't have much time right now, but I'd like to ponder the
possibility of working on it in the future. Maybe I'll start
experimenting in the meantime.
Just an idea,
Greetings,
Camilo
Eric Kampman <erickampman(a)me.com> wrote:
> Hello,
>
> I'm writing a synth module on top of JACK and I'm starting to contemplate
> stereo.
>
> I looked up "pan law" and
...
This reference:
M.A. Gerzon, "Panpot Laws for Multispeaker
Stereo", Preprint 3309 of the 92nd Audio
Engineering Society Convention, Vienna
(March 1992)
looks at panning laws and psychoacoustics.
The paper is concerned mostly with 3- and
4-speaker frontal stage stereo, but 2-speaker
stereo is analysed as an introduction.
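As a starting point for two-speaker panning, the usual constant-power
(-3 dB centre) law is simple to implement; a minimal sketch in C,
assuming pan runs from 0.0 (hard left) to 1.0 (hard right):

    #include <math.h>

    /* Constant-power pan: left^2 + right^2 == 1 at every pan position,
       so perceived loudness stays roughly constant across the image. */
    void pan_gains(float pan, float *left, float *right)
    {
        float theta = pan * (float)M_PI / 2.0f;   /* 0 .. pi/2 */
        *left  = cosf(theta);
        *right = sinf(theta);
    }

Gerzon's paper looks at the psychoacoustics behind laws like this one.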
Regards,
Martin
--
Martin J Leese
E-mail: martin.leese stanfordalumni.org
Web: http://members.tripod.com/martin_leese/
-------- Forwarded Message --------
From: Ralf Mardorf
To: Kris Calabio
Subject: Re: [LAD] What do you do for a living?
Date: Thu, 11 Nov 2010 16:23:46 +0100
On Wed, 2010-11-10 at 15:06 -0800, Kris Calabio wrote:
> Hmm, lots of anti-Google sentiment here? I'm afraid that I was sort
> of in the afterglow of positive testimonials when I wrote that
> original email. What exactly does Google do that is unethical? Sure,
> they're a huge corporation (bureaucracy blah blah), but there are
> companies that are a lot worse.
>
> It's great (and refreshing) to hear that a lot of us do what we do in
> spite of money. community > capitalism
> > Yeah, I'd like to
> > work for Google, but who doesn't, right? :)
>
> Not me...
>
> Gordon MM0YEQ
They cooperated with China, then there are trackers such as Google
Analytics, and read Fons's first reply.
Of course, there are also some good sides to Google.
- Ralf
They're a corporation. Of course they're meant to make money. I know that already. Be more polite, please. ;)
I'm
afraid you drew the wrong implication about me from that
statement, but perhaps I should have been clearer. I meant that
Google likes to see open source in a resume. If they see that you
developed for open source projects and talk about it in an interview (if
you get one), they will be more likely to hire you. The Google
representatives at the panel emphasized this point many times. I hope
you understand where I was coming from now.
Best,
Kris
--- On Wed, 10/11/10, fons(a)kokkinizita.net <fons(a)kokkinizita.net> wrote:
From: fons(a)kokkinizita.net
<fons(a)kokkinizita.net>
Subject: Re: [LAD] What do you do for a living?
To: "Kris C" <cpczk(a)yahoo.com>
Cc: "Linux Audio Developers Mailing List" <linux-audio-dev(a)lists.linuxaudio.org>
Date: Wednesday, 10 November, 2010, 2:44 PM
On Tue, Nov 09, 2010 at 07:17:24PM -0800, Kris C wrote:
> Google just loooooves open source and those involved.
Quite naive. Google doesn't want to share its profits with
Microsoft, that's all. For the rest, it's an advertising
machine meant to make money. Everything else is just a means to this
end. Wake up to reality, please.
Ciao,
--
FA
There are three of them, and Alleline.
Hey guys,
I've recently been working on a small MIDI automation line editor, and I've
managed to get it into a working condition. It's a C++ & GTK jobbie; see
this blog post for more info:
http://harryhaaren.blogspot.com/2010/10/automate-another-stage-along-way.ht…
I've not announced this as usable yet, as I am aware that there are some
serious enough errors in it:
1. Drawing the GUI is not optimized and hogs resources. It always redraws
the whole graph, and also redraws when it's not necessary (see the sketch
after this list).
2. I'm not at all experienced with threaded programming, and hence am pretty
sure there are some threading possibilities to be explored.
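On point 1, the usual GTK fix would be to invalidate only the region that
changed, instead of queueing a redraw of the whole widget; a rough sketch
(the function and variable names are made up for illustration):

    #include <gtk/gtk.h>

    /* Instead of gtk_widget_queue_draw(graph), which repaints everything,
       queue only the dirty rectangle around the point that changed. */
    static void redraw_point(GtkWidget *graph, int x, int y, int radius)
    {
        gtk_widget_queue_draw_area(graph,
                                   x - radius, y - radius,
                                   2 * radius, 2 * radius);
    }

The expose handler can then clip its drawing to event->area, so only that
part of the graph is actually repainted.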
I've worked on this project on my own so far, and while it has taught me a
lot, I think now I need to ask
for some assistance with regard to getting it polished for "everyday-use"..
:-)
If anybody has interest and wants to pull the repo to find out if it works
for you, please provide feedback!
(I'm 50% sure the waf script will need some tweaking, and maybe some other
things too...)
Cheers for reading, -Harry
Hi list,
I am seeing what appears to be a 4 - 7 usec context switch time on a 3
GHz Core 2 Duo machine with the 2.6 kernel, and 10 - 15 usecs on a 1.66
GHz Atom. Is that reasonable? Does anyone have any tips on how to speed
that up, if possible?
The background is that I have two apps --- one a straight Linux app and
the other a Wine one. The Linux app is doing the mixing and the Wine app
is hosting Windows VSTs.
During audio processing I use a sem_t to signal from the mixer app to
the vst app. Once the vst app is done processing, it signals back to the
mixer app over another sem_t.
I believe this is roughly the approach taken by other Linux audio apps
to host Wine VSTs, and it is also something like how JACK does IPC.
But it takes about 4-7 usecs on the faster machine to signal another app
and have it wake up again. I assume this is the time needed for a
context switch.
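For reference, that round trip can be measured in isolation with a
ping-pong benchmark along these lines; a minimal sketch (error handling
omitted; compile with -lpthread, plus -lrt on older glibc):

    #define _GNU_SOURCE
    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <time.h>
    #include <unistd.h>

    #define ITERS 100000

    int main(void)
    {
        /* Two process-shared semaphores in memory shared across fork(). */
        sem_t *sems = mmap(NULL, 2 * sizeof(sem_t), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        sem_init(&sems[0], 1, 0);        /* "mixer" -> "vst host" */
        sem_init(&sems[1], 1, 0);        /* "vst host" -> "mixer" */

        if (fork() == 0) {               /* child plays the vst host */
            for (int i = 0; i < ITERS; i++) {
                sem_wait(&sems[0]);
                sem_post(&sems[1]);
            }
            _exit(0);
        }

        struct timespec t0, t1;          /* parent plays the mixer */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++) {
            sem_post(&sems[0]);
            sem_wait(&sems[1]);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_nsec - t0.tv_nsec) * 1e-3;
        printf("round trip: %.2f usec\n", us / ITERS);
        return 0;
    }

Pinning the two processes to the same core or to different cores (taskset)
changes the numbers noticeably, so it is worth measuring both ways.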
Since this has to happen twice per buffer for each VST, at a 32 sample
buffer size (about 726 usecs per period at 44.1 kHz) and with 10 VSTs,
overhead accounts for about 14% of the CPU (10 x 2 x 5 usecs = 100 usecs
per period). On the slower Atom machine, it would account for 33% of the
CPU.
Any hints on how to reduce that time, or an alternate IPC design to look
into?
Thanks for any help,
Michael Ost
Muse Research, Inc.
Seeking Qt4.7 (esp. QtQuick/QML) and Linux multimedia experts to join
a new project -- http://ytd-meego.googlecode.com -- to contribute
code/planning/ideas for a port of http://ytd-android.googlecode.com/
to Meego running on http://en.wikipedia.org/wiki/Nokia_N900 and
equivalent mobile computing platforms.
FYI, the YouTube Direct application allows integrated
capture/cataloging/uploading of video (or audio) to YouTube directly
from the handheld. See the following articles to understand
how it is being used:
http://gigaom.com/video/youtube-direct-abc7/
http://www.digitaltrends.com/computing/youtube-direct-is-helping-media-find…
---------- Forwarded message ----------
From: Niels Mayer <nielsmayer(a)gmail.com>
Date: Sun, Nov 7, 2010 at 9:02 PM
Subject: YTD-Meego on Googlecode - Planning Youtube direct uploading app
To: qt-qml(a)trolltech.com
I've created http://code.google.com/p/ytd-meego/ for the port of
ytd-android to Meego using QtQuick/QML. YTD-Meego is to be a
feature-equivalent port of the Youtube-direct application for Android
( http://ytd-android.googlecode.com/ ).
Initially, the purpose of the googlecode project will be for
issue-tracking and planning features and their implementation in
QtQuick. The subversion repository will eventually contain working
snippets of QtQuick test code that will evolve into an application
with the help of the community. Please send mail if you want to be
added as a project contributor/committer, for contributing ideas and code
snippets and helping to develop this application openly.
-- Niels
http://nielsmayer.com
I sent the following to the LAA:
This is a bug fix release.
A big thanks to all who sent fixes to me; without you, the AlsaPlayer would
not be alive.
See the ChangeLog for the details and the names of the contributors:
http://alsaplayer.svn.sourceforge.net/viewvc/alsaplayer/trunk/alsaplayer/Ch…
I would also urge you to contribute to the AlsaPlayer. At least two things
need to be fixed. The first is the JACK output plugin, which is using
deprecated functions. The second is the resampling; for that, libsamplerate
would give better quality.
Enjoy the AlsaPlayer,
# # # # #
To quote Fons (his English is better than mine):
While it's a nice player, it has some serious audio quality
issues.
- Resampling 44.1 -> 48 kHz (for jack) sounds horrible...
- The sndfile input plugin reduces everything to 16 bits.
This is really absurd, even if your files and your
sound card are 24 bit you only get 16.
- Floating point wav files apparently aren't read at all
(they load but produce silence when played).
All of this could be solved by using a good resampler
lib, and making the internal format floating point
rather than short.
# # # # #
This change can be made using libsamplerate. This is a must-have feature for
me, and it must be implemented before any other change because it will ease
further development (there is a lot of free, reusable audio code in float).
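For anybody tempted, the libsamplerate API is small. A minimal sketch of a
one-shot float conversion (written from memory, so check the signatures
against samplerate.h):

    #include <samplerate.h>

    /* Resample interleaved 44.1 kHz float frames to 48 kHz in one shot.
       Returns the number of output frames generated, or -1 on error. */
    long resample_44k_to_48k(float *in, long in_frames,
                             float *out, long out_frames, int channels)
    {
        SRC_DATA d = {0};
        d.data_in       = in;
        d.data_out      = out;
        d.input_frames  = in_frames;
        d.output_frames = out_frames;
        d.src_ratio     = 48000.0 / 44100.0;
        if (src_simple(&d, SRC_SINC_MEDIUM_QUALITY, channels) != 0)
            return -1;
        return d.output_frames_gen;
    }

A streaming player would use the full src_new()/src_process() API instead
of src_simple(), but the SRC_DATA structure is the same.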
I am just an admin and don't have the knowledge to write the needed code.
Anybody who can contribute such a great feature for AP will be welcomed. If
you are interested, please get in contact with me, on this list or privately.
Ciao,
Dominique Michel
--
"We have the heroes we deserve."
Hello,
For most of the last week I've been stuck trying to configure my
silicon peripheral to generate Normal I2S audio data, with mostly _bad_
results, and I have a couple of questions (below).
Briefly, my setup is as follows: SqueezeServer on a Windows host PC,
Squeezeslave on my embedded Linux device sending audio samples through
the kernel (2.6.28) down to my driver, which lives in
sound/soc/pxa. I've been trying to configure my device for I2S format.
Outside the CPU, the I2S-formatted PCM data is sent to an external FM
transmitter chip so I should be able to hear the audio on my FM
receiver. Since the only "new" stuff here is my driver and the output
hardware, I've been focussing my attention on them until now, but
haven't had much success hearing anything.
I've actually heard _some_ recognizable audio - I could tell it was
the song I had selected, but it sounded very distorted (sounded
horrible) and I'm certain it wasn't because of a weak FM transmitter
signal. I'm guessing it was a mismatch in the formatting of the I2S
data between what I was sending and what the FM Tx chip was expecting
and that has formed the basis of my troubleshooting effort. I've run
out of ideas for things to try at the bottom end and now I want to
make sure my top-end components are working properly. This is weird
because I've stopped being able to hear anything lately, but a scope
confirms my I2S lines are all lit up.
Here are a couple of questions...
1) The CPU supports a packed mode write to the Synchronous Serial Port
(SSP) FIFO, meaning that two 16 bit samples (one left channel and one
right) can be written at the same time, both being packed into a
single 32 bit FIFO write. My driver enables this mode, but my question
is, where in the kernel would the samples be combined into a 32 bit
chunk for writing? I'm using Squeezeslave on my embedded device as the
player and I've checked the sources and it doesn't look like it's
happening in there. It makes more sense for this to be somewhere further
down in the stack so players don't have to care about the details of the
hardware they are running on. I was wondering if anyone knew where this
packing might take place (see the first sketch after these questions).
2) Are there any useful tools out there for debugging/narrowing down
where problems in the audio path might lie? My player is an embedded
platform and I've only ported Squeezeslave to it, but for all I know
there could be a problem anywhere from SqueezeServer, through
Squeezeslave, down into the stack, my PCM driver or even the FM
transmitter. To eliminate the latter as a problem, I'm looking for
another device that spits out known-good I2S audio; I'll feed that into
the FM Tx and hopefully rule it out. But there's a lot of code
from the SSP back and it would be great if I had some simple tone
generator application (for instance) that was easily portable to an
ARM9 platform (kernel 2.6.28) that I knew was sending correct data
down the stack.
3) My experience with Linux and audio is just beginning and so far
it's been right down at the driver level, so a question about audio
playing software: when a player produces a PCM stream from, say, an
MP3 file, does it automatically interleave the left channel and right
channel or does it produce two separate streams, one for left and one
for right? I can't tell from reading the Squeezeslave code, but it
looks like the audio data is sent in one continuous stream, so ...
interleaved? (The second sketch after these questions shows what I mean.)
4) For those of you experienced with I2S and other PCM formats, what
would a Normal I2S stream sound like on a DAC that thought it was
receiving Justified I2S? Would the audio still be intelligible or
would you hear nothing at all?
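Two sketches related to the questions above.

First, on question 1: the packing itself would just be two 16-bit samples
in one 32-bit word, something like this (whether left belongs in the low
or the high half is exactly the kind of detail I'd need to verify against
the SSP documentation):

    #include <stdint.h>

    /* Pack one stereo frame of signed 16-bit samples into a single 32-bit
       FIFO word: left in bits 0-15, right in bits 16-31 (assumed order). */
    static inline uint32_t pack_frame(int16_t left, int16_t right)
    {
        return ((uint32_t)(uint16_t)right << 16) | (uint16_t)left;
    }

Second, on questions 2 and 3: the kind of simple tone generator I have in
mind would be something like this minimal ALSA sketch (untested); it also
shows what I understand by interleaved, i.e. left and right samples
alternating in a single stream:

    #include <alsa/asoundlib.h>
    #include <math.h>
    #include <stdint.h>

    /* Play ~10 s of a 440 Hz tone as interleaved S16_LE: L0 R0 L1 R1 ... */
    int main(void)
    {
        const unsigned rate = 44100, channels = 2, frames = 1024;
        snd_pcm_t *pcm;
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               channels, rate, 1, 100000) < 0)
            return 1;

        int16_t buf[2 * 1024];
        double phase = 0.0, step = 2.0 * M_PI * 440.0 / rate;
        for (int b = 0; b < 430; b++) {
            for (unsigned i = 0; i < frames; i++) {
                int16_t s = (int16_t)(10000 * sin(phase));
                phase += step;
                if (phase > 2.0 * M_PI)
                    phase -= 2.0 * M_PI;
                buf[2 * i]     = s;     /* left  */
                buf[2 * i + 1] = s;     /* right */
            }
            if (snd_pcm_writei(pcm, buf, frames) < 0)
                break;
        }
        snd_pcm_close(pcm);
        return 0;
    }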
Thanks to all who read this post.
Cheers,
Rory
> When such an audio-gui standard and configuration is developed, I would love to
> participate. And use it in my apps. I think it's a good idea, especially the
> idea of allowing the user to switch between circular and linear behaviour for
> round controls. And have those changes affect all apps supporting this
> "standard".
>
So far there is no such standard, and it seems one won't come any time soon.
So I have added a menu option to gx_head (the guitarix successor) where the
user can select between radial and linear knob interaction. It is all in a
lib which can be used statically or shared. The switch works with a bool,
and it would be easy to set it via getenv(), for example (see the sketch
below).
It will be more difficult to come to a conclusion, in the free developer
world, about how, why, and whether we would really all use the same
environment variable.
If we do, it doesn't matter which lib one uses to create the knob; only
the variable needs to be read.
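A minimal sketch of that getenv() switch; the variable name
KNOB_INTERACTION is purely hypothetical, nothing agreed:

    #include <stdlib.h>
    #include <string.h>

    /* Returns 1 for linear knob dragging, 0 for radial (the default).
       KNOB_INTERACTION is a hypothetical, not-yet-agreed variable name. */
    static int knobs_are_linear(void)
    {
        const char *v = getenv("KNOB_INTERACTION");
        return v != NULL && strcmp(v, "linear") == 0;
    }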
greetings, hermann