I am looking for a simple adaptive echo cancellation algorithm for a
project I'm working on. Does anybody know of one under a
GPL-compatible license, preferably optimized for real-time use? This
sort of thing might be found in a VoIP application, although I want it
for musical purposes.
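For what it's worth, the classic textbook answer here is an adaptive FIR filter trained with normalized LMS (NLMS). The sketch below is only an illustration of that core loop under a chosen tap count and step size, not a pointer to an existing GPL library; the names (nlms_t, nlms_process) are made up for the example:

```c
/* Minimal NLMS adaptive echo canceller sketch. */
#define TAPS 64

typedef struct {
    float w[TAPS];   /* adaptive filter weights */
    float x[TAPS];   /* delay line of far-end (reference) samples */
    float mu;        /* step size, 0 < mu < 2 */
    float eps;       /* regularization to avoid divide-by-zero */
} nlms_t;

void nlms_init(nlms_t *s, float mu)
{
    for (int i = 0; i < TAPS; i++)
        s->w[i] = s->x[i] = 0.0f;
    s->mu = mu;
    s->eps = 1e-6f;
}

/* Process one sample: 'ref' is the signal that causes the echo,
 * 'mic' is the observed signal containing the echo.  Returns the
 * echo-cancelled output (the error signal). */
float nlms_process(nlms_t *s, float ref, float mic)
{
    /* shift the reference into the delay line */
    for (int i = TAPS - 1; i > 0; i--)
        s->x[i] = s->x[i - 1];
    s->x[0] = ref;

    /* filter estimate of the echo, plus input power for normalization */
    float y = 0.0f, power = s->eps;
    for (int i = 0; i < TAPS; i++) {
        y += s->w[i] * s->x[i];
        power += s->x[i] * s->x[i];
    }

    float e = mic - y;   /* error = mic minus estimated echo */

    /* normalized LMS weight update */
    float g = s->mu * e / power;
    for (int i = 0; i < TAPS; i++)
        s->w[i] += g * s->x[i];
    return e;
}
```

Per-sample cost is O(TAPS), so it is usable in real time for short echo paths; real room echo needs far more taps and usually a frequency-domain variant.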
I couldn't find anything at freshmeat.net. Then again, I can never find
anything on that site.
Regards,
Mark
markrages(a)mlug.missouri.edu
--
To invent, you need a good imagination and a pile of junk. -Thomas Edison
BEAST/BSE version 0.5.5 is available for download at:
ftp://beast.gtk.org/pub/beast/v0.5
or
http://beast.gtk.org/beast-ftp/v0.5
BEAST (the Bedevilled Audio SysTem) is a graphical front-end to
BSE (the Bedevilled Sound Engine), a library for music composition,
audio synthesis, MIDI processing and sample manipulation.
The project is hosted at:
http://beast.gtk.org
This new development series of BEAST comes with a lot of
the internals redone, many new GUI features and a sound
generation back-end separated from all GUI activities.
The most outstanding new features are the demo song, the effect and
instrument management abilities, the track editor which allows
for easy selection of synthesizers or samples as track sources, loop
support in songs and unlimited Undo/Redo capabilities.
Note: if you encounter problems with .bse files from previous BEAST
versions, this may indicate bugs in the compatibility layer.
A bug report accompanied by the problematic file can be sent to the
mailing list and is likely to get you a fixed file in return.
Overview of Changes in BEAST/BSE 0.5.5:
* New (or ported) modules:
DavCanyonDelay - Canyon Echo by David A. Bartold
BseMidiInput - Monophonic MIDI Keyboard input module
BseBalance - Stereo panorama position module
ArtsCompressor - Mono and stereo compressor [Stefan Westerfeld]
* Added utility script to crop and duplicate parts [Stefan Westerfeld]
* Added "Party Monster" demo song [Stefan Westerfeld]
* Implemented ability to use sequencer as modulation source
* Added support for external MIDI events in song tracks
* Added .bse file playback facility to bsesh
* Added support for C++ Plugins
* Now installs bse-plugin-generator for simple creation of C++ Modules
* Added manual pages for installed executables
* Lots of small MIDI handling fixes
* Fixed MP3 loader
* Major GUI improvements
* Registered MIME types for .bse files, provided .desktop file
* Made search paths for various resources user configurable
* Added prototype support to IDL compiler [Stefan Westerfeld]
* Work around PTH poll() bug on NetBSD [Ben Collver, Tim Janik]
* Support NetBSD sound device names [Ben Collver]
* Added i18n infrastructure for BEAST and BSE [Christian Neumair, Tim Janik]
* Added Azerbaijani translation [Metin Amiroff]
* Added Russian translation [Alexandre Prokoudine]
* Added Serbian translation [Danilo Segan]
* Added Swedish translation [Christian Rose]
* Added German translation [Christian Neumair]
* Added Czech translation [Miloslav Trmac]
* Added Dutch translation [Vincent van Adrighem]
* Lots of bug fixes
---
ciaoTJ
Hi,
I was wondering what's the correct way to handle the sustain pedal when
implementing a MIDI sound generating module.
from the MIDI specs:
-----------
Hold Pedal, controller number: 64:
When on, this holds (ie, sustains) notes that are playing, even if the
musician releases the notes. (ie, The Note Off effect is postponed until
the musician switches the Hold Pedal off). If a MultiTimbral device,
then each Part usually has its own Hold Pedal setting.
Note: When on, this also postpones any All Notes Off controller message
on the same channel.
Value Range: 0 (to 63) is off. 127 (to 64) is on.
--------------
My question is about ".... holds (ie, sustains) notes that are playing,
even if the musician releases the notes."
Assume I play a chord, press the hold pedal, which causes the notes to
be sustained. When I play new notes those are sustained too.
So far so good.
The question arises when I press the same key two times.
Assume no sustain pedal for now.
When I press C2 I hear the note. When I release it the sound does not
vanish immediately but takes a small amount of time to decay due to the
release envelope. If after releasing C2 I immediately press C2 again I
hear two C2 notes for a brief time.
Now same situation as above but with the sustain pedal pressed.
You hear the first C2, release it (the corresponding note-off is
postponed) and then press C2 again.
In that case, is it correct that you hear two sustained C2 notes?
Or must the first C2 be forced to fade out / be muted?
If not (i.e. you hear two sustained C2 notes), how far can this go?
Can there be 3, 4, etc. sustained notes on the same key too?
While I am not a piano player, common sense tells me that a piano has
only one string set per key, so IMHO it would sound unnatural to hear
two notes on the same key.
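To make the "fade the old voice on retrigger" policy concrete, here is one way it could be sketched. This is only an illustration of the bookkeeping; the types and helpers (synth_t, voice_t, release_voice) are hypothetical, not linuxsampler API:

```c
/* Sketch of Hold Pedal (CC 64) handling for a simple voice list. */
#include <stdbool.h>

#define MAX_VOICES 64

typedef struct {
    bool active;   /* voice is producing sound */
    bool held;     /* note-off arrived while the pedal was down */
    int  note;
} voice_t;

typedef struct {
    voice_t voices[MAX_VOICES];
    bool sustain;  /* CC 64 value >= 64 */
} synth_t;

/* hypothetical: a real sampler would start the release envelope here;
 * this sketch simply frees the voice */
void release_voice(voice_t *v) { v->active = false; v->held = false; }

void note_on(synth_t *s, int note)
{
    /* piano-like policy: retriggering a key fades the old voice, so at
     * most one sustained voice rings per key */
    for (int i = 0; i < MAX_VOICES; i++)
        if (s->voices[i].active && s->voices[i].note == note)
            release_voice(&s->voices[i]);
    for (int i = 0; i < MAX_VOICES; i++)
        if (!s->voices[i].active) {
            s->voices[i].active = true;
            s->voices[i].held = false;
            s->voices[i].note = note;
            return;
        }
}

void note_off(synth_t *s, int note)
{
    for (int i = 0; i < MAX_VOICES; i++)
        if (s->voices[i].active && s->voices[i].note == note) {
            if (s->sustain)
                s->voices[i].held = true;   /* postpone the note-off */
            else
                release_voice(&s->voices[i]);
        }
}

void control_change(synth_t *s, int cc, int value)
{
    if (cc != 64)
        return;
    bool down = value >= 64;
    if (s->sustain && !down)   /* pedal released: flush held note-offs */
        for (int i = 0; i < MAX_VOICES; i++)
            if (s->voices[i].held)
                release_voice(&s->voices[i]);
    s->sustain = down;
}
```

The alternative policy (letting both sustained voices ring, as many synths do) would simply drop the kill-loop at the top of note_on.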
As you might have guessed I ask this stuff because we want to add
support of sustain in linuxsampler.
thanks for your infos.
PS: a new CVS repository for linuxsampler is up: cvs.linuxsampler.org
interested developers and users please check it out and give us feedback
via our mailing list.
(subscription infos at http://www.linuxsampler.org ).
cheers,
Benno
http://www.linuxsampler.org
Hello, (I'm new to this list, so hi everyone!)
I'm rather stuck on the following: I'm writing an app that uses JACK for its
audio output. I now want to control this app using midi but I have trouble
figuring out how to synchronize the rendered sound to the incoming events.
The events, midi notes for example, come in with timestamps in one thread.
Another thread (the one entered by process()) renders the audio. In order to
render properly, it would need to calculate the exact sample at which the
incoming note should begin to take effect in the rendered output stream.
If you are viewing this in a fixed-width font, here's a graphical representation of the
problem:
|...e.....e|e....e....|...ee...e.|.....e.e.e|....e...e.| midi events
|..........|...rrr....|.rr.......|......rrr.|....rrrr..| rendering
|..........|..........|ssssssssss|ssssssssss|ssssssssss| sound
Here, the e's represent midi events (but could be gui events just as well).
The r's in the second bar represent the calls to the process function of my
app. During this time, the audio that will be played back during the next
cycle will be rendered. The s'es in the third bar represent the actual sound
as it was rendered during the previous block. The vertical bars represent
blocks of time equivalent to the buffer size.
The best I can think of now is that I have to record midi events during the
first block, process these into audio during the second block (because I
want to take into account all events that occurred during the first block) so
it can be played back during the third. Now, all is fine, but time in the
event-bar is measured in seconds and fractions thereof, but time in the
third bar is measured in samples. How can I translate the time recorded in
the events (seconds) to time in samples? How can I know at which exact time
relative to the current playback time my process() method was called?
If I just measure time at the start of my application I'm afraid things will
drift. Is that correct? How have other people solved this problem? Hope
somebody can help!
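One common scheme, assuming a reasonably current JACK client API: stamp each incoming event with jack_frame_time() in the MIDI thread, and in process() compare it against jack_last_frame_time(), which gives the frame time at the start of the current cycle. Both are frame counters derived from the same clock, so seconds never enter the picture and nothing drifts. The helper below is just the clamping arithmetic for the one-block-late rendering scheme described above; event_offset and nframes_t are made-up names for the example:

```c
#include <stdint.h>

typedef uint32_t nframes_t;   /* stands in for jack_nframes_t */

/* Map an event stamped 'event_frame' (via jack_frame_time() in the
 * MIDI thread) to an offset inside the buffer being rendered now.
 * 'cycle_start' is jack_last_frame_time() for the current process()
 * call.  Events recorded during the previous cycle
 * [cycle_start - buffer_size, cycle_start) keep their position inside
 * the block; anything earlier is late and plays immediately. */
nframes_t event_offset(nframes_t event_frame,
                       nframes_t cycle_start,
                       nframes_t buffer_size)
{
    nframes_t block_start = cycle_start - buffer_size;
    if (event_frame < block_start)
        return 0;                         /* late event */
    nframes_t off = event_frame - block_start;
    return off < buffer_size ? off : buffer_size - 1;  /* clamp */
}
```

If the event source only gives wall-clock seconds, convert once per event with the current sample rate, but anchoring everything to JACK's frame counter is what avoids the drift you're worried about.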
Regards,
Denis
Hi list,
in a local music store in the "clearance corner" I found a Roland CR80
Rhythm Composer, built in 1991. It doesn't seem to have much similarity
with the classic CR78, but at 50 Euro, it sounded like a nice bargain.
Can anyone comment on it - sound quality, stability? MusicMachines etc
don't have much info on it, and I can't even find samples to download
anywhere..
Thanks,
Frank
Hi,
please excuse my stupidity, but:
Why can't the ladccad daemon start up without the jack server running?
From my point of view it would be right for ladccad to bring jack up,
thus allowing, for example, multiple jack sessions with different jack settings.
horsh
Hi all,
I just thought some of you might be interested in how a small minority
of the MacOSX world sees the fruits of our hard work.
Regards,
Erik
Begin forwarded message:
Date: Sun, 26 Oct 2003 18:20:39 +1100
From: Erik de Castro Lopo <nospam(a)mega-nerd.com>
To: undisclosed-recipients: ;
Newsgroups: comp.sys.mac.apps,gnu.misc.discuss
Subject: Open letter to Steve Dekorte
Dear Mr Dekorte,
This has been emailed directly to you as well as being posted to
Usenet in the groups gnu.misc.discuss and comp.sys.mac.apps.
I am writing to you in regard to your shareware application for
MacOSX available here:
http://www.dekorte.com/Software/OSX/SoundConverter/
Please also note that I am not charging you with contravening
anyone's software license. I am however charging you with
behaviour that is both morally repugnant and deceitful.
When I download the tarball you provide (for which you charge
US$10 for a full license) I find the following files:
SoundConverter.app/Contents/Resources/ffmpeg
SoundConverter.app/Contents/Resources/macconverter
SoundConverter.app/Contents/Resources/mppdec
SoundConverter.app/Contents/Resources/qt_export
SoundConverter.app/Contents/Resources/ringtonetools
SoundConverter.app/Contents/Resources/scm2wav
SoundConverter.app/Contents/Resources/sndfile-convert
SoundConverter.app/Contents/Resources/sox
Here is some information about these programs:
Program Size Author Licence
--------------------------------------------------------------
ffmpeg 1.46M Fabrice Bellard LGPL
macconverter 17k ? ?
mppdec 85k Frank Klemm GPL
qt_export 56k David Van Brink Lootware???
ringtonetools 94k Michael Kohn Non-comm. use only
scm2wav 15k Christoph Leuzinger MIT License
sndfile-convert 795k Erik de Castro Lopo GPL/LGPL
sox 3.12M various LGPL
Now compare this with the only part of this tarball actually
written by you, the SoundConverter binary which weighs in at
399k.
From the looks of this, your contribution to the total is
significantly less than 10%. I would also argue that your
contribution (a couple of hours with the MacOSX GUI builder)
is far less than the amount of time and effort put in by the
other people whose work you are using.
How can you possibly justify pocketing US$10 per license for
work to which you have contributed well less than 10% of the
total time and effort?
Furthermore, on the web site listed above, you display a credit
for the guy who designed the icon, while none of the people who
wrote the actual code get any credit whatsoever. Interestingly,
many of the licenses above (GPL, LGPL, MIT etc) were developed
to foster openness in the field of software development, much
like the openness of scientific research. In scientific research
circles, it is considered important to credit the people whose
work yours builds on. In this case, you have failed miserably
to do so. If you were a researcher you would be charged with
academic misconduct and fraud.
Now many people might think that you are just doing what Redhat,
Suse and the other Linux distributors are doing; bundling up
other people's software and selling it. However, I see a big
difference. Redhat and Suse make huge contributions to the Free
Software world; Redhat supporting GCC and GNU libc and Suse
supporting KDE and ALSA.
In light of all this, I am curious to know, what is your
contribution? If you aren't contributing, I suggest that you
take all three of the following steps:
a) Immediately, release the source code to SoundConverter
under a suitable free license.
b) Donate all the money you have collected so far to the
Free Software Foundation or a recognised charity of your
choice.
c) Add some credits on your webpage to the people who did
the vast majority of the work.
I look forward to reading your response in one of the above
public newsgroups.
Regards,
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo nospam(a)mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
"He who writes the code gets to choose his license, and nobody
else gets to complain" -- Linus Torvalds
New Yorkers for Fair Use Action Alert:
--------------------------------------
Please send a comment to the FCC AGAIN, opposing the "Broadcast Flag"
Proposal
Tell the FCC to Serve the Public, Not Hollywood!
Okay, you folks understand this issue -- it's very important to send word to
the FCC in the next few days, that you OPPOSE the Notice of Proposed
Rulemaking #02-230. This rule would make it illegal for ordinary citizens
to own fully functional digital television devices. We've made it easy;
just follow the links below.
1) Please send in your comments to the FCC using the form provided below.
Tell them that the movie industry should not have a special privilege to own
fully-functional digital television devices. Read the alert below for
details.
2) Please forward this alert to any other interested parties that you know
of, who would understand and see the importance of this issue.
3) Volunteer to help us with this and other alerts related to your rights to
flexible information technology in the future. Two roles you can take up
are to become a Press Outreach Campaigner or a Commentator. Simply reply to
this email to show your interest.
New Yorkers for Fair Use Action Alert:
--------------------------------------
Tell the FCC to Serve the Public, Not Hollywood!
Send Public Comments to the FCC AGAIN to Stop the "Broadcast Flag"
Please follow these links to let the FCC know that the public's rights are
at stake:
http://www.nyfairuse.org/action/fcc.flag/
What's Going On:
The FCC is expected to decide this week that digital televisions will be
required to work only according to the rules set by Hollywood, through the
use of a "broadcast flag" assigned to digital TV broadcasts.
As a result of the deliberations of a group called the Broadcast Protection
Discussion Group, which has assiduously discounted the public's rights to
use flexible information technology, Hollywood and leading technology
players have devised a plan that would only allow "professionals" to have
fully-functional devices for processing digital broadcast materials.
Almost a year ago, you responded to our call to tell the FCC that they are
to serve the public, not Hollywood. You delivered more than 4000 comments
to the FCC's public comments system in the last week of
their public comments period for the broadcast flag proposal. As a result
of this, Congress took notice and called a hearing to question the FCC on
the issue. When they asked the FCC's representative whether he believed
they could make this copyright-related policy decision without stepping
beyond their bounds and into Congress's jurisdiction, he answered in one
word: "Yes."
Now, their period of considering the proposal is drawing to an end, and they
are expected to decide to mandate the broadcast flag in a matter of days, by
the end of this month. It's time to demonstrate AGAIN that the public's
interests take priority over the wishes of the MPAA.
The idea of the broadcast flag is to implement universal content control and
abolish the right of free citizens to own effective tools for employing
digital content in useful ways. Hollywood and content producers must not be
allowed to determine the rights of the public to use flexible information
technology. The broadcast flag is theft.
In the ongoing fight with old world content industries, the most essential
rights and interests in a free society are those of the public. Free
citizens are not mere consumers; they are not a separate group from
so-called "professionals." The stakeholders in a truly just information
policy in a free society are the public, not those who would reserve special
rights to control public uses of information technology.
Please let the FCC know that the public's rights are at stake:
http://www.nyfairuse.org/action/fcc.flag/.
Here is a page pulling together and summarizing the comments submitted after
the last comments campaign:
http://www.nyfairuse.org/bfpc/
Here is our Reply Comment:
http://www.nyfairuse.org/bfpc/extdoc/NPRM%2002-231%20Reply%20Comments.pdf
----
The following link is the FCC's "Notice of Proposed Rulemaking" for the
broadcast flag.
http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-02-231A1.pdf
Hi all, I've tried contacting the maintainer of the project (Andy), but
he never bothered to reply, so now I am taking this question to all of
you out there that might have had exposure to this interesting library.
Does anyone know what is the current status of the whole project anyhow?
Any help on this matter is greatly appreciated! When answering any of
the given questions, please include the question in your reply. Thank
you very much!
So here's the excerpt from the letter:
I've recently decided to incorporate libalsaplayer-like functionality into
the upcoming version of my app RTMix. However, with a particular
feature in mind, I am wondering whether libalsaplayer provides it and
if not, whether you'd be willing to add it (or let me add it, although
that might get messy since I am not too familiar with the inner workings
of the lib). So, here it goes:
Apart from all the wonderful features libalsaplayer offers, I am looking
for some additional ones:
I would like to be able to indefinitely loop specific "ranges" of a
particular soundfile (i.e. 2.2secs-4.12secs or whatever).
1. Is it possible to do this and have the music loop continually even if
one changes the direction of playback, so that when alsaplayer runs out
of looping material in either direction it just jumps back to the other
end of the loop and continues (kind of like a ring-buffer)?
2. Is it also possible to do this kind of looping on the whole soundfile
(the gui version of alsaplayer always stops if I let it play 'til the
end or the beginning, depending on which direction I am going).
3. Is it possible to define loop points by addressing a particular
sample number rather than giving time in seconds?
4. How stable is alsaplayer when looping really small chunks of sound
(e.g. a 5-sample loop)?
5. Is alsaplayer capable of ramping such loop points by attenuating,
let's say, the ending 20 samples (it would be cool if this number were
user-selectable) and then ramping up the beginning 20 samples, in order
to alleviate the "pop" that happens when looping a sound whose waveforms
at the beginning and the end do not align?
6. If the feature in question 5 does not exist is there a way to control
output level of a player on a per-sample basis via callback so that one
can implement that outside of the lib?
7. If neither 5 nor 6 is possible, would you be willing to implement
such functionality (i.e. toggle_ramp( bool ); and set_ramp_length( int
); callbacks or something similar)?
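To make question 5 concrete, a minimal linear ramp over the loop edges might look like the following. This is only a sketch of the idea, and ramp_loop_edges is a hypothetical helper, not alsaplayer API:

```c
/* Attenuate the last 'ramp' samples of a loop region and ramp up the
 * first 'ramp' samples, in place, to soften the click at the loop seam.
 * 'ramp' corresponds to the user-selectable length from question 5. */
void ramp_loop_edges(float *buf, int loop_start, int loop_end, int ramp)
{
    int len = loop_end - loop_start;
    if (ramp * 2 > len)
        ramp = len / 2;   /* keep the two fades from overlapping */
    for (int i = 0; i < ramp; i++) {
        float g = (float)i / (float)ramp;   /* gain: 0 .. just under 1 */
        buf[loop_start + i]   *= g;         /* fade in at loop start */
        buf[loop_end - 1 - i] *= g;         /* fade out at loop end */
    }
}
```

A fancier version would crossfade the loop end into the material before the loop start instead of fading to silence, which avoids the amplitude dip at the seam; the structure is the same.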
I would greatly appreciate your feedback on this matter as that will
greatly assist me in determining how to go about implementing such
functionality in my app.
Thank you very much! Looking forward to hearing from you. Sincerely,
Ivica Ico Bukvic, composer & multimedia sculptor
http://meowing.ccm.uc.edu/~ico
P.S. How do you implement reading of a sound faster and slower than it
sounds in such a gradual fashion? Do you adjust the sampling rate of the
DSP and if so does that affect other streams coming from the
libalsaplayer, or is this something that is sound-specific (and if so,
how)?
Just a FYI. Those of you on ardour-dev will have parts of this list in
various "latest CVS commit" messages. People using old versions of
ardour should update. This new beta has lots of great stuff, with more
to come. The new Mantis bug reporting/tracking system is making life
way better for everyone.
--p
----------------------------------------------------------------------
* additions/updates to AUTHORS
* point people at ardour.sf.net/mantis for bug reporting
* new region selection model
- button1 press always selects
- shift-button1 press adds an unselected region,
unselects a selected region
- more comfortable "selected region" color
* new align_selection and align_relative commands
* alignment now uses edit cursor
* new commands to move playhead + edit cursor
to region starts, ends and sync points
* new zoom control interface, permitting arbitrary zoom
levels (not just power-of-2)
* new "add track/bus" dialog, allowing N to be added
at once
* mixer meters now support metering to +6dB, with new
pixmap so that only >0dB is red.
* initial implementation of 2-stage "clean up unused audio files"
* fix for transport lockup
* fix for crashing bug when exporting at non-session frame rate
* better functionality when editing AudioClocks
* removed musical (BBT) time from Selection model
* snap to region start/end/sync
* snap to edit cursor
* avoid neon when picking new colors
* new transport pixmaps that are theme/skin resistant
* tempo/measure lines now extend past end of session
* hide editor mixer strip when clicking on current mixer track
* brush mode works again ("paint" a region onto snap points)
* added back "embed" functionality for referring to external
soundfiles without importing
* scroll playhead or edit cursors in the timeline rulers
* check all X keyboard modifiers at startup, warn about duplicates
* fixed crashing bug when marker was removed
* allowed command-line specification of JACK client name
* fixed mouse drags of torn off windows
* code cleanup and redesign