Hey, I have been thinking about extending MP3FS, a user-level file
system that presents FLAC files as MP3s to user-space programs (see
mp3fs.sf.net), to make it work with my 96 kHz/24-bit music rips.
MP3FS uses liblame and libflac internally, but at the moment it only
converts standard 44.1 kHz/16-bit files. Looking at the
not-too-well-documented LAME library, I think LAME only supports
sample rates up to 48 kHz, so I would need to convert the sample rate
and bit depth using another library. I finally found libsndfile today,
but thought I might check with the experts (that's you) whether it
supports what I want to do, or whether I should use something else.
I would use libsndfile to convert 24-bit/96 kHz and 16-bit/44.1 kHz
FLAC files to 16-bit/44.1 kHz uncompressed audio, and then use liblame
to convert that to MP3.
Regarding the downsampling, I would like to know whether I would get
any audible artifacts when downsampling 96 kHz material to 44.1 kHz
(not an integer division). Would I be better off converting 96 kHz
material to 48 kHz instead?
I do not know much about LAME, MP3 encoding, or audio development
beyond the basics, so guide me into safer waters if I have drifted
into unknown waters here ;)
br
Carl-Erik Kopseng
Dear All,
The following position may be of interest to you.
Please forward to anyone interested. Apologies for double posting.
==== About the Barcelona Media Audio Group ====
Fundacio Barcelona Media Universitat Pompeu Fabra is a research centre
created to foster the competitiveness of the Catalan and Spanish media
and communication industry through innovative research activities and
projects. BM promotes technology generation and development; research
and creativity; transfer of research results to industry; promotion of
the research results to society at large; training in all areas of
communication; and social awareness of the communication industry in a
culture of innovation.
The Audio Group's research embraces the whole chain of audiovisual
production, focusing especially on 3D surround sound technologies, from
capture, to postproduction, to exhibition. Two main general goals are
to automate the workflow (by automatic audio adaptation to given 3D
scenes) and to make it easily adaptable to any final exhibition system
(surround 5.1, 7.1, 22.2, binaural or 3D stereo, etc.).
One strong line of research of the group is the reproduction of acoustic
fields in 3D virtual environments, using computer simulations to predict
what any source would sound like in a given virtual world. The group
applies and improves Finite-Difference Time-Domain (FDTD) algorithms
for low frequencies and Ray-Tracing for high frequencies. These
technologies are then integrated into real-time, interactive multimedia
systems.
Audio group home page: http://www.barcelonamedia.org/linies/10/en
==== Profile ====
We are looking for one or more experienced software developers. The
candidate should be self-motivated, results-oriented, and hard-working.
The candidate should preferably have a degree in Computer Science,
although other profiles may be taken into account.
==== Required skills ====
* Software-engineering techniques and methods for developing large
software systems (design patterns, agile methodologies, version
control systems, etc. ).
* Real-time programming techniques including lock-free and
multi-threading.
* Programming languages: C, C++, Python.
* Operating systems: GNU/Linux and Mac OS X.
* A keen sense for the aesthetics of code, documentation, and user
interfaces. Thoroughness in all aspects of software development.
* The ability and willingness to interact in a team, using agile
methodologies.
Not required, but valuable skills:
* Real-time multimedia environments (PureData, Max/MSP,
Supercollider, CLAM, etc.).
* Plugin architectures: LADSPA, LV2, VST, Audio Units, etc.
* Knowledge of common protocols such as OSC, MIDI, etc.
* Knowledge in digital signal processing and/or acoustics
* 3D modeling: Blender or Maya or 3D studio
* Qt
* Scons
==== What we offer ====
We offer an opportunity to work in creative projects in the field of 3D
audio for media productions, with applications ranging from 3D digital
cinema, to sports broadcasting, and videogames.
Side opportunities: performing strategic research in a promising new
domain, working in a small-to-medium multidisciplinary team,
collaborating with people from industry and from other academic
research groups, and establishing contacts with the international audio
research community through attendance at international conferences.
==== How to apply ====
To apply, send email to jobs(a)barcelonamedia.org / cc:
toni.mateos(a)barcelonamedia.org, pau.arumi(a)barcelonamedia.org with the
subject "3D Audio Jobs"
* A brief presentation letter stating your interest in the offer.
* A CV
* Optionally, code samples (non open-source samples will be
treated as confidential)
==== More background about Barcelona Media ====
BM grew from the Communication Station set up by Universitat Pompeu
Fabra in 2001. It is a member of the Catalan and Spanish network of
Technology Centres, and is the only one devoted to the Media sector.
BM’s trustees are representatives of the Media industry, the Catalan
Government, Barcelona City and four universities. BM has an extremely
strong record in European collaborative R&D and Innovation projects,
both as partner and coordinator. BM is currently involved in 14 EU
funded research projects in information and communication technologies
with over 5 million € EC funding. BM was coordinator of an FP6 IP and 2
STREPs, including IP-RACINE which researched and developed digital
cinema technologies ‘from scene to screen’. It is now co-ordinating the
FP7 ICT IP 2020 3D Media, developing 3D digital cinema and home
entertainment. Other directly relevant projects are IP SALERO
(‘intelligent content’ objects with context-aware behaviours), SEMEDIA
(Search Environments for MEDIA) and FP5 SPEED-FX (very high resolution
real-time graphic interaction for digital cinema).
Hartmut Noack wrote:
> Cory K. schrieb:
>
> > * Shipping the -generic kernel with this 8.10 release of Ubuntu
> > Studio and let people compile their own -rt kernel.
>
> This could be done in any distro, so there would not be a real
> Ubuntu Studio anymore.
I don't agree completely since our focus is not *just* audio, though we
know that's our biggest user segment.
> The major strength of Ubuntu Studio is its
> near-perfect integration of an audio system with a friendly desktop distro.
> I can run VMware and NVIDIA drivers easily with the Ubuntu -rt kernel;
> it would be a major p.i.t.a. to make stuff like that run with a
> self-made kernel.
These could also be the very things you lose.
> > * Ship an out-of-sync 2.6.26-rt kernel, hoping for a Stable Update
> > Release in Intrepid with .27 later.
>
> This would be perfectly acceptable for me :-)
>
>
> best regs
> HZN
>
> BTW: what is so extremely important in .27? New hardware support not
> achievable with .26?
Much better suspend/resume support along with many bug-fixes and device
support.
-Cory K.
Hello all. This is going out to a couple of lists so I can get as wide
an opinion as I can. I'll correct any inaccuracies as this discussion
progresses.
Quick intro: I'm Cory K., lead on Ubuntu Studio. Hi :)
So here's the pickle we're in as I understand it.
The way kernels are managed in Ubuntu has changed radically in this
release, causing our kernel guy *much* more work than ever before.
We've had to work very hard to get upstream -rt to support the .26
kernel, but now mainline Ubuntu has moved to 2.6.27, which upstream
-rt doesn't *look* to support yet.
So because of these, and other issues that don't matter to the question
I have, we're looking at these options:
* Shipping the -generic kernel with this 8.10 release of Ubuntu
Studio and letting people compile their own -rt kernel, with a later
PPA release of -rt for testing as upstream support happens.
* Ship an out-of-sync 2.6.26-rt kernel, hoping for a Stable Update
Release in Intrepid with .27 later.
Or some combination of those.
Thoughts on what to do? What do users want? (please be mature)
-Cory K.
Kryz, your mail server seems to be down, and when I checked whether you
had a running WWW site, I got this message:
"Chwilowo nic tu nie ma" [Nothing here for the moment]
So, back to the list ... This time corrected as per the second mail:
On Tue, 2008-08-26 at 11:12 +0100, Krzysztof Foltman wrote:
> Jens M Andreasen wrote:
> > I am doing some preliminary testing of CUDA for audio, Version 2 (final)
> > has been out for a couple of days, and this is also what I am using.
>
> Does it require the proprietary drivers and/or Nvidia kernel module?
>
Yes, and not only that. The proprietary drivers distributed with, say,
Mandrake, Ubuntu, et al. won't work either. Uninstall those, change your
X setup to vesa (to stop recursive nvidia installer madness), and then
get your CUDA driver and compiler from:
http://www.nvidia.com/object/cuda_get.html
> What kind of things is the gfx card processor potentially capable of
> doing? Anything like multipoint interpolation for audio resampling
> purposes? Multiple delay lines in parallel? Biquads? Multichannel
> recording to VRAM?
Multichannel recording by itself would be a waste of perfectly good
floating-point clock cycles, but anything that you can map to a wide
vector (64 to 196 elements) is up for grabs: a 196-voice multitimbral
synthesizer perhaps, or 64 channel strips with basic filters and a
compressor/noise gate for remixing. The five muladds needed for a single
biquad filter, times the number of bands you need to equalize, fit the
optimal programming model quite well.
The linear 2D interpolator is also available, and even cached. Perhaps
not the world's most useful toy for audio resampling, but it could find
its way into some variation of wavetable synthesis. It can be set up to
wrap around at the edges, which I find kind of interesting.
Random access to main (device) memory is - generally speaking - a bitch
and a no-go if you cannot wrap your head around ways to load and use
very wide vectors.
There are some 8096 fp registers to load into, though, so all is not
lost. Communication, permutation, and exchange of data between vector
elements, OTOH, is fairly straightforward and cheap by means of a
smallish shared memory on chip.
The more you can make your algorithm(s) look like infinitely brain-dead
parallel iterations of multiply/add, the better they will make use of
the hardware. The way I see it, the overall feel of your strategy should
be something like "The Marching Hammers" animation (from Pink Floyd: The
Wall.)
> Is it possible to confine all the audio stream transfer between gfx
> and audio cards to kernel layer and only implement control in user
> space? (to potentially reduce xruns, won't help for control latency
> but at least it's some improvement)
>
You mean something like DMA? Yes, I would have thought so, but this is
apparently not always the case, especially not on this very card that I
have here. :-/
The CUDA program running on the device will have priority over X though.
So no blinking lights (nor printfs) before your calculation is done. For
real-time work, I reckon this as a GoodFeature (tm)!
Potentially this can also hang the system if you happen to implement an
infinite loop (so don't do that ...)
> Would it be possible to use a high level language like FAUST to
> generate CUDA code? (by adding CUDA-specific backend)
>
The problem would be to give Faust a good understanding of the memory
model and how to keep individual vector elements away from collectively
falling over each other.
But I must admit that I am not too familiar with what Faust is actually
doing? Would it be of any help to you with a library of common higher
level functionality like FFT and BLAS?
---8<---------------------------------------------
- "CUBLAS is an implementation of BLAS (Basic Linear Algebra
Subprograms) on top of the NVIDIA(r) CUDA(tm) (compute unified
device architecture) driver. It allows access to the computational
resources of NVIDIA GPUs. The library is self-contained at the API
level, that is, no direct interaction with the CUDA driver is
necessary."
------8<................................
But observe that:
... - "Currently, only a subset of the CUBLAS core functions is
implemented."
/j
> Krzysztof
>
Hi,
After the latest Intel announcement that they are indeed working
seriously on wireless electricity, which uses resonance to transmit
energy, I am wondering whether anyone has ideas on whether it would
affect audio quality, since resonance is an important part of the
ambience of live sound.
Cheers.
--
Patrick Shirkey
Boost Hardware Ltd
Hi,
One of the other technologies demoed last weekend by Intel was
so-called mind-control hardware, which is effectively a little cap that
picks up electrical signals from the brain and can be used to control
software/hardware depending on the thoughts/actions of the wearer. I
have seen video games where this is used to great effect to, for
example, light a fire by meditating: the deeper the meditation, the
warmer/brighter the fire becomes.
Does anyone know of any development that is going on in Linux Audio
circles that applies this technology?
Cheers.
--
Patrick Shirkey
Boost Hardware Ltd.
I've been thinking about the same thing. Specifically, I wanted to try
using an OCZ NIA for drum input into Hydrogen. I've seen a few videos
of people demoing the unit, but I don't personally know anyone who has
tried one. There is no Linux support for them either, AFAIK. I'd be
interested to hear about it if you make any progress.
Nathanael