This is totally alpha/beta/gamma, i.e. unfinished and unpolished software. Release
early, release often is my guide here ;)
Here's the summary:
ProcessPP is the "realisation" of a rather silly idea: A livecoding
environment for C++. As C++ is not interpreted, but compiled, this indeed
does sound like a really silly idea. But the twist is that while “true”
livecoding cannot be done, it can at least be made as easy as possible to try
out new code snippets.
ProcessPP consists of three distinct parts:
- processpp: A host application that can load shared objects and execute the
code contained in them
- qprocesspp: A Qt GUI application which serves as an editor for hacking away
at functions and compiling them easily into a shared object; it then tells
processpp to load and run the shared object
- libprocesspp: A shared library which is the “glue” between processpp and
user-generated shared object files. Right now it only contains code for the
shared object to find out things like samplerate, buffersize, number of
input/output channels, etc.
For illustrative purposes here are the steps involved in making noise:
- File->Open from Template (choose process.template.cc)
- Add some code in the inner loop to make noise
- Process->Run
- Hear the noise!!!
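To give a concrete idea of what "add some code in the inner loop" might look
like, here is a purely illustrative sketch. The real process.template.cc
defines its own callback signature and variable names (the ones below are
invented), and libprocesspp would supply values such as samplerate and
buffersize:

    #include <cstdlib>

    // Hypothetical callback: the real template's signature and names will
    // differ; this only shows the kind of snippet meant above.
    extern "C" void process(float *out, unsigned int nframes)
    {
        for (unsigned int i = 0; i < nframes; ++i) {
            // crude white noise, scaled down to spare ears and speakers
            out[i] = 0.1f * (2.0f * std::rand() / (float)RAND_MAX - 1.0f);
        }
    }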
If the above don’t work, take a look at the log in the lower half of the main
window. Often some slight tweaks to the config file
(~/.config/Ugh!/process++.conf) need to be made.
Get it here: http://tapas.affenbande.org/wordpress/?page_id=91
Regards,
Flo
--
Palimm Palimm!
http://tapas.affenbande.org
Hello!
Sorry for this, but my skills have left me too unskilled... AGAIN... :-(
I'm working with pthreads and now I'd like to hand over main's argv to the
thread function run_me.
int main(int argc, char *argv[]);
void *run_me(void *param);
How do I properly use param as a char *[] inside run_me?
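Here is a minimal sketch of what I think should work (assuming argv just
needs to stay readable while the thread runs, and main joins the thread
before returning), but I am not sure it is the proper way:

    #include <pthread.h>
    #include <stdio.h>

    void *run_me(void *param)
    {
        /* cast the opaque pointer back to the type that was passed in */
        char **argv = (char **) param;
        int i;

        /* argv is NULL-terminated (argv[argc] == NULL), so this is safe */
        for (i = 0; argv[i] != NULL; ++i)
            printf("argv[%d] = %s\n", i, argv[i]);

        return NULL;
    }

    int main(int argc, char *argv[])
    {
        pthread_t tid;

        /* argv stays valid because we join before main returns */
        pthread_create(&tid, NULL, run_me, argv);
        pthread_join(tid, NULL);
        return 0;
    }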
Thanks for any help!
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
Download from http://www.notam02.no/arkiv/src/?M=D
jack_capture
============
jack_capture is a program for recording soundfiles with jack. Its default
operation is to capture whatever sound is going out to your speakers into
a file. (But it can do a number of other operations as well...)
Note:
This version includes Hermann Meyer's jack_capture_gui2 program.
"jack_capture_gui2" is a nice graphical frontend
for jack_capture with lots of options.
Many thanks to Hermann for the contribution.
Changes 0.9.23 -> 0.9.30:
*Added Hermann Meyer's jack_capture_gui2 program.
jack_capture_gui2 is a nice graphical frontend
for jack_capture with lots of options.
Many thanks to Hermann for the contribution.
*Don't exit in case a port is not found.
*Print runtime warning and error messages at the top of the
console to avoid printing the console meter yet another time.
(It's also much prettier.)
*Fixed a bug that could cause a segfault (apparently especially
after the switch from calloc to my_calloc) when specifying
--port more than once. Thanks to Peder Hedlund for spotting
the bug.
*Print error instead of segfaulting when a specified jack port
does not exist.
*Removed -g option and changed -O0 to -O2. (Oops, don't know
how long that's been there)
*Make sure the stop semaphore is initialized before it might
be used.
*Changed the --recording-time / -d option to record exactly the
correct number of frames. (The format for the option is still
in seconds though.) This fixes the problem where the wall
clock and the soundcard clock drift apart.
*Always increase the buffer size by 2 seconds when more than
half the buffer is used, unless the maximum buffer size is reached.
*Added the --maxbufsize / -MB option which sets maximum buffer size.
Default value is 40 seconds.
*Decreased the default buffer size from 20 to 10 seconds.
*Changed internal data representation from lockless ringbuffer to
lockless lifo and fifo stacks. Unmodified lifo/fifo code taken
from midishare. (Copyright Grame 1999-2005)
Rollendurchmesserzeitsammler v0.0.7
------------------------------------
The Audio Rollendurchmesserzeitsammler is a conservative garbage
collector especially made for running inside an audio DSP thread.
0.0.5 -> 0.0.7
* Cleaned up source a bit.
* Fixed a bug in "tar_entering_audio_thread"
which caused it to return false if currently copying a different heap.
* Cleaned up the critical section handling between the DSP thread and
the sweep thread. (it was really messy)
I've been looking around for a library to read and write SFZ files;
SFZ is an open sampler format released by Cakewalk:
http://www.cakewalk.com/DevXchange/sfz.asp
Finding none, I thought I might try my hand at writing a library for
this myself, as there is no embedded wave information like with Gig
files. SFZ is simply a text file to be parsed.
Now, I know about writing a good header file, and its associated class,
and all that, but I have no knowledge of how to write it as a dynamic
library. Google searches on every possible permutation have been
worthless to me as well.
I would prefer to write it in C++, as that's what I know, and even then,
not too well, which is why I thought I'd start with something simple like
parsing a text file. If anyone has any advice, recommendations, or
ideas, I'll happily listen and learn. I have yet to think too much about
how the data will be stored in the class, and what methods to make
available to access it, so if anyone knows any best practices there, I'd
really like to know. Consider this a feeler post.
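For a rough idea of the "dynamic library" part on Linux, here is a purely
hypothetical sketch (all names are invented). The parsing itself is ordinary
C++; turning it into a shared library is mostly a matter of compiling with
-fPIC and linking with -shared:

    // sfz_parser.h : hypothetical public interface
    #ifndef SFZ_PARSER_H
    #define SFZ_PARSER_H

    #include <map>
    #include <string>
    #include <vector>

    namespace sfz {

    // one <region> (or <group>) block and its opcode=value pairs
    struct Section {
        std::string name;                            // e.g. "region"
        std::map<std::string, std::string> opcodes;  // e.g. sample -> "kick.wav"
    };

    class File {
    public:
        // reads and parses an .sfz file; returns false on I/O or parse errors
        bool load(const std::string &path);
        const std::vector<Section> &sections() const { return sections_; }
    private:
        std::vector<Section> sections_;
    };

    } // namespace sfz

    #endif

    // Building it as a shared library (sfz_parser.cc holds the implementation):
    //   g++ -fPIC -c sfz_parser.cc
    //   g++ -shared -Wl,-soname,libsfz.so.0 -o libsfz.so.0.0.0 sfz_parser.o

Programs would then just include the header and link against the library;
nothing about the class design itself changes because it lives in a shared
object.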
I'd ultimately want this for a future project, which you can guess at by
now.
Thank you for the help!
Regards,
Darren Landrum
I am doing some preliminary testing of CUDA for audio. Version 2 (final)
has been out for a couple of days, and that is also what I am using.
In order to get anything done, one will always have to do something else
first; here: getting some data transferred to the board. Surprisingly this
appears to be ten times harder than getting the data back, at least for
very small data sets representative of a bunch of on/off events or a
millisecond's worth of samples.
For 1024 bytes the transfer will take about 0.2 ms. That is 20% of the
available time if we use a time granularity of a single millisecond.
OTOH this appears to be mostly an initial constant; (much) more data can
be transferred in the exact same amount of time if that is what is
needed.
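For reference, here is a minimal sketch of how such a number can be measured
with CUDA events (synchronous cudaMemcpy from pageable host memory; pinned
memory via cudaMallocHost would be the obvious thing to compare against):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        const int bytes = 1024;        // one small "event block"
        const int iterations = 1000;

        char host[bytes];
        char *dev = 0;
        cudaMalloc((void **) &dev, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        // time many host-to-device copies and average them
        cudaEventRecord(start, 0);
        for (int i = 0; i < iterations; ++i)
            cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("average H2D copy of %d bytes: %.3f ms\n", bytes, ms / iterations);

        cudaFree(dev);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        return 0;
    }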
Now my card (8400GS) is neither the latest nor greatest and I therefore
wonder if anybody with better equipment is experiencing the same
phenomenon? This card does not support asynchronous transfers, which
otherwise might have been the thing to use here.
mvh // Jens M Andreasen
hello,
I recently purchased a MADIface with ExpressCard.
Trying to compile a Debian source kernel 2.6.26-4, with replaced hdspm.c
and hdspm.h, following the instructions on
http://wiki.linuxproaudio.org/index.php/Driver:hdspm
I get the following compile errors:
ERROR: "__muldf3" [sound/pci/rme9652/snd-hdspm.ko] undefined!
ERROR: "__divdf3" [sound/pci/rme9652/snd-hdspm.ko] undefined!
ERROR: "__fixdfsi" [sound/pci/rme9652/snd-hdspm.ko] undefined!
ERROR: "__adddf3" [sound/pci/rme9652/snd-hdspm.ko] undefined!
ERROR: "__floatsidf" [sound/pci/rme9652/snd-hdspm.ko] undefined!
I tried to get more info on the net about these errors but did not find
useful hints. Any help would be appreciated!
peter
Since returning from holidays (i.e. 10 days) I've been running
patchage as my visual interface to Jack. Never managed to install
it from source (there was always something missing or at least one
compile error), but recently discovered the CCRMA package (thanks
Fernando !).
For the sort of things I'm doing, usually involving a lot of
interconnected Jack clients, it provides a much clearer picture
than qjackctl, and I like that a lot.
Some suggestions / feature requests:
- Include the window scroll offsets in .patchagerc
- Make sure new apps are always displayed in the
visible part of the window.
- Some way to select and connect groups of adjacent
ports (I'm using 64ch soundcards, and most connections
are multichannel).
- Optionally collapse such sets of ports into a single
visual object.
Long term:
- Multiple tabs, each one controlling a Jackd on a
remote headless machine. This will require splitting
the app into a server and client with a network
connection in between.
Ciao,
--
FA
Laboratorio di Acustica ed Elettroacustica
Parma, Italia
O tu, che porte, correndo si ?
E guerra e morte !
Hey, I thought about extending MP3FS, a user level file system that
shows flac files as mp3s to user-space programs (see mp3fs.sf.net), to
make it work with my 96kHz/24 bit music rips.
MP3FS uses liblame and libflac on the inside, but only converts
standard 44.1kHz/16bit files at the moment. Looking at the
not-too-well-documented lame library I think that Lame only supports
sample rates up to 48kHz, so I would need to convert the sample rate
and bit depth through the use of another library. I finally found
libsndfile today, but thought I might check with the experts (that's
you) whether it should work (if it supports what I want to do), or if I
should use something else.
I would use libsndfile to convert 24 bit/96kHz and 16 bit/44.1kHz flac
files to 16 bit/44.1 kHz uncompressed audio, and then use liblame to
convert this to mp3.
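For what it's worth, libsndfile can do the FLAC decoding and the 24 to 16 bit
conversion, but it does not resample by itself; a separate resampler such as
libsamplerate (by the same author) would be needed for the 96 kHz to 44.1 kHz
step. A rough, hypothetical sketch of the decode-and-downsample stage, with
error handling mostly omitted:

    #include <sndfile.h>
    #include <samplerate.h>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s in.flac out.wav\n", argv[0]);
            return 1;
        }

        SF_INFO in_info;
        memset(&in_info, 0, sizeof in_info);
        SNDFILE *in = sf_open(argv[1], SFM_READ, &in_info);
        if (!in) {
            fprintf(stderr, "cannot open %s\n", argv[1]);
            return 1;
        }

        // read the whole file as floats (fine for a sketch, not for huge files)
        std::vector<float> input(in_info.frames * in_info.channels);
        sf_readf_float(in, &input[0], in_info.frames);
        sf_close(in);

        // resample to 44.1 kHz with libsamplerate
        double ratio = 44100.0 / in_info.samplerate;
        long out_frames = (long)(in_info.frames * ratio) + 1;
        std::vector<float> output(out_frames * in_info.channels);

        SRC_DATA src;
        memset(&src, 0, sizeof src);
        src.data_in = &input[0];
        src.input_frames = in_info.frames;
        src.data_out = &output[0];
        src.output_frames = out_frames;
        src.src_ratio = ratio;
        src_simple(&src, SRC_SINC_BEST_QUALITY, in_info.channels);

        // write 16 bit / 44.1 kHz WAV, which could then be handed to lame
        SF_INFO out_info;
        memset(&out_info, 0, sizeof out_info);
        out_info.samplerate = 44100;
        out_info.channels = in_info.channels;
        out_info.format = SF_FORMAT_WAV | SF_FORMAT_PCM_16;
        SNDFILE *out = sf_open(argv[2], SFM_WRITE, &out_info);
        sf_writef_float(out, &output[0], src.output_frames_gen);
        sf_close(out);
        return 0;
    }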
Regarding the downsampling I would like to know if I would get any
funny artifacts when downsampling 96kHz material to 44.1kHz (not even
division). Would I be better off converting to 48kHz for 96kHz
material?
I do not know much about lame, mp3 encoding, or audio development
apart from the basics, so guide me into safer waters if I have drifted
into unknown waters here ;)
br
Carl-Erik Kopseng
Dear All,
The following position may be of interest to you.
Please forward to anyone interested. Apologies for double posting.
==== About the Barcelona Media Audio Group ====
Fundacio Barcelona Media Universitat Pompeu Fabra is a research centre
created to foster the competitiveness of the Catalan and Spanish media
and communication industry through innovative research activities and
projects. BM promotes technology generation and development; research
and creativity; transfer of research results to industry; promotion of
the research results to society at large; training in all areas of
communication; and social awareness of the communication industry in a
culture of innovation.
The Audio Group's research embraces the whole chain of audiovisual
production, focusing especially on 3D surround sound technologies, from
capture, to postproduction, to exhibition. Two main general goals are
to automate the workflow (by automatic audio adaptation to given 3D
scenes), and to make it easily adaptable to any final exhibition system
(surround 5.1, 7.1, 22.2, binaural or 3D stereo, etc.).
One strong line of research of the group is the reproduction of acoustic
fields in 3D virtual environments, using computer simulations to predict
what any source would sound like in a given virtual world. The group
applies and improves Finite-Difference Time-Domain algorithms for low
frequencies and Ray-Tracing for high frequencies. Such technologies
are then integrated in real-time, interactive multimedia systems.
Audio group home page: http://www.barcelonamedia.org/linies/10/en
==== Profile ====
We are looking for one or more experienced software developers. The
candidate should be self-motivated, results-oriented, and hard-working.
The candidate should preferably have a degree in Computer Science,
although other profiles might be taken into account.
==== Required skills ====
* Software-engineering techniques and methods for developing large
software systems (design patterns, agile methodologies, version
control systems, etc.).
* Real-time programming techniques, including lock-free programming and
multi-threading.
* Programming languages: C, C++, Python.
* Operating systems: GNU/Linux and Mac OS X.
* A keen sense for the aesthetics of code, documentation, and user
interfaces. Thoroughness in all aspects of software development.
* The ability and willingness to interact in a team, using agile
methodologies.
Not required, but valuable skills:
* Real-time multimedia environments (PureData, Max/MSP,
Supercollider, CLAM, etc.).
* Plugin architectures: LADSPA, LV2, VST, Audio Units, etc.
* Knowledge of common protocols such as OSC, MIDI, etc.
* Knowledge in digital signal processing and/or acoustics
* 3D modeling: Blender or Maya or 3D studio
* Qt
* Scons
==== What we offer ====
We offer an opportunity to work in creative projects in the field of 3D
audio for media productions, with applications ranging from 3D digital
cinema, to sports broadcasting, and videogames.
Side opportunities: performing strategic research in a promising new
domain, working in a small-to-medium multidisciplinary team, collaborating
with people from industry and from other academic research groups, and
establishing contacts with the international audio research community
through attendance at international conferences...
==== How to apply ====
To apply, send an email to jobs(a)barcelonamedia.org / cc:
toni.mateos(a)barcelonamedia.org, pau.arumi(a)barcelonamedia.org with the
subject "3D Audio Jobs", including:
* A brief presentation letter stating your interest in the offer.
* A CV
* Optionally, code samples (non open-source samples will be
treated as confidential)
==== More background about Barcelona Media ====
BM grew from the Communication Station set up by Universitat Pompeu
Fabra in 2001. It is a member of the Catalan and Spanish network of
Technology Centres, and is the only one devoted to the Media sector.
BM’s trustees are representatives of the Media industry, the Catalan
Government, Barcelona City and four universities. BM has an extremely
strong record in European collaborative R&D and Innovation projects,
both as partner and coordinator. BM is currently involved in 14 EU
funded research projects in information and communication technologies
with over €5 million in EC funding. BM was coordinator of an FP6 IP and 2
STREPs, including IP-RACINE which researched and developed digital
cinema technologies ‘from scene to screen’. It is now co-ordinating the
FP7 ICT IP 2020 3D Media, developing 3D digital cinema and home
entertainment. Other directly relevant projects are IP SALERO
(‘intelligent content’ objects with context-aware behaviours), SEMEDIA
(Search Environments for MEDIA) and FP5 SPEED-FX (very high resolution
real-time graphic interaction for digital cinema).