Amongst many other things, this will have a new child of the 'About' window.
This will consist of an acknowledgement of all the people who've helped Yoshimi
in some way. If anyone particularly *doesn't* want their name on it please let
me know. Then again, if there are missing names I'd like to know about that too!
This list (Yoshimi_Helpers) has only existed in the source files since
V 1.5.1 and is my best attempt at finding everybody.
Release is planned for later this month.
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hi all,
Sorry to bother you again with this. I'm running into the same problem as before (see the latter part of the [LAD] jackd not using jackdrc… thread).
Apparently my last fix involved some magic rather than replicable logic… :-( (i.e. the problem seemed to disappear on that system without my knowing exactly why.)
Now I am installing my JACK client again on another fresh install of Ubuntu 17.04, and I get the same error.
I have installed jackd1; to be exact, these packages are current:
jackd/artful,artful,now 5 all [installed,automatic]
jackd1/artful,now 1:0.125.0-2 amd64 [installed]
jackd1-firewire/artful,now 1:0.125.0-2 amd64 [installed,automatic]
libjack0/artful,now 1:0.125.0-2 amd64 [installed]
qjackctl/artful,now 0.4.5-1ubuntu1 amd64 [installed,automatic]
When I start my client, jackd is automatically started, but I get the error:
connect(2) call to /dev/shm/jack-0/default/jack_0 failed (err=No such file or directory)
By browsing the JACK sources I found that the message seems to come from server_connect() in libjack/client.c.
Peeking into the /dev/shm directory, it seems /dev/shm/jack-0/default/jack_0 is briefly created, but disappears again, presumably just before the client is able to connect.
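For reference, a rough, dependency-free polling loop like the one below (the path is simply copied from the error message above; this is only a sketch) can be used to watch the socket appear and vanish:

import os
import time

PATH = "/dev/shm/jack-0/default/jack_0"   # path from the error message above
present = os.path.exists(PATH)
print("initial state:", "present" if present else "absent")
while True:
    now = os.path.exists(PATH)
    if now != present:
        state = "created" if now else "removed"
        print(f"{time.monotonic():.3f}s: {PATH} {state}")
        present = now
    time.sleep(0.001)   # poll at roughly 1 kHz; crude but dependency-free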
I have been racking my brains over this, but I really have no clue why it is happening, especially as this has worked fine in the past.
I would really like to know what could possibly cause this error and how to fix it.
thanks,
fokke
From the LAU list:
On 08.09.18 at 17:23, Len Ovens wrote:
> I would be willing to help with a governing body if there are others so
> inclined.
I'd definitely be interested in helping OSC stay relevant.
I've dabbled with different OSC-to-X bridges in the past [1] [2] [3]. My
main interest is controlling applications that talk to some MIDI
device and run on a desktop, Raspberry Pi or similar, from another
application on an Android device, since MIDI and USB-OTG support on
Android devices is still somewhat a matter of luck.
The protocols I've seen so far that embed MIDI in OSC are often too
simplistic. If I can't transmit SysEx, for example, it's no use to me.
And what is the advantage of the verbose commands MidiOSC/midioscar use
over just using the MIDI data type OSC provides?
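For illustration, here is a hand-packed sketch (the addresses "/midi/event" and "/midi/sysex" are just made up): the OSC 'm' type tag carries exactly four bytes (port id, status, data1, data2), so channel messages fit natively, while a SysEx dump has to go into a blob anyway:

import struct

def _pad(b: bytes) -> bytes:
    """Pad to a 4-byte boundary, as OSC requires for strings and blobs."""
    return b + b"\x00" * (-len(b) % 4)

def osc_midi(address: str, port_id: int, status: int, data1: int, data2: int) -> bytes:
    """One channel message using the OSC 'm' type tag (exactly 4 bytes)."""
    return (_pad(address.encode() + b"\x00")
            + _pad(b",m\x00")
            + bytes([port_id, status, data1, data2]))

def osc_sysex(address: str, sysex: bytes) -> bytes:
    """An arbitrary-length SysEx dump sent as an OSC blob ('b' type tag)."""
    return (_pad(address.encode() + b"\x00")
            + _pad(b",b\x00")
            + struct.pack(">i", len(sysex))
            + _pad(sysex))

# Control change 7 (volume) on channel 1, and a universal identity request:
cc = osc_midi("/midi/event", 0, 0xB0, 7, 100)
idreq = osc_sysex("/midi/sysex", bytes([0xF0, 0x7E, 0x7F, 0x06, 0x01, 0xF7]))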
Also, the MIDI specification has had a few additions in the past years,
and an OSC-MIDI protocol should make sure to support those as well.
Chris
[1] https://github.com/SpotlightKid/osc2rtmidi
[2] https://github.com/SpotlightKid/osc2mqtt
[3] https://github.com/SpotlightKid/touchosc2midi
I have set up and run a number of tests now. Mido under Python 2 uses
5-7% of two different 2.4GHz+ cores for a one-way midi2tcp --> tcp2midi
link based on the sample code, using blocking waits on both the MIDI and TCP
ports. I ran into minor difficulty getting Mido installed under Python 2, and
difficulty considerably beyond my knowledge getting it installed under
Python 3 on this up-to-date Manjaro, so my next tests were midi2udp -->
udp2midi using just Python 3 and the JACK-Client and 'socket' libraries;
this is quite promising, just 2% CPU total, though the test is a bit
crude. So now I'm working on designing an algorithm which models a
hardwired MIDI cable over a UDP connection. Specific goals are:
- MIDI over IP. Using UDP right now, will use TCP if a reason to do so
emerges.
- Simplicity. I want this implementable in dedicated hardware as
easily and inexpensively as possible. I want $20 boxes where I can
dial the src/dest IPs using dip switches. Dreaming? Maybe...
- Very well-suited for live performance. This means no MIDI event
dropouts ever going unnoticed, and using MIDI resends and resets
in an emergency to handle packet loss.
- Solid LAN immediately. Very slightly lossy LAN/WLAN next. WAN if
there is call for it.
- Reliability and predictability at standard MIDI wire speed -- 31.25 kbaud --
not bandwidth availability. Once this works it may be doubled or
quadrupled, or brought to HD-MIDI's clock once someone decides on
one.
The current software thought is to have two threads on each side:
one thread running the JACK callback, the other handling UDP/TCP, with the
threads communicating via Python FIFO queues and the UDP/TCP thread
constrained by a wait-state loop of some sort at MIDI wire speed.
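Something like the following is what I have in mind for the sending side (a sketch only; the host, port and client name are placeholders, and the pacing loop isn't shown), using the JACK-Client package plus the standard queue, socket and threading modules:

import jack
import queue
import socket
import threading

HOST, PORT = "192.168.1.20", 9000             # receiver address (example only)
events = queue.Queue()                        # FIFO between the two threads

client = jack.Client("midi2udp")
inport = client.midi_inports.register("in")

@client.set_process_callback
def process(frames):
    # JACK thread: copy incoming MIDI bytes into the queue and return quickly.
    for offset, data in inport.incoming_midi_events():
        events.put(bytes(data))

def sender():
    # Network thread: drain the queue, push each event out as one UDP datagram.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(events.get(), (HOST, PORT))

threading.Thread(target=sender, daemon=True).start()
with client:
    input("midi2udp running, press Return to quit\n")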
Any suggestions, abridgements, et cetera? Also any thoughts on the
best way to build that UDP/TCP thread wait-state loop? I'm beginning
to imagine that TCP might have the advantage of being able to build up
the buffering...but all of this has been fairly far from my practical
programming, and I know that so many of you live here, so I thought I
might ask :-)
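As a straw man for that wait-state loop, here is the kind of pacing I am picturing (the rate constant assumes classic 31.25 kbaud serial MIDI, ten wire bits per byte; everything here is a sketch, not a decision):

import time

BYTES_PER_SEC = 31250 // 10   # 31.25 kbit/s serial MIDI, 10 wire bits per byte

class WireClock:
    """Pace the network thread so it never outruns a hardwired MIDI cable."""
    def __init__(self):
        self.t_next = time.monotonic()

    def wait(self, n_bytes):
        # Sleep long enough that cumulative bytes stay at or below wire speed.
        self.t_next = max(self.t_next, time.monotonic()) + n_bytes / BYTES_PER_SEC
        delay = self.t_next - time.monotonic()
        if delay > 0:
            time.sleep(delay)

# e.g. in the UDP/TCP thread:
#   clock = WireClock()
#   for packet in outgoing_packets:
#       clock.wait(len(packet))
#       sock.sendto(packet, (HOST, PORT))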
--
Jonathan E. Brickman jeb(a)ponderworthy.com (785)233-9977
Hear us at ponderworthy.com -- CDs and MP3 available!
Music of compassion; fire, and life!!!
ALMAT - "Algorithms That Matter" Workshop at the
impuls 11th International Ensemble and Composers Academy for
Contemporary Music 2019
Special workshop for computer music practitioners, sound artists and
composers
Call for Participation Deadline: October 1st, 2018
with Robin Minard, David Pirrò, Hanns Holger Rutz
http://www.impuls.cc/academy-2019/special-programs.html#c5169
[please distribute]
Algorithms that Matter (ALMAT) focuses on the experimentation with
algorithms and their embedding in sound works. Rather than conceiving
algorithms as established building blocks or the a priori
formalisation of a compositional idea, we look at them as performing
entities whose consequences are irreducible to models. Algorithms
“matter” in the sense that matter and meaning cannot be distinguished,
neither can artists and their computational tools. Algorithms actively
produce spaces and temporalities which become entangled with their
physical embeddings.
The 2019 edition of the workshop focuses on the development of a
site-specific sound installation. The installation will explore the
interactions of algorithmic and physical spaces and their dynamic and
mutable properties. Participants will work on the premise that spaces
and our perception of them change depending on presence, absence, the
movement of visitors, the time of the day, the rhythm of the
surroundings as well as the sonic and algorithmic interventions we
bring into them.
This workshop seeks to attract computer music practitioners, sound
artists and composers by offering a platform for exchange and
reflection about their personal approaches towards algorithmic
experimentation. The participants are invited to develop their various
approaches within an atmosphere of collaboration, where special
emphasis will be given to the translation of environmental data (such
as sensor input from the surroundings and visitors) through computer
music systems developed and assembled by the participants and
tutors. One question we want to pursue is how behaviours can be
composed that transition from "technical and artificial" to "organic
and alive", particularly through the articulation of spatiality.
The workshop starts with an internal presentation of the participants
for the other participants and tutors. An initial sound situation
using a large number of small reconfigurable speakers forms the
starting point for in-situ work. This structure will then be available
for decomposition and rearrangement by the participants. The space
will become a public exhibition halfway through the workshop, making
it possible to observe and adapt to the interactions with the
audience, a central question in the making of sound installations.
The workshop will be held with technical infrastructure provided by
the Institute for Electronic Music and Acoustics (IEM), including a
48-channel sound system and a selection of sensors. The ALMAT
workshop was developed by David Pirrò and Hanns Holger Rutz (both IEM
Graz) and will be held with the special support of Robin
Minard.
How to apply:
1.) First, you must register and be accepted as a participant of the
impuls Academy 2019.
2.) Along with your application, you must submit a statement
concerning your specific interest in participating in the ALMAT
Workshop or send this by e-mail to office(at)impuls(dot)cc.
3.) In addition, please send a description of your personal work in
relation to the workshop's theme, stating your previous experience and
describing the computational approaches employed, their aesthetic
It is working, as shown here:
https://github.com/ponderworthy/midi2tcp
and quite well. I haven't come up with a single reason to make it more
complicated, beyond much more careful timeout handling. It will need
to be able to destroy its own socket and recreate it, et cetera, to be
durable over marginal wifi. And performance is not very good: midi2tcp
takes 3-5% of one core and tcp2midi takes 6-8% of another on this
i3-3110 at 2.4GHz.
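On the socket-recreation point, the rough shape I have in mind is this (host, port and retry delay are placeholders, not anything decided):

import socket
import time

def connect_forever(host="192.168.1.20", port=9000, retry_delay=2.0):
    """Block until a TCP connection is established, recreating the socket
    from scratch after every failure (for durability over marginal wifi)."""
    while True:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.settimeout(5.0)          # don't hang forever on a dead link
            sock.connect((host, port))
            sock.settimeout(None)
            return sock
        except OSError:
            sock.close()                  # destroy and rebuild, then retry
            time.sleep(retry_delay)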
I am not sure what best to do about the performance. I will probably try
compiling the Python to a binary, but I have read over and over again
that that doesn't gain very much.
I also had to rename it back to TCP after re-reading the Mido library
docs yet again: it is using the rtpmidi backend, but the docs say it's
TCP, not UDP, and it is using a connected socket, which means TCP, I do
believe. I had thought RTP-MIDI was UDP? I wonder if judicious use
of UDP would improve performance by a substantial amount.
--
Jonathan E. Brickman jeb(a)ponderworthy.com (785)233-9977
Hear us at ponderworthy.com -- CDs and MP3 available!
Music of compassion; fire, and life!!!
I need lossless JACK MIDI networking outside of JACK's built-in
networking, and not multicast, unless someone can tell me
straightforwardly how to get multicast (qmidinet) to run within
localhost as well as outside it. Thus I am thinking of trying my hand
at using the Mido library to bridge JACK MIDI and TCP. I have never
done this sort of coding before; as a programmer I am mostly a deep
scripting guy, Python-heavy with a bunch of Bash on Linux, PowerShell-
heavy on Windows of late, with a pile of history further back in Perl on
both and VBA on Windows. Anyone have
hints...suggestions...alternatives...a best or better starting
place? Right now I don't want the applets to do any GUI at all; I just
want them to sit quietly in xterms, on JACK servers, keeping the
connection up and passing MIDI data to and fro as other processes and
devices bring it.
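To make the idea concrete, the sending half might start out as small as this sketch (the port name, host and port number are placeholders, and virtual=True assumes the rtmidi backend):

import socket
import mido

sock = socket.create_connection(("192.168.1.20", 9000))    # placeholder peer
with mido.open_input("bridge-in", virtual=True) as port:   # create a virtual input port
    for msg in port:                                        # blocking iteration over events
        sock.sendall(bytes(msg.bytes()))                    # forward the raw MIDI wire bytes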
--
Jonathan E. Brickman jeb(a)ponderworthy.com (785)233-9977
Hear us at ponderworthy.com -- CDs and MP3 available!
Music of compassion; fire, and life!!!
Hi all.
Next meeting at c-base is on Tuesday 2018-09-11.
To avoid cross-posting too much on the mailing lists, I'd like the follow-up
discussion to happen in the thread of the invitation mail I sent to the linux-
audio-user list. There's also a bit more content in that mail than just a redirect...
;-)
Cheers
/Daniel
spectmorph-0.4.1 has been released.
Overview of Changes in spectmorph-0.4.1:
----------------------------------------
* macOS is now supported: provide VST plugin for macOS >= 10.9
* Include instruments in source tarball and packages
* Install instruments to system-wide location
* New Instruments: Claudia Ah / Ih / Oh (female version of human voice)
* Improved tools for instrument building
- support displaying tuning in sminspector
- implement "smooth-tune" command for reducing vibrato from recordings
- minor encoder fixes/cleanups
- smlive now supports enable/disable noise
* VST plugin: fix automation in Cubase (define "effCanBeAutomated")
* UI: use Source A / Source B instead of Left Source / Right Source
* UI: update dB label properly on grid instrument selection change
* Avoid exporting symbols that don't belong to the SpectMorph namespace
* Fix some LV2 ttl problems
* Fix locale related problems when using atof()
* Minor fixes and cleanups
What is SpectMorph?
-------------------
SpectMorph is a free software project which allows you to analyze samples of
musical instruments and to combine them (morphing). It can be used to
construct hybrid sounds, for instance a sound between a trumpet and a flute, or
smooth transitions, for instance a sound that starts as a trumpet and then
gradually changes to a flute.
SpectMorph ships with many ready-to-use instruments which can be combined using
morphing.
SpectMorph is implemented in C++ and licensed under the GNU LGPL version 3.
Integrating SpectMorph into your Work
-------------------------------------
SpectMorph is currently available for Linux, Windows and macOS users. Here is a quick
overview of how you can make music using SpectMorph.
- VST Plugin, especially for proprietary solutions that don't support LV2.
(Available on Linux, 64-bit Windows and macOS)
- LV2 Plugin, for any sequencer that supports it.
- JACK Client.
- BEAST Module, integrating into BEAST's modular environment.
Note that at this point, we may still change the way sound synthesis works, so
newer versions of SpectMorph may sound (slightly) different than the current
version.
Links:
------
Website: http://www.spectmorph.org
Download: http://www.spectmorph.org/downloads
There are many audio demos on the website, which demonstrate morphing between
instruments.
--
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan