This is Steinway_IMIS soundfont, version 2.2.
ftp://musix.ourproject.org/pub/musix/sf2/Steinway_IMIS2.2
This version fixes the issue with loops. I hope this is the right one
and that no major bugs remain.
Marcos is a little busy right now, so he asked me to make this fix. He
is thinking of making other improvements, so expect more updates soon.
Does anybody out here in LAU land have experience with PISound?
https://www.blokas.io/pisound/
I have just bought one and am having quite severe teething problems with it.
It keeps freezing for ~45 seconds when running X and I cannot get it to
use the full display.
cheers
Worik
--
If not me then who? If not now then when? If not here then where?
So, here I stand, I can do no other
root(a)worik.org 021-1680650, (03) 4821804 Aotearoa (New Zealand)
Dear list,
I recently bought a LinnStrument from Roger Linn Design:
http://www.rogerlinndesign.com/linnstrument.html
It is a great isomorphic midi-controller, and as such it is immediately
recognized on Linux.
The distinguishing feature of the LinnStrument is that it senses 3
degrees of freedom on each note: x-direction, y direction and
z-direction (pressure). The x-direction is mapped to pitch-bend, and
y-direction to CC74.
A cool feature is the "slide", where the pitch-bend is used to slide
between all notes in a row.
To allow individual pitch and CC74 values for each note, it sends each
note on a separate midi-channel ("MPE"):
http://www.rogerlinndesign.com/implementing-mpe.html
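To give an idea of what such a per-note-channel stream looks like, here is a
rough sketch using the Python mido library; the output port name is just a
placeholder and the pitch-bend/CC74 values are invented for illustration:

import mido

# List real port names with mido.get_output_names(); this one is a placeholder.
out = mido.open_output('synthv1:in')

# In MPE each note lives on its own channel (here channel 1, i.e. MIDI channel 2),
# so per-note pitch bend (x) and CC74 (y) do not affect other sounding notes.
ch = 1
out.send(mido.Message('note_on', channel=ch, note=60, velocity=100))
out.send(mido.Message('pitchwheel', channel=ch, pitch=2048))                # x-direction
out.send(mido.Message('control_change', channel=ch, control=74, value=90))  # y-direction
out.send(mido.Message('note_off', channel=ch, note=60))
out.close()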
Bitwig has added support for this, and there are 20 presets in version
1.3.11 that use it (tag: linnstrument). The LinnStrument
controller is not recognized automatically on Linux in version 1.3.11,
but it can be configured manually, and then it works fine. Note that
both midi-in and midi-out have to be configured, otherwise there is no
sound! It should look like this: https://ibin.co/2msBJVgpKtf9.png
Now I would like to also use it with the free Linux synths.
Here's what I have been able to make work so far.
Synthv1:
MPE works reasonably well: I can play polyphonically in MPE mode, but it
tends to miss the "note off"s.
I can get the slide to work by setting
<param index="36" name="DEF1_PITCHBEND">2</param>
<param index="78" name="DEF2_PITCHBEND">2</param>
in a preset.
Zynaddsubfx:
I can not get MPE to work.
Sending only on one channel, and setting PWheelB.Rng to 2400 cents, I
can get the sliding to work, but only when playing with one finger.
If I enable MPE on the LinnStrument there is only an occasional sound,
when it happens to send on the channel that Zyn is listening on.
I'd love to hear if other LinnStrument users have been able to do more
with any of the free synths on Linux.
All the best,
Thomas
Hi everyone
Following my question about JACK and tempo transmission over a network, I felt
the time is right for me to share some ideas about possible setup(s) of a
studio mainly based on free software. The key idea is that such a studio is to
be distributed among many hosts connected together with a fast local network.
While the infrastructure should run primarily on FLOSS software, we should not
shun proprietary tools, allowing a certain degree of interoperability between
different systems (OS and applications).
I don’t really know if I’m talking nonsense, but these ideas stem from my own
experience over the years: the main use case for me is to make music for
videos, being assisted in score and parts preparation, as well as
“quick” mockup creation.
Money has always been tight for me, but since I am a musician with enough
curiosity and a certain experience with computers, I have been fiddling with
Linux and music software for many years.
What I’m trying to demonstrate is that the effort of integrating FLOSS and
proprietary s/w with the great possibilities of modern and inexpensive h/w
could give a professional great flexibility and relative ease of use, while
keeping “low” costs and minimizing licensing and forced obsolescence woes.
Basically, what I am trying to achieve is a network mainly made of Ethernet
cables (while minimising audio cables), with the following nodes:
* a master (or maybe better, a “conductor” ;-) ) machine controlling and
transmitting the transport information, ideally a tablet or a minipc with a
touchscreen showing the “big clock” and the “big buttons” (transport controls);
see the small transport sketch after this list
* another machine (the router) with audio h/w and a DAW, receiving audio data
from the network. The same machine could perhaps also host notation
software
* optionally, a machine showing a synced video
* N >= 1 hosts running synths, virtual instruments, rocket launchers,
microwave ovens getting the lunch done while I’m thinking of these things...
;-)
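To make the “conductor” node a bit more concrete, here is a minimal sketch of
transport control from Python with the JACK-Client bindings. It assumes JACK is
already running on that host; how the other machines follow the transport over
the network (netjack or similar) is left out:

import time
import jack

# Connect to the local JACK server as a simple "conductor" client.
client = jack.Client('conductor')

# The "big buttons": start and stop the JACK transport.
client.transport_start()
time.sleep(5)                        # let it roll for a few seconds
client.transport_stop()

# The "big clock": poll the current transport position.
state, pos = client.transport_query()
print(f"transport state: {state}, frame: {pos.get('frame', 0)}")

client.close()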
On the FLOSS side, I think many of the right tools to implement my idea
already exist, and I have been using them for years. Here’s an incomplete list:
1) JACK (obviously ;-) )
2) Cadence/Claudia
3) Carla
4) Qjackctl (gives me the “big clock” and the “big buttons”)
5) Ardour
6) MuseScore
7) Xjadeo
While all of these tools do a great job (*really* great), there is still a lot
of plumbing and tuning work to do. Actually, this is the hardest part.
#####
Personally, being a classically trained musician, I tend to compose using
notation software, with close attention to score neatness: I still prefer my
music being played by humans rather than virtual gizmos. But I’m a “poor man”,
so my great score is also to be “hashed” into a MIDI file, whose tracks are to
be assigned to virtual (grrrr...) instruments, which in turn are to be mixed
together, and the whole composition synced to a video.
Since I cannot afford to buy an expensive Mac, nor can I pay for a plethora of
licensed software, the challenge is to achieve similar results with
alternative means.
With this post I hope to start a constructive discussion about the potential
of FLOSS music software and practical uses of it, instead of or in conjunction
with other kinds of software, in a professional environment.
ciao
Francesco Napoleoni
Hey everyone,
Have you ever had any luck with the SoundScape Renderer (SSR)? I have been
having issues opening audio files with it, and am working with the
developers on GitHub right now to see what fixes may be possible.
Are there any good alternatives for rendering ambisonic and wavefield
synthesis audio files?
Thank you for any help you may be able to offer,
Brandon Hale
Hey hey,
I'd like to share my latest song:
https://youtu.be/hc4brgfSGzg
Direct OGG download:
https://www.dropbox.com/s/lmwx4bkp39wye6h/rebecca_auf_dem_trekka.ogg
A bit of electro fun, don't mind the German lyrics (or samples). This song
features my friend Beccy "Rebecca auf dem Trekka" Schaefer, as well as F.W.
Yilmaz.
From the Linux side, as well as the usual Midish sequencer and Nama for the
whole production, this song uses Csound for the vocoder sound, with my VC-110
vocoder UDO.
The song is a bit of a pastiche of the late-90s dance group Music Instructor, if
anyone remembers them, with lots of monophonic goodness.
Best wishes and have a nice Sunday,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
You should never try to change me
I can be nobody else
And I like the way I am <3
(Britney Spears)
Hi all,
A further pre-release of SoundTracker 1.0.2 is ready. At this point I
announce a feature freeze; this means that the mature 1.0.2 release will have
no new features compared with pre2, only possible bugfixes and
translation updates. So this pre-release has all the facilities of ST-1.0.2,
and I invite everyone to test it.
ST-1.0.2-pre2 can be downloaded here:
https://sourceforge.net/projects/soundtracker/files/latest/download
Any feedback is welcome in SoundTracker mailing list:
soundtracker-discuss(a)lists.sourceforge.net
What is new in soundtracker-1.0.2-pre2 (26-Feb-2021):
* Clavier look is improved (selectable font, better key shapes)
* Some keybindings are added to the Sample editor
* When moving an envelope point, pressing CTRL restricts movement to
either vertical or horizontal direction
* Polyphonic try mode is improved: the user can switch same-note
retriggering on different channels on or off
* Rendering of the song / pattern / track / block into a sample is
implemented
* Volume / FX interpolation is improved: added the facility to
interpolate matching effects only
* Whole sample (data + parameters) copying / pasting is implemented
* Volumes of all samples can be adjusted (multiplied) by a given value
at once
* Added an option to paste a block without cursor movement
* PulseAudio output driver
* Compatibility with FastTracker II is improved
* Some fixes and small improvements
I am deep down a rabbit hole trying to help someone with something that I
think should be simple.
I may be about to solve it in a convoluted way.
I need to be able to use jack-volume:
https://github.com/voidseg/jack-volume
to mute or unmute or raise or lower the volume on command (eventually from
a script, probably python.)
Has anyone ever used jack-volume before? Can anyone explain how it should
work, or test to see if they can get it to work?
IIUC, it uses OSC to communicate from the (python) client to the
jack-volume (server).
I have never really used OSC either. Does the protocol contain acks that I
can look for with Wireshark?
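For reference, the kind of client I have in mind would be a few lines with the
python-osc package; the port, OSC address and argument below are guesses that
would need to be checked against the jack-volume README:

from pythonosc.udp_client import SimpleUDPClient

# Host and port where jack-volume is (presumably) listening; adjust to taste.
client = SimpleUDPClient('127.0.0.1', 7600)

# Hypothetical OSC address and argument: check the jack-volume README for the
# actual path and whether it expects a linear gain, a dB value, etc.
client.send_message('/jack-volume/master', 0.5)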
What else should I ask?
all the best,
drew
--
Enjoy the *Paradise Island Cam* playing
*Bahamian Or Nuttin* - https://www.paradiseislandcam.com/
Hey hey,
this is my latest song:
https://youtu.be/ypm4h8gD0s4
and for OGG download:
https://www.dropbox.com/s/4o9red2nzjl3rex/arctic_80.ogg
The song was written as part of a challenge or competition by the DJs for
Climate Action (https://djs4ca.com) to use their sample pack, which was
created from samples of the Greenpeace sound library. The sample pack contains
pure field recordings, plus loops and one-shots made from them. The competition
is still running until the end of February.
Talking to a friend just before beginning the song, we got onto the genre
of slaphouse, so I tried to incorporate something from slaphouse into this
track.
On the Linux side I used a lot of LinuxSampler, some Csound to create my riser
- based on samples - and Yoshimi for the rhythmic main pad, though the rhythm
itself was created in my DAW by automating the volume with an LFO.
Enjoy, feedback would be welcome!
Best wishes,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Give me a sign...
Hit me Baby one more time <3
(Britney Spears)
This is just a shout out to acknowledge the fact that there are a lot of people
out there who, in spite of worries about home, family and jobs, in these
especially difficult times, *still* manage to significantly advance the software
we all rely on.
As one of the 'Idle Poor' (retired), I don't lose sight of the fact that this is
a considerable effort on their part, for which there are few thanks.
--
Will J Godfrey
http://www.musically.me.uk
http://yoshimi.github.io
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.