JC and I have added mtalk to our sf2a — solfege to audio — distribution. Once installed, you can do this:

% mtalk 'Please get me a quart of milk at \
   the store.  Thanks.' -p -t tempo:144

The option -p means “play the audio file immediately,” and -t tempo:144 sets the tempo. Here is what the result sounds like:

Please get me a quart of milk at the store. Thanks.

This little programming project (we have to get our fun somehow :-) was inspired by James Gleick’s book, The Information. See the previous post.


A Poem

A friend sent me this link to a poem by Gjertrud Schnackenberg. It is lovely, musical, strong. I will quote just a few lines. It is from The Light Gray Soil:

My fingers touch
A penny, long forgotten in my coat,
Forgotten in the shock, December eighth,
Midnight emergency, a penny swept
Together with belongings from his coat
Into a sack of “Personal Effects,”
Then locked away, then given to the “Spouse.”
Nearly relinquished, nearly overlooked.
Surely the last he touched, now briefly mine.
A token of our parting, blindly kept.
Alloy of zinc, the copper thinly clad,

Many changes:

1. New name: sf2a.
2. The source code is at github.
3. At github, click the download button if you wish to download.
4. Installation on a Mac: untar or unzip the downloaded file, cd to the resulting folder, and run sudo sh setup.sh -install YOUR_USER_NAME.
5. After installing, run sf2a 'do re mi'. A file out.wav should be created; this is the audio file.
6. There is a musical dictation program, dict, which creates audio files and a web page for dictation exercises based on the data in a text file. For an example, use the file dictation.txt in the install folder: run dict -m there, then open the web page index.html.
7. For a draft manual, see this web page.

All this works on a Mac. Adapting it to Linux is easy: just change the values of $INSTALL_DIR and $BIN_DIR in setup.sh. I don’t know enough about PCs to advise on Windows, but one ought to be able to modify setup.sh there as well.

Once upon a time there were two friends, a cricket and a millipede. The cricket admired the way his friend, Illacme plenipes, could move forward with such grace, with such coordination of her 700 legs. And the millipede admired the singing of her friend, Gryllus rubens. One day Gryllus asked Illacme, “How do you do it? Which leg do you move first? How do you keep the rhythm?” Illacme had never thought about these things. But it seemed like an interesting question. Motionless, she thought for a while, and then said, “Well, I think I do it like this …”. And there she remained in place, unable to move a single leg, or even the tip of her antennae. Night fell. It was not until Gryllus began to sing that the spell was broken and Illacme, the millipede, was able once again to creep along the damp earth.

An extract from my unpublished manuscript, “Lessons and Parables”

Song of Gryllus rubens


sf2sound now supports up to ten independent voices. Here is a two-voice example:

voice:1 decay 2.0
h mi q re ti_ h do

h do_ q sol_ sol__ h do_

See github for the source code, including the 10-channel mixer in mix.c.

Talking Drums

Two days ago I started reading James Gleick’s new book, The Information. I don’t know what the critics think, but it has met my two most important criteria: it held my attention, and I learned something new. A third — good prose style, efficient if not poetic — is also satisfied. So here is today’s little gem from the book: talking drums. The first, brief report of these to the European public came in 1730 from Francis Moore, who navigated the Gambia River on a reconnaissance mission for the slave trade. A century later, Captain William Allen, on an expedition up the Niger River, noticed more. Speaking of his Cameroon pilot, he wrote:

Suddenly he became totally abstracted, and remained for a while in the attitude of listening. On being taxed with inattention, he said: “You no hear my son speak?” As we had heard no voice, he was asked how he knew it. He said, “Drum speak me, tell me come up deck.” This seemed to be very singular.

Singular it was indeed! It was not until the publication in 1949 of The Talking Drums of Africa, by the missionary John Carrington, that non-Africans understood and deciphered the drummers’ code. Carrington realized that drummers could communicate quite complex information — “birth announcements, warnings, prayers, even jokes” — over long distances through a specialized tonal language. It was a language that was nearing extinction just as its secret was uncovered.

I don’t want to take away your reading pleasure, so I will leave you with these: (1) the drummers had developed an amazingly sophisticated system of disambiguation and error correction that allowed them to communicate complex sentences using only two tones; (2) a man from the Lokele village, where Carrington lived for many years, had this to say about him:

“He is not really European, despite the color of his skin. He used to be from our village, one of us. After he died, the spirits made a mistake and sent him off to a faraway village of whites to enter into the body of a little baby who was born of a white woman instead of one of ours. But because he belongs to us, he could not forget where he came from, and so he came back.” The man added, “If he is a little bit awkward on the drums, this is because of the poor education that the whites gave him.”

Postscriptum. As a fun little programming exercise, JC and I worked up something to transform text to an imaginary musical language. Below is the text and the “music.”

Hey man, look at the sun!
Hey man, it keep us warm!
It grow our food,
It keep us warm.
Hey man, look at the sun,
And feel it be warm on your face!!

Hey man, look at the sun!

If you are interested in such things, see our github repository for the source code that converts text to music. It is part of our sf2sound project. The most relevant files are talk.py and talk.sh.

NOTES. The transformation of text to “music” effected by talk.py encodes vowels as quarter notes — a = do, e = re, i = mi, o = fa, u = sol. Consonants are encoded as a pair of eighth notes, e.g., p = do re, b = re do. All the plosives are encoded as a major second, unvoiced ones rising, the voiced ones falling. In general, members of a phonetic group — fricatives, liquids, etc. — share some musical feature, e.g. the same interval. Spaces and punctuation marks are coded by a short melodic fragment. See the code for talk.py for further information.
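The scheme above can be sketched in a few lines of Python. This is only an illustration of the idea, not talk.py itself: the vowel mappings and the p/b pairs are taken from the notes above, but the duration tokens (q for quarter, e for eighth) and the rest placeholder for spaces are my own assumptions about the notation.

```python
# A minimal sketch of the text-to-"music" encoding described above.
# Only the mappings actually given in the post are implemented: vowels
# become quarter notes, and the plosives p and b become pairs of eighth
# notes (a rising major second for the unvoiced p, falling for the
# voiced b). "rest" is a stand-in for the short melodic fragment that
# the real talk.py uses for spaces.

VOWELS = {"a": "q do", "e": "q re", "i": "q mi", "o": "q fa", "u": "q sol"}
PLOSIVES = {"p": "e do e re", "b": "e re e do"}  # q = quarter, e = eighth

def encode(text):
    """Return a solfege token string for the recognized characters."""
    tokens = []
    for ch in text.lower():
        if ch in VOWELS:
            tokens.append(VOWELS[ch])
        elif ch in PLOSIVES:
            tokens.append(PLOSIVES[ch])
        elif ch == " ":
            tokens.append("rest")  # placeholder for the space fragment
    return " ".join(tokens)

print(encode("pa bu"))  # -> "e do e re q do rest e re e do q sol"
```

Extending the table to the full consonant inventory, one phonetic group at a time, gives the complete encoder.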

I’ve set up a git repository for sf2sound. There is a two-voice example there, and JC has written some commentary on it, as well as posted the audio files.

The superposition principle makes this all quite easy — we just add together, sample by sample, the waveform files that sf2sound produces for the two voices. The resulting file represents the combined voices, and we use text2sf to produce the audio file. For more voices, simply add more waveform files!
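The addition step looks roughly like this. A minimal sketch in Python, assuming each waveform is just a sequence of floating-point samples; the actual pipeline works on the text waveform files that sf2sound writes before text2sf converts them to audio.

```python
# Superposition: mix voices by adding their samples, then rescale so
# the combined signal stays within [-1, 1] and does not clip.

def mix(*voices):
    """Add the voices sample by sample, padding shorter ones with silence."""
    n = max(len(v) for v in voices)
    out = [0.0] * n
    for voice in voices:
        for i, sample in enumerate(voice):
            out[i] += sample
    peak = max(abs(s) for s in out) or 1.0  # avoid dividing by zero
    return [s / peak for s in out]

voice1 = [0.5, 0.5, -0.5, -0.5]
voice2 = [0.25, -0.25, 0.25, -0.25]
mixed = mix(voice1, voice2)  # sums are 0.75, 0.25, -0.25, -0.75, then rescaled
```

The rescaling by the peak value is one simple way to keep the sum in range; dividing by the number of voices would work too, at the cost of a quieter result.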

Our work so far is quite primitive, both musically and as a software product. But we are having a lot of fun, and learning a lot. Eventually we hope something polished and elegant will come out of this.