The music topic area is rich in opportunity for the creation of one-piece production workflows. A computational representation of a piece of music can be created within the body of the document and then rendered as sheet music. The same musical object can be used to generate audio files that can play the piece of music via an embedded audio player. Corpuses exist in which a wide variety of musical scores have been represented using standard document formats such as MusicXML. This ready availability of scores makes it relatively easy to create materials around well-known pieces, particularly if they are in the public domain.
If required, created audio files can be converted to waveforms that can be analysed using a variety of signal processing techniques.
In terms of creating learning materials, the one-piece flow approach provides a straightforward way of discovering or creating pieces of music, viewing the sheet music display, creating various visualisations over the music (for example, looking at pitch over the duration of the piece), and rendering an audio version of the music that we can embed in the output document and play back directly.
Sheet music and the (embedded) audio files can be generated directly from the representation of a piece of music. As well as creating MIDI files, we can also create audio files (e.g. .wav files). Using soundfonts, it is possible to create audio files using different instruments, where appropriate.
In terms of creating learning activities, learners can listen to provided pieces of music, as well as edit them and listen to the changes. Learners can also create their own pieces of music, from which they can directly generate audio and visual representations. This opens up the possibility of a wide range of hands-on music analysis tasks in a notebook setting, where learners can annotate the materials as well as create and respond to their own musical creations in a self-narrated way.
The music21 package provides a wide-ranging toolkit for computer-aided musicology.
Import packages from music21:

```python
import music21
from music21 import *
```
Create a simple score using TinyNotation
TinyNotation is a simple notation for representing music.
Whilst TinyNotation may not be appropriate for representing complex pieces of music, its straightforward syntax provides a very quick and easy way of creating a simple piece of music.
```python
from music21 import converter

s = converter.parse('tinyNotation: 4/4 C4 D2 E4 F4 G4 A4 B4 c4')
s.show()
```
Other visualisations of the music are also possible. For example, we can look at the pitch and note length:
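As a rough sketch, one way such a plot might be generated is with music21's graph.plot classes (the class name here matches the output shown below):

```python
# Sketch: a horizontal bar ("piano roll") style plot of pitch against offset
# for the parsed stream; graph.plot objects take a stream and are run().
from music21 import graph

p = graph.plot.HorizontalBarPitchSpaceOffset(s)
p.run()
```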
[Figure: HorizontalBarPitchSpaceOffset plot for the music21.stream.Part]
Or analyse the key:
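Again as a sketch, the windowed key analysis plot might be produced along the following lines:

```python
# Sketch: windowed key analysis across the stream, using the
# graph.plot class named in the output below.
from music21 import graph

p = graph.plot.WindowedKey(s)
p.run()
```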
[Figure: WindowedKey plot for the music21.stream.Part]
We can analyse the music in terms of pitch distribution:
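A corresponding sketch for the pitch class distribution:

```python
# Sketch: histogram of pitch classes occurring in the stream.
from music21 import graph

p = graph.plot.HistogramPitchClass(s)
p.run()
```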
[Figure: HistogramPitchClass plot for the music21.stream.Part]
MIDI File Generation and Playback
When in a live, interactive Jupyter notebook, we can listen to the same piece of music by creating a MIDI file from it and passing that file to an embedded player:
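In music21 this can be done directly from the stream object; for example:

```python
# In a live notebook, music21 can render an embedded MIDI player
# for the stream directly.
s.show('midi')
```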
At the current time, it seems as if we need a web server to serve the Jupyter Book page in order to load the MIDI player. A workaround that avoids the need for a web server is to save a score as a MIDI file, convert it to an audio file, and then load the audio file into an audio player embedded in the Jupyter notebook.
An advantage of this second approach is that we generate a .wav audio asset that can be played by any music player. A downside is that the .wav file may be quite large, and will also take time to create from the original MIDI file.
```python
fn_midi = s.write('midi', fp='demo.mid')
```
Converting MIDI Files to Audio Files
We can convert a MIDI file to an audio file (e.g. a .wav file) using the fluidsynth cross-platform command-line application.
```python
# fluidsynth: https://www.fluidsynth.org/
#!brew install fluidsynth

# Requires a soundfont: https://github.com/FluidSynth/fluidsynth/wiki/SoundFont
# -> https://www.dropbox.com/s/4x27l49kxcwamp5/GeneralUser_GS_1.471.zip?dl=1
# http://www.schristiancollins.com/generaluser.php

#%pip install git+https://github.com/nwhitehead/pyfluidsynth.git
```
A utility function will handle the conversion for us.
```python
# Based on: https://gist.github.com/devonbryant/1810984
import os
import re
import subprocess


def to_audio(midi_file, sf2="GeneralUser/GeneralUser.sf2", out_dir=".",
             out_type='wav', txt_file=None, append=True):
    """
    Convert a single midi file to an audio file.

    If a text file is specified, the first line of text in the file will be
    used in the name of the output audio file. For example, with a MIDI file
    named '01.mid' and a text file with 'A major', the output audio file
    would be 'A_major_01.wav'. If append is false, the output name will just
    use the text (e.g. 'A_major.wav').

    Args:
        midi_file (str): the file path for the .mid midi file to convert
        sf2 (str): the file path for a .sf2 soundfont file
        out_dir (str): the directory path for where to write the audio out
        out_type (str): the output audio type (see 'fluidsynth -T help' for options)
        txt_file (str): optional text file with additional information of how to
            name the output file
        append (bool): whether or not to append the optional text to the original
            .mid file name or replace it
    """
    # Strip the directory and extension from the MIDI file name
    fbase = os.path.splitext(os.path.basename(midi_file))[0]

    if not txt_file:
        out_file = out_dir + '/' + fbase + '.' + out_type
    else:
        line = 'out'
        with open(txt_file, 'r') as f:
            line = re.sub(r'\s', '_', f.readline().strip())
        if append:
            out_file = out_dir + '/' + line + '_' + fbase + '.' + out_type
        else:
            out_file = out_dir + '/' + line + '.' + out_type

    # Call fluidsynth to render the MIDI file to audio using the soundfont
    subprocess.call(['fluidsynth', '-T', out_type, '-F', out_file, '-ni',
                     sf2, midi_file])
```
Pass the name of the MIDI file (and, if required, a soundfont file) to the utility function to create a .wav audio file from it.
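For example, a minimal sketch of the conversion step (assuming the GeneralUser soundfont has been downloaded and unzipped to the default path used by to_audio()):

```python
# Assumes the GeneralUser soundfont is available at the default path
# ("GeneralUser/GeneralUser.sf2"); this writes demo.wav to the current directory.
to_audio(str(fn_midi))
```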
We can now embed an audio player to play the wav file:
```python
from IPython.display import Audio

# s.write() may return a pathlib.Path, so cast to a string before swapping the suffix
Audio(str(fn_midi).replace(".mid", ".wav"))
```