Random Music Generator

As my final project in Music 220a, I created a random music generator in ChucK. The program takes as inputs a time signature, key, tempo, and musical style, and generates a short piece of music using these parameters. For more on the design and source code, see the project website. 
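The actual generator was written in ChucK, but the core idea can be sketched in Python (a hypothetical analogue, not the project's code): choose pitches from the key's scale and durations that exactly fill each measure of the time signature.

```python
import random

# Hypothetical sketch of the core idea: pick pitches from a key's scale
# and note durations that exactly fill one measure of the time signature.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets from the tonic

def generate_measure(tonic_midi, beats_per_measure, rng):
    """Return (midi_note, beats) pairs that exactly fill one measure."""
    notes, remaining = [], beats_per_measure
    while remaining > 0:
        # only pick durations that still fit in the measure
        dur = rng.choice([d for d in (0.5, 1, 2) if d <= remaining])
        pitch = tonic_midi + rng.choice(MAJOR_SCALE)
        notes.append((pitch, dur))
        remaining -= dur
    return notes

rng = random.Random(0)
measure = generate_measure(60, 4, rng)  # key of C, 4/4 time
```

Tempo and style would then govern how measures like this are scheduled and orchestrated in time.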


Face is an audiovisual performance piece I created in Music 220b. Using ChucK and Processing, I built a concatenative speech synthesizer with real-time control and a talking face. In the performance, I used it to hold a conversation with my computer about the nature of creation and existence, much like Dr. Frankenstein and his monster. For more on the project, see the website.


SoundPaint is an application I developed as my final project in Music 256a at CCRMA. It is an audiovisual creation tool, merging a basic paint application with a live-input sound analyzer and additive-synthesis generator. See the project website for more on the design of the application and to download the current version.
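A minimal sketch of additive synthesis, the technique behind SoundPaint's generator (this is an illustrative example, not the application's code): a tone is built by summing sinusoidal partials, each with its own frequency and amplitude.

```python
import numpy as np

# Minimal additive-synthesis sketch (not SoundPaint's actual code):
# sum sinusoidal partials, each with a frequency and an amplitude.
def additive_synth(partials, duration, sr=44100):
    """partials: list of (freq_hz, amplitude) pairs. Returns a mono signal."""
    t = np.arange(int(duration * sr)) / sr
    sig = np.zeros_like(t)
    for freq, amp in partials:
        sig += amp * np.sin(2 * np.pi * freq * t)
    peak = np.max(np.abs(sig))
    return sig / peak if peak > 0 else sig  # normalize to [-1, 1]

# A fundamental plus two harmonics at decreasing amplitude:
tone = additive_synth([(440, 1.0), (880, 0.5), (1320, 0.25)], duration=0.5)
```

In SoundPaint, the partials' frequencies and amplitudes are driven by the paint canvas and the live-input analysis rather than hard-coded as here.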

Robert Colcord

Perceptual Audio Coder

In Music 422 at CCRMA, taught by Dr. Marina Bosi, I worked with a team to develop a Huffman-coded, block-switching, stereo perceptual audio coder in Python. The coder, developed over the course of the quarter, used a basic psychoacoustic masking model and block floating-point quantization to encode and compress audio files. As a final project, we added Huffman coding, Mid/Side encoding, and block switching to further increase compression gains and improve the coder's fidelity. The final report can be read here.
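To illustrate one of the techniques mentioned above, here is a sketch of Mid/Side encoding as it is commonly used in stereo perceptual coders (a generic example, not the report's code): the left/right channels are rotated into a mid channel and a side channel, which often concentrates energy in the mid channel and can be inverted exactly.

```python
import numpy as np

# Generic Mid/Side encoding sketch (not the coder's actual code):
# a lossless rotation of the stereo channels that typically puts most
# of the signal energy in the mid channel, helping later quantization.
def ms_encode(left, right):
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid, side

def ms_decode(mid, side):
    # inverse rotation: recovers left and right exactly
    return mid + side, mid - side

L = np.array([0.5, -0.25, 0.0])
R = np.array([0.5, 0.25, -0.1])
M, S = ms_encode(L, R)
L2, R2 = ms_decode(M, S)
```

In a real coder, the decision to code a block as L/R or M/S is usually made per band based on which representation quantizes more efficiently.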


MeowMix is another project I made in Music 256a. It is an interactive cat-sound sequencer, created to contrast sharply with typical sequencers. It has only seven sounds that can be sequenced, but uses an interactive Tamagotchi-esque gameplay mode to control the sounds in a unique way. For more information, see the project website.

"A SoundHound for the Sounds of Hounds"

Weakly Supervised Modeling of Animal Sounds

"A SoundHound for the Sounds of Hounds" is a project I contributed to while taking CS229: Machine Learning at Stanford. We developed a weakly supervised method for finding acoustic models of different animal species using Mel-Frequency Cepstral Coefficients and a clustering algorithm that groups similar, repeated sounds. Ideally, this method would make it straightforward to identify and annotate animal species in lengthy field recordings. For more information, view the poster or read the report.
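A toy sketch of the clustering step (illustrative only; the project's actual algorithm and features differ): given per-frame feature vectors such as MFCCs, a few k-means iterations group similar frames together.

```python
import numpy as np

# Toy sketch of the clustering step (not the project's code): group
# feature vectors (e.g., MFCC frames) with a few k-means iterations.
def kmeans(X, k, iters=10):
    # simple deterministic init: evenly spaced points as starting centers
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated synthetic "species" of sound frames:
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (10, 3)),
               np.random.default_rng(2).normal(5, 0.1, (10, 3))])
labels = kmeans(X, 2)
```

In the weakly supervised setting, the resulting clusters stand in for species-level acoustic models without frame-level hand labeling.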

Super Singo!

Super Singo is a game I created as part of a group project in Music 257: Neuroplasticity and Musical Gaming. It is an interval-singing training game in the form of a 2D platformer. The player sings pitches to control the motion of the sprite (named Singo) and navigate a maze. Intervals get larger and the game moves faster as the player progresses through the levels. The game was designed to challenge the player, but also to be easy enough that learning can occur. In addition to the game, we created a poster summarizing the game design and our hypotheses about the training benefits of playing the game.
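The pitch-to-interval logic at the heart of a game like this can be sketched in a few lines (a hypothetical example, not Super Singo's code): a sung frequency is converted to semitones above a reference pitch so the game can check whether the player hit the target interval.

```python
import math

# Hypothetical sketch (not Super Singo's code): convert a sung frequency
# into semitones above a reference pitch, so the game can test whether
# the player sang the target interval.
def semitones_above(freq_hz, ref_hz):
    return 12 * math.log2(freq_hz / ref_hz)

# Singing an A5 (880 Hz) against an A4 (440 Hz) reference is an octave:
interval = round(semitones_above(880.0, 440.0))
```

The detected interval can then drive the sprite's vertical position, with a tolerance window around each target semitone.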


FaceDeux is a whimsical sonic visualizer I developed in Music 256a, inspired by a previous project (Face, described above). It uses an FFT to visualize different aspects of sounds, although it is designed more for entertainment than practical, analytical purposes. To download the app and read more about the design, see the project website.
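The analysis behind an FFT visualizer like this can be sketched as follows (a generic example, not FaceDeux's code): window a frame of audio, take its FFT, and map per-bin magnitudes to whatever is being drawn.

```python
import numpy as np

# Generic sketch of the analysis behind an FFT visualizer (not
# FaceDeux's code): window a frame of audio and compute per-bin
# magnitudes that the graphics layer would then draw.
def spectrum(frame, sr):
    windowed = frame * np.hanning(len(frame))  # reduce spectral leakage
    mags = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return freqs, mags

sr, n = 44100, 2048
t = np.arange(n) / sr
frame = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone
freqs, mags = spectrum(frame, sr)
peak_hz = freqs[np.argmax(mags)]      # lands near 1000 Hz
```

In a visualizer, this runs on successive frames of live input, with magnitudes mapped to shapes, colors, or motion.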