There are many ways to incorporate generative music into electronic music performance. Since 2012 I have explored methods of integrating SuperCollider instruments into an Ableton Live set. The most recent development is a touch-screen UI built with modern web technologies, controlling SuperCollider patches through a Node.js server (running inside an Electron app). By using experimental Ableton Link support not yet released in SuperCollider, I can synchronize the clocks of rhythmic generative patches in SuperCollider with sounds in an Ableton Live set.
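The core idea behind Link-style clock sync is that peers agree on a tempo and a shared timeline, so each one can independently compute where the next bar boundary falls. A minimal sketch of that quantization math in JavaScript (illustrative only — these function and parameter names are my own, not the Ableton Link API):

```javascript
// Illustrative sketch of Link-style beat quantization (not the real Link API).
// Given a shared tempo and a session start time, compute when the next
// quantum boundary (e.g. a 4-beat bar) falls, so a generative patch can
// be launched exactly on the bar line.
function nextQuantumTime(nowMs, sessionStartMs, bpm, quantumBeats) {
  const msPerBeat = 60000 / bpm;
  const beatsElapsed = (nowMs - sessionStartMs) / msPerBeat;
  // Round up to the next multiple of the quantum.
  const nextBoundaryBeat = Math.ceil(beatsElapsed / quantumBeats) * quantumBeats;
  return sessionStartMs + nextBoundaryBeat * msPerBeat;
}

// At 120 BPM a beat is 500 ms, so a 4-beat bar is 2000 ms:
// the next bar boundary after t = 3100 ms lands at t = 4000 ms.
console.log(nextQuantumTime(3100, 0, 120, 4)); // 4000
```

In practice the Node.js server would use a value like this to schedule an OSC message to SuperCollider slightly ahead of the boundary, so the patch starts in phase with Live.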
My response to a design challenge: a prototype of an interactive LED wall experience.
Conceptualize and prototype an immersive and interactive experience...
An LED wall allowing participants to paint gestures on a touchscreen.
Imagine entering the lobby of a modern office building. One of the large walls looks textured, like it could be a screen. There is a small podium near the center of the wall. As you approach, the screen on the podium invites you to touch it; when you do, the wall in front of you lights up brilliantly as you paint a pattern across the screen.
The Transdimensional Audio Workstation is an interactive musical experience. The machine allows one to tune into another dimension in order to send a musical communication to one’s self. After sending the music, any transdimensional response will be interpreted and played back automatically.
The folks from Bells Atlas had an idea: release a single song from their EP, chosen at random, to anyone who signed up. I liked this idea and built it for them: hyperlust.bellsatlas.com.
I spoke about musical creation algorithms and interfaces at "Music + Tech" night hosted by the Institute for the Future. The talk was on two topics: algorithmic music and musical creation GUIs.
Over the recent holiday season I decided to try working on a project together with my family. I proposed the medium of LED light animation and started talking with Pegs (my mom) about it. What we ended up with looks pretty cool! It is made from some LED strips in the shape of a star.
My most recent musical project is a collaboration with Luke Dahl called Mountains in Space.
Here is a recording similar to what we performed at Burning Man and a few other venues this summer:
We are performing original electronic music based on my explorations with field recordings, rhythmic algorithms, software synths, and Luke's explorations with analog synthesis and custom multi-channel reverb.
More sounds here: soundcloud.com/mountains-in-space
I built a soundscape for the "Deep Forest Lounge" space at the 2014 Priceless festival.
The space was a beautiful creation and functioned as a gathering space for folks exploring the festival's calmer nooks and crannies.
I performed as a guest with the Santa Clara Laptop Orchestra. The performance was in the Contemporary Jewish Museum as part of Google I/O.
Has one of your event callbacks seemingly stopped being called when it should be? Perhaps it is a related issue.
I use the same computer and OS X installation to both perform music and develop web technology. I am happy to let Google run a software updater in the background, but I want to be able to turn it off when I need to.
Here is how I do it.
This is a performance of the music I have been working on throughout the summer. I performed a very similar set in Center Camp at Burning Man this year. All of the sounds were created from scratch.
A wonderfully foreshadowing quote by Cowell.
Although existing in all music, the noise-element has been to music as sex to humanity, essential to its existence, but impolite to mention, something to be cloaked by ignorance and silence. Hence the use of noise in music has been largely unconscious and undiscussed. Perhaps this is why it has not been developed, like the more talked-of elements, such as harmony and melody. The use of noise in most music today is little beyond the primitive; in fact, it is behind most native music, where the banality of the thumps often heard in our concerts would not be tolerated.
I performed at a party we held to celebrate changes happening among many friends at one of my favorite spots in San Francisco with wonderful people! Here is a recording of the set:
I performed at CCRMA's annual "Modulations" event in San Francisco. Here is a decent recording:
The "All Worlds Fair" took place in San Francisco at the beautiful old mint building. It was quite a delightful gathering of people and showcased some art and performances that I found quite amazing.
I built a soundscape for the "Seas of the Subconscious" experience that took place in the basement. It sounded pretty sweet in that steel-walled room.
The soundscape is generative, so I could render out 3 minutes or 5 hours if I needed. Here is a short sample:
It is written entirely in SuperCollider so I now have the start of a framework for developing soundscapes in the way that I like to think about them. It is open source:
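A generative soundscape of this kind is essentially a stream of timed events produced until a requested duration is filled, which is why the same patch can yield 3 minutes or 5 hours. Here is a rough sketch of that idea in JavaScript (the real framework is SuperCollider; the event fields and timing ranges here are made up for illustration):

```javascript
// Sketch of a generative soundscape as a stream of timed events.
// renderEvents(duration) keeps emitting events until the requested
// length is filled, so render time is arbitrary.
function renderEvents(durationSec, rand = Math.random) {
  const events = [];
  let t = 0;
  while (t < durationSec) {
    events.push({
      time: t,                              // onset in seconds
      pitch: 40 + Math.floor(rand() * 24),  // MIDI note in a two-octave band
      amp: 0.2 + rand() * 0.3,              // gentle dynamic range
    });
    t += 2 + rand() * 6;                    // 2-8 seconds between events
  }
  return events;
}
```

Rendering to a file is then just a matter of realizing each event with a synth and bouncing the result, which is how a short sample and a gallery-length run can come from identical code.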
This piece is a collaboration with Jen Hsu which we first showed at the CCRMA Fall Concert 2012. It was quite a pleasure to create.
"Jnana" is an algorithmic accompaniment system integrated into Ableton Live. It can analyze MIDI input and generate melodic and rhythmic material in a similar style.
I worked with Gracenote during the summer of 2012 to develop interactive applications that were integrated with existing music information retrieval tools and recommendation services.
For a final project, I expanded upon my prior compositional ideas for "Determinism" to create a more detailed algorithmic piece.
"tulpasynth" is a prototype real-time collaborative music creation system that takes advantage of touchscreen gestures for a tangible, responsive UI. It has been generally well-received when I have shown it at CCRMA events and at the 2011 bay area Maker Faire.
For this assignment, I brought together a variety of techniques to create some algorithmic music.
For this assignment, I gathered a few recordings and created a short musical statement by manipulating the sounds in ChucK. The rhythmic variation is created with first-order Markov Chains, and the string-like sounds are created by feeding a recording into a Karplus Strong algorithm.
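A first-order Markov chain like the one driving the rhythmic variation can be sketched in a few lines (JavaScript here rather than ChucK, purely for illustration; the transition table is made up):

```javascript
// Minimal first-order Markov chain over note durations (in beats).
// Each row maps the current duration to possible next durations
// and their probabilities.
const transitions = {
  0.25: [[0.25, 0.6], [0.5, 0.4]],
  0.5:  [[0.25, 0.5], [1.0, 0.5]],
  1.0:  [[0.5, 0.7], [0.25, 0.3]],
};

// Sample the next state from the current state's probability row.
function nextDuration(current, rand = Math.random) {
  let r = rand();
  for (const [duration, p] of transitions[current]) {
    if (r < p) return duration;
    r -= p;
  }
  return transitions[current][0][0]; // floating-point edge case fallback
}

// Walk the chain to generate a rhythmic phrase.
function phrase(start, length, rand = Math.random) {
  const out = [start];
  for (let i = 1; i < length; i++) out.push(nextDuration(out[i - 1], rand));
  return out;
}
```

Because the chain only looks one step back, the output stays locally coherent while never settling into an exact loop, which is what gives the rhythm its variation.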
Here is a "musical statement" I created while working on a homework assignment. The assignment was to experiment with FM synthesis by building some timbres and submitting them along with a musical statement.
"KnacK" is a framework that I started this past quarter during my time in Music 220a.
The basic idea of the framework is to provide some conventions for making compositional code modular and reusable. There are some other features that I am interested in developing further as well such as a MVC-like interaction between raw "aesthetic data" and the instruments/performers in a musical piece.
"tulpasynth" is a real-time collaborative music creation system that I created this past quarter during Music 256A.
It is a collaborative web-based application for creating music with others by manipulating simple shapes in the web browser.
Below is an audio montage/narrative-like soundscape that I have created from various clips of NPR interviewees and reporters. It was originally developed for a 4-channel audio system, but below is the binaural stereo mix. Audio was generated using ChucK. End result and source code can be found below. Hope you enjoy.
NOTE: Unlike most content on this site, this audio is NOT licensed as Creative Commons. The content is copyrighted by NPR.
NOTE: Source code is licensed MIT.
Download source code
This summer, I drove across the country with an amazing woman who told me about some philosophical issues she has concerning Computer Science. One of our discussions was about how she felt that if she were to enter the field of Computer Science, she would rather spend her time on computational theory research, seeing as eventually we will have solved all of the problems we can with our current models of computation.
I worked with IBM on big data visualization software and HTML5 mobile application research and development.
During the summer of 2011, I worked freelance for a startup company called "Float" that was developing a system for augmenting real-world interaction with technology.
In September I will officially be joining Stanford's Center for Computer Research in Music and Acoustics to pursue a Master's Degree in Music, Science and Technology.
Every New Year's Day, many of my family and friends gather to play bingo, eat, and drink (like the good Italians we are). In recent years there have been too many people in the house to hear what numbers have been called, so I decided to take this opportunity to learn some new web technologies.
I recently developed a piece of software for loud Italians to use as they play bingo. I used CSS3 animations in a few different ways and would like to share my findings.
I find the significance of the Fibonacci number sequence in nature, art, and mathematics very interesting. For some time I have been thinking about how this pattern can be applied to music and have been developing my own musical composition that is algorithmically based on the Fibonacci sequence.
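One simple way to derive pitch material from the sequence (a sketch of the general idea, not my actual composition process) is to take each Fibonacci number modulo the number of scale degrees:

```javascript
// Map Fibonacci numbers onto a scale by taking each term mod the
// number of scale degrees. C major is used here as an example.
function fibonacci(n) {
  const seq = [1, 1];
  while (seq.length < n) seq.push(seq[seq.length - 1] + seq[seq.length - 2]);
  return seq.slice(0, n);
}

const scale = ['C', 'D', 'E', 'F', 'G', 'A', 'B'];

function fibMelody(n) {
  return fibonacci(n).map((f) => scale[f % scale.length]);
}

// 1, 1, 2, 3, 5, 8, 13, 21 mod 7 → D, D, E, F, A, D, B, C
console.log(fibMelody(8));
```

A nice property of this mapping is that Fibonacci numbers are periodic modulo any base (the Pisano period), so the melody eventually cycles, which can serve as a natural formal structure for a piece.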
The composition as it stands currently. Please enjoy, and feel free to download if you would like.
This semester I did a decent amount of work on the framework for the Concert project: https://github.com/joshelser/Concert. Once this robust framework is complete, adding features should be a breeze thanks to the modular nature of our code. I have begun to work with the Backbone.js framework, which is turning out to be quite wonderful. For more details, see the Fall 2010 Final Presentation blog post on the Concert development blog: http://blog.concertsoundorganizer.com/post/2169460543/fall-2010-final-rcos-presentation.
Last year I developed a short composition based on the Fibonacci numbers in which various accompaniments were generated algorithmically; you can listen to it here.
This semester, I am working with Prof. Curtis Bahn in a much greater capacity, and will hopefully be developing this idea into the composition that I have always wanted it to be.
Here is the result of this semester's work on my Fibonacci composition:
If you are interested in my process, please feel free to read my final report below, or email me.
I have worked for SCI for a few years, wearing many hats. The majority of my time has been spent editing and producing sound and video, although I have worked as an audio engineer and boom operator on many occasions.
My Symfony project is finally complete. I cannot take credit for the design, but I am quite proud of how the backend came out. Of course, all of the coolness is stuff the general public cannot see. Symfony worked out quite well; I would certainly use it again if I had to do a PHP project in the future.
Concert is an online collaborative organizational tool for sounds. I am developing Concert as part of Software Design and Documentation class here at RPI. See our wiki for more information.
This semester, for "Interactive Arts Programming" class, I will be making an interactive composition based on the Fibonacci sequence. The piece will involve me playing the bass line of my composition, while a computer takes my performance as input and generates algorithmically determined accompaniments.
This past New Year's, I worked for Pyro/FX at Times Square, programming and operating the digital firing system to fire all of the pyrotechnics underneath the ball. It was an exciting and nerve-wracking experience, but like all shows I've done, very rewarding in the end.
For the past few years I have been using Quicksilver for OS X, an application which has dramatically changed the way many people get things done on their Mac. Here are a few things you can do fairly easily (if you want to be as cool as me).
This semester I have been pursuing an independent study in DSP Programming for Music/Audio Applications. For my final project, I will be designing an Overdrive/Distortion effect that is customized for the sound of my bass.
"PyroInventory" is a complete inventory system for fireworks and explosives, customized for a Pyrotechnics Entertainment company called "Pyro/FX". Written in PHP/MySQL, it is a dynamic, database-driven web application.
Pyro/FX is a pyrotechnics/fireworks entertainment company based out of Hamden, CT. I have worked with Pyro/FX on configuring and implementing an automated firing system, I designed a customized fireworks/explosives inventory system for them, and have operated the computer firing system on multiple large-scale entertainment events.
Here is another song I made recently. I do not think it is perfect, but I am proud of it. I think we are going to try to perform it as a band.
This is a song I made for Computer Music class. It is produced entirely of sounds I have recorded myself. There are still a few compression issues that I would like to work out, but overall I'm pretty proud of it.
This is a video that I made for school this semester for Intermediate Video class. I wouldn't say that it is polished by any means, but I was proud to submit it (which is more than I can say for my final project for that same class). If I remember correctly, the assignment was to build a climax using few shots.
So I had to make this for a video class... It's not nearly perfect, but then again the only reason it is finished is because it is not perfect. I never end up finishing the projects I start on my own, without a due date, because I want them to be perfect.
Tripeg Studios is a film studio facility which I worked for in 2006. I worked primarily in a tech-support/systems-support role.