VIRTUALBAND: INTERACTING WITH STYLISTICALLY CONSISTENT AGENTS
Names should be omitted for double-blind reviewing
Affiliations should be omitted for double-blind reviewing
ABSTRACT

VirtualBand is a multi-agent system dedicated to live computer-enhanced music performances. VirtualBand enables one or several musicians to interact in real-time with stylistically plausible virtual agents. The problem addressed is the generation of virtual agents that each represent the style of a given musician, while reacting to human players. The solution we propose is a generation framework based on feature interaction. Virtual agents exploit a style database, which consists of an audio signal from which a set of MIR features is extracted. Musical interactions are represented by directed connections between virtual agents. The connections are specified as database filters based on MIR features extracted from the agent's audio signal. We claim that such a connection framework makes it possible to implement meaningful musical interactions and to produce stylistically consistent musical output. We illustrate this concept through several examples in jazz improvisation, beatboxing and interactive mash-ups.

1. INTRODUCTION

Collective improvisation is a group practice in which several musicians contribute their part to produce a coherent musical output. Each musician typically brings in musical knowledge, taste, and technical skills, and more generally his or her style, which makes him or her unique and recognizable. However, good improvisations are not only about putting together the competence of several individuals. Listening and reacting to each other is also crucial, as it enables each musician to adapt his or her playing to the global musical output in terms of rhythm, intensity, harmony, etc. The combination of individual styles with the definition of their interactions defines the quality of a music band. In short, group improvisation can be seen as interactions between stylistically consistent agents.

Previous works have attempted to simulate computationally the behavior of real musicians in the context of musical group practice.
For instance, a MIDI-based model of an improviser's personality has been proposed to build a virtual trio system, but no explicit interactions between virtual agents and real musicians are proposed. Improtek is a system for live improvisation that generates musical sequences built in real-time from a live source using feature similarity and concatenative synthesis. Improtek plays along with a musician improvising on harmonic and beat-based music, in the style of this musician, but interactions with the virtual agents are based solely on similarity measures, thereby limiting the scope of possible man-machine interactions. The Jambot is a system that infers in real-time various features from an audio signal and then produces percussive responses, following alternately two behaviors: "imitative" or "intelligent". In the imitative behavior, the system responds directly to a user onset with a percussive sound, which is limited both in terms of interaction and style. In the intelligent behavior, the system bases its output on a global understanding of the musical context (tempo, meter, etc.), but not directly on the user input: the system is not really interacting with the musician. Most importantly, the style of the Jambot is not clearly defined. Beatback is a system that generates a MIDI signal based on rhythmic patterns, using Markov models. Besides the fact that the sole use of MIDI signals is musically restrictive, interactions are limited to rhythmic elements only. Martin et al. propose a system that interacts in real-time with a human musician, adapting its behavior both to prior knowledge (a database of musical situations) and to parameters extracted during the current session; however, this system forces the musician to deal with a graphical interface during the improvisation, which makes it hardly usable in a live performance.

In this paper we revisit the problem of musical interaction with stylistically consistent agents by taking a feature-based MIR approach to the problem.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. © 2013 International Society for Music Information Retrieval.
We introduce VirtualBand, a reactive music multi-agent system in which real musicians can control and interact with virtual musicians that retain their own style. A virtual musician is a representation of an actual musician as a virtual agent. A virtual agent is designed to play or improvise like the actual musician it represents. To build the virtual agent, we record the musician in a musical situation that fits the targeted improvisation context. The resulting audio content is stored in a database, organized in musically relevant chunks (usually beats and bars). To each chunk is associated a set of features that are used to define and implement the interaction schemes. During the performance, the virtual agent uses concatenative synthesis to generate music streams as seamless concatenations of database chunks.

The interactions between the real and virtual musicians are implemented in VirtualBand by connections. Technically, a connection is a directed link from a master (real or virtual) agent to a slave virtual agent. The link is defined by a mapping between feature values. These mappings in turn are specified as filters that define which chunks slave agents should select from their database. In that sense, our multi-agent system is an instance of a reactive system rather than a deliberative one. Its main potential lies in the seemingly infinite possibilities provided by feature interaction.

In Section 2 we describe the main components of VirtualBand, show how to build a reflexive virtual band from the interaction point of view, and present the connection framework, which is the core contribution of our approach. Section 3 illustrates the power of feature interaction on several configurations of the system in a jazz improvisation context. Section 4 describes applications of VirtualBand in two other musical contexts: beatboxing and automatic mash-up.

2. SYSTEM DESCRIPTION

The core of the VirtualBand system (see Figure 1) consists of:
• A clock that establishes the tempo by sending notifications at each beat to every agent.
• Agents, representing respectively the actual and the virtual musicians.
• Connections between agents, which specify the musical interactions.
• An audio engine and a mixer to play the outputs of the virtual agents.

Figure 1. Architecture of VirtualBand. VirtualBand modules are in the rectangular frame at the bottom. Connections between agents are represented by red arrows.

2.1 Agents

There are two types of agents: human and virtual agents, representing respectively actual and virtual musicians. The real musicians, i.e. those participating in the improvisation, are represented by agents called real agents. Real agents do not relate to any prerecorded audio: they are responsible for analyzing the input (audio or MIDI) of the human musician they represent, and for extracting various acoustic and/or MIDI features from this musical output in real-time. These features are the means by which real agents control the virtual agents via connections.

Virtual agents play music from a style database consisting of prerecorded audio content, from which various features are extracted. Concretely, a virtual agent computes at every beat the next audio chunk to play by choosing among the available audio beats in its database (and possibly transforming them), as specified by its connections with other agents (see below). To do so, the virtual agent applies a succession of filters to its database, in reaction to the analyzed input. Once the next chunk to play (beat or bar) has been selected, the agent performs a cross-fade between the end of the beat currently being played and the beginning of the selected beat, and proceeds. We imposed a fixed tempo in the current version, mainly because time-stretching algorithms are still not compatible with the real-time constraints of the system.

2.2 Style Databases

A style database represents the musical skills of a musician in a given musical style/context, e.g. defined by a musical genre such as Bossa nova or Swing, and a tempo. The construction of such a style database requires two steps: recording and feature extraction. In the recording step, a musician plays so as to fully express his or her musical style, e.g. in the context of jazz improvisation, and attempts to cover a large range of musical situations (playing with different intensities, in different moods, using various patterns, staccato or legato, etc.). Of course, it is difficult to express one's style exhaustively, so style databases are limited to specific musical situations. Note that the subject of individual style capture is still in its infancy and further studies should refine this concept.

In the feature extraction step, the recorded audio signal is segmented into bars and beats (in our current implementation the tempo is given a priori, so the segmentation task is easy). The extracted features are common MIR features, computed either from the bars or from the beats: temporal features, e.g. numbers of onsets or rhythmic patterns, and spectral features, e.g. spectral centroid, harmonic-to-noise ratio or chroma. Features are usually extracted from the audio signal, but they can also be imported from a specific musical context (e.g. chord labels, if the musician played with a chord grid). The values of the features are used as a means of controlling virtual agents via the connections, as explained below.
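The two-step construction described above can be sketched as follows. This is a minimal illustration, not the actual VirtualBand implementation: the function names are ours, the segmentation assumes the fixed tempo mentioned above, and only two of the paper's features (RMS and spectral centroid) are computed per chunk.

```python
import numpy as np

def segment_into_beats(audio, sr, bpm):
    """Fixed-tempo segmentation: cut the recording into beat-length chunks."""
    samples_per_beat = int(sr * 60.0 / bpm)
    n = len(audio) // samples_per_beat
    return [audio[i * samples_per_beat:(i + 1) * samples_per_beat]
            for i in range(n)]

def spectral_centroid(chunk, sr):
    """Magnitude-weighted mean frequency, a rough pitch-register feature."""
    mags = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), 1.0 / sr)
    return float((freqs * mags).sum() / (mags.sum() + 1e-12))

def build_style_database(audio, sr, bpm):
    """One database entry per beat chunk: the audio plus its feature values."""
    db = []
    for chunk in segment_into_beats(np.asarray(audio, dtype=float), sr, bpm):
        db.append({
            "audio": chunk,
            "rms": float(np.sqrt(np.mean(chunk ** 2))),
            "spectralCentroid": spectral_centroid(chunk, sr),
        })
    return db
```

A reflexive database (Section 2.5) would run the same extraction incrementally, one beat at a time, as audio arrives.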
2.3 Connections

A connection between two agents represents a musical interaction between the corresponding musicians, and defines precisely the interaction scheme between the two agents (real or virtual). A connection consists of two agents (a master and a slave), a pair of features (one for each agent), and a filter that selects from the database the best musical contents (bars/beats) to play depending on the features of each agent (see Figure 1).

Formally, an interaction is specified as follows. Consider a master agent MA that controls a slave agent SA. The problem is to select the chunk in SA's database that is most appropriate for the context defined by MA. This selection is performed using 1) a master-feature type (e.g. spectralCentroid), 2) a slave-feature type (e.g. hitCount) and 3) a master-feature value, provided at time t. One problem is that the two feature types may operate in different spaces, with different distributions of values, so a mapping is needed from the master feature space to the slave feature space. Such a mapping can be defined by:

SA.f = µ(MA.f)

The audio chunk in SA is then determined by applying a succession of filters to the database, until a single value is found. The filter can be designed in various ways, which offers a lot of flexibility to represent many diverse interaction scenarios.

For instance, consider a real guitar agent (RG) that controls a virtual drummer (VD). The master feature of the connection can be the RMS of the guitar, and the slave feature the hit count, i.e. the number of onsets in each bar, of the virtual drummer. Concretely, such a filter can be specified as:

RG.RMS matches VD.Hit-Count

To select the next beat to be played by the virtual drummer, the filter establishes a linear mapping from the input RMS value to a target hit count value, and then selects in the database the audio beat with the hit count closest to the target hit count.

2.4 Harmony

In tonal musical contexts, such as standard jazz, musicians improvise on a chord sequence that is known to all musicians beforehand (e.g. Solar, by Miles Davis). In VirtualBand, harmony is implemented by a specific agent that represents the chord sequence and dispatches harmony to all other agents. The chord sequence agent (CSA) provides a unique feature, called CHORD, whose value is the name of the next chord in the sequence. This agent is connected to every virtual agent that is harmony-dependent, for instance pitched instruments (piano, guitar, singing voice). For a virtual agent Vi, the connection is defined as follows: the master agent is CSA, its master feature is CHORD, and the slave agent is Vi. The slave feature and filter are left undetermined, but should be implemented so that the virtual agent Vi will play something that is harmonically consistent with the chord returned by the CSA.

For instance, consider a piano comping agent (PCA) as slave agent. Its database consists of various recorded beats, each containing a single audio chord, and each beat is annotated with a feature CHORD which represents the chord name. In this case the slave feature is the chord of PCA (PCA.CHORD), and the filter is defined as:

PCA.CHORD matches CSA.CHORD

The matching is not necessarily strict, but can use substitution rules, as explained in the next section.

2.5 Reflexive Style Databases and transformations

VirtualBand can also be used in a memoryless mode, without pre-recorded style databases. Instead, those databases can be built on-the-fly, in a manner typical of interactive systems such as the Continuator or OMax. One motivation for getting rid of prerecorded material is to avoid "canned music" effects due to the fact that the audience does not know the content of the style databases. Reflexive style databases can be used to generate various species of self-accompaniment, such as duos or trios with oneself, as illustrated in this paper. Conceptually, reflexive style databases do not differ from pre-recorded ones. However, they raise various real-time issues, since their segmentation and feature extraction must be performed in real-time without interfering with the main VirtualBand loop.

Moreover, a reflexive style database raises sparsity issues: because the database contains only what the musician has played so far, its size will typically be much smaller than that of pre-recorded databases, and the system has far fewer possibilities for generation. This is particularly visible in contexts using predetermined chord grids: in principle, the system requires a long feeding phase, to accumulate at least one chunk for every chord of the sequence, before it can start playing back relevant audio material. Such a feeding phase is usually boring for the musician as well as for the audience.

In order to reduce feeding, we propose to expand style databases automatically by using audio transformations, such as transpositions, as well as explicit musical knowledge. Pitch shifting is an effect that shifts an audio signal up or down pitch-wise. In VirtualBand, the audio chunks of a database are pitch-shifted so that the transposed bars can be used in new harmonic situations. More interestingly, we use so-called substitution rules to further expand the style database. Substitution rules consist in replacing a chord by another chord with an equivalent harmonic function, provided the two harmonic contexts may be substituted. This operation is widely used in jazz to bring variety to standard pieces. VirtualBand uses chord substitutions to play, in a certain harmonic context, audio that was recorded in another harmonic context. For instance, the input chord G min 7 can be substituted for C 7 using substitution "G min 7 : C 7".

By combining transpositions and substitutions, the feeding phase can be drastically reduced. The three examples of Figure 2 show how to harmonize the song Solar from a small number of input chords, using transpositions (a), substitutions (b), and a combination of both (c). Without substitutions or transpositions, 12 input chords would be required in the feeding phase; (a) requires 5 input chords, (b) requires 8, and (c) only 2. For instance in (c), the input chord G min 7 generates C 7 via substitution, F min 7 using a one-tone downward transposition, and Bb 7 with a combination of these two operations.

Figure 2. Illustration of the feeding phase reduction using (a) transpositions, (b) substitutions and (c) a combination of both. The input chords (highlighted) generate the remaining chords of Solar (marked in the same color).
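The combined expansion can be sketched symbolically. This is an illustration under simplifying assumptions: chords are (root, quality) pairs with flat spellings, audio pitch-shifting is abstracted to the semitone offset that would be applied, and the only substitution rule encoded is the one generalizing the paper's "G min 7 : C 7" and "F min 7 : Bb 7" examples (a min7 chunk standing in for the dom7 rooted a fourth above).

```python
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def transpose(root, semitones):
    """Transpose a chord root, e.g. transpose('G', 5) -> 'C'."""
    return NOTES[(NOTES.index(root) + semitones) % 12]

def substitutions(chord):
    """Chords a chunk labelled `chord` may stand in for (illustrative rule:
    a min7 chunk substitutes for the dom7 a fourth above, as in 'G min 7 : C 7')."""
    root, quality = chord
    subs = [chord]
    if quality == "min7":
        subs.append((transpose(root, 5), "7"))
    return subs

def expand(database):
    """database: {chord: [chunk ids]}. Return the database expanded with all
    12 transpositions and their substitutions; each entry records the chunk
    and the semitone shift to apply to its audio via pitch shifting."""
    expanded = {}
    for chord, chunks in database.items():
        for k in range(12):
            t = (transpose(chord[0], k), chord[1])
            for usable in substitutions(t):
                expanded.setdefault(usable, []).extend((c, k) for c in chunks)
    return expanded
```

With this rule, a single recorded G min 7 bar covers G min 7, C 7 (substitution), F min 7 (transposition) and Bb 7 (both), mirroring example (c) of Figure 2.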
3. EXAMPLES

A typical jazz guitar combo is a formation in which one guitarist improvises melodies on a harmonic grid, while the two others accompany with chords and bass. The following examples illustrate various configurations of VirtualBand with one real guitarist and two reflexive virtual agents. In the first and second examples we limit the formation to a duo, in which the virtual agent is specified respectively by a spectral centroid feature and by a rhythm pattern feature extracted from the real guitarist. Then, we extend the example to a guitar trio configuration with the presented settings. An accompanying web site illustrates all these configurations with videos of the corresponding performances. Of course, it is possible to switch from one filter to another during the performance, by using specific controllers.

3.1 Spectral centroid based duo

For two jazz guitarists improvising together, a common interaction scheme is question-answer dialogues: a guitarist plays a melody and, as soon as he finishes, the other guitarist either plays another melody that borrows elements from the first one, or accompanies the first one with, e.g., chords. Typically, the answering melody is in the same pitch range, or shares similar rhythm patterns with the original melody. We can simulate such scenarios with various configurations of VirtualBand based on pitch features of the audio content. We chose the spectral centroid as an approximation of pitch, but more refined descriptors could be used interchangeably.

A reflexive style database records the incoming audio of the real guitarist (RG) and extracts the spectral centroid of each bar. A virtual agent (VG), associated to the reflexive database, is connected to the real guitarist. The connection filter specifies that at each bar, the virtual agent selects in the database the bar with the spectral centroid closest to the real guitarist's one. Such a configuration is easily specified as:

VG.SpectralCentroid matches RG.SpectralCentroid

In such a mode, the system behaves as a self-harmonizer: the virtual guitarist follows the spectral centroid of the real guitarist with a one-bar delay (see Figure 3), playing music material that sounds like what the guitarist just played. Another mode consists in doing the opposite, forcing the agent to play audio that is far away pitch-wise from the human's input (of course, still satisfying harmonic constraints), e.g. lower notes of the guitar. This can be specified as:

VG.SpectralCentroid oppositeTo RG.SpectralCentroid

In this mode, the two outputs (real and virtual guitar) are more clearly distinct.

Figure 3. Score of a spectral-centroid based duo: a real agent (RA) plays the melody of Solar, and a virtual agent (VA) matches the spectral centroid of RA with a one-bar delay. At bar 8 (Bb 7), VA uses substitution "F min 7 : Bb 7" of bar 7 of RA. At bar 11 (Db maj7), VA uses a transposition of bar 5 (F maj7) of RA to match the spectral centroid of bar 10 of RA.

3.2 Rhythm-based duo

Question-answer musical dialogues can also be based on rhythmical similarities: a guitarist plays a rhythm pattern for a few bars, and the other one responds by playing a similar pattern. This can be emulated simply in VirtualBand by using other pairs of features. For instance, we can introduce a feature that represents the rhythm pattern by using an "RMS profile". Such a profile is obtained by computing the RMS 12 times per beat over one bar. The distance between two RMS profiles is defined as the scalar product of the two sets. This mode can be specified as:

VG.RhythmProfile closestTo RG.RhythmProfile

In such a configuration, the agent follows the rhythmical patterns of the guitarist, with one bar of delay. Because the rhythm patterns played by the virtual agent are not identical to the real guitarist's, this mode can lead to fascinating interactions. The real musician can trigger rhythm patterns he previously recorded by playing similar patterns: e.g. comping chords (a lot of notes per bar, regularly spaced) can be triggered by playing a fast and regular melody. Because the reflexive database contains only what the musician has played so far, this mode is best used not right away during the session, but after some time, when the system has accumulated enough rhythm samples to play back interesting variations.
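The three filters used in these two duos can be sketched as simple selection functions over a database of annotated bars. The flat dictionary representation and function names are illustrative; in the real system the filters operate within the agent loop described in Section 2.

```python
import numpy as np

def matches(bars, target, feature):
    """'matches' filter: bar whose feature value is closest to the master's."""
    return min(bars, key=lambda b: abs(b[feature] - target))

def opposite_to(bars, target, feature):
    """'oppositeTo' filter: bar whose feature value is farthest from the master's."""
    return max(bars, key=lambda b: abs(b[feature] - target))

def rms_profile(bar_audio, beats_per_bar=4, points_per_beat=12):
    """Rhythm pattern feature: RMS computed 12 times per beat over one bar."""
    frames = np.array_split(np.asarray(bar_audio, dtype=float),
                            beats_per_bar * points_per_beat)
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def closest_to(bars, target_profile):
    """'closestTo' filter: profile similarity is the scalar product, so pick
    the bar maximising the dot product with the target profile."""
    return max(bars, key=lambda b: float(np.dot(b["profile"], target_profile)))
```

The one-bar delay of both duos comes for free: the master's feature value for bar n is only available once bar n has been analyzed, so the selected answer is played at bar n+1.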
3.3 Trio

In a guitar trio, each playing mode (melody, chords and bass) is typically represented by one musician. During an improvisation, guitarists typically shift roles in turn: when a musician takes the lead (solo), the other guitarists adapt their behavior so that each mode is always played by someone. The guitar trio with oneself follows more or less the rules of a standard guitar trio. Using the same settings as above, a real guitarist can be self-accompanied by two reflexive virtual agents, which choose and play their audio beats from the same reflexive database. The reflexive database extracts the playing mode from each recorded beat, among four possible modes: melody, chord, bass and silence; this automatic playing mode identification is performed by a dedicated classifier. Each virtual agent is associated to a unique playing mode (one to the bass, one to the chords). Following a mutually exclusive principle, the virtual agents play only if the real guitarist is not playing in the same mode, and adapt their mode and their rhythm accordingly: the virtual agents accompany melodies of the guitarist with chords, and chords with melodies, while following his or her rhythmic patterns. For instance, if the real guitarist plays (and the database records) gentle chords, these chords can later accompany his melodies. A video showing an example of this trio can be seen on the accompanying web site 1.

4. OTHER APPLICATIONS

In this section, we describe applications of VirtualBand in two other musical contexts, where the musical interaction is of course very different.

4.1 Reflexive Beatboxing

Beatboxing is a music style in which musicians use their mouth to simulate percussion; it also involves humming, speech or singing. Common beatboxers typically alternate between modes, but some great beatboxers are able to play two modes at the same time (e.g. percussion and humming). Beatboxing with VirtualBand aims at augmenting the performance of a moderately good beatboxer by allowing him or her to play, via reflexive virtual agents, several modes at the same time.

Like in the previous example, the system records and stores in a reflexive database the incoming audio of the real beatboxer. From this audio, the database distinguishes between two playing modes: the percussion mode and the humming mode. The classification is performed following a scheme similar to the one used for guitar playing modes, except for the set of features selected by the classifier (here harmonic-to-noise ratio, spectral centroid and Yin). Two virtual agents are then connected to this database, one representing the percussion mode and the other the humming mode. The virtual agents follow a mutually exclusive principle and try to match the human's rhythm with a one-bar delay, depending on the mode of the real beatboxer: they play alternately, each agent playing only when the human is not in its mode. Table 1 illustrates a typical session with such an augmented beatbox.

1 Hidden for anonymity.
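The paper's actual mode classifier is cited rather than detailed; as a stand-in, the idea can be sketched with a nearest-centroid classifier over the named features, together with the mutual-exclusion rule. All class and function names here are illustrative, and the feature vectors (harmonic-to-noise ratio, spectral centroid, Yin pitch value) are assumed to be computed elsewhere.

```python
import numpy as np

class ModeClassifier:
    """Illustrative nearest-centroid stand-in for the cited mode classifier.
    Feature vectors could hold e.g. [HNR, spectral centroid, Yin pitch]."""

    def fit(self, examples):
        # examples: {"percussion": [vec, ...], "humming": [vec, ...]}
        self.means = {m: np.mean(np.asarray(v, dtype=float), axis=0)
                      for m, v in examples.items()}
        return self

    def predict(self, features):
        # Assign the mode whose mean feature vector is nearest.
        x = np.asarray(features, dtype=float)
        return min(self.means,
                   key=lambda m: float(np.linalg.norm(self.means[m] - x)))

def active_agents(human_mode, agent_modes=("percussion", "humming")):
    """Mutual exclusion: each virtual agent plays only when the human
    is not currently playing in its own mode."""
    return [m for m in agent_modes if m != human_mode]
```

In a session like the one of Table 1, each detected bar of humming would thus trigger the percussive agent (and vice versa), with the one-bar delay described above.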
        1    2    3    4
human   t---|t-t-|kk--|t--t|
Ap      ----|----|----|----|
Ah      ----|----|----|----|

        5    6    7    8
human   O-O-|O-O-|A-A-|AO--|
Ap      ----|t-t-|t-t-|t-t-|
Ah      ----|----|----|----|

        9    10   11   12
human   tt--|tktk|OAOA|OAOA|
Ap      kk--|----|----|tktk|
Ah      ----|AO--|A-A-|----|

        13   14   15   16
human   tktk|tktk|----|----|
Ap      tktk|----|----|tktk|
Ah      ----|OAOA|OAOA|OAOA|
Table 1. Notation of a 16-bar performance of the reflexive beatboxing system: t and k are percussive sounds, O and A are hummed vowels. Ap and Ah are the percussive and humming virtual agents.

4.2 Mash-up

A mash-up is a song or composition created by blending two or more prerecorded songs, usually by overlaying a track of one song over the tracks of another. A mash-up requires two (or more) songs that are harmonically compatible, or at least superimposable, i.e. songs that share moments where the harmony is similar. Finding two such songs is difficult. We solve this problem in VirtualBand with the transposition and substitution mechanisms (see Section 2.5), as they enlarge the harmonic possibilities of a database.

Mash-ups usually exploit multi-track songs, i.e. songs whose tracks (e.g. drum track, guitar track, voice track) are available as separate audio files. We represent each track of a song by a virtual agent; the audio of each track is stored in a different database. Mashing-up with VirtualBand consists in playing a song where one agent is muted and replaced by the track agent of another song (for instance, the drum track is muted and replaced by the drum track of another song). Note that the muted agent still plays with the rest of the group (even if it is not audible), in order to master the replacing agent in real-time. The virtual agent representing the new track is connected to the muted agent of the original song, following the rhythm-profile scheme used in Section 3.2. Compared with the reflexive configurations, this application typically requires larger databases.

We illustrate mash-up with VirtualBand by replacing the drummer of the band "The Police" in "Roxanne" by another drummer. We use four different drummers: the drummer of "Nirvana" in "Smells Like Teen Spirit", the drummer of "Pixies" in "Hey", and Bossa nova and Funk drums played by the jazz drummer Jeff Boudreaux 2. Each drummer follows the rhythmical patterns of the muted original drummer. We can hear in the provided example that each drummer plays according to the global structure of the song, i.e. plays breaks, and plays more intensively on the chorus or bridge.

5. CONCLUSION

We have presented VirtualBand, a reactive multi-agent system for live musical improvisation involving real and virtual musicians. VirtualBand revisits the problem of interacting with stylistically consistent agents by taking a MIR approach: interactions are specified using pairs of features, taken from the vast library of features developed in MIR, even simple ones like those used in this paper. VirtualBand illustrates the power of feature interaction to represent the complex and meaningful musical interplay occurring in real performance. This effect is mainly due to the complex interactions that may occur between two features computed on various audio signals. We have illustrated the system with several jazz improvisation scenarios, such as a guitar jazz trio played by a single musician, as well as with other types of musical situations such as mash-up and beatboxing.

VirtualBand agents are reactive and not deliberative: they do not attempt to exhibit autonomy, self goal making, planning, etc. As such, this paper shows that rich interactions can take place even with "only" reactive agents. However, it only scratches the surface of the new field of individual style modeling. Further work will focus on issues such as how to "saturate" a style database, or how to predict the emergence of long-term structure from basic low-level feature interaction.

6. REFERENCES

J. Coker. Elements of the Jazz Language for the Developing Improvisor. Alfred Music Publishing, 1997.

R. Foulon, P. Roy, and F. Pachet. Automatic classification of guitar playing modes. Submitted.

T. Gifford and A. Brown. Beyond reflexivity: Mediating between imitative and intelligent action in an interactive music system. In Proceedings of the 25th BCS Conference on Human-Computer Interaction, 2011.

J. Grobelny. Mashups, sampling, and authorship: A mashupsampliography. Music Reference Services Quarterly, 11(3-4):229–239, 2008.

M. Hamanaka, M. Goto, H. Asoh, and N. Otsu. A learning-based jam session system that imitates a player's personality model. In International Joint Conference on Artificial Intelligence, pages 51–58, 2003.

A. Hawryshkewich, P. Pasquier, and A. Eigenfeldt. Beatback: A real-time interactive percussion system for rhythmic practise and exploration. In Proceedings of the International Conference on New Interfaces for Musical Expression, pages 100–105, 2010.

L. Iocchi, D. Nardi, and M. Salerno. Reactivity and deliberation: A survey on multi-robot systems. In M. Hannebauer, J. Wendler, and E. Pagello, editors, Balancing Reactivity and Social Deliberation in Multi-Agent Systems, volume 2103 of Lecture Notes in Computer Science, pages 9–34. Springer, 2001.

A. Martin, C. T. Jin, A. McEwan, and W. L. Martens. A similarity algorithm for interactive style imitation. In Proceedings of the International Computer Music Conference (ICMC), 2011.

J. Nika and M. Chemillier. Improtek: integrating harmonic controls into improvisation in the filiation of OMax. In Proceedings of the International Computer Music Conference (ICMC 2012), pages 180–187, 2012.

F. Pachet. The Continuator: Musical interaction with style. Journal of New Music Research, 32(3):333–341, 2003.

F. Pachet, P. Roy, J. Moreira, and M. d'Inverno. Reflexive loopers for solo musical improvisation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13), pages 2205–2208. ACM, 2013.

C. Schörkhuber and A. Klapuri. Pitch shifting of audio signals using the constant-Q transform. In Proceedings of the 15th International Conference on Digital Audio Effects (DAFx-12), 2012.

D. Schwarz. Current research in concatenative sound synthesis. In Proceedings of the International Computer Music Conference (ICMC), pages 9–12, 2005.

D. Stowell and M. D. Plumbley. Characteristics of the beatboxing vocal style. Technical Report C4DM-TR-08-01, Centre for Digital Music, Department of Electronic Engineering, Queen Mary, University of London, 2008.
2 http://tinyurl.com/bu5hx6q