Author names and affiliations omitted for double-blind reviewing.
ABSTRACT

VirtualBand is a multi-agent system dedicated to live computer-enhanced music performances. VirtualBand enables one or several musicians to interact in real-time with stylistically plausible virtual agents. The problem addressed is the generation of virtual agents that each represent the style of a given musician, while reacting to human players. The solution we propose is a generation framework that relies on feature interaction. Virtual agents exploit a style database, which consists of an audio signal from which a set of MIR features is extracted. Musical interactions are represented by directed connections between virtual agents. The connections are specified as database filters based on MIR features extracted from the agent's audio signal. We claim that such a connection framework makes it possible to implement meaningful musical interactions and to produce stylistically consistent musical output. We illustrate this concept through several examples in jazz improvisation, beatboxing and interactive mash-ups.

1. INTRODUCTION

Collective improvisation is a group practice in which several musicians contribute their part to produce a coherent musical output. Each musician typically brings in musical knowledge, taste, and technical skills, and more generally his style, which makes him or her unique and recognizable. However, good improvisations are not only about putting together the competence of several individuals. Listening and reacting to each other is also crucial, as it enables each musician to adapt his or her playing to the global musical output in terms of rhythm, intensity, harmony, etc. The combination of individual styles with the definition of their interaction defines the quality of a music band. In short, group improvisation can be seen as interactions between stylistically consistent agents.

Previous works have attempted to simulate computationally the behavior of real musicians in a context of musical group practice.
For instance,  proposes a MIDI-based model of an improviser's personality to build a virtual trio system, but no explicit interactions between virtual agents and real musicians are proposed.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
© 2013 International Society for Music Information Retrieval.

Improtek  is a system for live improvisation that generates musical sequences built in real-time from a live source, using feature similarity and concatenative synthesis. Improtek plays along with a musician improvising on harmonic and beat-based music, in the style of this musician, but interactions with the virtual agents are based solely on similarity measures, thereby limiting the scope of possible man-machine interactions.  describes the Jambot, a system that infers in real-time various features from an audio signal and then produces percussive responses, following alternately two behaviors: "imitative" or "intelligent". In the imitative behavior, the system responds directly to a user onset with a percussive sound, which is limited both in terms of interaction and style. In the intelligent behavior, the system bases its output on a global understanding of the musical context (tempo, meter, etc.), but not directly on the user input: the system is not really interacting with the musician. Most importantly, even the style of the Jambot is not clearly defined.  presents Beatback, a system that generates a MIDI signal based on rhythmic patterns, using Markov models. Besides the fact that the sole use of MIDI signals is musically restrictive, interactions are limited to rhythmic elements only. In , Martin et al. propose a system that interacts in real-time with a human musician, adapting its behavior both to prior knowledge (a database of musical situations) and to parameters extracted during the current session, but this system forces the musician to deal with a graphical interface during the improvisation, which makes it hardly usable in a live performance.

In this paper we revisit the problem of musical interaction with stylistically consistent agents by taking a feature-based MIR approach to the problem.
We introduce VirtualBand, a reactive music multi-agent system in which real musicians can control and interact with virtual musicians that retain their own style. A virtual musician is a representation of an actual musician as a virtual agent. A virtual agent is designed to play or improvise like the actual musician it represents. To build the virtual agent, we record the musician in a musical situation that fits the targeted improvisation context. The resulting audio content is stored in a database, organized in musically relevant chunks (usually beats and bars). To each chunk is associated a set of features that are used to define and implement the interaction schemes. During the performance, the virtual agent uses concatenative synthesis (see  for instance) to generate music streams as seamless concatenations of database chunks, in reaction to the analyzed input.

The interactions between the real and virtual musicians are implemented in VirtualBand by connections, which are the core contribution of our approach. In that sense our multi-agent system is an instance of a reactive system rather than a deliberative one , as illustrated in this paper. Its main potential lies in the seemingly infinite possibilities provided by feature interaction.

In Section 2 we describe the main components of VirtualBand, and illustrate the power of feature interaction on several configurations of the system, in the context of jazz improvisation. In Section 3 we show how to build a reflexive virtual band from the interaction point of view. Section 4 describes applications of VirtualBand in two other musical contexts: beatboxing and automatic mash-up.

2. SYSTEM DESCRIPTION

The core of the VirtualBand system (see Figure 1) consists of:
• A clock that establishes the tempo by sending notifications at each beat to every agent. We imposed a fixed tempo in the current version.
• An audio engine and a mixer to play the outputs of the virtual agents.

Figure 1. Architecture of VirtualBand. Connections between agents are represented by red arrows. VirtualBand modules are in the rectangular frame at the bottom.

2.1 Agents

There are two types of agents, human and virtual agents, representing respectively actual and virtual musicians. The real musicians, i.e. those participating in the improvisation, are also represented by agents, called here real agents. Human agents are responsible for analyzing the input (audio or MIDI) from the human musician they represent, and for extracting acoustic features from the signal in real-time. Real agents do not relate to any prerecorded audio, but rather extract various acoustic and/or MIDI features from the musical output of the real musician. These features are means for real agents to control the virtual agents via connections.

Virtual agents play music from a style database consisting of prerecorded audio content, from which various features are extracted. Concretely, a virtual agent computes at every beat the next audio chunk to play by choosing among the available audio beats in its database (and possibly transforming them), as specified by its connections with other agents (see below). To do so, virtual agents apply a succession of filters to this database. Once the next chunk to play has been selected, either from the bars or from the beats, the agent performs a cross-fade between the end of the beat currently being played and the beginning of the selected beat, and proceeds.

2.2 Style Databases

A style database represents the musical skills of a musician in a given musical style/context, defined by a musical genre, such as Bossa nova or Swing, and a tempo. Technically, a style database consists of an audio signal from which a set of MIR features is extracted. The construction of such a style database requires two steps: recording and feature extraction.

In the recording step, a musician plays so as to fully express his musical style, using various patterns, playing with different intensities, in different moods, e.g. staccato or legato. Of course, it is difficult to express one's style exhaustively, so style databases are limited to specific musical situations. Note that the subject of individual style capture is still in its infancy, and further studies should refine this concept.

In the feature extraction step, the recorded audio signal is segmented into bars and beats (in our current implementation the tempo is given a priori, so the segmentation task is easy). The music stored in the database is thus organized in bars and beats, from which a set of feature values is extracted. The extracted features are common MIR features: temporal features, e.g. numbers of onsets or rhythmic patterns, and spectral features, e.g. spectral centroid, harmonic-to-noise ratio or chroma. Features are usually extracted from the audio signal, but they can also be imported from a specific musical context (e.g. if the musician played with a chord grid). These features are used to implement the interactions with the virtual agents: the values of the features will be used as a means of controlling virtual agents via the connections, as explained below.

2.3 Connections

A connection between two agents represents a musical interaction between the corresponding musicians. Concretely, a connection is a directed link from a master (real or virtual) agent to a slave virtual agent (see Figure 1). The link is defined by a mapping between feature values. A connection consists of two agents (a master and a slave), a pair of features (one for each agent), and a filter that selects from the database the best musical contents (bars/beats) to play, depending on the features of each agent. These mappings are in turn specified as filters that define which database chunks slave agents should select from their database. A connection thus defines precisely the interaction scheme between two agents (real or virtual).
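To make the connection mechanism concrete, here is a hypothetical Python sketch (not the authors' implementation) of a master-to-slave filter: a master feature value is mapped into the slave feature space, and the filter keeps the database chunk whose annotated feature is closest to the mapped target. The feature names, value ranges and toy database below are assumptions made for illustration.

```python
# Illustrative sketch of a VirtualBand-style connection filter (assumed
# names and values): the master's feature value is mapped into the slave's
# feature space, then the closest database chunk is selected.

def linear_map(value, src_range, dst_range):
    """Map a master-feature value into the slave feature space."""
    (s0, s1), (d0, d1) = src_range, dst_range
    t = (value - s0) / (s1 - s0)
    return d0 + t * (d1 - d0)

def select_chunk(database, slave_feature, target):
    """Filter: keep the chunk whose slave feature is closest to the target."""
    return min(database, key=lambda chunk: abs(chunk[slave_feature] - target))

# Toy style database: one entry per recorded beat, annotated with features.
database = [
    {"id": 0, "hit_count": 2},
    {"id": 1, "hit_count": 5},
    {"id": 2, "hit_count": 9},
]

# Master agent (e.g. a real guitar) produces an RMS value in [0, 1]; the
# slave (e.g. a virtual drummer) selects the beat whose hit count fits best.
rms = 0.8
target_hits = linear_map(rms, (0.0, 1.0), (0, 10))   # -> 8.0
beat = select_chunk(database, "hit_count", target_hits)
print(beat["id"])  # -> 2 (the 9-hit beat is closest to the target of 8)
```

In a real system the selection would run once per beat, followed by the cross-fade described above; successive filters (harmony, then rhythm, etc.) would narrow the candidate set before this final closest-match step.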
Formally, an interaction is specified by an equation of the following form. Consider a master agent MA that controls a slave agent SA; the problem is to select a chunk in SA's database that is most appropriate for the context defined by MA. This selection is performed using 1) a master-feature type (e.g. spectralCentroid), 2) a slave-feature type (e.g. hitCount), and 3) a master-feature value, provided at time t (e.g. 0.8). One problem is that the feature types may operate in different spaces, with different distributions of values. In general, a mapping µ is needed to map values in the master feature space to values in the slave feature space. Such a mapping can be defined by:

SA.f = µ(MA.f)

The audio chunk in SA is then determined by applying a succession of filters to the database, until a single value is found.

For instance, consider a real guitar agent (RG) that controls a virtual drummer (VD). The master feature of the connection can be the RMS of the guitar, and the slave feature is the hit-count, i.e. the number of onsets in each bar, of the virtual drummer. Technically, such a filter can be specified as:

RG.RMS matches VD.Hit-Count

To select the next beat to be played by the virtual drummer, the filter establishes a linear mapping from the input RMS value to a target hit-count value, and then selects in the database the audio beat with the hit count closest to the target hit count. The filter can be designed in various ways, and offers a lot of flexibility to represent many diverse interaction scenarios.

2.4 Harmony

In tonal musical contexts, such as standard jazz, musicians improvise on a chord sequence that is known to all musicians beforehand (e.g. Solar, by Miles Davis). In VirtualBand, harmony is implemented by a specific agent that represents the chord sequence. The chord sequence agent (CSA) provides a unique feature, called CHORD, whose value is the name of the next chord in the sequence, and dispatches harmony to all other agents: this agent is connected to every virtual agent that is harmony-dependent. For a virtual agent Vi, the connection is defined as follows: the master agent is CSA, its master feature is CHORD, and the slave agent is Vi. The slave feature and filter are left undetermined, but should be implemented so that the virtual agent Vi will play something that is harmonically consistent with the chord returned by the CSA.

For instance, consider a piano comping agent (PCA) as slave agent. Its database consists of various recorded beats, each with a single audio chord, and each beat is annotated with a feature CHORD which represents the chord name. In this case the slave feature is the chord of PCA, and the filter is defined as:

PCA.CHORD matches CSA.CHORD

The matching is not necessarily strict, as explained in the next section.

2.5 Reflexive Style Databases and Transformations

VirtualBand can also be used in a memoryless mode, i.e. without pre-recorded style databases. Instead, those databases can be built on-the-fly, in a manner typical of interactive systems such as the Continuator  or Omax . Reflexive style databases can be used to generate various species of self-accompaniments, such as duos or trios with oneself (see ). One motivation for getting rid of prerecorded material is to avoid "canned music" effects, due to the fact that the audience does not know the content of the style databases. In principle, reflexive style databases do not differ from pre-recorded ones. However, they raise various real-time issues, since their segmentation and feature extraction must be performed in real-time without interfering with the main VirtualBand loop; these issues are not discussed in this paper.

Because a reflexive database contains only what the musician has played so far, its size will typically be much smaller than that of pre-recorded databases. As a consequence, reflexive style databases raise sparsity issues: the system has much less possibilities for generation, and requires a long feeding phase before it can start playing back relevant audio material. This is particularly visible in contexts using predetermined chord grids: the musician has to accumulate at least one chunk for every chord of the sequence. Such a feeding phase is usually boring for the musician as well as for the audience.

In order to reduce the feeding phase, we propose to expand style databases automatically, using audio transformations, such as transpositions, as well as explicit musical knowledge, in the form of chord substitutions. Pitch shifting is an effect that shifts an audio signal up or down pitch-wise ( ). In VirtualBand, the audio chunks of a database are pitch-shifted so that the transposed bars can be used in new harmonic situations. We use so-called substitution rules to further expand the style database. Substitution rules consist in replacing a chord by another chord with an equivalent harmonic function. This operation is widely used in jazz to bring variety to standard pieces (see ). VirtualBand uses chord substitutions to play, in a certain harmonic context, audio that was recorded in another harmonic context, provided the two harmonic contexts may be substituted.

By combining transpositions and substitutions, the feeding phase can be drastically reduced. The three examples of Figure 2 show how to harmonize the song Solar from a small number of input chords. Without substitutions or transpositions, 12 input chords would be required in the feeding phase. For instance, in (c), the input chord G min 7 can be substituted for C 7 using the substitution "G min 7 : C 7", but also for F min 7 using a one-tone downward transposition, and for Bb 7 with a combination of these two operations.

Figure 2. Illustration of the feeding phase reduction using transpositions (a), substitutions (b), and a combination of both (c). The input chords (highlighted) generate the remaining chords of Solar (marked in the same color): (a) requires 5 input chords, (b) requires 8, and (c) only 2.
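The database-expansion idea can be sketched as follows. This is an illustrative Python sketch, not the authors' code: the chord vocabulary, the single substitution rule ("G min 7 : C 7") and the ±2-semitone transposition range are assumptions for the example, and a real system would also transform the audio chunks themselves (e.g. by pitch shifting).

```python
# Hypothetical sketch of expanding a style database with transpositions and
# substitution rules. Chord names and rules below are assumptions.

NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def transpose(chord, semitones):
    """Transpose a chord name such as 'G min 7' by a number of semitones."""
    root, quality = chord.split(" ", 1)
    idx = (NOTES.index(root) + semitones) % 12
    return f"{NOTES[idx]} {quality}"

# Substitution rules: a recorded chord can stand in for an equivalent one.
SUBSTITUTIONS = {"G min 7": ["C 7"]}   # the paper's "G min 7 : C 7" rule

def expand(database, max_shift=2):
    """Return (chord, chunk, transform) entries usable in new contexts."""
    expanded = []
    for chord, chunk in database:
        for shift in range(-max_shift, max_shift + 1):
            shifted = transpose(chord, shift)
            expanded.append((shifted, chunk, f"shift {shift:+d}"))
            # Substitutions also apply to each transposed version.
            for sub in SUBSTITUTIONS.get(shifted, []):
                expanded.append((sub, chunk, f"shift {shift:+d} + sub"))
    return expanded

# A single recorded bar of G min 7 now covers several harmonic contexts,
# including C 7 (substitution) and F min 7 (one tone down).
db = [("G min 7", "audio_bar_12.wav")]
covered = {entry[0] for entry in expand(db)}
print(sorted(covered))
```

Combining both operations is what shrinks the feeding phase: each recorded chunk becomes reusable under every chord reachable by a transposition, a substitution, or a chain of the two.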
3. EXAMPLES

A typical jazz guitar combo is a formation in which a guitarist improvises melodies on a harmonic grid, while the two others accompany with chords and bass (i.e. chord comping and bass lines). We can simulate such scenarios with various configurations of VirtualBand, based on pitch features of the audio content. The following examples illustrate various configurations of VirtualBand with one real guitarist and two reflexive virtual agents. In the first and second examples we limit the formation to a duo, in which the virtual agent is specified respectively by a spectral centroid feature and by a rhythm pattern feature extracted from the real guitarist. Then, we extend the example to a guitar trio configuration with the presented settings. An accompanying web site (hidden for anonymity) illustrates all these configurations with videos of the corresponding performances.

3.1 Spectral Centroid Based Duo

For two jazz guitarists improvising together, a common interaction scheme is question-answer dialogues. While a guitarist proposes a melody, the other one either stops playing or accompanies the first one; as soon as he finishes, the other guitarist plays another melody that borrows elements from the first one. Typically, the answering melody is in the same pitch range, or shares similar rhythm patterns with the original melody.

A reflexive style database records the incoming audio of the real guitarist (RG) and extracts the spectral centroid of each bar. We chose the spectral centroid as an approximation of pitch, but more refined descriptors could be used interchangeably. A virtual agent (VG), associated to the reflexive database, is connected to the real guitarist. The connection filter specifies that, at each bar, the virtual agent selects in the database the bar with the spectral centroid closest to the real guitarist's one. Such a configuration is easily specified as:

VG.spectralCentroid matches RG.spectralCentroid

In such a mode, the system behaves as a self-harmonizer: the virtual guitarist follows the spectral centroid of the real guitarist with a one-bar delay (see Figure 3), playing music material that will sound like what the guitarist just played.

Figure 3. Score of a spectral-centroid based duo: a real agent (RA) plays the melody of Solar; a virtual agent (VA) matches the spectral centroid of RA with a one-bar delay. At bar 8 (Bb 7), VA uses substitution "F min 7 : Bb 7" of bar 7 of RA. At bar 11 (Db maj7), VA uses a transposition of bar 5 (F maj7) of RA to match the spectral centroid of bar 10 of RA.

Another mode consists in doing the opposite, i.e. forcing the agent to play audio that is far away pitch-wise from the human's input (of course, still satisfying harmonic constraints), e.g. the lower notes of the guitar. This mode can be specified as:

VG.spectralCentroid oppositeTo RG.spectralCentroid

In this mode, the two outputs (real and virtual guitar) are more clearly distinct. Of course, it is possible to switch from one filter to another during the performance, e.g. by using specific controllers.

3.2 Rhythm-Based Duo

Question-answer musical dialogues can also be based on rhythmical similarities: a guitarist plays a rhythm pattern for a few bars, and the other one responds by playing a similar pattern. This can be emulated simply in VirtualBand by using other pairs of features. For instance, we can introduce a feature that represents the rhythm pattern by using an "RMS profile". Such a profile is obtained by computing the RMS 12 times per beat over one bar. The distance between two RMS profiles is defined as the scalar product of the two sets. This mode can be specified as:

VG.RhythmProfile closestTo RG.RhythmProfile

In such a configuration, the virtual agent follows the rhythmical patterns of the guitarist: the real musician can trigger rhythm patterns he previously recorded by playing similar patterns. For instance, a walking bass or comping chords (a lot of notes per bar, regularly spaced) can be triggered by playing a fast and regular melody. Because the rhythm patterns played by the virtual agent are not identical to the real guitarist's, this mode can lead to fascinating interactions. However, it typically requires larger databases, so it is best used not right away during the session, but after some time, when the system has accumulated enough rhythm samples to play back interesting variations.

Conversely, another filter can be designed to play an opposite rhythm pattern. Such an opposite pattern allows the musician to play with various playing modes: if the real guitarist plays (and the database records) gentle chords, and then more melodies, the virtual agent will be able to accompany melodies of the guitarist with chords, and chords with melodies.

3.3 Trio

In a guitar trio, each playing mode (melody, chord comping, bass) is typically represented by one musician. During an improvisation, guitarists typically shift roles in turn: when a musician takes the lead (solo), the other guitarists adapt their behavior so that each mode is always played by someone. Each of these roles represents a different guitar playing mode.

With VirtualBand, a real guitarist can be self-accompanied by two reflexive virtual agents. Using the same settings as above, the reflexive database extracts the playing mode from each recorded beat, among four possible modes: melody, chord, bass and silence. Automatic playing mode identification is described in . Each virtual agent is associated to a unique playing mode (one to the bass, one to the chords), and each agent follows a mutually exclusive principle, i.e. virtual agents play only if the real guitarist is not playing in the same mode. When the real guitarist changes mode, the virtual agents adapt their mode and their rhythm, following his or her rhythmic patterns. The guitar trio with oneself thus follows more or less the rules of a standard guitar trio. A video showing an example of this trio can be seen on the accompanying web site.
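The RMS-profile rhythm feature described above can be sketched as follows (a hedged illustration, not the authors' code): RMS is computed 12 times per beat over one bar, i.e. 48 slots for a 4-beat bar, and two profiles are compared with a scalar product. The toy signals and the normalization step are assumptions for the example.

```python
# Illustrative sketch of the "RMS profile" rhythm feature: 12 RMS values
# per beat over one 4-beat bar (48 slots), compared with a scalar product.
import math

def rms_profile(samples, slots=48):
    """Split one bar of samples into slots and compute the RMS of each."""
    n = len(samples) // slots
    profile = [
        math.sqrt(sum(x * x for x in samples[i * n:(i + 1) * n]) / n)
        for i in range(slots)
    ]
    norm = math.sqrt(sum(v * v for v in profile)) or 1.0
    return [v / norm for v in profile]   # normalize so loudness cancels out

def similarity(p, q):
    """Scalar product of two normalized profiles: 1.0 = same rhythm."""
    return sum(a * b for a, b in zip(p, q))

# Two toy "bars" with the same accent pattern but different loudness:
# 4800 samples -> 100 samples per slot; every 12th slot is accented.
bar_a = [1.0 if (i // 100) % 12 == 0 else 0.1 for i in range(4800)]
bar_b = [0.5 if (i // 100) % 12 == 0 else 0.05 for i in range(4800)]
print(round(similarity(rms_profile(bar_a), rms_profile(bar_b)), 3))  # -> 1.0
```

Normalizing the profiles makes the scalar product sensitive to where the energy falls within the bar rather than to how loud the playing is, which is what a "closestTo" rhythm filter needs.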
4. OTHER APPLICATIONS

In this section, we describe applications of VirtualBand in two other musical contexts, via reflexive virtual agents: beatboxing and mash-up. Here again, the musical interaction is of course very different.

4.1 Reflexive Beatboxing

Beatboxing is a music style in which musicians use their mouth to simulate percussion; beatboxing also involves humming, speech or singing (see ). Common beatboxers typically alternate between modes, but some great beatboxers are able to play two modes at the same time (e.g. percussion and humming). Beatboxing with VirtualBand aims at augmenting the performance of a moderately good beatboxer by allowing him or her to play several modes at the same time.

Like in the previous example, the system records and stores in a reflexive database the incoming audio of the real beatboxer. From this audio, the database distinguishes between two playing modes: the percussion mode and the humming mode. The classification is performed following a scheme similar to , except for the set of features selected by the classifier (here harmonic-to-noise ratio, spectral centroid and Yin). Then two virtual agents are connected to this database, one representing the percussion mode and the other the humming mode. Both choose and play their audio beats from the same reflexive database. Following a mutually exclusive principle, virtual agents play alternately, depending on the mode of the real beatboxer, and try to match the human's rhythm with a one-bar delay. Here the musician is free to play as he wants. Table 1 illustrates a typical session with such an augmented beatbox.

      1    2    3    4
human t---|t-t-|kk--|t--t|
Ap    ----|----|----|----|
Ah    ----|----|----|----|

      5    6    7    8
human O-O-|O-O-|A-A-|AO--|
Ap    ----|t-t-|t-t-|t-t-|
Ah    ----|----|----|----|

      9    10   11   12
human tt--|tktk|OAOA|OAOA|
Ap    kk--|----|----|tktk|
Ah    ----|AO--|A-A-|----|

      13   14   15   16
human tktk|tktk|----|----|
Ap    tktk|----|----|tktk|
Ah    ----|OAOA|OAOA|OAOA|

Table 1. Notation of a 16-bar performance of the reflexive beatboxing system: t and k are percussive sounds; O and A are hummed vowels. Ap and Ah are the percussive and humming virtual agents. Virtual agents follow a mutually exclusive principle and try to match the human's rhythm with a one-bar delay.

4.2 Mash-up

A mash-up is a song or composition created by blending two or more prerecorded songs, usually by overlaying a track of one song over the tracks of another (see ). Mash-ups usually exploit multi-track songs, i.e. songs whose tracks (e.g. guitar track, drum track, voice track) are available as separated audio files. A mash-up requires two songs (or more) that are harmonically compatible, or at least superimposable, i.e. songs that share moments where the harmony is similar. Finding two such songs is difficult. We solve this problem in VirtualBand with the transposition and substitution mechanisms (see Section 2.5), as they enlarge the harmonic possibilities of a database.

We represent each track of a song by a virtual agent; the audio of each track is stored in a different database. Mashing-up with VirtualBand consists in playing a song where one agent is muted and replaced by the track agent of another song (for instance, the drum track is muted and replaced by the drum track of another song). The virtual agent representing the new track is connected to the muted agent of the original song. Note that the muted agent still plays with the rest of the group (even if it is not audible), in order to master the replacing agent in real-time.

We illustrate mash-up with VirtualBand by replacing the drummer of the band "The Police" in "Roxanne" by another drummer. We use four different drummers: the drummer of "Nirvana" in "Smells Like Teen Spirit", the drummer of "Pixies" in "Hey", and Bossa nova and Funk drums played by the jazz drummer Jeff Boudreaux (http://tinyurl.com/bu5hx6q). Each drummer follows the rhythmical patterns of the muted original drummer. We can hear in the provided example that each drummer plays according to the global structure of the song, i.e. plays breaks, and plays more intensively on chorus or bridge. This effect is mainly due to the complex interactions that may occur between two features computed on various audio signals, even simple ones like the ones used in this paper.

5. CONCLUSION

We have presented VirtualBand, a reactive multi-agent system for live musical improvisation involving real and virtual musicians. VirtualBand revisits the problem of interacting with stylistically consistent agents by taking a MIR approach to the problem. Interactions are specified using feature pairs, taken from the vast library of features developed in MIR. We have illustrated the system with several jazz improvisation scenarios, such as a guitar jazz trio played by a single musician, as well as with other types of musical situations, such as mash-up and beatboxing.

VirtualBand agents are reactive and not deliberative , i.e. they do not attempt to exhibit autonomy, planning, self goal making, etc. However, this paper shows that even with "only" reactive agents, rich interactions can take place: VirtualBand illustrates the power of feature interaction to represent complex and meaningful musical interplay occurring in real performance. As such, it only scratches the surface of the new field of individual style modeling. Further work will focus on issues like how to "saturate" a style database, i.e. attempt to cover a large range of musical situations, or how to predict the emergence of long-term structure from basic low-level feature interaction.

6. REFERENCES

 J. Coker. Elements of the Jazz Language for the Developing Improvisor. Alfred Music Publishing, 1997.

 R. Foulon and F. Pachet. Automatic classification of guitar playing modes. Submitted, 2013.

 T. Gifford and A. Brown. Beyond reflexivity: Mediating between imitative and intelligent action in an interactive music system. In 25th BCS Conference on Human-Computer Interaction, 2011.

 J. Grobelny. Mashups, sampling, and authorship: A mashupsampliography. Music Reference Services Quarterly, 11(3-4):229–239, 2008.

 M. Hamanaka, M. Goto, H. Asoh, and N. Otsu. A learning-based jam session system that imitates a player's personality model. In International Joint Conference on Artificial Intelligence, pages 51–58, 2003.

 A. Hawryshkewich, P. Pasquier, and A. Eigenfeldt. Beatback: A real-time interactive percussion system for rhythmic practise and exploration. In Proceedings of the International Conference on New Interfaces for Musical Expression, 2010.

 L. Iocchi, D. Nardi, and M. Salerno. Reactivity and deliberation: A survey on multi-robot systems. In M. Hannebauer, J. Wendler, and E. Pagello, editors, Balancing Reactivity and Social Deliberation in Multi-Agent Systems, volume 2103 of Lecture Notes in Computer Science, pages 9–34. Springer, 2001.

 A. Martin, C. Jin, and W. Martens. A similarity algorithm for interactive style imitation. In Proceedings of the International Computer Music Conference (ICMC), 2011.

 J. Nika and M. Chemillier. Improtek: integrating harmonic controls into improvisation in the filiation of OMax. In Proceedings of the International Computer Music Conference 2012 (ICMC 2012), pages 180–187, 2012.

 F. Pachet. The continuator: Musical interaction with style. Journal of New Music Research, 32(3):333–341, 2003.

 F. Pachet, P. Roy, J. Moreira, and M. d'Inverno. Reflexive loopers for solo musical improvisation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pages 2205–2208. ACM, 2013.

 C. Schörkhuber and A. Klapuri. Pitch shifting of audio signals using the constant-Q transform. In Proc. of the 15th Int. Conference on Digital Audio Effects (DAFx-12), 2012.

 D. Schwarz. Current research in concatenative sound synthesis. In Proceedings of the International Computer Music Conference (ICMC), 2005.

 D. Stowell and M. Plumbley. Characteristics of the beatboxing vocal style. Technical Report C4DMTR-08-01, Centre for Digital Music, Dept. of Electronic Engineering, Queen Mary, University of London, 2008.