Back to the Parlour
Por/By: Rajmil Fischman

Imagine a weekend afternoon in a middle-class parlour during the second half of the nineteenth century: a soiree, with music performed live by family members gathered around the piano. This scene - once a common occurrence that fostered creative social interaction - became increasingly rare, displaced by substitute social behaviours arising from technological developments such as sound recording (van der Merwe, 1989), television, etc. Furthermore, as the schism between ‘popular’ and ‘art’ deepened, and the latter demanded increasing levels of virtuosity in order to realise musical ideas, performance of certain strands of contemporary music became nearly impossible for anyone but professionals, disappearing from the ‘soiree’ repertoire.

Recently, technological development has heralded possible shifts from the relatively passive activity of listening to – and/ or viewing - readymade artefacts, to more active forms of engagement. This is most notable in computer games, in which new paradigms encouraging communal activity in the modern home1 or in virtual venues2 (often accessed from home) exemplify a process whereby human beings meet together - physically and/or virtually - in order to enjoy a common activity. Therefore, it is proposed here that, given appropriate tools and conditions, this is also possible in the case of music performance in a modern incarnation of the parlour; be it at home or as a ‘cyber-soiree’. This type of transfigured reincarnation corresponds to the process of retrieval within McLuhan’s tetrad (McLuhan and Powers, 1989)3.

Meanwhile, post-modern blurring of ‘popular’/‘art’ boundaries has challenged this ‘traditional’ schism. This challenge is not only a result of aesthetic choices but also of means of dissemination enabled by new technologies (e.g. YouTube features both Boulez and Daft Punk). This is particularly true when it comes to electronic media: Landy (2007, p. 17) has proposed the concept of sound based music4 to encompass a wide range of creative output, which happens to defy ‘popular’/‘art’ dichotomies. Therefore, it is possible to conceive performers in the modern parlour who embrace music resulting from a wide spectrum of aesthetic conceptions and approaches, as long as they possess the skills to perform the music. In order to enable wide access, these skills need not be specific to musical practice as long as: 1) current technology is used to reduce virtuosity requirements, subsuming complexity while allowing user control through simpler actions, and 2) performance is based on paradigms that are idiomatic to a wide range of contemporary potential performers. It is in this context that videogames seem to offer an ideal platform for the realisation of music in the ‘new parlour’. Thus, music-making may become potentially accessible to a videogame generation that prefers interactivity and, although lacking formal musical training, already possesses appropriate strategic and motoric performance skills5. Moreover, as the degree of sophistication of sonic output and its organisation is subsumed within the technology, it is possible to envisage a demand for further sophistication in order to achieve more refined musical expression and so on6: ideally, one could hope for a feedback cycle that may further erode the differentiation between ‘popular’ and ‘art’ until this is made irrelevant.

It is proposed here that existing compositional knowledge and current technology make this possible today. The problem now is one of content and this is precisely the role of the creator(s); focusing on the elaboration of appropriate approaches and paradigms for the realisation7 of an idiomatic, expressive and musically interesting repertoire, by means of the adaptation of present technologies. Note that this moves away from the traditional conception of the ‘creator’ as the authoritarian producer of a more or less ‘fixed’ work of art, towards an enabler of music that ‘emerges as an activity … that creates its own code at the same time as the work’ (Attali 1985, p.135). This type of creativity resembles the role of the team of designers in a videogame, who create a set of rules and physics (and, we should admit it, also elements that on their own could be considered as artwork) which, depending on the degrees of freedom a player has8, enable the latter to create a unique realisation comparable with a musical improvisation. As Attali points out, this would not necessarily be ‘a new music, but a new way of making music’9 (ibid., p. 134). In this context, we could imagine a McLuhanesque retrieval of the traditional musical score transfigured into the rules and physics of the game: a videogame score.

Transfiguration also applies to the ontological difference between artistic performance and gaming (i.e. aesthetic achievement and pleasure as opposed to pleasure uniquely derived from overcoming challenges within competition). Nevertheless, this distinction is not necessarily as significant as it may seem, given current concerns for interactivity in music - following in the footsteps of literature and drama (Ryan, 2001)10 - and the creation of works based on game principles. Research on interactivity includes Winkler (1995, 1998), Barrett and Hammer (1998), Camurri et al (2000), Sapir (2000), Wilson and Bromwich (2000), Harris and Bongers (2002), Ciufo (2003), Rudi (2005), Essl and O’Modhrain (2006), Lee et al (2006), Feldmeier and Paradiso (2007), and many others. Pioneering examples of ‘game compositions’ include Xenakis’ Duel (1959) and Stratégie (1962), and mechanisms used in Cage’s works (e.g. Variations I-III, 1958-1963).

Finally, transfiguration is also evident in the current ubiquity and relative affordability of new technologies, which enable a much larger social cross-section to embrace them: access is no longer restricted to affluent middle classes or aristocracy (at least in industrialised societies). In fact, a whole genre of music videogames accessible to a wide population has evolved, including recent titles such as Guitar Hero (2005 - 2009), DJ Hero (2009), Karaoke Revolution (2006), SingStar (2007), Wii Music (2008), Rez (2001-2010), etc. However, artistic creation and commercial gaming have not yet converged on a significant scale11. Until very recently, the former has yielded a heterogeneous variety of specialist platforms and tools, with limited transferability to other creative contexts or users, let alone the wide community reached by computer gaming. On the other hand, the majority of music videogames either emulate repertoire produced more effectively by more traditional means (e.g. guitar, orchestra, etc.), or provide limited musical palettes and means of articulation (see for instance Elektroplankton, 2005).


Sii Me, Hear Me, Touch Me, Heal Me...

The motivations and context described above gradually shaped the conception of a structured interactive immersive musical experience (SiiMe), in which users advance at their own pace and choose their own trajectory through a musical work, but have to act within its rules and constraints towards a final goal - i.e. the realisation of the work. It is structured in a way akin to the rules and physics of a videogame, which apply to musical creativity. It is interactive not only in the sense that the technology acts with and reacts to the person(s) using it, but also because, together with the latter, it instantiates the musical work. It is as immersive as music and time-based art can be, absorbing the participant from her/his daily existence into a world in which time is bent according to the work’s own rules which, in turn, depend on the nature and organisation of the materials. However, as opposed to literature, where there is a tension between interactivity and immersion12, the former actually reinforces the latter, as is normally the case in the performing arts. Ultimately, extending this idea to other senses when technology becomes available (e.g. true three-dimensional visuals, smell and touch), it might be possible to achieve total immersion of the type portrayed in the famous holodeck featured in the television series Star Trek: The Next Generation (Roddenberry et al, 1987-1994). In such a case, the term musical extends beyond the exclusive use of sound to articulate time; to include a wider set of media. Finally, it becomes an experience when the participants take part in a unique event in their life; achieving one of those rare moments of lucidity and inspiration characteristic of engaging with art at its deeper level.

The structured interactive immersive musical experience should thus enable and facilitate the unleashing of creative potential latent in all of us: the break of the mirror, just like in the case of the boy who could potentially hear, talk, see and feel, but ‘Needed to remove his inner block’:


Listening to you, I get the music

Gazing at you, I get the heat

Following you, I climb the mountain

I get excitement at your feet

(Tommy. Townshend et al, 1969).


Beau Geste?13

As stated above, I believe that it is possible to enable Attali’s ‘new way of making music’ and its corresponding creative expression, as long as two conditions are met. In this section we will examine the first of these; namely that technology can subsume complexity by means of simpler actions. In fact, the technology of acoustic instruments has done just that since the time when humans first used tools to emit sounds: an action (or set of actions) by a ‘performer’14 originates a physical process which produces air vibrations that we perceive as sound. This type of action belongs to the wider category of gesture, defined by Cadoz as a multimodal phenomenon consisting of ‘all physical behaviour, besides vocal transmission, by which a human being informs or transforms his immediate environment’ (Cadoz 1988, p. 5)15. A gesture can be causal (ibid.) when it is the direct cause of the sound; e.g. the action of hitting an object. In this case, it can be an initiator as well as a modifier of sound (Cadoz, 1988; Vertegaal et al, 1996; Mulder et al, 1997). Alternatively, it can be non-causal, whereby it is ‘essentially informational and applies, in coded form, to objects that have no direct causal link with the concrete phenomenon that is thus described or evoked’ (ibid.): this is a wide category that includes indications for the production of sound such as a musical score, computer data for synthesis, etc. Furthermore, gesture itself might consist of both causal and non-causal elements. For instance, consider a pianist performing a soft legato passage: in addition to causal components such as particular smooth movement of fingers, hands and arms, the performer might be providing further information through non-causal elements such as the movement of the head and upper body. However, the overall behaviour of the pianist is integrated into a single gesture: the separation into head, upper body, arms, hands and fingers is rather a matter of convenience for its analysis.

Historically, a gesture could generally16 be clearly identified as the direct cause of the sound. However, the development of electronic and digital interfaces resulted in the decoupling of the sound control – that is the mechanism used by the performer to originate the sound – and the sound producer - the actual generator of sound (Sapir, 2000; Wanderley, 2001). As an illustration of decoupling, in a typical MIDI keyboard, control is carried out by means of the keys17, buttons18, sliders and wheels19. This information is independent of the oscillators that create the actual sounds to an extent that, by using different programs, two identical gestures can produce very different timbres. For instance, if the identical gestures consist of a long keyboard press with a particular velocity and the two programs chosen are a xylophone and a bowed string then, in addition to the differences in spectral content characteristic of each timbre, the first key press will result in a very short sound and the second press will produce a long sustained one. This example points to the fact that the causal link between gesture and the resulting sound is weakened and, in extreme cases, completely broken20, and raises the question: when does a gesture become a beau geste?
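To make the effect of decoupling concrete, the following minimal sketch in C (the envelope models are assumptions chosen for illustration, not a description of any particular synthesiser) shows how the same key-press gesture, with identical velocity and hold time, yields radically different amplitude envelopes once it is routed to two different programs:

/* Minimal sketch (not from the article): the same key-press gesture,
 * rendered by two hypothetical "programs", yields very different
 * amplitude envelopes, illustrating control/producer decoupling.     */
#include <stdio.h>
#include <math.h>

/* Percussive model (xylophone-like): amplitude decays quickly
 * no matter how long the key is held. */
static double percussive_env(double t, double velocity, double hold)
{
    (void)hold;                       /* hold time is ignored          */
    return velocity * exp(-t * 12.0); /* fast exponential decay        */
}

/* Sustained model (bowed-string-like): amplitude is held while the
 * key is down, then released slowly. */
static double sustained_env(double t, double velocity, double hold)
{
    if (t <= hold)
        return velocity;                       /* sustain              */
    return velocity * exp(-(t - hold) * 3.0);  /* release              */
}

int main(void)
{
    const double velocity = 0.8;   /* identical gesture ...            */
    const double hold     = 2.0;   /* ... a long key press (seconds)   */

    for (double t = 0.0; t <= 2.5; t += 0.5)
        printf("t=%.1fs  xylophone=%.3f  bowed=%.3f\n",
               t, percussive_env(t, velocity, hold),
                  sustained_env(t, velocity, hold));
    return 0;
}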

Indeed, there is potential loss of ‘causal logic’ and consequential lack of expression inherent in decoupling (Cadoz et al, 1984, 1988, 1990; Mulder, 1994; Roads, 1996; Goto, 1999, 2005): expression is affected when performers’ gestures cannot be associated with sonic outputs. Gesture must be believable as cause of sound generation. Therefore, if we aim to subsume complexity by means of simpler actions, it is important to ask how far we can ‘simplify’ before the causal link is weakened beyond recognition. Likewise, it is important to remember that simplicity should not interfere with the ability to articulate nuance, thus affecting expression, as in the case of MIDI21.


Mapping and Metaphor

We have seen that gesture must be believable as a cause of sound generation, even when it is not necessarily functional (e.g. in the case of exaggerated circular hand movements of a rock guitarist). The consequences of a gesture should appear to be the result of the performer’s actions. Therefore, when sound control and production are coupled again, their relationship should appear to be causal, and this is the concern of the mapping strategy connecting the performer’s input to the sonic outcome (Winkler, 1995; Sapir, 2000; Wessel and Wright, 2001; Hunt et al, 2003; Ciufo, 2003).

Mapping entails the ‘correspondence between gestures or control parameters … and sound generation or synthesis parameters’ (Levitin et al, 2002). We can distinguish between three types of input and output parameter mapping (Rovan et al, 1997): one-to-one correspondence, convergent (many mapped to few) and divergent (few mapped to many). Mapping strategies may be modal, where internal modes choose which algorithm and sound output will result from each gesture, or non-modal, where the sound output is always the same for a given gesture (Fels et al, 2002). Moreover, use of higher levels of abstraction as control structures (e.g. perceptual attributes such as ‘brightness’), instead of raw synthesis variables (e.g. amplitudes of partials) facilitates the correlation between gestures and the resulting sounds (Hunt et al, 2002). This can be achieved by implementing additional mapping layers that have the complementary advantage of enabling modal mappings, while leaving the principal mapping layer untouched. Further flexibility can be introduced by allowing these additional layers to be time-varying (Momeni and Henry, 2006).
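As a minimal illustration of these ideas (the parameter names and weightings below are assumptions made for the example, not a description of any of the systems cited), the following C sketch implements a convergent layer that folds two gesture parameters into a single perceptual control, ‘brightness’, and a divergent layer that maps brightness onto the amplitudes of several partials; a modal mapping could then be obtained by swapping the second layer while leaving the first untouched:

/* Minimal sketch (assumed names and weightings): a convergent mapping
 * folds two gesture parameters into one perceptual control, and a
 * divergent layer maps that control onto several synthesis variables. */
#include <stdio.h>
#include <math.h>

#define N_PARTIALS 6

/* Layer 1 (convergent, many-to-few): hand height and speed -> brightness. */
static double brightness_from_gesture(double height, double speed)
{
    double b = 0.7 * height + 0.3 * speed;   /* assumed weighting       */
    return b < 0.0 ? 0.0 : (b > 1.0 ? 1.0 : b);
}

/* Layer 2 (divergent, few-to-many): brightness -> partial amplitudes.
 * Higher brightness tilts energy towards higher partials.             */
static void partials_from_brightness(double brightness, double amp[N_PARTIALS])
{
    for (int k = 0; k < N_PARTIALS; k++)
        amp[k] = pow(brightness, (double)k);  /* spectral tilt          */
}

int main(void)
{
    double amp[N_PARTIALS];
    double b = brightness_from_gesture(0.9 /* height */, 0.4 /* speed */);
    partials_from_brightness(b, amp);

    printf("brightness = %.2f\n", b);
    for (int k = 0; k < N_PARTIALS; k++)
        printf("partial %d amplitude = %.3f\n", k + 1, amp[k]);
    return 0;
}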

Mappings are most successful when they are intuitive (Choi et al, 1995, Mulder et al, 1996, 1997; Wessel et al, 2002; Momeni and Wessel, 2003), exploiting intrinsic properties ‘of the musician’s cognitive map so that a gesture or movement in the physical domain is tightly coupled … with the intention of the musician’ (Levitin et al 2002, p. 183). In fact, successful gestures might ‘incorporate expressive gestures from other domains’ (ibid. p. 184), since ‘spontaneous gesture-sound associations (not necessarily musical) are the results of massive, life-long experience, and may be a valuable source of competence that can be actively exploited in digital audio applications’ (Jensenius et al 2005, p. 282). This issue leads to the concept of metaphor (Sapir, 2000), whereby electronic interfaces emulate existing gestural paradigms: metaphors may originate in acoustic instruments22 or in more generic sources23.

The role that metaphor plays in developing expressive devices is intimately related to transparency, defined as

‘a quality of mappings… that provides an indication of the psychophysiological distance … between the intent (or perceived intent, in the case of the audience) of the artist to produce some output and the fulfilment of that intent through some control action’ (Gadd and Fels 2002, p. 1).


Metaphor facilitates transparency, enabling designers, performers and audiences


‘to refer to cultural bases or elements that are “common knowledge” … understood and accepted as part of a culture … [and] used as referent rather than being explained by reference to something else. For example, scents are often compared to that of a rose, but the scent of a rose is never identified by comparison to something else’ (ibid. p. 2, footnote 2)24.


Thus, there is a clear connection between this “common knowledge”, spontaneous gesture-sound associations and cognitive mappings. Furthermore, the interfaces that enable the creation of a structured interactive immersive musical experience should behave as strong metaphors embedded in “common knowledge”, since they belong to cognitive mappings from daily human activity: as long as they are linked to appropriate sounds (this is crucial), they have the potential to produce convincing mappings for coupling control to sound generation. This does not apply exclusively to direct sound recognition or source bonding25 as a result of an identifiable gesture, but also to less obvious correspondences, such as the identification of energy profiles, spectral similarity, and correspondences resulting from psychological and cultural conditioning (e.g. the sounds used to depict punches and kicks in films). Furthermore, the user does not have to consider the set of parameters that create the gestures or their mappings, but rather conceive natural actions suited to human activity (e.g. throwing a virtual object, speech emphasis gestures, etc.), reinforced by the multimodal nature of these actions (e.g. the physical movement associated with ‘throwing’).


Learnability, Virtuosity, Effort and Expression

We have seen above that the implementation of a convincing metaphor is essential for the creation of a successful controller. However, when wishing to address a wide pool of potential performers we must take into account the time and effort an individual must invest in order to reach an acceptable level of proficiency; that is, a level at which they are not merely coming to grips with the “instrument” but actually making music. Therefore, it is essential to consider the instrument’s learnability, or ‘the amount of learning necessary to achieve certain tasks’ (Schakel, quoted in Vertegaal et al, 1996, p. 308), and achieve a balance between the latter and the potential for virtuosic expression (Hunt et al, 2002). Ideally, technological artefacts that require little training for basic use but provide the potential for skill development through years of experience strike the right balance between a gentle learning curve and ongoing challenge (Levitin et al, 2002); providing ‘a manner of control that offers a “low entry fee with no ceiling on virtuosity” and allows expressive control’ (Wessel and Wright, 2001).

Learnability can be improved by subsuming complexity in the technology while allowing user control through strong metaphors embedded in “common knowledge”. This includes both actions and behaviours (e.g. videogame paradigms). Potential for virtuosity can be achieved as a result of

  1. Expanding the inventory of actions.
  2. Developing and modifying mappings.
  3. Interchanging outputs (as long as gesture-output correspondence is maintained).

Thus, musically acceptable results will be obtained with relatively less effort invested in mastering “the musical instrument”. However, assuming that effort should be eradicated altogether may not be advisable since it actually fulfils an expressive function. Perception of physical effort can enhance expressivity. In contrast to a great deal of attention paid to ‘production of sound directly from abstract thought by implementing a model of cognition’ (Mulder, 1994, p. 247), there has been relatively little discussion concerning ‘human music performers, who physically effectuate the performance - with effort’ (ibid.). Furthermore, ‘it is likely that some constraints should be implemented such that effort, as an essential component of expression, must be applied by the performer’ (ibid., p. 248). Physical effort serves as ‘enlargement of motion by projecting it, and the expression of musical tension through the musician’s whole body language’ (Vertegaal et al, 1996, p. 309). In an interview with Krefeld, musician Michel Waisvisz presents a more extreme view, advocating the introduction of real effort:

‘The creation of an electronic music instrument shouldn’t just be the quest for ergonomic efficiency. You can go on making things technically easier, faster, and more logical, but over the years I have come to the conclusion that this doesn’t improve the musical instrument. I’m afraid that one has to suffer a bit while playing; the physical effort you make is what is perceived by listeners as the cause and manifestation of the musical tension of the work’ (Krefeld, 1990, p. 29).

This may have implications concerning the balance between effort and learnability. However, regardless of whether effort is real or virtual, metaphors can take advantage of its expressive capabilities by implying it through association with action archetypes existent in our cognitive map as a result of daily experience (e.g. the muscular activity involved in throwing an object). At the same time, since this association may be virtual, it is possible to have control over the amount of real effort required by the performer in order to achieve the type of expressivity mentioned by Waisvisz.


Gesture’s Double Role

So far, we have been considering gesture as intimately linked to the sounds it produces and their expressivity, or as a symbolic function of sound providing additional information about the sound it causes (Cadoz, 1988), as exemplified in the case of the pianist playing soft legato discussed above. However, gesture should be considered through


‘an approach based on representation and on dual complementary control of the causes of the processing representation and control of effects. Here causality subdivides into two contributions: the gesture and the instrument, and it is in this aim that we may set the principle of the “composition of the instrumental gesture”’ (Cadoz, 1988, p. 6).

Furthermore, instrumental gesture is ‘one of the objective elements, alongside the sound and the instrument (or their computer substitutes or representations), by which the musical composition process can operate’ (Cadoz and Ramstein, 1990, p. 54). Thus, in addition to its symbolic function of sound, gesture also has a role as object of composition, which must be considered by the music creator. Furthermore, since gesture is an essential part of the compositional process, it is not sufficient to produce devices and interfaces a priori: their validity can only be proven by the compositional process itself and its necessities. This was corroborated empirically by Waisvisz’s trial and error development of The Hands (through composition), and his decision to ‘start the musical phase …, to forget the technology …, to enter the musical domain’ (Krefeld, 1990, p. 30) before completion of the instrument.


The discussion so far outlines four essential issues concerning gesture:


1. It should be believable as a cause of sound generation; even when it is not necessarily functional: its consequences should appear to be the result of performers’ actions.


2. It is also an object of composition; thus its validity can only be corroborated within the context of the musical work.


3. When sound control and production are decoupled, gesture causality is mainly the result of:

a. Choice of metaphor.

b. Mapping strategy.

c. Perceived effort.


4. It is important to strike a balance between learnability, on the one hand, and the development of virtuosity, on the other.


Despacio se Va Lejos26

I have described above the long (possibly very long) term aims leading to the ideal structured interactive immersive musical experience. Indeed, the path towards this ultimate panacea is long and fraught with unanswered questions. This is not only a problem of ultimate technologies (e.g. true three-dimensional immersive imaging, intelligent interactivity, etc.) but also of aesthetic approaches and creative strategies (e.g. metaphor and mapping design, type of interactive behaviour, etc.); as well as social behaviours arising from technological developments (e.g. how, when and where do music performance and dissemination take place?). Such issues, together with the dual role of gesture and the view that, historically, the most successful aesthetic frameworks have generally been achieved when practice feeds theory, suggest a methodology whereby conceptual design, technology, creative output and actual performance feed each other through a series of developmental iterations. For instance, the validity of the metaphors designed with presently available technology may be tested and evaluated through the composition of musical works and their performance/dissemination. These, in turn, may feed back into subsequent design and refinement of metaphors and, as a consequence, feed technological development, and so on.

Furthermore, the sophistication of the final aim also suggests a staged developmental process, in which complexity increases with each successive stage. Development can begin within an existing type of performance mechanism that provides the tightest control and least variability, such as a conventional concert situation; further stages can then advance towards more sophisticated performance mechanisms. Therefore, most of the initial effort can focus on the construction and implementation of the metaphor. More specifically, stage progression can be planned according to the following criteria:


1. Increasing complexity according to the categories below:

1.1 Fixed to Interactive.

This category varies from works with a fixed score, to systems that respond to performers’ actions, producing sound, images, etc. and/or eliciting performance actions from the performer. In the case of the latter, for instance, this might consist of the implementation of paradigms similar to those of videogame motivations27 (e.g. forced action, obstacles, immediate challenges, mini-objectives, facing a boss, etc.) using a live electronics graphics interface (e.g. MAX/MSP/Jitter)28.

1.2 Passive to Active.

A passive system only acts when prompted by the user; for example, responding only when a joystick is activated. An active system generates its own output without being prompted; for instance, performing sounds and displaying images following inbuilt generative algorithms.


1.3 Single-user to Multi-user.

This concerns the number of simultaneous performers, and their interaction with the system and with each other.


2. Evolving performance mechanics.

This involves the social behaviours within which performances take place, ranging from the traditional recital context29 to situations where there is no distinction between performer and audience30.


3. Widening scope. This concerns the technologies within which a structured interactive immersive musical experience may be implemented. Specifically:

3.1 Controller types. This includes a variety of devices, from existing music controllers31 to games controllers, joysticks, Bluetooth32 and Wi-Fi33 devices, etc. For instance, implementations of MAX/MSP/Jitter (Cycling ’74, 1990-2010) drivers for the Nintendo Wii34 (Akamatsu, 2007) and USB game controllers (Benson, 2008) are already available, facilitating scope expansion.

3.2 Platforms. This includes different operating systems (e.g. Windows, Mac OS, Linux) as well as games platforms (e.g. Sony Playstation35, Microsoft Xbox36, Nintendo Wii, etc.).


4. Widening media.

In addition to ‘traditional’ extensions of the media used in music37, technology might enable other senses, notably touch and smell. The media itself might become more sophisticated: for instance, video might actually become holographic, existing (or at least giving the illusion of existing) in three dimensions.

Within these criteria, a pragmatic mid to long term strategy can be formulated for research and development of structured interactive immersive musical experiences. Therefore, while this approach intends to cover the full range of complexity, it is sensible to apply some constraints to the scope and media, as follows:

1. Concentrate on a single type of controller which provides data that is reasonably generic so that its acquisition can be adapted relatively quickly to the output of other controllers. This ensures faster and easier migration when better alternatives appear.

The data considered to be essential focuses on hand and finger movement, both because it can provide a great deal of flexibility and detail, and because of the manual dexterity that human beings develop naturally38. This consists of the following:

1.1 Position in three dimensions: from these, it is possible to calculate velocities and accelerations.

1.2 Orientation angles in three dimensions: from these, it is possible to calculate angular velocities and accelerations.

1.3 Finger bending: from these, it is possible to identify hand shapes corresponding to particular combinations of finger bend values.

It is also desirable to have buttons and switches, but this is not essential because button and switch functionality can be emulated using a computer keyboard, pedals, etc.


2. Concentrate on a single platform but use a development environment that is as portable as possible. Therefore, rather than choosing a specialised games platform, it is reasonable to begin development using a personal computer running a widely available operating system, relatively stable and durable software (e.g. MAX/MSP/Jitter), and multi-platform development tools (e.g. GCC39).


3. Restrict media initially to audio and, as the project progresses, to audio and video. Eventually, other media can be incorporated.


With this in mind, it is now possible to devise a sequence of possible stages, as shown in table 1. These begin with a fixed work notated on a printed score, in a traditional recital context40. The controller and interface are only used to produce sonic output driven by the performer. After satisfactory results are obtained in stage 1, the strategy attempts to capitalise on these in stage 2, by initiating interactivity through a score that reacts to the user’s performance. For instance, this could employ videogame paradigms that reward accuracy in gesture execution, execution of expressive temporal variation while keeping up with timing (e.g. rubato), etc. This way, performance is opened to non-expert users and, although a traditional recital context is still implemented, it is no longer restricted to the concert hall. As complexity and performance mechanics develop, we approach a multiple-user, fully participatory situation: the reference to participants includes both human beings and technological devices, since the controller/interface/computer system becomes active.



Table 1 SiiMe: mid to long term sequence of development stages
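As an indication of how such a reward might be computed (the scoring scheme below is a hypothetical sketch rather than part of the actual strategy), a simple accuracy function can compare the time and position at which a gesture is executed against the targets specified by the reactive score:

/* Minimal sketch (assumed scoring scheme, not from the article): a
 * videogame-style reward for gesture accuracy, comparing the time and
 * position of an executed gesture against the score's targets.        */
#include <stdio.h>
#include <math.h>

/* Score between 0 and 1: full marks inside the tolerance window,
 * decaying linearly to zero outside it. */
static double accuracy_score(double actual, double target,
                             double tolerance, double max_error)
{
    double err = fabs(actual - target);
    if (err <= tolerance)  return 1.0;
    if (err >= max_error)  return 0.0;
    return 1.0 - (err - tolerance) / (max_error - tolerance);
}

int main(void)
{
    /* Timing accuracy: the gesture was due at 4.00 s, executed at 4.07 s. */
    double timing = accuracy_score(4.07, 4.00, 0.05, 0.50);

    /* Spatial accuracy: the hand reached height 0.82 m instead of 0.90 m. */
    double spatial = accuracy_score(0.82, 0.90, 0.02, 0.30);

    printf("timing score  = %.2f\n", timing);
    printf("spatial score = %.2f\n", spatial);
    printf("combined      = %.2f\n", 0.5 * (timing + spatial));
    return 0;
}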


A journey of a thousand miles begins with a single step44

In this section I will describe initial work in stage 1. It is truly the first step but there are some encouraging signs that it is possible to proceed further in the journey towards the structured interactive immersive musical experience.

The controller chosen was the P5Glove45 (see figure 1) connected via USB to a personal computer running Windows and controlled through MAX/MSP/Jitter. In spite of being an ‘old’ device released in 2002, this glove was chosen because it provides the desired data (three-dimensional position and orientation, and finger bend data), which is expected to be present in future technologies. It is also significantly cheaper than other alternatives46, and once the libraries provided by the manufacturer are discarded (this is described below), it provides remarkable sensitivity and speed; as well as detection within a wide spatial range that may be calibrated by each individual user.

Naturally, the P5 is not without problems. Firstly, as mentioned above, the software libraries provided by the original manufacturer are unresponsive and sluggish in tracking the glove’s parameters, and work within a small spatial range. These difficulties were resolved by tapping directly into the raw data provided by the glove through the USB interface; building on developments pioneered by McMullan (2008; see also his software library libp5glove, 2003) and Bencina (2006). Secondly, the glove uses the USB 1.1 protocol. However, so far, data transfer speed has been more than adequate for tracking hand gestures, providing reliable measurements every 20 milliseconds (50 measurements per second). Thirdly, due to the way in which they are measured47, orientation values can include jumps which, depending on their application, may require smoothing through some low-pass filtering. Finally, the rubber bars used for the measurement of finger bending can slip since they are connected to the fingers by means of plastic rings. As a result, there is loss of resolution in bending measurements (and consequent recognition of hand shapes). This situation might be improved by devising better ways of attaching the bars more tightly to the fingers (e.g., using Velcro). At the time of writing, this issue still remains to be investigated.
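As an illustration of the smoothing mentioned above (the coefficient is assumed for the example and the code is not taken from the P5GloveRF source), a one-pole low-pass filter applied to each orientation stream is often sufficient:

/* Minimal sketch (assumed coefficient, not from the P5GloveRF source):
 * a one-pole low-pass filter that smooths sudden jumps in the
 * orientation values reported by the glove.                           */
#include <stdio.h>

typedef struct {
    double y;       /* previous filtered output                        */
    double alpha;   /* smoothing coefficient, 0 < alpha <= 1           */
    int    primed;  /* has the filter seen its first sample yet?       */
} OnePole;

static double onepole_filter(OnePole *f, double x)
{
    if (!f->primed) { f->y = x; f->primed = 1; }   /* avoid start-up jump */
    f->y += f->alpha * (x - f->y);                 /* y += a*(x - y)      */
    return f->y;
}

int main(void)
{
    /* A raw yaw stream (degrees) with a spurious jump in the middle. */
    double raw[] = { 10.0, 11.0, 12.0, 85.0, 13.0, 14.0 };
    OnePole f = { 0.0, 0.25, 0 };   /* alpha chosen for illustration only */

    for (int i = 0; i < 6; i++)
        printf("raw=%5.1f  smoothed=%6.2f\n", raw[i], onepole_filter(&f, raw[i]));
    return 0;
}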



Fig. 1 Essential Reality P5 Glove. The buttons, enumerated clockwise from the right, are A, B, C and D. There are three sensors visible in the picture


In order to use the glove, a MAX/MSP external (P5GloveRF) was written in C, which tracks the data transmitted via USB, calculating the glove’s three-dimensional position, orientation, velocity and acceleration. It also provides the amount of bending for each finger48 and the result of button presses A, B and C49 (see figure 1). Figure 2 shows a patch that displays information provided by P5GloveRF, including graphic tracking of position, orientation, velocity and acceleration, and an arrow that displays the glove’s position and orientation in a three-dimensional set of axes.
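The derivation of velocities and accelerations can be sketched as plain finite differences over successive position reports at the 20-millisecond interval mentioned above; this is a simplified illustration rather than the external’s actual code:

/* Minimal sketch (assumptions: finite differences at the 20 ms report
 * rate; this is not the actual P5GloveRF code): deriving velocity and
 * acceleration from successive three-dimensional position samples.    */
#include <stdio.h>

#define DT 0.020   /* 50 reports per second, i.e. one every 20 ms */

typedef struct { double x, y, z; } Vec3;

/* d = (a - b) / dt, component-wise. */
static Vec3 diff(Vec3 a, Vec3 b, double dt)
{
    Vec3 d = { (a.x - b.x) / dt, (a.y - b.y) / dt, (a.z - b.z) / dt };
    return d;
}

int main(void)
{
    /* Three consecutive (hypothetical) position reports in centimetres. */
    Vec3 p0 = { 0.0, 10.0, 5.0 };
    Vec3 p1 = { 1.0, 10.5, 5.0 };
    Vec3 p2 = { 2.5, 11.5, 5.0 };

    Vec3 v1 = diff(p1, p0, DT);        /* velocity at the first interval  */
    Vec3 v2 = diff(p2, p1, DT);        /* velocity at the second interval */
    Vec3 a  = diff(v2, v1, DT);        /* acceleration                    */

    printf("velocity     = (%.1f, %.1f, %.1f) cm/s\n",  v2.x, v2.y, v2.z);
    printf("acceleration = (%.1f, %.1f, %.1f) cm/s^2\n", a.x, a.y, a.z);
    return 0;
}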



Fig. 2 MAX/MSP patch that displays information provided by the external P5GloveRF


The external also allows the user to store a collection of hand shapes and it is capable of recognising stored shapes when they occur during a performance: figure 3 shows two instances of a subpatch that stores hand shapes. It also allows the user to calibrate the glove according to her/his anatomic requirements by entering maximum and minimum values in each of the three dimensions (X, Y, Z), corresponding to a comfortable reach of the arm and hand: a calibration patch is shown in figure 4.
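Hand shape recognition of this kind can be sketched as a nearest-template match on the finger bend values; the data layout and threshold below are assumptions made for illustration rather than the external’s implementation:

/* Minimal sketch (assumed data layout and threshold; not the external's
 * code): recognising a stored hand shape by comparing the current
 * finger-bend values against each stored template.                     */
#include <stdio.h>
#include <math.h>

#define N_FINGERS 5
#define N_SHAPES  2

/* Stored templates: bend value per finger (0 = straight, 1 = fully bent). */
static const double shapes[N_SHAPES][N_FINGERS] = {
    { 0.9, 0.9, 0.9, 0.9, 0.9 },   /* "closed fist"  */
    { 0.1, 0.1, 0.1, 0.1, 0.1 },   /* "open hand"    */
};

/* Return the index of the closest stored shape, or -1 if none is
 * within the given distance threshold. */
static int recognise_shape(const double bend[N_FINGERS], double threshold)
{
    int best = -1;
    double best_d = threshold;

    for (int s = 0; s < N_SHAPES; s++) {
        double d = 0.0;
        for (int f = 0; f < N_FINGERS; f++) {
            double e = bend[f] - shapes[s][f];
            d += e * e;
        }
        d = sqrt(d);
        if (d < best_d) { best_d = d; best = s; }
    }
    return best;
}

int main(void)
{
    double current[N_FINGERS] = { 0.85, 0.8, 0.9, 0.95, 0.88 };
    int s = recognise_shape(current, 0.5 /* assumed threshold */);
    printf("recognised shape: %d\n", s);   /* 0 = closed fist, -1 = none */
    return 0;
}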



Fig. 3 Two instances of a patch that stores hand shapes



Fig. 4 Glove calibration patch


A number of time and frequency domain processes that can be controlled in real time by the glove have been implemented in MAX/MSP, including granulation, a bank of variable comb filters, time-stretch, spectral shift and stretch, and multiple formant control. Also, the glove can affect spatialisation (including Doppler shift) in stereo, 5.1 surround and octophonic formats. Naturally, the collection of processes will be enlarged as development progresses. However, these do not become significant until they are incorporated into appropriate metaphors. I will therefore conclude by providing an example of how this can be done.


As You Sow So Shall You Reap

Granular techniques can be used in order to implement a metaphor that could perhaps be best described as “spreading particles”; a gesture similar to that used, for instance, when sowing seeds over a relatively large area. This consists of moving the arm in an arched trajectory while opening the hand to release the “seeds”, which spread as they travel through the air.

An analysis of this gesture that suits our purpose considers the vector velocity (i.e. magnitude and direction) of the hand at the time of release, which determines the direction in which a seed travels when it leaves the hand. Because seeds are not released all at the same time and the hand keeps moving, the starting point of each seed and its vector velocity will change with the current position of the hand (see figure 5). Therefore, in order to implement this metaphor, we can create the following mappings:

1. The moment of release is identified as the instant when the glove changes from a closed shape to an open shape, as determined by the corresponding finger bend values.

2. Each seed can be mapped to an individual audio grain.

3. The velocity of release of each seed (audio grain) is obtained from the glove velocity at the time of creation of the grain.

4. It is possible to implement the physics that affect the trajectory of the grains, such as gravity forces, viscosity, bouncing boundaries, average lifetime of grains, etc. This is commonly done for particle generation in video and games (e.g. fountains, fireworks, etc.).


Of course, these mappings can take many guises: for instance, imagine that the particles are alive and have their own means of propulsion, as in the case of releasing a flock of winged insects, or birds. Also, gravity, viscosity, etc. do not have to correspond to real life phenomena (e.g. gravity could point upwards or sideways, or change direction in time).
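The following sketch illustrates one possible realisation of mappings 3 and 4 above (the constants, structures and physics are assumptions chosen for clarity, not the actual patch): each grain is released with the glove’s velocity at the moment the hand opens and is then advanced by simple gravity and viscous drag:

/* Minimal sketch (assumed constants and structures; not the actual
 * patch): grains are "released" with the glove's current velocity and
 * then follow simple particle physics (gravity plus viscous drag).     */
#include <stdio.h>

#define DT        0.02   /* update interval in seconds                  */
#define GRAVITY  -9.8    /* m/s^2, pointing "down" (could be anything)  */
#define DRAG      0.5    /* viscous drag coefficient                    */

typedef struct { double x, y, z; } Vec3;

typedef struct {
    Vec3   pos, vel;
    double life;         /* remaining life time in seconds              */
} Grain;

/* Release a grain from the glove's current position and velocity. */
static Grain spawn_grain(Vec3 glove_pos, Vec3 glove_vel, double life)
{
    Grain g = { glove_pos, glove_vel, life };
    return g;
}

/* Advance one grain by one update step. */
static void update_grain(Grain *g)
{
    g->vel.y += GRAVITY * DT;          /* gravity                       */
    g->vel.x -= DRAG * g->vel.x * DT;  /* viscous drag on each axis     */
    g->vel.y -= DRAG * g->vel.y * DT;
    g->vel.z -= DRAG * g->vel.z * DT;

    g->pos.x += g->vel.x * DT;         /* integrate position            */
    g->pos.y += g->vel.y * DT;
    g->pos.z += g->vel.z * DT;

    g->life  -= DT;
}

int main(void)
{
    Vec3 glove_pos = { 0.0, 1.5, 0.0 };
    Vec3 glove_vel = { 2.0, 1.0, 0.5 };   /* taken at the moment of release */

    Grain g = spawn_grain(glove_pos, glove_vel, 1.0);
    while (g.life > 0.0) {
        update_grain(&g);
        /* Here each grain's position could drive spatialisation and its
         * velocity the grain's playback parameters.                      */
    }
    printf("final position: (%.2f, %.2f, %.2f)\n", g.pos.x, g.pos.y, g.pos.z);
    return 0;
}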



Fig. 5 Vector velocity of each grain as the open hand progresses through an arched trajectory: v(t1) is the velocity at time t1, v(t2) is the velocity at time t2 and so on


Hopefully, this example illustrates the wealth of possibilities available even with the simplest of metaphors, unveiling the potential offered by today’s technology and prompting the reader’s imagination to venture into the promised land of possibilities offered by future developments. Perhaps, by means of our imagination, it is possible to catch a glimpse of the ultimate structured interactive immersive musical experience calling in the distance... Sii Me.


Conclusion: I am, You are, She is...

...musical.

Yes, except for pathological conditions, we all seem to be musical. In fact, cognitive archaeologist Stephen Mithen argues that musical competence emerged much earlier than language, being an ability of adaptive value, essential to evolution: musical ability already existed in the Neanderthals, who, in spite of not having linguistic abilities, possessed ‘a prelinguistic musical mode of thought and action’ (Mithen, 2005, p. 267). He proposes that musicality evolved in parallel with – and was possibly developed in our brains before - language, rather than being a consequence of the latter. This means that it is a capacity existent in all of us and deeply rooted in our mental constitution, at least as much as linguistic ability.

This is corroborated by Sloboda et al (2005), who studied people self-defined as “tone deaf” and concluded that, except for those who suffer from congenital amusia50, human beings are inherently musical. The fact that a higher percentage of individuals define themselves as “tone deaf”51 results from the way musical competence is actually acquired, which ‘has multiple personal, social, and environmental precursors’, rather than from a lack of underlying capacities. ‘Adults may therefore self-define as “unmusical” or “tone-deaf” for reasons unconnected to any underlying anomaly’ (Sloboda et al, 2005, p.255).

Indeed, I am, you are, she is musical and if this is so, we can not only listen to music but we can also make music: Attali’s heralded age of composition, a new order to follow previous stages in the political economy of music, is open to us. We can all meet in the parlour, whichever form it takes, as long as we have the means and develop the skills to make music in it. Thanks to technology which is available today (at least in some societies) we can begin to embark on the thousand mile journey towards this age of composition. More important, when the economics that match that technology become truly committed to serving global needs rather than minority power, it will also be available to all in both industrialised and developing societies, and we will not forget again that we can all make music. Then, the new parlour will be far from the exclusive middle class room, it will provide access to everyone in real and virtual realms, and will unleash creative energy and activity. Perhaps, it will shed light on Oscar Wilde’s anti-mimetic statement that ‘Life imitates Art far more than Art imitates Life’ (Wilde, 1891/2004, p.26). Perhaps life and art could become one?

We shall meet in the holodeck...


References_________________

Attali J (trans. Massumi B), 1985. Noise. The Political Economy of Music. University of Minnesota Press (Minneapolis, USA).

Barrett N and Hammer O, 1998. ‘Mimetic Dynamics’, Organised Sound, 3(3): 211-18.

BBC, 2008. Games ‘to outsell’ music, video. BBC news, 5/11/2008. http://news.bbc.co.uk/2/hi/technology/7709298.stm. Accessed: 23/4/2010.

Bencina R, 2006. P5 Glove Developments. http://www.simulus.org/p5glove/. Accessed: 22/4/2010.

Bennett J, 2000. ‘BMB con.: Collaborative Experiences with Sound, Image and Space’, Leonardo Music Journal 9: 29-34.

Bongers B, 2007. ‘Electronic Musical Instruments: Experiences of a New Luthier’. Leonardo Music Journal 17: 9-16.

Cadoz C, Luciani A and Florens J, 1984. ‘Responsive input devices and sound synthesis by simulation of instrumental mechanisms: The Cordis system’. Computer Music Journal 8(3): 60–73.

Cadoz C, 1988. ‘Instrumental gesture and musical composition’. Proceedings of the International Computer Music Conference -ICMC-88, Cologne. ICMAPress (San Francisco, USA): 1–12.

Cadoz C and Ramstein C, 1990. ‘Capture, Representation and ”Composition” of the Instrumental Gesture’. Proceedings of the 1990 International Computer Music Conference, Glasgow. ICMAPress (San Francisco, USA): 53–56. Also available online: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=icmc;idno=bbp2372.1990.*. Accessed: 21/4/2011.

Camurri A, Hashimoto S, Ricchetti M, Ricci A, Suzuki K, Trocca R and Volpe G, 2000. ‘EyesWeb: Toward Gesture and Affect Recognition in Interactive Dance and Music Systems’. Computer Music Journal 24(1): 57–69.

Calvino I (Trans. Weaver W), 1998. If on a Winter’s Night a Traveler. Vintage (London, UK).

Choi I, Bargar R, and Goudeseune C, 1995. ‘A Manifold interface for a high dimensional control space’. Proceedings of the 1995 International Computer Music Conference, Banff. ICMAPress (San Francisco, USA): 385–392. Also available online: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=icmc;idno=bbp2372.1995.*. Accessed: 21/4/2011.

Cook P and Leider C, 2000. ‘Making the Computer Sing: The SqueezeVox’. Proceedings of the XIII Colloquim on Musical Informatics, L’Aquila, Italy, Sept. 2000.

Ciufo T, 2003. ‘Design concepts and control strategies for interactive improvisational music systems’. Proceedings of MAXIS, Leeds, UK: Sheffield Hallam University. Also available online: http://www.ciufo.org/media/design_concepts.pdf. Accessed: 14/4/2010.

Essl G and O’Modhrain S, 2006. ‘An enactive approach to the design of new tangible musical instruments’. Organised Sound 11(3): 285-296.

Feldmeier M and Paradiso J A, 2007. ‘An Interactive Music Environment for Large Groups with Giveaway Wireless Motion Sensors’. Computer Music Journal 31(1): 50-67.

Fels S and Hinton G, 1998. ‘Glove-TalkII – A Neural-Network Interface which Maps Gestures to Parallel Formant Speech Synthesiser Controls’. IEEE Transactions on Neural Networks, 9(1): 205-212. Also available online: http://www.cs.toronto.edu/~hinton/absps/glovetalkii.pdf. Accessed 6/5/2010.

Fels S, Gadd A and Mulder A, 2002. ‘Mapping transparency through metaphor: towards more expressive musical instruments’, Organised Sound 7(2): 109-126.

Gadd A, Fels S, 2002. ‘MetaMuse: Metaphors for Expressive Instruments’. Proceedings of the International Conference on New Interfaces for Musical Expression, 2002. http://www.nime.org/2002/proceedings/paper/gadd.pdf. Accessed 14/4/2010. 

Gorman M, Lahav M, Saltzman E and Betke M, 2007. ‘A Camera-Based Music-Making Tool for Physical Rehabilitation’. Computer Music Journal 31(2): 39-53.

Goto S, 1999. ‘The Aesthetics and Technological Aspects of Virtual Musical Instruments: The Case of the Super-Polm MIDI Violin’. Leonardo Music Journal 9:115–120.

Goto S, 2005. ‘Virtual Musical Instruments: Technological Aspects and Interactive Performance Issues’. HZ Journal 6. http://hz-journal.org/n6/goto.html. Accessed 14/4/2010.

Goudeseune C, 1999. A Violin Controller for Real-Time Audio Synthesis. http://zx81.isl.uiuc.edu/camilleg/eviolin.html. Accessed 14/4/2010.

Goudeseune C, Garnett G, Johnson T, 2001. ‘Resonant Processing of Instrumental Sound Controlled by Spatial Position’. New Instruments and Musical Expression workshop, SIGCHI ‘01, Seattle. http://zx81.isl.uiuc.edu/camilleg/nime01.pdf. Accessed: 14/4/2010.

Harris Y and Bongers B, 2002. ‘Approaches to creating interactivated spaces, from intimate to inhabited interfaces’. Organised Sound 7(3): 239-46.

Hunt A D, Paradis M and Wanderley M, 2003. ‘The importance of parameter mapping in electronic instrument design’. Journal of New Music Research 32(4): 429-440. Also available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.20.3098&rep=rep1&type=pdf. Accessed: 14/4/2010.

Jensenius A R, Godoy R and Wanderley M M, 2005. ‘Developing Tools for Studying Musical Gestures within the Max/MSP/Jitter Environment.’ Proceedings of the 2005 International Computer Music Conference, Barcelona. ICMAPress (San Francisco, USA): 282– 285. Also available online: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=icmc;idno=bbp2372.2005.*. Accessed: 21/4/2011.

Jordà S, 2005. Digital Luthery: Crafting Musical Computers for New Musics, Performance, and Improvisation. Ph.D. dissertation, Universidad Pompeu Fabra (Barcelona).

Krefeld V, 1990. ‘The Hand in The Web: An Interview with Michel Waisvisz’. Computer Music Journal 14(2): 28-33.

Landy L, 2007. Understanding the Art of Sound Organization. MIT Press (Cambridge, USA).

Lee E, Karrer T and Borchers J, 2006. ‘Toward a Framework for Interactive Systems to Conduct Digital Audio and Video Streams’. Computer Music Journal, 30(1): 21-36.

Levitin D J, McAdams S and Adams R L, 2002. ‘Control parameters for musical instruments: a foundation for new mappings of gesture to sound’. Organised Sound 7(2): 171–189.

McLuhan M and Powers B R, 1989. The Global Village, Transformations in World Life and Media in the 21st Century. Oxford University Press (Oxford, UK).

McMullan J, 2008. P5glove. http://noisybox.net/computers/p5glove/. Accessed: 22/4/2010.

Miranda E R and Wanderley M M, 2006. New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. A-R Editions (Middleton, USA).

Mithen S, 2005. The Singing Neanderthals: The Origins of Music, Language, Mind and Body. Weidenfeld and Nicolson (London, UK).

Moore F R, 1988. ‘The dysfunctions of MIDI’. Computer Music Journal 12(1):19–28.

Morales-Manzanares R, Morales E F, Dannenberg R, Berger J, 2001. ‘SICIB: An Interactive Music Composition System Using Body Movements’. Computer Music Journal 25(2): 25- 36.

Momeni A, Wessel D, 2003. ‘Characterizing and Controlling Musical Material Intuitively with Geometric Models’. Proceedings of the International Conference on New Interfaces for Musical Expression, McGill, 2003. http://www.nime.org/2003/onlineproceedings/Papers/NIME03_Momeni.pdf. Accessed: 14/4/2010.

Momeni A and Henry C, 2006. ‘Dynamic Independent Mapping Layers for Concurrent Control of Audio and Video Synthesis’. Computer Music Journal 30(1): 49-66.

Mulder A, 1994. ‘Virtual Musical Instruments: Accessing the Sound Synthesis Universe as a Performer’. Proceedings of the First Brazilian Symposium on Computer Music, Caxambu, Minas Gerais, Brazil: 243-250. Also available online: http://www.xspasm.com/x/sfu/vmi/BSCM1.pdf. Accessed: 14/4/2010.

Mulder A G E, 1996. ‘Getting a GRIP on alternate controllers: Addressing the variability of gestural expression in musical instrument design’. Leonardo Music Journal 6: 33-40.

Mulder A, Fels S and Mase K, 1997. ‘Empty-handed Gesture Analysis in Max/FTS’. Proceedings of Kansei – The Technology of Emotion, AIMI International Workshop, Genova, Italy. http://hct.ece.ubc.ca/publications/pdf/mulder-fels-mase-1997.pdf. Accessed: 14/4/2010.

Mulder A, Fels S and Mase K, 1999. ‘Design of Virtual 3d instruments for musical interaction’. Graphics Interface ’99: 76-83.

Oliver J, 2010 (forthcoming). ‘The MANO Controller: A Video Based Hand Tracking System’. http://www.jaimeoliver.pe/publications. Accessed: 21/4/2011.

Ryan M L, 2001. Narrative as Virtual Reality: Immersion and Interactivity in Literature and Electronic Media. Johns Hopkins University Press (Baltimore, USA).

Roads C, 1996. The Computer Music Tutorial. MIT Press (Cambridge, USA).

Rovan J B, Wanderley M M, Dubnov S and Depalle P, 1997. ‘Instrumental gestural mapping strategies as expressivity determinants in computer music performance’. Proceedings of Kansei – The Technology of Emotion, AIMI International Workshop, Genova, Italy. http://www.ircam.fr/equipes/analyse-synthese/wanderle/Gestes/Externe/kansei_final.pdf. Accessed: 14/4/2010. 

Rudi J, 2005. ‘Computer Music Video: A Composer’s Perspective’. Computer Music Journal 29(4): 36-44.


Sapir S, 2000. ‘Interactive Digital Audio Environments: Gesture as a Musical Parameter’. Proceedings COST-G6 Conference on Digital Audio Effects (DAFx’00): 25–30. http://profs.sci.univr.it/~dafx/Final-Papers/pdf/Sapir.pdf. Accessed: 14/4/2010.

Sinclair J M (general consultant), 1991. Collins English Dictionary. HarperCollins Publishers (Glasgow, UK).

Sloboda J A, Wise K J and Peretz I, 2005. ‘Quantifying tone deafness in the general population’. In Avanzini G, Lopez L, Koelsch S, and Manjno M (eds.), The neurosciences and music II: From perception to performance. New York Academy of Sciences (New York, USA): 255-261.

Smalley D, 1997. ‘Spectromorphology: explaining sound-shapes’. Organised Sound 2(2): 107-120.

Thompson J, Berbank-Green B and Cusworth N, 2007. The Computer Game Design Course: Principles, Practices and Techniques for the Aspiring Game Designer. Thames & Hudson (London, UK).

Tye M, 2007. ‘Qualia’. Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/entries/qualia/. Accessed: 23/4/2010.

van der Merwe P, 1989. Origins of the Popular Style: The Antecedents of Twentieth-Century Popular Music. Clarendon Press (Oxford, UK).

Vertegaal R, Ungvary T and Kieslinger M, 1996. ‘Towards a Musician’s Cockpit: Transducer, Feedback and Musical Function’. Proceedings of the 1996 International Computer Music Conference. San Francisco. ICMAPress (San Francisco, USA): 308– 311. Also available online: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=icmc;idno=bbp2372.1996.*. Accessed: 21/4/2011.

Wessel D and Wright M, 2001. ‘Problems and Prospects for Intimate Musical Control of Computers.’ Computer Music Journal 26(3): 11-22. An earlier version presented at the ACM CHI Workshop on New Instruments for Musical Expression, Seattle, Washington. http://www.nime.org/2001/papers/wessel.pdf. Accessed: 14/4/2010.

Wessel D, Wright D, Schott D, 2002. ‘Intimate Musical Control of Computers with a Variety of Controllers and Gesture Mapping Metaphors’. Proceedings of the International Conference on New Interfaces for Musical Expression, 2002. http://www.nime.org/2002/proceedings/paper/wessel.pdf. Accessed: 14/4/2010.

Wilde O, 1891/2004. The Decay of Lying. Kessinger Publishing Co. (Kila, USA)

Williamson V A, McDonald C, Deutsch D, Griffiths T D, Stewart L, 2010. ‘Faster decline of pitch memory over time in congenital amusia’. Advances in Cognitive Psychology 6: 15-22. Also available online: www.ac-psych.org/download.php?id=80. Accessed: 23/4/2010.

Wilson L and Bromwich M A, 2000. ‘Lifting Bodies: Interactive Dance - Finding New Methodologies In The Motifs Prompted By New Technology - A Critique And Progress Report With Particular Reference To The Bodycoder System’. Organised Sound 5(1): 9-16.

Winkler T, 1995. ‘Making Motion Musical: Gesture Mapping Strategies for Interactive Computer Music’. Proceedings International Computer Music Conference, Banff, Canada, ICMAPress (San Francisco, USA): 261 - 264. Also available online: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=icmc;idno=bbp2372.1995.*. Accessed: 21/4/2011.

Winkler T, 1998. Composing Interactive Music. MIT Press: (Cambridge, USA).

Recordings_____________

Townshend P, Entwistle J, Moon K – The Who and Miller A R (a.k.a. Williamson S B), 1969. Tommy. Track Records, Track 613013/4 and Decca: Decca DXSW 7205. Subsequent releases in CD: Polydor 800 077-2, Polydor 531 043-2 MCA MCAD-10005, MCAD-11417 (1996). Release in SACD: Geffen B0001386-36 (2003), Polydor 9861011 (2004).


Television______________

Roddenberry G, Berman R, Piller M, 1987-1994. Star Trek: The Next Generation. Paramount. http://www.startrek.com/startrek/view/series/TNG/. Accessed: 13/4/2010.


Software________________

Akamatsu M, 2007. aka.wiremote, Nintendo Wii Remote http://www.iamas.ac.jp/~aka/max/#aka_wiiremote. Accessed: 23/4/2010.

Benson A, 2008. Making Connections: Connecting a Joystick to MaxMSP/Jitter. http://cycling74.com/2007/03/12/making-connections-connecting-ajoystick-to-maxmspjitter/. Accessed: 24/4/2010.

Cycling ’74, 1990-2010. MAX/MSP/Jitter. http://cycling74.com/. Accessed: 9/4/2010.

McMullan J, 2003. libp5glove. Library of functions used to obtain raw data from the P5 Glove. http://noisybox.net/computers/p5glove/libp5glove_svn_20050206.tar.gz. Accessed: 22/4/2010.

Videogames______________

Boom Blox, 2008. Developer/publisher: Steven Spielberg and Electronic Arts. http://www.ea.com/games/boom-blox. Accessed: 13/4/2010.

Call of Duty 4: Modern Warfare, 2007. Developer: Infinity Ward. Publisher: Activision. http://www.callofduty.com/. Accessed: 13/4/2010.

Call of Duty: Modern Warfare 2, 2009. Developer: Infinity Ward. Publisher: Activision. http://www.callofduty.com/. Accessed: 13/4/2010.

DJ Hero, 2009. Developer: FreeStyleGames. Publisher: Activision. http://www.djhero.com/. Accessed: 13/4/2010.

Elektroplankton, 2005. Developer: Toshio Iwai. Publisher: Nintendo. http://electroplankton.nintendo.co.uk/. Accessed: 23/4/2010.

Guitar Hero, 2005 – 2009. Developer: Harmonix Music Systems. Publisher: Activision Publishing, Inc. http://hub.guitarhero.com/. Accessed: 23/4/2010.

Karaoke Revolution, 2003-8. Developer: Harmonix Music Systems and Blitz Games. Publisher: Konami. http://www.konami.com/. Accessed: 23/4/2010.

Rez, 2001-2010. Developer: Q Entertainment, Inc. and HexaDrive Inc. Publisher: Q Entertainment Inc. http://www.qentertainment.com/eng/cc0000/cb5000/. Accessed: 23/4/2010.

SingStar, 2007. Developer: SCEE London Studios. Publisher: Sony Computer Entertainment. http://www.singstargame.com/en-gb/. Accessed: 23/4/2010.

Wii Music, 2008. Developer/publisher: Nintendo. http://www.wiimusic.com/. Accessed: 23/4/2010.

Wii Sports, 2006. Developer/publisher: Nintendo. http://www.nintendo.co.uk/NOE/en_GB/games/wii/wii_sports_2781.html. Accessed: 13/4/2010.

_______________________________________

1 For instance in the case of Nintendo Wii Sports (2006) games.

2 For instance in the case of Call of Duty 4: Modern Warfare (2007) and Call of Duty: Modern Warfare 2 (2009).

3 McLuhan and Powers argue that a tetrad metaphor ‘is applicable to the full range of human artifacts (sic.), whether hardware (objects) or software (ideas)’ (McLuhan and Powers, 1989, p. 7). The tetrad takes into account both figure and ground: figure is considered to be ‘an area of psychic attention’ (ibid., p. 180), whereas ground is a type of ‘cognition which senses all figures in the entire environmental surround at once’ (ibid., p. 180). Thus, ‘every new artifact, whether idea or object, reshapes the environment as it impacts upon it as figure against ground; yet, at the same time, the ground is being altered and eventually reshapes how the artifact is used’(ibid., p. 180). According to this logic, the tetrad itself consists of four interacting processes: 1) the artefact enlarges or enhances something; 2) it obsolesces something; 3) it retrieves something that was obsolesced earlier; 4) it reverses or flips into something else ‘when pushed to the limits of its potential’ (ibid., p. 9). This is exemplified by McLuhan and Powers in the case of cash money: 1) it enhances the speed of transactions; 2) obsolesces barter; 3) retrieves conspicuous consumption – the display of wealth; 4) reverses into credit or non-money, in which the image of wealth becomes more important than actual wealth (ibid. pp. 41-42, 173).

4 ‘Sound-based music typically designates the art form in which the sound, that is, not the musical note, is its basic unit’ (Landy, 2007, p. 17). For a wider discussion of this concept, please refer to Landy’s article in this issue: From Music in the Laboratory to Music of the Folk: On the future of sound-based music.

5 This pool of potential performers is large beyond precedent, as evidenced by videogame sales figures. For instance, the BBC (2008) reported that ‘spending on games will rise by 42% to £4.64bn in 2008, with sales on music and video at £4.46bn’.

6 We have seen this in the area of graphics, where a demand for fidelity independent of content (i.e. a scene may not be possible but it should appear to be real) is a tacit expectation when a new generation of games appears.

7 The emphasis on the word realisation stresses the idea that the actual repertoire may be instantiated by the user: (s)he may or may not be the original creator of the approach, but is not merely a performer of an existing ‘fixed’ work created by someone else.

8 An increasing number of games allow significant degrees of freedom, even to the extent of allowing players to create their own levels and environments: this is no longer restricted to users with specific technical knowledge. For instance, Boom Blox (2008), for the Nintendo Wii, allows players to build their own levels and share them with other players through the web, providing a graphics interface to do so. The game’s website advertises this feature: ‘Make It Your Own/Share – Unlock characters, worlds, blox, and props throughout the game and use to build whatever you can imagine in Create Mode. You can virtually build anything you can dream up. Remix any level and share what you create with friends via WiiConnect24™’.

9 Original in italics.

10 For instance, Ryan discusses the interactive ‘game-reader’ versus the immersive ‘world-reader’ in the context of Italo Calvino’s If on a Winter’s Night a Traveler (1998).

11 Perhaps DJ Hero (2009) could be seen as a step in this direction, since it emulates performance by actual artists (instead of just using buttons to synchronise with existing music). However, it is important to note that this is more a result of the actual mechanics

12 The reader is referred to Ryan (2001) for an in depth discussion concerning the tensions between immersion and interactivity in literature.

13 ‘A noble or gracious gesture or act, especially one that is meaningless’ (Collins English Dictionary, Sinclair 1991, p. 137).

14 In this case, the performer is an individual who uses a device (including parts of her/his own anatomy) in order to produce sound.

15 While other definitions of gesture have been advanced (Levitin et al, 2002; Goto, 2005), they share the idea of multimodal information transmission.

16 I say ‘generally’ because: 1) one may argue that identifying a particular gesture by a performer as the cause of the sound emitted by a device may be a learned process based on our daily experience of how sounds are produced. 2) Even in the case of acoustic devices there may be hidden mechanisms that do not correspond to daily experience: for instance, the sound emitted as a result of pressing a key on a church organ is actually produced by air being blown through a pipe; therefore, strictly speaking, the key press is not the direct acoustic cause of the sound. However, hundreds of years of organ performance have led us to associate the key press with the cause of sound in the instrument.

17 Key presses determine note value (which key is pressed), velocity (how fast the finger hits the key), duration (the time between the key being pressed and released), and possibly aftertouch (key pressure level following the initial attack).

18 Buttons control other parameters such as the MIDI program.

19 Sliders and wheels control continuous parameters such as overall volume (loudness), pitch bend, modulation, etc.
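
As a concrete illustration of the note, program and controller data described in footnotes 17-19, the following is a minimal sketch, in C, of how incoming MIDI bytes may be decoded. The message layout (a status byte followed by one or two data bytes) is that of the MIDI 1.0 specification; the function and variable names are hypothetical and serve only this example.

    /* Minimal sketch: decoding the MIDI messages mentioned in footnotes 17-19.
       The byte layout follows the MIDI 1.0 specification; all names here are
       hypothetical illustrations, not part of any particular library. */
    #include <stdio.h>

    void decode_midi(unsigned char status, unsigned char data1, unsigned char data2)
    {
        unsigned char type    = status & 0xF0;  /* message type        */
        unsigned char channel = status & 0x0F;  /* MIDI channel (0-15) */

        switch (type) {
        case 0x90: /* note on: data1 = note value (which key), data2 = velocity */
            if (data2 > 0)
                printf("ch %d: note %d on, velocity %d\n", channel, data1, data2);
            else
                printf("ch %d: note %d off\n", channel, data1); /* velocity 0 acts as note off */
            break;
        case 0x80: /* note off: duration is the time elapsed since the matching note on */
            printf("ch %d: note %d off\n", channel, data1);
            break;
        case 0xA0: /* polyphonic aftertouch: data2 = key pressure after the initial attack */
            printf("ch %d: aftertouch on note %d = %d\n", channel, data1, data2);
            break;
        case 0xC0: /* program change (cf. footnote 18): data1 = program number */
            printf("ch %d: program %d\n", channel, data1);
            break;
        case 0xB0: /* control change (cf. footnote 19): e.g. controller 1 = modulation, 7 = volume */
            printf("ch %d: controller %d = %d\n", channel, data1, data2);
            break;
        case 0xE0: /* pitch bend (cf. footnote 19): 14-bit value assembled from two 7-bit bytes */
            printf("ch %d: pitch bend %d\n", channel, (data2 << 7) | data1);
            break;
        }
    }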

20 For instance, consider a situation in which a laptop performer’s gesture consists of pressing the ‘Enter’ button, and this in turn triggers a long passage of music consisting of many layers of intricate textures.

21 The expressive dysfunctions of MIDI have been lucidly explained by Moore (1988). See also Mulder (1994).

22 For instance, SqueezeVox (Cook and Leider, 2000), modelled on the concertina, and eviolin (Goudeseune et al, 1999, 2001), modelled on the violin.

23 For instance, the falling rain metaphor in MetaMuse (Gadd and Fels, 2002) and the spatial metaphor for live performance control in SpaceMaster (Momeni and Wessel, 2003).

24 Interestingly, this idea is reminiscent of the philosophical concept of qualia (cf. Tye, 2007, for a quick overview of this concept).

25 ‘The natural tendency to relate sounds to supposed sources and causes, and to relate sounds to each other because they appear to have shared or associated origins’ (Smalley, 1997, p. 110).

26 Spanish proverb equivalent to the English ‘slowly but surely’. Literally, it means ‘slowly one goes far’.

27 Thompson et al, 2007.

28 Cycling ’74, 1990-2010.

29 I.e. the performers make music and the audience listens and sees. This applies to formal concerts, informal ‘parlour gatherings’ and virtual situations alike (e.g. a concert taking place within Second Life, http://secondlife.com/. Accessed: 22/4/2010).

30 See, for example, the work of Mark Feldheimer and Joseph Paradiso for an interesting instance of this type of mechanics. In their own words, ‘the system consists of wireless sensors that are given to audience members to collect rhythm and activity information from the crowd that can be used to dynamically determine musical structure, sonic events, and/or lighting control’ (Feldheimer and Paradiso, 2007, p. 50).

31 Comprehensive listings of music controllers are described in Miranda and Wanderley, 2006; Jordá, 2005; Lee et al, 2006; Gorman et al, 2007 and Mulder, 1994.

32 Bluetooth is a proprietary wireless technology. http://www.bluetooth.com/. Accessed: 22/4/2010.

33 Wi-Fi is a wireless technology developed by the non-profit international association Wi-Fi Alliance, http://www.wi-fi.org/. Accessed: 22/4/2010.

34 Nintendo Wii. http://www.nintendo.com/wii. Accessed: 22/4/2010.

35 Sony Playstation, http://uk.playstation.com/. Accessed: 22/4/2010.

36 Microsoft Xbox. http://www.xbox.com/. Accessed: 22/4/2010.

37 For instance, in the case of opera, music theatre and, more recently, video.

38 For existing implementations of hand-based controllers see The Hands (Krefeld, 1990), GloveTalkII (Fels and Hinton, 1998), Powerglove (Goto, 1999), Sound Sculpting (Mulder et al, 1999), DaGlove (Essl and O’Modhrain, 2006), Lady Glove (Bongers, 2007), and Mano (Oliver, 2010).

39 GCC is a free compiler created by GNU (http://www.gnu.org/). It can be downloaded from http://gcc.gnu.org/. Accessed: 22/4/2010.

40 See footnote 29.

41 See footnote 29.

42 This could be reminiscent of one of the older social behaviours concerning chamber music; when the only participants were the performers, who gathered together to enjoy a session of communal play.

43 This could assume various guises, such as the behaviour during a rave, role play in a virtual or real environment, etc.

44 Proverb attributed to Lao-tzu (ca. 604-531 BC), considered to be the founder of Taoism.

45 The P5 Glove was originally produced by Essential Reality but is now distributed by the company Virtual Realities (http://www.vrealities.com/P5.html. Accessed: 22/4/2010). It has been used in the past as a music controller (through MIDI) by Richard Boulanger, for instance in his works Hearing Voices (2005) and In the Palms of Our Hands (2005).

46 At the time of writing, it is sold online for $89.

47 The glove has eight infrared sensors distributed throughout its surface. Each measurement transmits data from up to four sensors (in order of strength), from which position and orientation are triangulated. Glitches can occur when the detected sensors change or when fewer than three sensors are detected, in which case it is not possible to obtain reliable calculations for the angles.
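
The reliability test implied by footnote 47 can be sketched as follows. This is a minimal illustration, assuming a hypothetical data structure for a single glove measurement; it does not reproduce the actual interface of libp5glove (McMullan, 2003), whose names and fields may differ.

    /* Hypothetical glove frame: up to four visible sensors, strongest first.
       Sketch only - not the actual libp5glove interface. */
    typedef struct {
        int visible_count;        /* number of sensors detected in this measurement (0-4) */
        int sensor_id[4];         /* which of the eight sensors were detected             */
        double yaw, pitch, roll;  /* angles triangulated from the detected sensors        */
    } GloveFrame;

    /* Keep the last reliable angles and reuse them whenever a measurement is glitchy. */
    static double last_yaw = 0.0, last_pitch = 0.0, last_roll = 0.0;

    void filter_angles(GloveFrame *frame)
    {
        if (frame->visible_count < 3) {
            /* Fewer than three sensors: triangulation is unreliable, so hold the previous values. */
            frame->yaw   = last_yaw;
            frame->pitch = last_pitch;
            frame->roll  = last_roll;
        } else {
            last_yaw   = frame->yaw;
            last_pitch = frame->pitch;
            last_roll  = frame->roll;
        }
    }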

48 The glove’s native resolution is 64 steps.

49 The glove has four buttons: A, B, C and D. The latter is used exclusively to turn the glove ON and OFF. Therefore, there is no need to track a press on D.
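
Footnotes 48 and 49 suggest two further, simpler mappings, sketched below under the same caveat that all names are hypothetical: a finger-bend value in the glove’s native 64-step range (0-63) can be rescaled, for instance to the 0-127 range used by MIDI controllers, and only buttons A, B and C need to be tracked, since D merely switches the glove on and off.

    /* Rescale a finger-bend value from the glove's native 64 steps (0-63)
       to the 0-127 range used by MIDI controllers (cf. footnote 48). */
    int bend_to_midi(int bend)
    {
        if (bend < 0)  bend = 0;
        if (bend > 63) bend = 63;
        return (bend * 127) / 63;
    }

    /* Hypothetical button bitmask: only A, B and C are tracked, since button D
       is used exclusively to switch the glove on and off (cf. footnote 49). */
    #define BUTTON_A 0x01
    #define BUTTON_B 0x02
    #define BUTTON_C 0x04

    int button_pressed(unsigned int buttons, unsigned int mask)
    {
        return (buttons & mask) != 0;
    }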

50 Congenital amusia is a disorder that ‘impacts negatively on music processing, despite a normal amount of musical exposure, normal hearing ability, and no concomitant intellectual or neurological impairments’ (Williamson et al, 2010, p. 15). It is estimated that congenital amusia only affects about 4% of human beings (Sloboda et al, 2005, p. 256).

51 Estimated to be 15% of the population (Sloboda et al, 2005, p. 256).
