The Future of Musical Transportation


Taichi Imanishi (2021)


Introduction

Toru Takemitsu’s concept of portable and non-portable music lacks accuracy due to its now outdated perspective on technology and its influence on our perceptions of music-making. Indeed, the aspects of musical transportability that I examined in my field research and musicological analysis in 2021 will be similarly challenged by future advances in technology. Thus, while it is impossible to predict precisely how musical transportability will change, it is important to estimate it by discussing the most advanced technology at the time of writing. In this paper, the lack of academic research in some areas of discussion has prompted me to undertake cyber-ethnography: primarily, the analysis of Internet cultures through blogs, news, and media websites (Kelley-Browne 2011: 331, Klenke 2016: 196). To strengthen the empiricism of these findings, the findings of recent studies in the literature are also discussed.


This paper is divided into three sections. Firstly, I examine how the development of artificial intelligence may affect music education. Secondly, I discuss the ethical implications of ‘reviving’ deceased artists using technology. Thirdly, I discuss the discourse of music-making using AI-led vocaloid and virtual animation programs in the recent Japanese music scene and how such innovations are likely to change music-making on a glocal to global level in the near future.


AI in education: creative challenges


Technology constantly changes the foundations of music-making. One of the foremost developments is artificial intelligence (AI). According to Dignum (2019), the most significant aspects of AI depend on one’s academic discipline, as ‘Computer Science is concerned with the development of computational systems that exhibit characteristics of intelligence…[while] Philosophy is concerned with the meaning of intelligence and its relation to artificial entities’ (Dignum 2019: 11). This paper focuses on the former, although my discussion of recent uses of AI will prompt some interrelated reflections on its philosophical consequences.


Although some scholars believe the American engineer Arthur Samuel (1901-1990) to have been the first AI programmer, others argue that it was the British computer scientist Christopher Strachey (1916-1975), who programmed a game of draughts (checkers) on a computer in 1951 (Nilsson 2010: 124). The first attempt to employ AI for music production came not long after this, at the University of Illinois in 1955-1956, by two American composers, Lejaren Hiller (1924-1994) and Leonard Maxwell Isaacson (1925-). In this attempt, the computer produced various musical styles, including sixteenth-century counterpoint and music with a mixture of dynamics and rhythmic patterns. This so inspired the avant-garde composers associated with what became the Institute for Research and Coordination in Acoustics/Music (IRCAM) in Paris that they began developing methods of composition through computer systems (Besold et al. 2015: vi). Similar musical activities influenced by AI scientists were also observed in Germany, a little before the French became serious about the field. In the 1950s, Stockhausen returned from Paris to Cologne to work with the German musicologist Herbert Eimert (1897-1972) and the Belgian-born German experimental acoustic theorist Werner Meyer-Eppler (1913-1960), who were connected to the AI community in the US. Stockhausen was particularly inspired by Meyer-Eppler and wrote a number of electronic and electroacoustic works, which paved the way for other composers to use electronic resources for composition. Some of those electronic resources, in fact, featured early forms of AI technology, such as early algorithms: software processes designed to calculate mathematical formulae or carry out problem-solving tasks following given instructions (see also Roads 1989: 635, Llorente 2014: 1-10, Eigenfeldt 2016).


In the late 1970s, the Greek-French composer Iannis Xenakis (1922-2001) built a system to carry out more complex automatic functions, which developed into the present Unité Polyagogique Informatique du CEMAMu (UPIC) system. UPIC combines various synthesis methods, allowing the operator to produce a graphic score by drawing waveforms and volume symbols on a tablet interface. The first work Xenakis produced using this system was Mycenae Alpha (1978). Although it epitomised the advance of computer music at the time, Mycenae Alpha is usually described as a soundscape of various kinds of noise (e.g. clusters of roaring sounds and squeaks) with no sonic or textural development (Harley 2004: 115, MacCallum and Einbond 2008: 210).
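UPIC’s central idea, that a drawn line becomes sound, is easy to illustrate in code. The following Python sketch is a toy analogy rather than Xenakis’s actual system: it treats a handful of hand-entered (time, frequency) breakpoints as one drawn ‘arc’ on the tablet and renders it as audio by interpolating the frequency curve and integrating it into the phase of a sine wave.

    import numpy as np

    SAMPLE_RATE = 44100

    # One hypothetical drawn arc: (time in seconds, frequency in Hz) breakpoints.
    arc = [(0.0, 220.0), (1.0, 880.0), (2.0, 440.0)]

    def render_arc(arc, sr=SAMPLE_RATE):
        times = np.array([t for t, _ in arc])
        freqs = np.array([f for _, f in arc])
        t = np.arange(int(times[-1] * sr)) / sr
        inst_freq = np.interp(t, times, freqs)         # the 'drawn' pitch contour
        phase = 2 * np.pi * np.cumsum(inst_freq) / sr  # integrate frequency into phase
        return 0.3 * np.sin(phase)                     # audio samples in [-0.3, 0.3]

    samples = render_arc(arc)  # ready to write to a WAV file or play back

A real UPIC page superimposes many such arcs and maps them onto richer waveforms, but the principle of converting drawn geometry directly into sound is the same.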


Interestingly, as a forerunner of today’s AI music engineers, Xenakis predicted that within his lifetime technological development would enable many to compose music without any basic musical knowledge (Smaill et al. 1994: 109-10, Strawn 1996: 331). Indeed, his prediction began to be realised in the UK soon after his death with the development and use of computerised music programs and digital audio workstations (DAWs). Today, their use has diffused so widely into primary and secondary schools that DAWs are recognised as essential music tools. The impact on schools’ music pedagogies is phenomenal, with many students learning to compose on DAWs such as Logic, Pro Tools, Cubase and FL Studio, and rapidly discarding traditional manuscript writing.


While many may hail the improved efficiency and productivity of musical composition, the take-up of DAWs can also put many educational institutions under strain. One issue is the additional cost of the equipment at a time when the finances of the education sector are being squeezed (see also Hein 2017: 236-237). Another is ensuring that educators receive enough good training to pass on adequate subject knowledge, including the navigation of DAWs, to their pupils (Leung 2013: 110). In some situations, however, educators have to address both problems at the same time. Drawing on my own teaching experience: I had been an avid user of the popular Logic Pro X, but had to learn FL Studio instead, as my school could not afford enough of the requisite Macintosh computers. This affected the way in which I delivered lessons, because the students and I had to work out how to use the program together while I ensured they were producing the right coursework to meet the school’s criteria. As a result, the students could not learn the program fully, and their compositions lacked some elements that Logic Pro X could otherwise have facilitated.


In addition, the transition from traditional music notation to computer-based composition has changed the pedagogical norm, particularly because students no longer have to learn music theory. The positive side of this change is that DAWs allow students to compose intuitively and efficiently within limited timescales. On the other hand, composing on DAWs discourages students from exploring the different styles and elements of music that can be learnt through traditional notation. Kardos (2012) writes:


Music technology applications have been designed and developed to speed up the process of creation. This is great for professionals who are working to tight deadlines but the problem for educators is that increasing development in speed and usability can make certain skills and knowledge redundant.


Kardos 2012: 150-151


Despite the above, a PGCE mentor and co-ordinator at Middlesex University, Joshua Emdon, who has also worked in secondary schools as a music teacher, affirms that it is still possible to use music technology to help pupils study music theory and different musical styles including atonal genres, as long as effective teaching with clear lesson structures and planning can be delivered (personal communication 30 March 2020).


DAWs also benefit certain individuals from socially disadvantaged backgrounds. For example, in the field of popular music, many individuals from financially disadvantaged backgrounds use DAWs to teach themselves to compose, which cuts the cost of composition lessons (Noxon 2004: 258, Zuberi 2007: 283-284, Born and Devine 2015: 143-144). Such music production is becoming more common as DAWs become more affordable (Jones 2018: 54, Middleton and Gurevitz 2008: 255). There are even free programs such as BandLab, Sibelius First and GarageBand.


Furthermore, the intuitive nature of music technology allows those with physical and mental disabilities to create music, which has additional therapeutic benefits (Ventura 2019: 20, Martino and Bertolami 2014: 165-179). McCord (2017) discusses the benefits of music technology for those individuals in an educational setting:


If music technology is used in the curriculum and a student with a physical disability has difficulty playing a piano keyboard to enter notation for a music theory assignment, there should be alternative ways to use technology so that the student is still able to participate equally and learn from the curriculum… Music notation can be achieved for students with physical or vision disabilities through spoken commands.


McCord 2017: 35


Thus, technological development contributes to the ‘production stage’ of musical transportation as, for some, it breaks down barriers to involvement in music-making. However, owing to its intuitive and self-explanatory nature, such music technology brings both advantages and disadvantages. For example, Logic Pro X and some other DAWs provide bundles of samples, including some from non-Western instruments, which enable learners to extend their knowledge of ‘world music’ without having to know traditional music theory. Most samples are recorded from real instruments. They can be manipulated using virtual instruments, and some DAWs (e.g. Logic Pro X and Steinberg’s Cubase) also allow the user to choose different tuning systems, such as Hermode, to meet various compositional needs (Knakkergaard 2019: 125) (Figure 1).


However, these functions could also give the user a ‘shallow’ awareness of world music, as the possible manipulations are often limited. For example, Logic Pro X provides samples of Japanese instruments including the shakuhachi and koto, but none of these samples produce the extended techniques and varied timbres that are important to such music (e.g. oshide, or left-hand pitch alteration, and muraiki, or airy blast). Moreover, given that DAWs are only used to compose virtual music, they do not teach the user about the musical contexts, such as aesthetics and culture, in which these instruments are played. Thus, the user is ultimately distanced from real-life music-making (after Bates 2012). In this respect, Joshua laments that practical music-making, such as composing in a band, has seldom been seen in secondary schools in recent years. He suggests that schools are taking advantage of virtual music-making to reduce the risk of injuries arising from practical activities (personal communication 30 March 2020).


Figure 1. Tuning variations on Logic Pro X
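The tuning options shown in Figure 1 ultimately come down to how note numbers are mapped to frequencies. The Python sketch below is a simplified illustration, not Logic Pro X’s actual Hermode implementation (which retunes intervals dynamically during playback); it contrasts twelve-tone equal temperament with a fixed just-intonation mapping of the major scale.

    A4 = 440.0  # concert pitch reference

    def equal_tempered(midi_note):
        # Twelve-tone equal temperament: every semitone is a ratio of 2**(1/12).
        return A4 * 2 ** ((midi_note - 69) / 12)

    # Just intonation: whole-number frequency ratios for major-scale degrees
    # (semitones above the tonic); chromatic notes are omitted in this toy example.
    JUST_RATIOS = {0: 1, 2: 9/8, 4: 5/4, 5: 4/3, 7: 3/2, 9: 5/3, 11: 15/8}

    def just_intonation(midi_note, tonic=60):
        octave, degree = divmod(midi_note - tonic, 12)
        return equal_tempered(tonic) * JUST_RATIOS[degree] * 2 ** octave

    # The major third above middle C exposes the difference:
    # equal_tempered(64) is approx. 329.63 Hz, just_intonation(64) approx. 327.03 Hz.

That gap of roughly 2.6 Hz on a single third is the kind of discrepancy a dynamic system such as Hermode negotiates in real time, pulling sustained intervals towards purer ratios while keeping the overall pitch level stable.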


Furthermore, while ready-made sound effects such as drumbeats allow individuals to create music intuitively, these functions challenge what it means to be creative, as parts of the composition are, in effect, created by the programmers. Even if one counters that the programs offer some opportunities to manipulate these materials, the foundation remains the original samples (Maisel 2007: 16, McLeod and DiCola 2011: 62-63, D’Errico 2019: 786-788). This also raises the issue of ownership. Tamplin and Baker (2006: 206) are concerned that “[while] [m]usic technology that uses audio samples and loops may be an appropriate resource for songwriting…[,] this process endangers a heightened sense of ownership of the composition”. In reality, some students studying GCSE and A Level music in the UK who rely too heavily on such samples and drum machines tend to lose marks, as the exam boards suspect that they are covering up poor knowledge of traditional music theory or cutting corners. The report on the GCSE Music 2018 submissions, for example, reads:


Thankfully, most (but unfortunately not all) candidates were clear in their application of ICT clarifying when and how samples and loops had been used…Other work was drowned out by drum loops and swamped by reverb...moderators advised of the care needed to refine and quantise, as some outcomes were very basic — conversely, others were overly complex and unmusical.


Eduqas 2018: 8


In fact, Joshua has witnessed students attempting to disguise drum-machine data by converting it to an audio format, making it look as though it was created by them (personal communication 30 March 2020). Thus, advancing technology may further complicate future issues of originality and ownership.


AI’s resurrection of the dead


In this section, I discuss how established conceptions of music production and reproduction may be challenged by the AI revival of the late Hibari Misora (1937-1989), Japan’s legendary enka singer of the Showa era. Enka is a hybrid genre that started in the Meiji era (1868-1912) as a form of recitative for protest (referred to as enzetsu no uta). Today, enka usually refers to songs combining elements of Western pop and Japanese music (especially with a strong sense of min’yo, or folk) (Chang 2017: 63-4, Stevens 2008: 45-47, Yano 2002: 28-44). According to Shamoon (2009: 133), Hibari Misora reportedly appeared in over 150 films and recorded some 1,500 songs. In a relatively short life and a career that began in childhood prodigy, the prolific artist left an enormous catalogue for the genre (Shamoon 2009: 132-133, Tong 2015: 24).


In 2019, a full thirty years after her death, her fans were surprised by news of the release of a new song by her. It was sung by a vocaloid (a virtual singing-voice synthesiser program) developed with advanced AI technology.


Figure 2. The single cover for Arekara


Source: YESASIA. 2020. Arekara (Japan Version) CD - Misora Hibari, Columbia Music Entertainment - Japanese Music - Free Shipping. [online] YESASIA. Available at: <https://www.yesasia.com/global/arekara-japan-version/1082286873-0-0-0-en/info.html> [Accessed 30 Sep. 2020].


This at once provoked various arguments, including creative and ethical concerns. The song, titled Arekara (Since Then) (Figure 2), was the product of a year’s herculean effort by two Yamaha engineers, Ryunosuke Daido and Keijiro Saino. They used Yamaha’s ‘VOCALOID:AI’ program, which is considerably more advanced than ordinary vocaloid software, which usually consists of samples from real human voices and creates music in a similar way to mainstream DAWs. The new vocaloid came with an automated system that learns the features of a human voice and thereby improves the quality of its emulation. According to Daido (ARBAN 2019):


The AI first did not emulate her voice well at all, but it revised [that is, re-learned] her vocal techniques and nuances of her singing a few hundred thousand times by itself and it finally became identical to her voice. We then applied the new score to the program, as we were confident that it could sing it like her.


The process was harder than expected, as Hibari’s vocal techniques were highly complex; the project team had to modify the system again and again to achieve their goal. Apart from the phonological difficulties, the team also faced challenges in meeting the producer’s expectations: he wanted the voice to be not only reminiscent of Hibari’s old style but also predictive of how she might sing today.
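Yamaha has not published the internals of VOCALOID:AI, but the repeated ‘re-learning’ Daido describes is, in outline, the iterative optimisation loop common to machine learning. The Python sketch below is a toy analogy under that assumption: a parametric ‘synthesiser’ is adjusted step by step, by gradient descent, until its output matches reference features standing in for analysed recordings of the singer.

    import numpy as np

    rng = np.random.default_rng(0)
    reference = rng.uniform(0.1, 0.9, 64)  # stand-in for features of the real voice
    params = np.zeros(64)                  # synthesis parameters to be learned

    def synthesize(p):
        return np.tanh(p)                  # toy non-linear synthesis model

    for step in range(200_000):            # 'a few hundred thousand times'
        output = synthesize(params)
        error = output - reference
        gradient = 2 * error * (1 - output ** 2) / error.size  # chain rule for tanh
        params -= 0.1 * gradient           # nudge parameters to reduce the error

    # synthesize(params) now reproduces the reference features almost exactly.

A production system would replace the toy model with a neural network and the reference vector with spectral and phonetic features of actual recordings, but the loop of synthesise, compare, and adjust is the same.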

The premiere of the song took place on 3rd September 2019 and was filmed for a documentary broadcast on 29th September 2019. The documentary covered the process of creating a 4K 3D hologram of Hibari emulated by AI (Figure 3), which was projected on a screen. Yoshimi Tendo (1954-), a leading Japanese enka singer who had admired Hibari since her childhood, directed the choreography of the screen model, which wore a costume specially designed by Hanae Mori (1926-), one of Hibari’s former fashion designers. The lyrics of the song were written by the lyricist Yasushi Akimoto (1958-), who wrote Hibari’s last song, Ai San San (1986).


Figure 3. The hologram of Hibari Misora shown in the premiere performance of Arekara


Source: Yamaha Corporation. 2019. Yamaha VOCALOID: AITM Faithfully Reproduces Singing of Legendary Japanese Vocalist Hibari Misora. [online] Yamaha Corporation. Available at: <https://www.yamaha.com/en/news_release/2019/19100801/> [Accessed 30 Sep. 2020].

As the performance began, many audience members, including the project team, started sobbing, and some pressed their hands together, embracing the moment. Positive comments from the audience were shown after the screening, with many saying they were impressed. However, negative comments by TV audiences later appeared on Internet forums, implying that the documentary had excluded dissenting voices. Some such comments read: “We’d only appreciate it with real singers, not AI. I felt it was to show off how far technology has come. I wouldn’t like to see any more attempts like this”; “the singing was good, but the emulation was weird, like a robot”; and “this is disrespecting Hibari Misora”. A Japanese journalist, Kazufumi Nishioka (2019), also attacks the outcome of the project, particularly criticising the way in which the AI’s voice was described as ‘eerie’ by some members of the audience. He cites a lawyer’s explanation of these reactions: viewers can accept a revival of the dead in a drama setting because they know the actor is not the real person. An AI hologram like AI Hibari, however, shows the personality of the dead so much more realistically than an actor that it excites more audience sympathy. Moreover, the lawyer suggests that the audience may be caught in a complex psychological struggle: they worry that they will never be able to fully accept the person’s death or detach themselves from the emulated figure.


Some of these views also highlight another issue of ownership, in that those who emulate deceased artists’ voices may be guilty of plagiarism. A Japanese lawyer, Yu Mizuno (2020), argues that this case of voice emulation would be considered a violation of Japanese law if it were done without the right-holder’s permission. Article 30(4) states that the use of others’ thoughts and philosophies is limited to personal use and strictly not for making profit. However, this law may not apply here, given that ‘VOCALOID:AI’ learns and sings by itself, and its output is therefore not an exact copy of the singer’s voice.


Another significant issue is the publicity rights after the death of artists (Zimmerman 2006): 


[The publicity rights allow] individuals or their successors and assignees to exert legal control over when, whether and how their various personal characteristics (at a minimum, their names and actual likenesses) can be used by others for commercial ends.


Zimmerman 2006: 337


At the same time, this could also give rise to issues of copyright infringement. In the past, an artist sued Sony for using her voice as a ‘sample’ in a Jennifer Lopez and LL Cool J recording without her permission, considering it a violation of her publicity rights under California state law. In this case, the plaintiff’s claim was pre-empted by the Copyright Act, as the sample was created from the artist’s recording (Hunter 2012: 223). It is worth noting that the level of protection depends on the jurisdiction. For example, Indiana in the US protects broad aspects of personal characteristics, including voices and images, for the duration of a person’s life plus one hundred years, while there is no law protecting publicity rights in Puerto Rico (Jung 2011: 1-2). Therefore, publicity and copyright laws are not universally applicable.


However, even if someone holds the right to protect an artist’s publicity after death, using the artist’s name for others’ benefit raises an ethical issue. In the case of Hibari Misora, while her adopted son, Kazuya Kato, is believed to hold the rights, many people have argued that this does not necessarily mean that she would have approved of the project. One of the most debated parts of the song was the speech during the instrumental break. It reads:


Long time no see you

I have always been keeping an eye on you

You have worked very hard so far

I hope you will continue to work hard and cover my part


Many people agreed that this sounded like a personal speech. In particular, the greeting “Long time no see you” was widely criticised for presenting the dead singer as if she had come back to life. One Internet user responded that it would have been acceptable had the words been “I am AI Hibari Misora. Nice to meet you”.


Similar issues with hologram revivals are becoming more common on a global scale. For example, the hologram of Tupac Shakur, the American rapper who died in 1996, which appeared at Coachella in 2012, went viral among fans, particularly because of its connotation of Jesus’ resurrection: Shakur wore a crucifix and sang his song “Hail Mary”, which (to his fans) symbolised his resurrection on stage (Cull 2015: 124-125). There are several other examples of holograms of deceased artists, e.g. Michael Jackson at the Billboard Music Awards in 2014, and the duet of Elvis Presley and Céline Dion in 2007 on the TV show American Idol (Stojnić 2016: 175-181). These hologram performers received similar criticisms concerning the rights of the dead (for which see Penfold-Mounce 2018: 21-22, Harrison 2016: 87-88).


The virtual idol: Hatsune Miku, a growing subculture and beyond

While these AI emulations are produced by large production companies spending significant sums to enhance their business development, a similar type of AI entertainment has been practised as a subcultural activity in Japan for well over a decade. A prime example is the program Hatsune Miku (Figure 4).


Figure 4. Hatsune Miku


Source: TVTropes, 2020. Hatsune Miku (Music). [online] TV Tropes. Available at: <https://tvtropes.org/pmwiki/pmwiki.php/Music/HatsuneMiku> [Accessed 30 Sep. 2020].

While this vocaloid can be classified as a type of DAW, the main difference is that the software allows users to create audio tracks of the character singing songs in a kawaii (cute), anime-like voice. It is even possible to program it to sing in foreign languages with a strong Japanese accent (Kotarba and LaLone 2014: 64-65). Featured in anime and advertisements, Hatsune Miku has become well known in Japan since its emergence in 2007.
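What distinguishes a vocaloid editor from an ordinary MIDI piano roll is that every note also carries the syllable to be sung. The Python sketch below is a hypothetical, much-simplified stand-in for the kind of data such an editor manipulates, not Crypton’s or Yamaha’s actual file format.

    from dataclasses import dataclass

    @dataclass
    class VocalNote:
        lyric: str     # syllable the synthesiser will sing (kana or romaji)
        pitch: int     # MIDI note number
        start: float   # position in beats from the start of the track
        length: float  # duration in beats

    # A hypothetical opening phrase: lyrics are attached to piano-roll notes,
    # and the engine renders them in the character's voice.
    phrase = [
        VocalNote("sa", 69, 0.0, 0.5),
        VocalNote("ku", 71, 0.5, 0.5),
        VocalNote("ra", 73, 1.0, 1.0),
    ]

Parameters such as vibrato, breathiness and the accented pronunciation of foreign lyrics are then, broadly speaking, layered as per-note or per-track controls over this basic structure.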


Hatsune Miku was certainly not the first to enter the market. The first vocaloid, programmed in English, was released by Yamaha in 2004. Soon after this, Hatsune Miku’s creator, Crypton Future Media, entered the market and released the first Japanese vocaloid featuring an anime character (Le 2015: 2-9). Although the program originally targeted a domestic audience, its popularity has since spread beyond Japan. Since 2014, a series of concerts titled ‘Miku Expo’ has taken place in the UK, Spain, Germany, the Netherlands, France, China, the US, Mexico, Taiwan, Hong Kong, Malaysia and Indonesia. In addition, a further fourteen concerts were planned for North America in 2020.


Many fans of Hatsune Miku embrace the subcultural identity referred to as otaku (discussed in Black 2012: 224). This Japanese word was first included in the Oxford English Dictionary in 2007, as it began to be recognised outside Japan (Galbraith et al. 2015: 4). Otaku has a complex etymology. It is written in Japanese as お宅, which literally means ‘your house’. It is normally used as an “honorific second person personal pronoun […, which is] a polite way to address someone whose social position in relation to you is not yet known”. It also obliquely connotes a desire to distance oneself from others, much as a teenager might when addressing a man as ‘sir’. It is also a slang word used pejoratively to refer to “a fan of any particular theme, topic, or hobby”, often aimed at people alienated from others by their unusual obsessions. Examples of otaku are:


anime otaku (a fan of anime), cosplay otaku and manga otaku (a fan of Japanese comic books), pasokon otaku (personal computer geeks), gēmu otaku (fans playing video games), and wota (pronounced ‘ota’, previously referred to as ‘idol otaku’) that are extreme fans of idols, i.e. heavily promoted singing girls…[,] tetsudō otaku or denshamania (railfans) or gunji otaku (military geeks).
Taneska 2009: 3


Yet, otaku is perceived somewhat differently outside Japan, in that Westerners do not feel the term is stigmatised, so they have no hesitation in calling themselves otaku. Thus, there are differences in the social status of otaku between Japan and elsewhere (Taneska 2009: 3).


Nevertheless, the Hatsune Miku craze is creating a new space for music-making in Japan and beyond; it is becoming a global phenomenon. Due to the character’s charismatic appearance and the diligent support of otaku devotees, interactive live events and concerts for virtual idols have become increasingly popular in the last ten years (Mason-Jones and Zeeng 2012: 203, Kotarba and LaLone 2014: 63-64). The popularity of these events is rooted in Japan’s development of Internet communities and online entertainment. In particular, Niconico is one of the largest websites representing Japanese subculture (Hernández-Pérez 2019: 47). The website is known for streaming videos as well as hosting all sorts of online gatherings, including ones at which vocaloid users post their compositions for fun (see also Michel 2016). It also hosts a competition for vocaloid composers called sekai bokaroido taikai (World Vocaloid Convention), a two-day event regularly run by ‘world vocaloid committees’ based in Western countries. These events have added a new category to the domain of music technology, demanding mastery of visuals as well as computerised singing. Hence, those who pursue a career in this field are expected to have a good knowledge of Internet communities, animation and composition as well as music technology. As a result of developments in this musical genre, there are now several recognised ‘vocaloid composers’ or ‘vocaloid artists’, some of whom are amateurs or semi-professionals. According to RAG Studio’s bokaro (vocaloid) artist ranking published on 4th March 2020, the listing includes thirty-four composers, among them DECO*27, 40mP, Neru, Hachi, MikitoP, doriko, kyoon ren, wowaka, reruriri, and KAITO. Many of their compositions and animations conjure up a typical opening theme of a Japanese anime. The styles of these compositions vary from punk to Japanese pop, and many use a female voice with a thin, nasal timbre similar to Hatsune Miku’s.

An interview with 40mP held in September 2013 reveals that music-making in this genre differs from that on other DAWs, as previously discussed. One of the unique aspects of vocaloid music-making is how its close-knit community enjoys contributions from a broad range of artistic perspectives. In writing shonen to maho no robotto (A Boy and the Magic Robot), 40mP not only emphasised the cuteness and signature voice of his vocaloid character but also considered how he could develop the song further with other creators in the vocaloid community, referred to as bokaro kaiwai. Indeed, such online music communities are increasingly popular throughout the world as forums in which composers help each other to develop their skills and share their knowledge (Kenny 2016: 12-13). Bokaro kaiwai mainly differ from ordinary online music communities in being collaborative communities in which music is made with the support of audience members, animators and composers, and in endeavouring to follow the styles of music their community wishes to hear.


Another feature of this domain is that, like 40mP in shonen to maho no robotto, some vocaloid composers record accompaniments for their songs with real instruments and even use them in virtual live concerts. Thus, in effect, such composers experience both virtual and real-life music-making. In addition, bokaro kaiwai also celebrate a new Internet subcultural activity called utattemita (attempt to sing), in which audience members post themselves singing over karaoke tracks of existing vocaloid songs, contributing to new musical developments beyond vocaloid. Arguably, the motive behind utattemita is the community’s unspoken desire to hear the songs sung by real people (Hamasaki and Goto 2012: 1-8). Moreover, the utattemita movement has recently progressed to a professional level. A recent example is Wagakki Bando (a band who play traditional Japanese instruments) covering the song Senbonzakura, which was written for Hatsune Miku (originally titled ‘Senbonzakura feat. Hatsune Miku’, 千本桜 feat.初音ミク). Senbonzakura was also notably covered by the leading enka singer Sachiko Kobayashi (1953-) at the prestigious event NHK Kohaku Utagassen in 2015, where the audience felt her performance went beyond the original vocaloid quality. The enka singer’s collaboration with the Internet community surprised her established fan base, as it brought enka into young, modern pop culture. Soon after, in 2016 and 2019, the same song also featured in a new Kabuki production with the actor Shido Nakamura II and a hologram of Hatsune Miku, under the title ‘Chokabuki’ (super kabuki). Thus, virtual and real-life collaborations continue to develop along with new technology.


Discussion


So far, this paper has shown how advances in information technology, particularly AI applied to automating music composition, have changed the dynamics of certain musical cultures on a glocal to global level. This supports my view that developing technology challenges our preconceptions of how music is transported from one place to another. Most notably, many of my earlier examples suggest that there is less need for composer-to-performer musical transportation, since some compositions do not require any real musicians to perform them. This also means there are fewer region-specific aspects of musical transportation, in that many would-be composers are able to teach themselves composition through DAWs, and compositions are performed on computers. In primary and secondary education settings, music technology is now gradually replacing traditional music pedagogy in many ways. Firstly, it challenges creativity with the increasingly ready-made and automated materials within DAWs (e.g. automation of dynamics, reverb and timbre, pre-composed samples, and drum machines). Secondly, students taking advantage of such materials are not encouraged to explore the different elements of music that the traditional learning of music notation can offer. Thirdly, while individuals may gain some understanding of non-Western instruments from their virtual accessibility and availability on DAWs, this advantage could also make any learning of the musical aesthetics and cultural significance of those instruments seem redundant.

However, the accessibility of DAWs benefits those with disabilities, while their intuitive interfaces allow many people to experiment without having to study composition. In this regard, one of the most fundamental parts of musical transportation, the ‘production of music’, has been improved significantly, and it is certainly the aspect that will continue to improve. Furthermore, the development of vocaloid and its AI-led functions provides a new culture within its existing musical context. In particular, the emergence of Hatsune Miku has brought a new virtual and real-life aspect to music-making. However, the case of AI Hibari alerts us to a number of issues regarding creativity, publicity rights, ownership, and ethics. Thus, as technology advances, it will further challenge the current states of politics, law, and morality. In assessing musical transportability, these will also be important considerations, which were of less concern in my field research.


While these AI-led programs involve some level of human input and control, there have been a few attempts to create compositions with far less human input. For example, a 2012 attempt at completely computer-generated composition produced coherent music once more specific musical elements (e.g. scales, modes, and rhythms) were programmed as constraints, promising the emergence of more convincing software in the near future (Kang et al. 2012: 442); a minimal sketch of this constraint-based approach follows the quotation below. Indeed, a few years on, more advanced and less human-guided programs have emerged, namely Amper Music and Jukedeck. According to Kreutzer and Sirrenberg (2020):


Startups such as Amper Music (cf. 2019) and Jukedeck (cf. 2019) apply Artificial Intelligence to produce music for computer games, videos and advertising. With Jukedeck any layman can try his or her hand as a composer. All you have to do is enter the desired style (such as pop, rock or jazz). In addition, the desired length of the piece as well as a possible timing for highlights etc. must be specified. After a few seconds, the software makes the finished composition available for free download (cf. Jukedeck, 2019)…[In addition,] [t]he music artist Benoit Carré alias SKYGGE already produced the pop album Hello World [(2018)] with the AI software Flow Machines—an EU research project.


Kreutzer and Sirrenberg 2020: 218
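The constraint-based approach described by Kang et al., and implicit in style presets of services like Jukedeck, can be illustrated very simply. The following Python sketch is a toy example rather than any of the systems discussed: it generates one bar of melody by random walk, confined to a given scale and rhythmic pattern, and it is these constraints, not the randomness, that make the output cohere.

    import random

    SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, one octave, as MIDI notes
    RHYTHM = [1.0, 0.5, 0.5, 1.0, 1.0]        # a fixed pattern summing to one 4/4 bar

    def generate_bar():
        melody = []
        index = random.randrange(len(SCALE))
        for duration in RHYTHM:
            # Move at most two scale steps at a time to keep the line singable.
            index = max(0, min(len(SCALE) - 1, index + random.randint(-2, 2)))
            melody.append((SCALE[index], duration))
        return melody

    print(generate_bar())  # e.g. [(64, 1.0), (67, 0.5), (65, 0.5), ...]

Structuring many such bars into convincing sequences and transitions is precisely where, as Kreutzer and Sirrenberg go on to note, the real difficulty lies.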


However, there are still issues with producing what constitutes ‘successful music’. Kreutzer and Sirrenberg report that:


the greatest challenge in the creative process lies in the structuring of the AI-created music components as well as in their sequences and transitions. Only elegant connections can turn a song into a successful song.


Kreutzer and Sirrenberg 2020: 218


They suggest that AI can produce music without much input from a human being, but when it comes to ‘creativity’, individuals are still required. This implies that AI cannot yet fully replace a human being, considering that ‘creativity’ depends on an individual’s taste and experience. In fact, no-one has yet been able to answer the question, “Can artificial intelligence compose better music than humans?” Relatedly, writing in the New York Times in 2018, Marshall (2018) considered whether AI could make people laugh. He described a series of experimental shows that Piotr Mirowski, a senior research scientist working on AI at Google DeepMind, ran with his AI robot. At the time of reporting, the shows had demonstrated that the AI had yet to grasp humour, especially improvisation, as it was only able to follow storylines and simple contexts.

The above discussion illustrates AI’s ongoing limitations: it has not yet reached the point where it can grasp the multitudes of human emotion involved in music-making. Yet, considering how rapidly technology is advancing, most of these challenges will likely be overcome in the future.


In conclusion, technological advancement, especially in the area of AI, is changing the existing models of musical transportation, affecting the ways in which music is created and performed. Ultimately, the traditional notion of musical composition will be rendered even more obsolete, with the composer-to-musician aspect further reduced in a totally computerised music-making process. At the moment, however, a significant part of the process still requires individuals to add a creative edge and make adjustments that improve outcomes. While the changes are taking place gradually, many present-day musicians are having to adapt. As one consequence, in Japan there are currently collaborations between real musicians and vocaloid artists; and elsewhere, concerts featuring holograms and computerised singing. These changes will continue to shape musical transportation.


References

Bates, Eliot. 2012. “The Social Life of Musical Instruments.” Ethnomusicology 56(3):363–95.

Besold, Tarek R., Marco Schorlemmer, and Alan Smaill, eds. 2015. Computational Creativity Research: Towards Creative Machines. Atlantis Thinking Machines. Paris: Atlantis Press.

Black, Daniel. 2012. “The Virtual Idol: Producing and Consuming Digital Femininity.” In Idols and Celebrity in Japanese Media Culture, edited by Patrick W. Galbraith and Jason G. Karlin, 209–228. Basingstoke: Palgrave Macmillan.

Born, Georgina, and Kyle Devine. 2015. “Music Technology, Gender, and Class: Digitization, Educational and Social Change in Britain.” Twentieth-Century Music 12(2):135–72.

Chang, Yu-Jeong. 2017. “Trot and Ballad: Popular Genres of Korean Pop.” In Made in Korea: Studies in Popular Music, edited by Hyunjoon Shin and Seung-Ah Lee, 63–70. Abingdon: Routledge.

Cull, Felicity. 2015. “Dead Music in Live Music Culture.” In The Digital Evolution of Live Music, edited by Angela Jones and Rebecca Jane Bennett, 109–122. Amsterdam: Elsevier.

D’Errico, Mike. 2019. “Electronic Music.” In The SAGE International Encyclopedia of Music and Culture, edited by Janet Sturman, 786–88. California: SAGE Publications.

Dignum, Virginia. 2019. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Basel: Springer Nature.

Eigenfeldt, Arne. 2016. “Exploring Moment-Form in Generative Music.” [online] ResearchGate. Available at: <https://www.researchgate.net/publication/306207035_EXPLORING_MOMENT-FORM_IN_GENERATIVE_MUSIC> [Accessed 30 March 2020].

Galbraith, Patrick W., Thiam Huat Kam, and Björn-Ole Kamm. 2015. “Introduction: ‘Otaku’ Research: Past, Present and Future.” In Debating Otaku in Contemporary Japan: Historical Perspectives and New Horizons, edited by Patrick W. Galbraith, Thiam Huat Kam, and Björn-Ole Kamm, 1–20. London: Bloomsbury Publishing.

Grüll, Ingo. 2005. “Conga: A Conducting Gesture Analysis Framework.” Diploma thesis, Ulm: Ulm University.

Hamasaki, Masahiro, and Masataka Goto. 2012. “Songrium: Tayona Kankeiseini Motozuku Ongakushichoshien Sabisu.” Information Processing Society of Japan 2012-MUS-96(1):1–8.

Harley, James. 2004. Xenakis: His Life in Music. New York: Routledge.

Harrison, Ted. 2016. The Death and Resurrection of Elvis Presley. London: Reaktion Books.

Hein, Ethan. 2017. “The Promise and Pitfalls of the Digital Studio.” In The Oxford Handbook of Technology and Music Education, edited by Alex Ruthmann and Roger Mantie, 233–240. New York: Oxford University Press.

Hernández-Pérez, Manuel. 2019. “The Anime Industry, Networks of Participation, Environments for the Management of Content in Japan.” In Japanese Media Cultures in Japan and Abroad: Transnational Consumption of Manga, Anime, and Media-Mixes, edited by Manuel Hernández-Pérez, 46–65. Basel: MDPI.

Jones, Darren. 2018. The Complete Guide to Music Technology Using Cubase 10. Morrisville: Lulu.com.

Kang, Semin, Soo-Yol Ok, and Young-Min Kang. 2012. “Automatic Music Generation and Machine Learning Based Evaluation.” In Multimedia and Signal Processing: Second International Conference, CMSP 2012, Shanghai, China, December 7-9, 2012, Proceedings, edited by Fu Lee Wang, Jingsheng Lei, Rynson W. H. Lau, and Jingxin Zhang, 436–443. Heidelberg: Springer.

Kardos, Leah. 2012. “How Music Technology Can Make Sound and Music Worlds Accessible to Student Composers in Further Education Colleges.” British Journal of Music Education 29(2):143–51.

Kelley-Browne, Elizabeth. 2011. “Cyber-Ethnography: The Emerging Research Approach for 21st Century Research Investigation.” In Handbook of Research on Transformative Online Education and Liberation: Models for Social Equality: Models for Social Equality, edited by Kurubacak Gulsun and T. Volkan Yuzer, 330–339. Hershey: IGI Global.

Kenny, Ailbhe. 2016. Communities of Musical Practice. Abingdon: Routledge.

Klenke, Karin. 2016. Qualitative Research in the Study of Leadership. Bingley: Emerald Group Publishing.

Knakkergaard, Martin. 2019. “Systemic Abstractions: The Imaginary Regime.” In The Oxford Handbook of Sound and Imagination, edited by Mark Grimshawaagaard, Mads Waltherhansen, and Martin Knakkergaard, 2:117–32. New York: Oxford University Press.

Kotarba, Joseph A., and Nicolas J. LaLone. 2014. “The Scene: A Conceptual Template for an Interactionist Approach to Contemporary Music.” In Studies in Symbolic Interaction: Revisiting Symbolic Interaction in Music Studies and New Interpretive Works, edited by Norman K. Denzin, 42:51–65. Bingley: Emerald Group Publishing.

Kreutzer, Ralf T., and Marie Sirrenberg. 2020. Understanding Artificial Intelligence: Fundamentals, Use Cases and Methods for a Corporate AI Journey. Cham: Springer Nature Switzerland.

Le, Linh Thi Khanh. 2015. “Examining the Rise of Hatsune Miku: The First International Virtual Idol.” The UCI Undergraduate Research Journal.

Leung, Chi Cheung. 2013. “Music Composition Education in Hong Kong.” In Creative Arts in Education and Culture: Perspectives from Greater China, edited by Samuel Leong and Bo Wah Leung, 97–116. Dordrecht: Springer.

Llorente, Glenn. 2014. “Stockhausen’s Studie II: Elektronische Musik (1954).” Early Music Models in Post-War European Musical Modernism: Herb Alpert School of Music, UCLA. [online] Available at: <https://www.academia.edu/11782648/Stockhausen_s_Studie_II_Elektronische_Musik_1954_Exploring_the_Extent_of_Multiple_Serialism_in_Electronic_Music> [Accessed 28 Aug. 2020].

MacCallum, John, and Aaron Einbond. 2008. “Real-Time Analysis of Sensory Dissonance.” In Computer Music Modeling and Retrieval. Sense of Sounds: 4th International Symposium, CMMR 2007, Copenhagen, Denmark, August 2007, Revised Papers, edited by Richard Kronland-Martinet, Sølvi Ystad, and Kristoffer Jensen, 203–211. Berlin: Springer.

Maisel, Eric. 2007. Creativity for Life: Practical Advice on the Artist’s Personality, and Career from America’s Foremost Creativity Coach. California: New World Library.

Martino, Lisa, and Michael Bertolami. 2014. “Using Music Technology with Children and Adolescents with Visual Impairments and Additional Disabilities.” In Music Technology in Therapeutic and Health Settings, edited by Wendy Magee. London: Jessica Kingsley Publishers.

Mason-Jones, Hugh, and Augusta Zeeng. 2012. Media Reloaded. Cambridge: Cambridge University Press.

McCord, Kimberly. 2017. Teaching the Postsecondary Music Student with Disabilities. New York: Oxford University Press.

McLeod, Kembrew, and Peter DiCola. 2011. Creative License: The Law and Culture of Digital Sampling. Duke University Press.

Middleton, Paul, and Steven Gurevitz. 2008. Music Technology Workbook: Key Concepts and Practical Projects. Oxford: Focal Press.

Nilsson, Nils J. 2010. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge: Cambridge University Press.

Noxon, James. 2004. “Music Technology As a Team Sport.” Journal of Technology in Music Learning 2(2):56–61.

Penfold-Mounce, Ruth. 2018. Death, The Dead and Popular Culture. Bingley: Emerald Group Publishing.

Roads, Curtis. 1989. “VII Music and Artificial Intelligence (Overview).” In The Music Machine: Selected Readings from Computer Music Journal, edited by Curtis Roads, 635–638. Massachusetts: MIT Press.

Shamoon, Deborah. 2009. “Misora Hibari and the Girl Star in Postwar Japanese Cinema.” Signs: Journal of Women in Culture and Society 35(1):131–55.

Stevens, Carolyn S. 2008. Japanese Popular Music: Culture, Authenticity and Power. Abingdon: Routledge.

Stojnić, Aneta. 2016. “Live or Living Dead: (Un)Settling the Stage for the Hologram Performer.” In The Crisis in the Humanities: Transdisciplinary Solutions, edited by Žarko Cvejić, Andrija Filipović, and Ana Petrov, 174–182. Newcastle: Cambridge Scholars Publishing.

Strawn, John. 1996. “Waveform Segment, Graphic, and Stochastic Synthesis.” In The Computer Music Tutorial, edited by Curtis Roads, 317–346. Cambridge, Massachusetts: MIT Press.

Taneska, Biljana Kochoska. 2009. “OTAKU — The Living Force of the Social Media Network.” [online] Available at: <https://www.academia.edu/5987809/OTAKU_the_living_force_of_the_social_media_network> [Accessed 2 Apr. 2020].

Tong, Koon Fung Benny. 2015. “A Tale of Two Stars: Understanding the Establishment of Femininity in Enka through Misora Hibari and Fuji Keiko.” Situations: Cultural Studies in the Asian Context, Patterns, Rhythms, Movement: East Asian Perspectives in Cultural Geography, 8(1):23–44.

Ventura, Michele Della. 2019. “Exploring the Impact of Artificial Intelligence in Music Education to Enhance the Dyslexic Student’s Skills.” In Learning Technology for Education Challenges: 8th International Workshop, LTEC 2019, Zamora, Spain, July 15–18, 2019, Proceedings, edited by Lorna Uden, Dario Liberona, Galo Sanchez, and Sara Rodríguez-González, 14–22. Cham: Springer.

Yano, Christine Reiko. 2002. Tears of Longing: Nostalgia and the Nation in Japanese Popular Song. Massachusetts: Harvard University Asia Center.

Yun, Yoemun, and Si-Ho Cha. 2013. “Designing Virtual Instruments for Computer Music.” International Journal of Multimedia and Ubiquitous Engineering 8(5):173–78.

Zimmerman, Diane Leenheer. 2006. “Who Put The Right In the Right of Publicity?” In Intellectual Property Rights: Critical Concepts in Law, edited by David Vaver, 337–368. Abingdon: Routledge.

Zuberi, Nabeel. 2007. “Is This the Future? Black Music and Technology Discourse.” Science Fiction Studies 34(2):283–300.


 




