Feature Articles: 2020 Showcase

Vol. 14, No. 12, pp. 36–42, Dec. 2016. https://doi.org/10.53829/ntr201612fa6

2020 Entertainment—A New Form of Hospitality Achieved with Entertainment × ICT

Tomohiro Nishitani, Akira Ono, Tomoyuki Kanekiyo,
Takahiro Yamaguchi, Akio Kameda, Shin’ya Nishida,
Junichi Nakagawa, and Motohiro Makiguchi

Abstract

This article introduces new applications of information and communication technology (ICT) in the entertainment field. Specifically, we describe two events involving the collaborative creation of new forms of kabuki using ICT: Dwango Co., Ltd.’s Cho Kabuki event held at Niconico Chokaigi (29–30 April 2016, Makuhari Messe, Chiba Prefecture, Japan) and Shochiku Company Limited’s KABUKI LION SHI-SHI-O held at the Japan KABUKI Festival (3–7 May 2016, Las Vegas, USA).

Keywords: Kabuki, ICT, hospitality

1. A new way to enjoy kabuki

Until now, kabuki has mostly been enjoyed live on stage. However, NTT Service Evolution Laboratories has used information and communication technology (ICT) to create new ways to enjoy kabuki in each of the pre-performance, mid-performance, and post-performance periods.

1.1 Pre-performance

First, for the pre-performance period, we created an interactive exhibit called Henshin Kabuki (transformation kabuki). This non-theater kabuki experience was held at the KABUKI LION Interactive Showcase in front of a Las Vegas theater. The exhibit drew on the distinctive make-up methods of kabuki and on the henshin (transformations) of kabuki actors to showcase the cultural and technological prowess that Japan prides itself on, creating a wondrous experience. Participants selected a kumadori mask (kumadori is the stage makeup worn by kabuki actors) and stood in front of large-screen monitors. Angle-free object search technology automatically identified the type of mask chosen, regardless of the mask's angle or tilt, and the corresponding kumadori pattern was superimposed onto the participant's face using augmented reality (AR) (Photo 1). The AR superimposition was made possible by edge computing technology: servers at distant locations processed the images fast enough that the superimposed patterns stayed sharp and correctly placed even while participants moved around quickly. A minimal sketch of the overlay loop follows Photo 1.


Photo 1. Kabuki actor Somegoro Ichikawa experienced the interactive exhibit.
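The capture-detect-superimpose cycle can be illustrated with a short sketch. This is not NTT's implementation: the exhibit identified the chosen mask with angle-free object search running on distant edge servers, whereas the sketch below substitutes OpenCV's stock Haar face detector and a hypothetical kumadori image file (kumadori_rgba.png) so that the loop runs on a single machine.

```python
import cv2
import numpy as np

# Hypothetical overlay file; the real exhibit chose the pattern by
# recognizing the participant's mask with angle-free object search.
overlay = cv2.imread("kumadori_rgba.png", cv2.IMREAD_UNCHANGED)  # BGRA image
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blend(frame, x, y, w, h):
    """Alpha-blend the kumadori pattern onto the detected face region."""
    patch = cv2.resize(overlay, (w, h))
    alpha = patch[:, :, 3:4].astype(np.float32) / 255.0
    roi = frame[y:y + h, x:x + w].astype(np.float32)
    frame[y:y + h, x:x + w] = (
        alpha * patch[:, :, :3] + (1.0 - alpha) * roi).astype(np.uint8)

cap = cv2.VideoCapture(0)            # local camera stands in for the booth camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        blend(frame, x, y, w, h)
    cv2.imshow("Henshin Kabuki (sketch)", frame)
    if cv2.waitKey(1) == 27:         # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

In the actual exhibit, detection and rendering ran on edge servers precisely so that this per-frame loop could keep up with fast-moving participants.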

In addition, projection mapping was carried out on a three-dimensional (3D) face object. The dynamic aspects of a kabuki performance, such as timing and facial expressions, were preserved and combined with the concept of an amplified experience, in which the key points of a performance are extracted and made to stand out in a larger-than-life way.

Moreover, fifty kumadori masks were hung on a wall. Using the same HenGenTou (deformation lamp) light projection technology used in the main performance, these masks, which of course cannot move on their own, appeared to freely take on a diverse range of facial expressions such as laughter and anger. HenGenTou works by projecting only the dynamic (motion) component of an animation onto a static object so that the object itself appears to move; a minimal sketch of the idea follows.
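The sketch below is a schematic of the deformation-lamp principle, not NTT's implementation: it treats the mean frame of a short animation as the static appearance of the mask and outputs only the motion-carrying residual, centered on mid-gray, as the projector signal. The clip name is hypothetical.

```python
import cv2
import numpy as np

# "expression.mp4" is a hypothetical clip of the desired facial motion,
# pre-aligned to the physical mask it will be projected onto.
cap = cv2.VideoCapture("expression.mp4")
frames = []
while True:
    ok, f = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32))
cap.release()

mean = np.mean(frames, axis=0)    # static component: what the mask itself shows
for f in frames:
    dynamic = f - mean            # motion-only residual to be projected
    out = np.clip(dynamic + 128.0, 0, 255).astype(np.uint8)  # center on mid-gray
    cv2.imshow("projector output (sketch)", out)
    if cv2.waitKey(33) == 27:     # ~30 frames/s; Esc to quit
        break
cv2.destroyAllWindows()
```

When this residual is projected onto the unmoving mask, the visual system fuses the added luminance motion with the static surface, and the mask appears to change expression.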

Of the 1,018 participants, who spanned many age groups, nationalities, and genders, approximately 90% reacted positively to the new interactive experience, and the mask-recognition step based on angle-free object search achieved an identification accuracy of 99.2%. However, some challenges for the future also became apparent, such as how to guide guests through an exhibit that contains multiple elements and how to optimize the tuning of the angle-free object search lexicon.

1.2 Mid-performance

The mid-performance period introduced the Cho Kabuki performance and a highly realistic remote rendition of KABUKI LION SHI-SHI-O (Photos 2 and 3).


Photo 2. Poster of KABUKI LION SHI-SHI-O.


Photo 3. Poster of Cho Kabuki.

1.2.1 KABUKI LION SHI-SHI-O

Four initiatives were carried out to achieve the highly realistic remote rendition of the KABUKI LION SHI-SHI-O main performance held in Las Vegas.

(1) Room reproduction using 4K multi-screens (Photo 4)

Footage from nine 4K-resolution cameras was encoded with high-compression HEVC (High Efficiency Video Coding). MMT (MPEG* Media Transport), which enables multiple video and audio streams to be flexibly synchronized in real time, was then used to achieve the first international 4K multi-screen relay. The omnidirectional screens at the Haneda (Japan) venue, covering the front stage (three 180-inch screens), the stage passage (one lower screen and three rear screens), and the ceiling (two upper screens), all showed 4K video footage simultaneously. Over 80% of the 198 participants, who consisted of media personnel and invitees, responded enthusiastically to the display, and over 70% responded very favorably, stating that they would like to visit again. On the system side, both the synchronization and the latency produced the expected positive results, and various parameters were confirmed for future use. However, problems arose with the linkage between the multiple screens. The synchronization principle is sketched after Photo 4.


Photo 4. Highly realistic remote rendition with omnidirectional screens.
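The essence of the synchronization is that each of the nine screens schedules its frames against a shared wall clock rather than against the other screens. The sketch below simulates a single player; the frame rate, the agreed start time, and the print statement are illustrative stand-ins, not details of the actual MMT deployment.

```python
import time

FPS = 60.0                       # assumed presentation rate
EPOCH = time.time() + 2.0        # agreed start time, distributed to all players

def present(frame_index):
    """Block until this frame's shared wall-clock deadline, then show it."""
    deadline = EPOCH + frame_index / FPS
    delay = deadline - time.time()
    if delay > 0:
        time.sleep(delay)        # early: wait for the common deadline
    # a real player would drop frames that arrive after their deadline
    print(f"frame {frame_index} presented at t = {time.time() - EPOCH:.3f} s")

for i in range(10):
    present(i)
```

Because every player derives its deadlines from the same clock (in practice, one disciplined by a time source such as NTP), nine independently encoded streams stay in step without any player-to-player communication.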

(2) Remote greeting using pseudo-3D images generated by Kirari! (Photo 5)

Subject extraction technology was used to finely separate live footage of the performers from their backgrounds. By synchronizing the audio with the pseudo-3D video display on the remote stage, Somegoro Ichikawa, who was present at the live event in Las Vegas, was able to greet those watching the remote stage at the Haneda venue as if he were there in person, a world first for this technology. Although there was no opportunity beforehand to prepare and confirm the lighting and camera positions used in the actual performance, the subject extraction and shadow generation on the system side were carried out with high precision. Over 70% of viewers replied that the experience felt as if Somegoro Ichikawa were really standing on the stage in front of them. A sketch of the extraction step follows Photo 5.


Photo 5. Somegoro Ichikawa in Las Vegas greeting the audience in Haneda, Japan.
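As an illustration of the extraction step, the sketch below substitutes OpenCV's off-the-shelf MOG2 background subtractor for Kirari!'s subject extraction technology, whose internals this article does not describe. The input file name is hypothetical, and a fixed camera is assumed.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
cap = cv2.VideoCapture("stage_feed.mp4")     # hypothetical recording of the stage
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # MOG2 marks shadows as gray (127); keep only confident foreground (255)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    subject = cv2.bitwise_and(frame, frame, mask=mask)
    # "subject" on a black background is what the remote venue would
    # render as a pseudo-3D image of the performer.
    cv2.imshow("extracted performer (sketch)", subject)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

The real system additionally generated shadows for the extracted figure, as noted above, which a simple subtractor like this does not attempt.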

(3) 4K omnidirectional live footage broadcast for mobile devices (Photo 6)

By compressing the video and transmitting high-definition footage only for the direction in which the user is looking, we created a system that limited bandwidth while maintaining a high-quality viewing experience (Fig. 1). On the technical side, we constructed and stably operated a live transmission that reduced bandwidth by approximately 80% compared with transmitting the entire 4K omnidirectional video. In addition, many users replied that they felt as if they were actually at the theater, confirming that the system met the needs of omnidirectional live viewing. However, certain problems arose with image quality. The bandwidth arithmetic behind this approach is sketched after Fig. 1.


Photo 6. 4K omnidirectional live viewing with mobile device.


Fig. 1. A system of 4K omnidirectional live footage broadcast for mobile devices.
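The saving comes from viewport-dependent tiling: the omnidirectional frame is split into tiles, and only the tiles the user is currently facing are sent at full quality. The grid size, per-tile bitrates, and field of view below are assumptions chosen to make the arithmetic concrete; the actual system's parameters are not given in this article.

```python
import math

TILE_COLS, TILE_ROWS = 8, 4           # tile grid over the equirectangular frame
HI_KBPS, LO_KBPS = 2000, 100          # per-tile bitrates (hypothetical)

def visible_tiles(yaw_deg, pitch_deg=0.0, fov_deg=90.0):
    """Tiles (col, row) that a fov_deg-by-fov_deg viewport overlaps."""
    tile_w, tile_h = 360 / TILE_COLS, 180 / TILE_ROWS
    first_col = int(((yaw_deg - fov_deg / 2) % 360) // tile_w)
    n_cols = math.ceil(fov_deg / tile_w) + 1      # may straddle one extra tile
    cols = {(first_col + i) % TILE_COLS for i in range(n_cols)}
    top = max(0, int((pitch_deg + 90 - fov_deg / 2) // tile_h))
    bottom = min(TILE_ROWS - 1,
                 math.ceil((pitch_deg + 90 + fov_deg / 2) / tile_h) - 1)
    return [(c, r) for c in cols for r in range(top, bottom + 1)]

hi = len(visible_tiles(yaw_deg=45))
total = TILE_COLS * TILE_ROWS
sent = hi * HI_KBPS + (total - hi) * LO_KBPS
full = total * HI_KBPS
print(f"{hi}/{total} tiles at full quality; {sent} vs {full} kbps "
      f"({100 * (1 - sent / full):.0f}% saved)")
```

With these toy numbers the saving works out to 77%, the same order as the roughly 80% reduction reported above; in a real player the tile set is re-requested continuously as the user turns their head.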

(4) HenGenTou performance (Photo 7)

HenGenTou (deformation lamp) technology was used to make the top half of the stage background (15 m high x 4.5 m wide) appear to be a city in the middle of the ocean, and it was also used to present the rippling of the ocean water. However, several problems related to the characteristics of the stage were discovered. For example, when spotlights were shone on the actors, the scenery and movements in the top half of the background did not stand out very well. We also recognized the importance of having a script that is well matched to the performance.

We also constructed a video transmission network linking the Las Vegas and Haneda venues for the remote live viewing (Fig. 2). Although the construction period was limited, we were able to build two paths, each with a guaranteed bandwidth of 1 Gbit/s, by combining a line from Internet2, the US research consortium, with GEMnet2, the network testbed owned and operated by NTT Service Evolution Laboratories. During the performance, packet losses remained small enough for the error correction function to handle; the recovery principle is sketched after Fig. 2. Future challenges include monitoring micro-burst traffic, reducing costs, and further shortening construction times.


Photo 7. Rippling of the water in the ocean was presented on stage using HenGenTou.


Fig. 2. Outline of Japan-US network for video transmission.
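This article does not specify which error correction code was used, so the sketch below shows the simplest packet-level forward error correction scheme as a stand-in: one XOR parity packet per block, which lets the receiver rebuild any single lost packet without a trans-Pacific retransmission. The block size and payloads are illustrative.

```python
BLOCK = 10    # data packets protected by one parity packet (assumption)

def make_parity(packets):
    """XOR all packets in a block into a single parity packet."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """received: the block with exactly one lost packet replaced by None."""
    lost = received.index(None)
    rebuilt = bytearray(parity)
    for j, p in enumerate(received):
        if j != lost:
            for i, b in enumerate(p):
                rebuilt[i] ^= b    # XOR of parity and survivors = lost packet
    return lost, bytes(rebuilt)

data = [bytes([k] * 8) for k in range(BLOCK)]   # dummy 8-byte payloads
parity = make_parity(data)
damaged = data[:3] + [None] + data[4:]          # simulate one lost packet
idx, fixed = recover(damaged, parity)
assert fixed == data[idx]                       # the lost packet is rebuilt exactly
```

Schemes actually used for real-time video (e.g., Reed-Solomon-based FEC) tolerate more than one loss per block, but the recovery principle is the same.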

1.2.2 Cho Kabuki

The Cho Kabuki event consisted of five performances of the kabuki play Hanakurabe Senbonzakura. Subject extraction and virtual speaker technologies were used to create highly realistic performances: the figure of the samurai Sato Tadanobu (played by Shido Nakamura) was extracted from the live footage, and the voices were made to feel as if they were coming directly from Hatsune Miku's mouth. The performances received high acclaim as a new form of kabuki and were mentioned widely by television and Internet media sources.
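This article does not detail the virtual speaker technology, but the simplest way to localize a voice at a point on stage is amplitude panning between loudspeakers. The sketch below shows a constant-power stereo pan as a stand-in; the sample rate, test tone, and position value are arbitrary.

```python
import numpy as np

def pan(mono, position):
    """Constant-power pan; position runs from -1.0 (left) to +1.0 (right)."""
    theta = (position + 1.0) * np.pi / 4       # map position to 0..pi/2
    left = mono * np.cos(theta)
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=1)     # (samples, 2) stereo signal

sr = 48000
t = np.linspace(0, 1, sr, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)      # stand-in for the vocal track
stereo = pan(voice, position=0.6)              # place the voice right of center
print("L/R channel power:", np.round(np.mean(stereo ** 2, axis=0), 4))
```

A production system would pan among many speakers around the stage and account for delays, but the goal is the same: making the voice appear to originate at the character's mouth rather than at the loudspeakers.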

1.3 Post-performance

For the post-performance period, we created an experience that combined a cardboard craft box with a smartphone to provide a simple 3D video display. Looking inside the box revealed a palm-sized Hatsune Miku rising up in 3D, allowing users to take the atmosphere of Cho Kabuki home with them.
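This article does not describe the box's optics, but one common construction for such smartphone 3D viewers is a reflector that shows mirrored copies of the phone's screen. The sketch below composes the four rotated views that a four-sided pyramid reflector would need; the image file name, canvas size, and layout constants are all hypothetical.

```python
from PIL import Image

SIZE = 1080                                       # square canvas (pixels), assumed
MARGIN = 20
sprite = Image.open("miku_rgba.png").convert("RGBA")   # hypothetical character image
sprite = sprite.resize((SIZE // 4, SIZE // 4))

canvas = Image.new("RGBA", (SIZE, SIZE), (0, 0, 0, 255))  # black keeps reflections clean
cx = (SIZE - sprite.width) // 2
positions = [                                     # (x, y, rotation) per reflector face
    (cx, MARGIN, 180),                            # top
    (SIZE - sprite.width - MARGIN, cx, 90),       # right
    (cx, SIZE - sprite.height - MARGIN, 0),       # bottom
    (MARGIN, cx, 270),                            # left
]
for x, y, angle in positions:
    rotated = sprite.rotate(angle)                # each face reflects one copy
    canvas.paste(rotated, (x, y), rotated)
canvas.save("pyramid_view.png")                   # shown full screen on the phone
```

Each face of the reflector mirrors one copy toward the viewer, so the character appears to float inside the box from any of the four sides.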

* MPEG: Moving Picture Experts Group

2. Future development

This project was the first step for kabuki × ICT. In preparation for the next generation of kabuki, we will pursue even more realistic forms of expression and create new kabuki spatial experiences (installations). We also plan to continue these stage performances and remote live performances in further trials with general users from all over the world in order to obtain their feedback. In addition to pursuing the development of practical services with business potential, we plan to take the knowledge gained from these ventures in the entertainment field and apply it to hospitality, offering new and emotionally moving experiences with our eyes on 2020.

Tomohiro Nishitani
Research Engineer, 2020 Epoch-making Project, NTT Service Evolution Laboratories.
He received his B.S. and M.S. in physics from Tokyo Institute of Technology in 1997 and 1999. Since joining NTT in 1999, he has been involved in researching peer-to-peer systems and network address translation traversal and in developing VoIP (voice over Internet protocol) applications for smartphones. He has been with NTT Service Evolution Laboratories since 2015. His current research interests include media art using cutting-edge ICT. He is a member of the Institute of Electronics, Information and Communication Engineers (IEICE), the Information Processing Society of Japan (IPSJ), the Acoustical Society of Japan, and the Project Management Institute.
Akira Ono
Senior Research Engineer, Supervisor, Natural Communication Project, NTT Service Evolution Laboratories.
He received an M.E. in computer engineering from Waseda University, Tokyo, in 1992. He joined NTT in 1992 and engaged in research and development (R&D) of video communication systems. From 1999 to 2010, he was with NTT Communications, where he was involved in network engineering and creating consumer services. He moved to NTT Cyber Solution Laboratories (now, NTT Service Evolution Laboratories) in 2010. Since 2015, he has been studying the immersive telepresence technology called “Kirari!”.
Tomoyuki Kanekiyo
Senior Research Engineer, Supervisor, Natural Communication Project, NTT Service Evolution Laboratories.
He received a B.E. in applied physics engineering from Osaka University in 1992. Since joining NTT in 1992, he has been researching video distribution systems and has developed a commercial IPTV system and a broadcast system for mobile phones. He is currently researching ultra-realistic communication systems.
Takahiro Yamaguchi
Senior Research Engineer, Supervisor, NTT Network Innovation Laboratories.
He received a Ph.D. in electronic engineering from the University of Electro-Communications, Tokyo, in 1998. He joined NTT Optical Network Systems Laboratories in 1998 and has been researching super-high-definition image distribution systems. He is a member of IEICE and the Institute of Image Information and Television Engineers (ITE).
Akio Kameda
Research Engineer, Visual Media Project, NTT Media Intelligence Laboratories.
He received a B.E. and M.E. in electrical engineering from Tokyo University of Science in 1993 and 1995. In 1995, he joined NTT Human Interface Laboratories, where he was involved in researching video communication systems on narrowband networks. Since 2012, he has been with NTT Media Intelligence Laboratories, where he has been engaged in R&D of an interactive panorama video distribution system. He is a member of IEICE.
Shin’ya Nishida
Senior Distinguished Scientist, Group Leader of Sensory Representation Research Group, NTT Communication Science Laboratories.
He received a Ph.D. in psychology from Kyoto University in 1996. He joined NTT in 1992. He specializes in psychophysical research on human visual processing, particularly motion perception, cross-attribute/modality integration, time perception, and material perception.
Junichi Nakagawa
Senior Research Engineer, Supervisor, 2020 Epoch-making Project, NTT Service Evolution Laboratories.
He received a B.E. and M.E. in mechanical engineering from Waseda University, Tokyo, in 1988 and 1990, and an M.S. in information networking from Carnegie Mellon University, USA, in 2000. He joined NTT Communication Network Research Laboratories in 1990. He is a member of IPSJ and the Institute of Electrical and Electronics Engineers.
Motohiro Makiguchi
Research Engineer, Natural Communication Project, NTT Service Evolution Laboratories.
He received his B.S. and M.S. in information science from Hokkaido University in 2006 and 2012. Since joining NTT in 2012, he has been researching human computation and crowdsourcing systems and multilayer floating-image projection systems using smartphones. His current research interests include virtual reality and mixed reality.
