Future Telephone t-Room
An Audio and Video Wonder Space Transcending the Barriers of Space and Time
The t-Room is a video communication system, the approach of which is to build rooms with an identical layout, including walls of display screens on which users and physical or virtual objects are all shown at life size, and to provide symmetry of awareness and immersion in each other’s physical space.
The t-Room consists of eight large LCD panels and eight cameras that enclose a user space. Remote users' images are presented at life size on the LCD panels. With this setup, users can move around freely and point to physical objects within the t-Room as if they were in the same room. Remote and local users can naturally share gaze, gestures, body orientation, and spatial cues, just as they would in face-to-face communication. Users can also freely enter and leave each other's spaces, since there is no spatial barrier such as the screen of a conventional videoconferencing system. Any number of t-Room systems can be connected to overlay their user spaces. Furthermore, since t-Room allows recording and playback to be iterated as many times as desired, users can exchange video messages in the manner of e-mail, thus creating communication independent of time.
By overlaying the t-Room space with other spaces, for example, a distant office, a living room, or a restaurant, spatial barriers can be overcome. Moreover, by overlaying the t-Room space onto recorded past spaces and rooms, people can overcome the barriers of time, as well.
Photo: A t-Room demonstration with eight Monoliths arranged decagonally, connecting three locations. The five central Monoliths are visible in this photo.
The recording and playback capability enables asynchronous golf lessons, for example: the teacher records sample swings, and the student responds with recorded practice shots. The cycle resembles an exchange of e-mail messages.
Since the instructor need not always be on site, t-Room saves both time and travel.
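This e-mail-like exchange of recorded sessions can be sketched as a shared log of clips that each party appends to in turn. The `Clip` class and `exchange` function below are hypothetical stand-ins for stored t-Room recordings, not the system's actual interface.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Clip:
    """A recorded t-Room session (hypothetical stand-in for stored video)."""
    author: str
    note: str

def exchange(lesson_log: deque, author: str, note: str) -> Clip:
    """Append a new recorded clip to the shared log, like sending an e-mail."""
    clip = Clip(author, note)
    lesson_log.append(clip)
    return clip

# A teacher and student trade clips without ever being co-present.
log = deque()
exchange(log, "teacher", "sample swing")
exchange(log, "student", "practice shot")
exchange(log, "teacher", "corrected grip")
```

Because each clip is stored rather than streamed, any number of lesson cycles can be iterated at whatever pace suits the participants.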
Figure 1 illustrates our method for reproducing face-to-face interaction among persons A, B, and C: we duplicate a space and project remote users' images onto the surrounding rear screens. For each room in the figure, three screens and three cameras are arranged alternately to surround a user, who stands just in front of a screen. Two preprocessing functions, indicated in the figure, are needed. The first extracts only the light from real objects in front of the opposite screen and cancels the light emitted by the screen itself (visual echo canceller). The second overlays two or more images captured in Rooms 1 and 2 so that each image is placed where it should be projected (overlayer). A detailed illustration of the entire wiring is omitted for simplicity. For the recording and playback capability, the output of a visual echo canceller is stored; when later accessed, the stored data is fed into an overlayer.
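The two preprocessing steps can be sketched as follows, treating grayscale frames as flat lists of pixel intensities (0-255). The simple per-pixel subtraction and brightest-source compositing used here are illustrative assumptions, not the actual echo-cancellation or overlay algorithms.

```python
def echo_cancel(captured, displayed):
    """Visual echo canceller (sketch): remove the light contributed by the
    screen's own displayed image, keeping only light from real objects
    in front of it."""
    return [max(c - d, 0) for c, d in zip(captured, displayed)]

def overlay(*frames):
    """Overlayer (sketch): superimpose frames from several rooms; the
    brightest source wins at each pixel, mimicking superimposed light."""
    return [max(pixels) for pixels in zip(*frames)]

# Room 1's camera sees its screen content plus a local user at pixel 1.
screen_content = [50, 50, 50, 50]
captured       = [50, 200, 50, 50]
local_only = echo_cancel(captured, screen_content)   # user's light only
remote     = [0, 0, 120, 0]                          # from Room 2
composite  = overlay(local_only, remote)             # image to display
```

For playback, a stored `local_only` frame would simply be fed into `overlay` in place of a live one, which is why recording the echo canceller's output suffices.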
Figures 2 and 3 show the hardware configuration of the current t-Room system. A single t-Room consists of eight building modules (called Monoliths) arranged polygonally. With this setup, t-Room encloses a user space with surrounding LCD displays showing life-sized images. The enclosed space is shared with other enclosed spaces by overlaying it onto them. As a result, users can freely move into and out of each other's spaces, since there is no spatial barrier separating users, such as the screen in a conventional videoconferencing system. Consequently, overlaying the enclosed spaces provides full sharedness with minimal exclusiveness.
We installed three nearly identical t-Rooms in our labs in Atsugi City and Kyoto Prefecture (Atsugi is in the Tokyo area; Kyoto is approximately 400 km from Tokyo). Currently, commercially available 100-Mbps optical-fiber lines connect the one in Atsugi with the two in Kyoto.