Research

Inter-reflection Compensation of Immersive Projection Display by Spatio-Temporal Screen Reflectance Modulation

  • Shoichi Takeda, Daisuke Iwai, and Kosuke Sato : Inter-reflection Compensation of Immersive Projection Display by Spatio-Temporal Screen Reflectance Modulation, IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE Virtual Reality 2016), Vol. 22, No. 4, pp. 1424-1431, 2016.

We propose a novel inter-reflection compensation technique for immersive projection displays wherein we spatially modulate the reflectance pattern on the screen to improve the compensation performance of conventional methods. As the luminance of light reflected on a projection surface is mathematically represented as the multiplication of the illuminance of incident light and the surface reflectance, we can reduce undesirable intensity elevation caused by inter-reflections by decreasing surface reflectance. Based on this principle, we improve conventional inter-reflection compensation techniques by applying reflectance pattern modulation. We realize spatial reflectance modulation of a projection screen by painting it with a photochromic compound, which changes its color (i.e., the reflectance of the screen) when ultraviolet (UV) light is applied, and by controlling UV irradiation with a UV LED array placed behind the screen. The main contribution of this paper is a computational model to optimize a reflectance pattern for the accurate reproduction of a target appearance by decreasing the intensity elevation caused by inter-reflection while maintaining the maximum intensity of the target appearance. Through simulation and physical experiments, we demonstrate the feasibility of the proposed model and confirm its advantage over conventional methods.
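The core principle (reflected luminance = incident illuminance × surface reflectance) can be sketched numerically. The two-patch form-factor matrix, reflectance values, and single-bounce model below are illustrative assumptions, not the paper's optimization model:

```python
import numpy as np

def reflected(p, r, F):
    """Single-bounce inter-reflection model (illustrative).
    p: projector illuminance per patch, r: patch reflectance,
    F: form-factor matrix coupling patches (zero diagonal)."""
    direct = r * p                    # light reflected directly toward the viewer
    indirect = r * (F @ (r * p))      # one bounce of inter-reflection
    return direct + indirect

F = np.array([[0.0, 0.3], [0.3, 0.0]])   # two mutually facing patches
p = np.array([1.0, 0.0])                 # project only onto patch 0

full = reflected(p, np.array([1.0, 1.0]), F)     # uniform white screen
dimmed = reflected(p, np.array([1.0, 0.2]), F)   # photochromically darkened patch 1
# Lowering patch 1's reflectance suppresses the unwanted inter-reflection
# elevation on patch 1 while leaving the direct term on patch 0 untouched.
```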

Return To The Top Page

Reducing Motion Blur Artifact of Foveal Projection for Dynamic Focus-Plus-Context Display

  • Daisuke Iwai, Kei Kodama, and Kosuke Sato : Reducing Motion Blur Artifact of Foveal Projection for Dynamic Focus-Plus-Context Display, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 25, No. 4, pp. 547-556, 2015.

This paper presents a novel technique to reduce the motion blur artifacts of foveal projection in a dynamic focus-plus-context (DF+C) display. The DF+C display is generally configured with multiple projectors and provides a nonuniform spatial resolution that consists of high-resolution (hi-res) regions (foveal projection) and low-resolution regions (peripheral projection). A serious problem of the DF+C display is motion blur, which inevitably occurs when a foveal projection is moved by a pan-tilt mirror or gantry. We propose a solution that reduces the motion blur artifacts, and evaluate how this solution improves the image quality using both qualitative and quantitative experiments. Our proposed method defines an error function to assess the displayed image quality as the difference between an original hi-res image and the displayed image by considering the nonuniform spatial property of human visual acuity. Then, it decides the set of positions and moving techniques of foveal projections so that the sum of errors during a video sequence is minimized. Through experiments, we confirmed that the proposed method can provide a better image quality and significantly reduce the motion blur artifacts when compared with a conventional DF+C display.
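The per-frame part of this idea can be sketched as a toy 1-D search. The error weighting and exhaustive search below are simplifications of the paper's sequence-level optimization, and all names are illustrative:

```python
import numpy as np

def frame_error(detail, fovea_pos, fovea_w, acuity):
    """Error of one frame: high-frequency detail lost outside the fovea,
    weighted by the (nonuniform) visual acuity at each pixel."""
    err = detail * acuity
    err[fovea_pos:fovea_pos + fovea_w] = 0.0   # the fovea shows full detail
    return float(err.sum())

def best_fovea_position(detail, fovea_w, acuity):
    """Exhaustive search for the fovea position minimizing the frame error."""
    positions = range(len(detail) - fovea_w + 1)
    return min(positions, key=lambda q: frame_error(detail, q, fovea_w, acuity))
```

A full solver would also penalize fovea movement between frames, which is what makes the sequence-level problem nontrivial.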

Projection-based Visualization of Tangential Deformation of Nonrigid Surface by Deformation Estimation Using Infrared Texture

  • Parinya Punpongsanon, Daisuke Iwai, and Kosuke Sato : Projection-based Visualization of Tangential Deformation of Nonrigid Surface by Deformation Estimation Using Infrared Texture, Virtual Reality, Vol. 19, No. 1, pp. 45-56, 2015.

In this paper, we propose a projection-based mixed reality system that visualizes the tangential deformation of a nonrigid surface by superimposing graphics directly onto the surface by projected imagery. The superimposed graphics are deformed according to the surface deformation. To achieve this goal, we develop a computer vision technique that estimates the tangential deformation by measuring the frame-by-frame movement of an infrared (IR) texture on the surface. IR ink, which can be captured by an IR camera under IR light but is invisible to the human eye, is used to provide the surface texture. Consequently, the texture does not degrade the image quality of the augmented graphics. The proposed technique individually measures the surface motion between two successive frames; therefore, it does not suffer from occlusions caused by interactions and allows touching, pushing, pulling, pinching, and so on. The moving least squares technique interpolates the measured result to estimate denser surface deformation. The proposed method relies only on the apparent motion measurement; thus, it is not limited to a specific deformation characteristic, but is flexible for multiple deformable materials, such as viscoelastic and elastic materials. Experiments confirm that, with the proposed method, we can visualize the surface deformation of various materials by projected illumination, even when the user's hand occludes the surface from the camera.
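The densification step can be sketched with a much-simplified scattered-data interpolator; inverse-distance weighting stands in here for the paper's moving least squares formulation, and all names are illustrative:

```python
import numpy as np

def interpolate_displacement(query, control_pts, control_disp, eps=1e-8):
    """Densify sparse surface-motion measurements: weight each measured
    displacement by inverse squared distance to the query point."""
    d2 = np.sum((np.asarray(control_pts) - np.asarray(query)) ** 2, axis=1) + eps
    w = 1.0 / d2
    return (w[:, None] * np.asarray(control_disp)).sum(axis=0) / w.sum()
```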

Robust, Error-Tolerant Photometric Projector Compensation

  • Anselm Grundhöfer and Daisuke Iwai : Robust, Error-Tolerant Photometric Projector Compensation, IEEE Transactions on Image Processing, Vol. 24, No. 12, pp. 5086-5099, 2015.

We propose a novel error-tolerant optimization approach to generate a high-quality photometrically compensated projection. The applied non-linear color mapping function does not require radiometric pre-calibration of cameras or projectors. This characteristic improves the compensation quality compared with related linear methods if this approach is used with devices that apply complex color processing, such as single-chip digital light processing projectors. Our approach consists of a sparse sampling of the projector's color gamut and non-linear scattered data interpolation to generate the per-pixel mapping from projector to camera colors in real time. To avoid out-of-gamut artifacts, the input image's luminance is automatically adjusted locally in an optional offline optimization step that maximizes the achievable contrast while preserving smooth input gradients without significant clipping errors. To minimize the appearance of color artifacts at high-frequency reflectance changes of the surface due to usually unavoidable slight projector vibrations and movement (drift), we show that a drift measurement and analysis step, when combined with per-pixel compensation image optimization, significantly decreases the visibility of such artifacts.
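The sparse-sampling-plus-interpolation idea can be sketched in 1-D: probe the projector with a few input levels, record the camera response, then invert by interpolation. The gamma-like response below is a made-up stand-in for a real measurement:

```python
import numpy as np

probe_in = np.linspace(0.0, 1.0, 9)   # sparse projector input levels
measured = probe_in ** 2.2            # hypothetical camera readings (monotonic)

def compensate(target):
    """Invert the sampled response: interpolate input as a function of output,
    so that projecting compensate(t) is captured as approximately t."""
    return np.interp(target, measured, probe_in)
```

The real system does this per pixel in 3-D color space with non-linear scattered data interpolation; `np.interp` requires only that the sampled response be monotonic.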

SoftAR: Visually Manipulating Haptic Softness Perception in Spatial Augmented Reality

  • Parinya Punpongsanon, Daisuke Iwai, and Kosuke Sato : SoftAR: Visually Manipulating Haptic Softness Perception in Spatial Augmented Reality, IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE International Symposium on Mixed and Augmented Reality), Vol. 21, No. 11, pp. 1279-1288, 2015.

We present SoftAR, a novel spatial augmented reality (AR) technique based on a pseudo-haptics mechanism that visually manipulates the sense of softness perceived by a user pushing a soft physical object. Considering the limitations of projection-based approaches that change only the surface appearance of a physical object, we propose two projection visual effects, i.e., surface deformation effect (SDE) and body appearance effect (BAE), on the basis of the observations of humans pushing physical objects. The SDE visualizes a two-dimensional deformation of the object surface with a controlled softness parameter, and BAE changes the color of the pushing hand. Through psychophysical experiments, we confirm that the SDE can manipulate softness perception such that the participant perceives significantly greater softness than the actual softness. Furthermore, fBAE, in which BAE is applied only for the finger area, significantly enhances manipulation of the perception of softness. We create a computational model that estimates perceived softness when SDE+fBAE is applied. We construct a prototype SoftAR system in which two application frameworks are implemented. The softness adjustment allows a user to adjust the softness parameter of a physical object, and the softness transfer allows the user to replace the softness with that of another object.

Radiometric Compensation for Cooperative Distributed Multi-Projection System through 2-DOF Distributed Control

  • Jun Tsukamoto, Daisuke Iwai, and Kenji Kashima : Radiometric Compensation for Cooperative Distributed Multi-Projection System through 2-DOF Distributed Control, IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE International Symposium on Mixed and Augmented Reality), Vol. 21, No. 11, pp. 1221-1229, 2015.

This paper proposes a novel radiometric compensation technique for a cooperative projection system based on distributed optimization. To achieve high scalability and robustness, we assume a cooperative projection environment in which (1) each projector has no information about the other projectors or the target images, (2) the camera has no information about the projectors either, while having the target images, and (3) only broadcast communication from the camera to the projectors is allowed, to suppress the data transfer bandwidth. To this end, we first investigate a distributed-optimization-based feedback mechanism that is suitable for the required decentralized information processing environment. Next, we show that this mechanism works well for still image projection, but not for moving images, due to its lack of dynamic responsiveness. To overcome this issue, we propose to implement an additional feedforward mechanism. Such a two-degree-of-freedom (2-DOF) control structure is well known in the control engineering community as a typical method to simultaneously enhance both disturbance rejection and reference tracking capability. We theoretically guarantee and experimentally demonstrate that this 2-DOF structure yields moving image projection accuracy that surpasses the best performance achievable by the distributed optimization mechanism alone.
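The benefit of the 2-DOF structure can be sketched with a scalar toy plant: feedback alone (an integrator) lags a moving target, while adding a feedforward guess based on a nominal gain removes most of the lag. All gains and the plant model below are illustrative, not the paper's system:

```python
def run(targets, r=0.8, k=0.5, r_nom=1.0, feedforward=False):
    """Track a sequence of target levels. Camera observes y = r * output,
    where r is the true (unknown to the controller) surface gain."""
    p, errs = 0.0, []
    for t in targets:
        ff = t / r_nom if feedforward else 0.0   # feedforward from nominal model
        y = r * (p + ff)                         # observed projection result
        e = t - y
        errs.append(abs(e))
        p += k * e                               # integral feedback correction
    return sum(errs) / len(errs)

targets = [0.1 * i for i in range(20)]           # steadily moving target level
# run(targets, feedforward=True) tracks the ramp with much smaller mean error
# than run(targets, feedforward=False).
```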

Extended Depth-of-Field Projector by Fast Focal Sweep Projection

  • Daisuke Iwai, Shoichiro Mihara, and Kosuke Sato : Extended Depth-of-Field Projector by Fast Focal Sweep Projection, IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE Virtual Reality 2015), Vol. 21, No. 4, pp. 462-470, 2015.

A simple and cost-efficient method for extending a projector's depth-of-field (DOF) is proposed. By leveraging liquid lens technology, we can periodically modulate the focal length of a projector at a frequency that is higher than the critical flicker fusion (CFF) frequency. Fast periodic focal length modulation results in forward and backward sweeping of focusing distance. Fast focal sweep projection makes the point spread function (PSF) of each projected pixel integrated over a sweep period (IPSF; integrated PSF) nearly invariant to the distance from the projector to the projection surface as long as it is positioned within the sweep range. This modulation is not perceivable by human observers. Once we compensate projection images for the IPSF, the projected results can be focused at any point within the range. Consequently, the proposed method requires only a single offline PSF measurement; thus, it is an open-loop process. We have proved the approximate invariance of the projector's IPSF both numerically and experimentally. Through experiments using a prototype system, we have confirmed that the image quality of the proposed method is superior to that of normal projection with fixed focal length. In addition, we demonstrate that a structured light pattern projection technique using the proposed method can measure the shape of an object with large depth variances more accurately than normal projection techniques.
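The approximate depth-invariance of the IPSF can be checked with a small numerical sketch. Gaussian defocus kernels and the sweep range below are illustrative stand-ins for a real projector's optics:

```python
import numpy as np

x = np.linspace(-3, 3, 601)

def psf(sigma):
    """Normalized 1-D Gaussian defocus kernel."""
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def ipsf(depth, sweep=np.linspace(-1.0, 1.0, 201)):
    """Average the instantaneous PSFs over one focal sweep; blur radius grows
    with the distance between the surface depth and the current focal plane."""
    sigmas = 0.05 + np.abs(depth - sweep)
    return np.mean([psf(s) for s in sigmas], axis=0)

near, mid = ipsf(-0.3), ipsf(0.0)
# The two IPSFs are nearly identical, whereas fixed-focus PSFs at the same
# two depths, psf(0.05) vs. psf(0.35), differ strongly.
```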

Shadow Removal of Projected Imagery by Occluder Shape Measurement in a Multiple Overlapping Projection System

  • Daisuke Iwai, Momoyo Nagase, and Kosuke Sato : Shadow Removal of Projected Imagery by Occluder Shape Measurement in a Multiple Overlapping Projection System, Virtual Reality, Vol. 18, No. 4, pp. 245-254, 2014.

This paper presents a shadow removal technique for a multiple overlapping projection system. In particular, this paper deals with situations where cameras cannot be placed between the occluder and the projection surface. We apply a synthetic aperture capturing technique to estimate the appearance of the projection surface, and a visual hull reconstruction technique to measure the shape of the occluder. Once the shape is acquired, shadow regions on the surface can be estimated. The proposed shadow removal technique allows users to balance between the following two criteria: the likelihood of new shadow emergence and the spatial resolution of the projected results. Through a real projection experiment, we evaluate the proposed shadow removal technique.

Combining colour and temperature: A blue object is more likely to be judged as warm than a red object

  • Hsin-Ni Ho, Daisuke Iwai, Yuki Yoshikawa, Junji Watanabe, and Shin'ya Nishida : Combining colour and temperature: A blue object is more likely to be judged as warm than a red object, Scientific Reports, Vol. 4, Article No. 5527, 2014.

It is commonly believed that reddish colour induces warm feelings while bluish colour induces cold feelings. We, however, demonstrate an opposite effect when the temperature information is acquired by direct touch. Experiment 1 found that a red object, relative to a blue object, raises the lowest temperature required for an object to feel warm, indicating that a blue object is more likely to be judged as warm than a red object of the same physical temperature. Experiment 2 showed that hand colour also affects temperature judgment, with the direction of the effect opposite to object colours. This study provides the first demonstration that colour can modulate temperature judgments when the temperature information is acquired by direct touch. The effects apparently oppose the common conception of red-hot/blue-cold association. We interpret this phenomenon in terms of “Anti-Bayesian” integration, which suggests that the brain integrates direct temperature input with prior expectations about the temperature relationship between object and hand in a way that emphasizes the contrast between the two.

Projection Screen Reflectance Control for High Contrast Display using Photochromic Compounds and UV LEDs

  • Daisuke Iwai, Shoichi Takeda, Naoto Hino, and Kosuke Sato : Projection Screen Reflectance Control for High Contrast Display using Photochromic Compounds and UV LEDs, Optics Express, Vol. 22, No. 11, pp. 13492-13506, 2014.

This paper presents the first proof-of-concept implementation, and the underlying principle, of a projection display whose contrast does not decrease even in the presence of inter-reflection of projection light or environmental light. We propose the use of photochromic compounds (PhC) to control the reflectance of a projection surface. PhC changes color chemically when exposed to UV light. A PhC is applied to a surface to control its reflectance by radiating UV light from a UV-LED array. An image is projected from a visible-light projector onto the surface to boost the contrast. The proof-of-concept experiment shows that the prototype system achieves approximately three times higher contrast than a projection-only system under natural light.
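The contrast gain follows directly from the reflectance model. In this back-of-the-envelope sketch the luminance values are illustrative; they are chosen so the ratio comes out to 3, echoing the roughly threefold gain reported for the prototype:

```python
def contrast(p_max, p_min, ambient, r_bright=1.0, r_dark=1.0):
    """Display contrast under ambient light: each observed level is the
    surface reflectance times (projected + ambient) illuminance."""
    white = r_bright * (p_max + ambient)
    black = r_dark * (p_min + ambient)
    return white / black

plain = contrast(100.0, 1.0, 10.0)                    # uniform white screen
boosted = contrast(100.0, 1.0, 10.0, r_dark=1 / 3)    # PhC-darkened black regions
# Darkening the "black" regions to one third reflectance triples the contrast.
```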

Artifact Reduction in Radiometric Compensation of Projector-Camera Systems for Steep Reflectance Variations

  • Shoichiro Mihara, Daisuke Iwai, and Kosuke Sato : Artifact Reduction in Radiometric Compensation of Projector-Camera Systems for Steep Reflectance Variations, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 24, No. 9, pp. 1631-1638, 2014.

In this paper, we propose a novel radiometric compensation method that applies a high-spatial-resolution camera to a projector-camera system to reduce the artifacts around the regions where the reflectance of the projection surface changes steeply. The proposed method measures the reflection in the region of a single projector pixel on a projection surface with multiple camera pixels. From the measurement, it computes multiple color-mixing matrices, each of which represents a color space conversion between each camera and the projector pixels. Using these matrices, we calculate the optimal projection color by applying the linear least squares method, so that the displayed color in the projector pixel region is as close as possible to the target appearance. Through projection experiments, we confirm that our proposed method reduces the artifacts around the regions where the reflectance changes steeply, when compared with other conventional compensation methods.
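The per-pixel least-squares step can be sketched in a scalar-intensity form: one projector pixel covers several camera pixels whose reflectances differ, and the projector value minimizing the summed squared error has a closed form. The numbers are illustrative:

```python
import numpy as np

def optimal_projection(v, t):
    """Choose the projector value p minimizing sum_i (v_i * p - t_i)^2,
    where v_i is the reflectance seen by camera pixel i and t_i its target.
    Closed-form linear least squares: p = (v . t) / (v . v)."""
    v, t = np.asarray(v, float), np.asarray(t, float)
    return float(v @ t / (v @ v))

# A projector pixel straddling a dark/bright reflectance edge:
p = optimal_projection(v=[0.2, 1.0], t=[0.1, 0.5])
```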

Augmenting Physical Avatars Using Projector Based Illumination

  • Amit Bermano, Philipp Brüschweiler, Anselm Grundhöfer, Daisuke Iwai, Bernd Bickel, and Markus Gross : Augmenting Physical Avatars Using Projector Based Illumination, ACM Transactions on Graphics, Vol. 32, No. 6, Article 189, 10 pages, 2013. (Proceedings of SIGGRAPH Asia)

Animated animatronic figures are a unique way to give physical presence to a character. However, their movement and expressions are often limited due to mechanical constraints. In this paper, we propose a complete process for augmenting physical avatars using projector-based illumination, significantly increasing their expressiveness. Given an input animation, the system decomposes the motion into low-frequency motion that can be physically reproduced by the animatronic head and high-frequency details that are added using projected shading. At the core is a spatio-temporal optimization process that compresses the motion in gradient space, ensuring faithful motion replay while respecting the physical limitations of the system. We also propose a complete multi-camera and projection system, including a novel defocused projection and subsurface scattering compensation scheme. The result of our system is a highly expressive physical avatar that features facial details and motion otherwise unattainable due to physical constraints.
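The low-/high-frequency decomposition can be sketched with a simple moving-average split; this stands in for the paper's spatio-temporal gradient-space optimization, and the window size is an illustrative choice:

```python
def split_motion(samples, window=5):
    """Split a motion curve into a low-frequency part (for the animatronic
    head) and a high-frequency residual (for projected shading)."""
    half = window // 2
    low = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        low.append(sum(samples[lo:hi]) / (hi - lo))   # local moving average
    high = [s - l for s, l in zip(samples, low)]      # residual detail
    return low, high
# By construction, low + high reconstructs the original motion exactly.
```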

View Management of Projected Labels on Nonplanar and Textured Surfaces

  • Daisuke Iwai, Tatsunori Yabiki, and Kosuke Sato : View Management of Projected Labels on Nonplanar and Textured Surfaces, IEEE Transactions on Visualization and Computer Graphics, Vol. 19, No. 8, pp. 1415-1424, 2013.

This paper presents a new label layout technique for projection-based augmented reality (AR) that determines the placement of each label directly projected onto an associated physical object with a surface that is normally inappropriate for projection (i.e., nonplanar and textured). Central to our technique is a new legibility estimation method that evaluates how easily people can read projected characters from arbitrary viewpoints. The estimation method relies on the results of a psychophysical study that we conducted to investigate the legibility of projected characters on various types of surfaces that deform their shapes, decrease their contrasts, or cast shadows on them. Our technique computes a label layout by minimizing the energy function using a genetic algorithm (GA). The terms in the function quantitatively evaluate different aspects of the layout quality. Conventional label layout solvers evaluate anchor regions and leader lines. In addition to these evaluations, we design our energy function to deal with the following unique factors, which are inherent in projection-based AR applications: the estimated legibility value and the disconnection of the projected leader line. The results of our subjective experiment showed that the proposed technique could significantly improve the projected label layout.
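The energy-minimization formulation can be sketched in miniature: a weighted sum of a leader-line term and a legibility penalty, minimized over candidate positions. The weights, terms, and exhaustive search below are toy stand-ins for the paper's GA over a richer energy function:

```python
def energy(pos, anchor, legibility):
    """Toy layout energy: leader-line length plus a legibility penalty
    (weights 1.0 and 5.0 are illustrative)."""
    leader = abs(pos[0] - anchor[0]) + abs(pos[1] - anchor[1])
    return 1.0 * leader + 5.0 * (1.0 - legibility[pos])

def best_layout(anchor, legibility):
    """Pick the candidate position with minimal energy.
    legibility: dict mapping candidate (x, y) positions to scores in [0, 1]."""
    return min(legibility, key=lambda pos: energy(pos, anchor, legibility))
```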

Dynamic Defocus and Occlusion Compensation of Projected Imagery by Model-Based Optimal Projector Selection in Multi-projection Environment

  • Momoyo Nagase, Daisuke Iwai, and Kosuke Sato : Dynamic Defocus and Occlusion Compensation of Projected Imagery by Model-Based Optimal Projector Selection in Multi-projection Environment, Virtual Reality, Springer-Verlag London Limited, Vol.15, No.2, pp.119-132, 2011. (Online First, 2010)

This paper presents a novel model-based approach to dynamic defocus and occlusion compensation in a multi-projection environment. Conventional defocus compensation research applies appearance-based methods, which need a point spread function (PSF) calibration whenever the position or orientation of the object to be projected changes, and thus cannot be applied to interactive applications in which the object dynamically moves. In contrast, we propose a model-based method in which PSF and geometric calibrations are required only once in advance; the projector's PSF is then computed online from the geometric relationship between the projector and the object without any additional calibration. We propose to distinguish oblique blur (the loss of high-spatial-frequency components according to the incidence angle of the projection light) from defocus blur and to introduce it into the PSF computation. For each part of the object surface, we select the optimal projector that preserves the largest amount of high-spatial-frequency components of the original image to realize defocus-free projection. The geometric relationship can also be used to eliminate the cast shadows of the projection images in the multi-projection environment. Our method is particularly useful in interactive systems because the movement of the object (and consequently the geometric relationship between each projector and the object) is usually measured by an attached tracking sensor. This paper describes the details of the proposed approach and a prototype implementation. We performed two proof-of-concept experiments to show the feasibility of our approach.
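The per-point projector selection can be sketched with a toy scoring rule combining a defocus term and an oblique (incidence-angle) term. The scoring formula is illustrative, not the paper's PSF model:

```python
import math

def score(dist_to_focal_plane, incidence_deg):
    """Detail preserved by a projector at a surface point: sharper near its
    focal plane, weaker at grazing incidence (illustrative model)."""
    defocus = 1.0 / (1.0 + abs(dist_to_focal_plane))
    oblique = math.cos(math.radians(incidence_deg))
    return defocus * oblique

def select_projector(candidates):
    """candidates: list of (projector_id, dist_to_focal_plane, incidence_deg);
    return the id of the projector preserving the most detail."""
    return max(candidates, key=lambda c: score(c[1], c[2]))[0]
```

A slightly defocused but frontal projector can beat a perfectly focused one projecting at a steep angle, which is why oblique blur must enter the model.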

Interactive Bookshelf Surface for In Situ Book Searching and Storing Support

  • Kazuhiro Matsushita, Daisuke Iwai, and Kosuke Sato, "Interactive Bookshelf Surface for In Situ Book Searching and Storing Support", In Proceedings of Augmented Human International Conference, pp.2:1-2:8, 2011.

We propose an interactive bookshelf surface to augment the human ability for in situ book searching and storing. In book searching support, when a user touches the edge of the bookshelf, the cover image of the stored book located above the touched position is projected directly onto the book spine. As a result, the user can search for a desired book by sliding his or her finger across the shelf edge. In book storing support, when a user brings a book close to the bookshelf, the place where the book should be stored is visually highlighted by a projection light. This paper also presents the sensing technologies used to achieve the above-mentioned interaction techniques. In addition, by considering the properties of the human visual system, we propose a simple visual effect to reduce the legibility degradation of the projected image contents caused by the complex textures and geometric irregularities of the spines. We confirmed the feasibility of the system and the effectiveness of the proposed interaction techniques through user studies.

Snail Light Projector: Interaction with Virtual Projection Light in Hyper-Slow Propagation Speed

  • Keisuke Matsuzaki, Daisuke Iwai, and Kosuke Sato, "Snail Light Projector: Interaction with Virtual Projection Light in Hyper-Slow Propagation Speed", In Proc. of International Conference on Advances in Computer Entertainment Technology (ACE) (poster), 2010.

We propose a novel video projection system, the Snail Light Projector, in which the light emitted from a video projector travels at a virtually hyper-slow speed. We define hyper-slow light as light that forms a spatio-temporal volume of images whose emission times differ between the video projector lens and the projection screen. As a result, the user can browse the images by moving a projection screen, and control the emitted light by moving a handheld projector. We believe the Snail Light Projector can be an innovative entertainment system: by achieving interaction with hyper-slow light, it enables the user to directly experience a virtual physical phenomenon.
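One plausible rendering scheme for hyper-slow light is a frame buffer indexed by travel time: the frame visible at screen distance d is the one emitted d / v seconds ago. The frame rate, speed, and class below are assumptions for illustration, not the paper's implementation:

```python
from collections import deque

FPS, V = 30.0, 0.5   # assumed frame rate and virtual light speed (m/s)

class SnailLight:
    """Buffer of emitted frames, newest first; look up by screen distance."""
    def __init__(self, max_frames=300):
        self.frames = deque(maxlen=max_frames)

    def emit(self, frame):
        self.frames.appendleft(frame)

    def visible_at(self, distance):
        idx = int(distance / V * FPS)   # frames of travel delay
        return self.frames[idx] if idx < len(self.frames) else None
```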

Document Search Support by Making Physical Documents Transparent in Projection-Based Mixed Reality

  • Daisuke Iwai and Kosuke Sato, "Document Search Support by Making Physical Documents Transparent in Projection-Based Mixed Reality", Virtual Reality, Springer-Verlag London Limited, 2010. (Online First)

This paper presents Limpid Desk, which supports document search on a physical desktop by making the upper layer of a document stack transparent in a projection-based mixed reality environment. A user can visually access a lower-layer document without physically removing the upper documents. This is accomplished by superimposing the cover textures of lower-layer documents on the upper documents with projected imagery. This paper introduces a method of generating projection images that make physical documents transparent. Furthermore, a touch sensing method based on thermal image processing is proposed for the system's input interface. Areas touched by a user on physical documents can be detected without any user-worn or handheld devices. This interface allows a user to select a stack to be made transparent by a simple touch gesture. Three document search support techniques are realized using the system. User studies are conducted and the results show the effectiveness of the proposed techniques.
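The transparentizing projection can be sketched as a radiometric inversion: to make the top document appear as the lower texture t_low, solve the reflection model for the projector value. The reflection model and constants are illustrative simplifications:

```python
import numpy as np

def transparent_projection(t_low, r_top, ambient=0.05):
    """Projector image making the top document show the lower texture:
    solve r_top * (p + ambient) = t_low for p, clipped to projector range.
    Clipping marks where the effect fails (e.g., dark target on dark paper)."""
    p = t_low / r_top - ambient
    return np.clip(p, 0.0, 1.0)
```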

Optical Superimposition of Infrared Thermography through Video Projection

  • Daisuke Iwai and Kosuke Sato, "Optical Superimposition of Infrared Thermography through Video Projection", Infrared Physics & Technology, Elsevier, Vol. 53, No. 3, pp. 162-172, 2010.

This research explores a novel infrared thermography visualization technique in which a sequence of captured thermal images is optically and simultaneously superimposed onto the target object via video projection in real time. In conventional thermography visualization, observers have to frequently move their eyes between the object and a 2D screen where the thermal image is displayed. In contrast, in the proposed method the heat distribution of the object's surface emerges directly on its physical surface. As a result, the observer can intuitively understand the object's heat information just by looking at it in real space. This paper explains the methods of geometric registration and radiometric compensation of the captured thermal image, which are required before video projection. Furthermore, several projection results are shown to validate the intuitiveness and usefulness of the proposed visualization method.

Dynamic Control of Multiple Focal-Plane Projections for Eliminating Defocus and Occlusion

  • Momoyo Nagase, Daisuke Iwai, and Kosuke Sato, "Dynamic Control of Multiple Focal-Plane Projections for Eliminating Defocus and Occlusion", In Proc. of IEEE VR (poster), pp. 293-294, 2010.

This paper presents a novel dynamic control of multiple focal-plane projections. Our approach multiplexes the projectors' focal planes so that all the displayed images are focused. The projectors are placed at various locations with various directions in the scene so that there is no occlusion of the projection light even for dynamically moving objects. Our approach compensates for defocus blur, oblique blur (loss of high-spatial-frequency components according to the incidence angle of the projection light), and cast shadows of projected images. This is achieved by selecting an optimal projector that can display the finest image for each point of the object surface by comparing the projectors' point spread functions (PSFs). The proposed approach requires geometric calibration to be performed only once in advance. Our approach can compute PSFs without any additional calibrations even when the surface moves, if it is tracked by an external sensor. The method is particularly useful in interactive systems in which the object to be projected is tracked.

Return To The Top Page

PALMbit-Silhouette:A User Interface by Superimposing Palm-Silhouette to Access Wall Displays

  • Goshiro Yamamoto, Huichuan Xu, Kazuto Ikeda, and Kosuke Sato: PALMbit-Silhouette: A User Interface by Superimposing Palm-Silhouette to Access Wall Displays, Lecture Notes in Computer Science (LNCS), vol.5611, pp.281-290, 2009. (In Proceedings of HCI International 2009, Part I)

PALMbit-Silhouette:A User Interface by Superimposing Palm-Silhouette to Access Wall Displays
  In this paper, we propose a new type of user interface using a palm silhouette, which realizes intuitive interaction with ubiquitous displays that are located far from the user or that hinder direct operation. In the area of augmented-reality-based user interfaces, besides interfaces that allow users to operate virtual objects directly, interfaces that let users interact remotely from a short distance are necessary, especially in environments where multiple displays are shared in public areas. We propose two gesture-related functions using the palm-silhouette interface: a grasp-and-release operation and a wrist-rotating operation, which represent "selecting" and "adjustment" respectively. The usability of the proposed palm-silhouette interface was evaluated by an experimental comparison with a conventional arrowhead pointer. We studied and established the design rationale for realizing a rotary-switch operation by utilizing a pseudo-haptic visual cue.
  

Return To The Top Page

Luminance Distribution Control based on the Separation of Direct and Indirect Components

  • Osamu Nasu, Shinsaku Hiura and Kosuke Sato, "Luminance Distribution Control based on the Separation of Direct and Indirect Components", In Proc. of PROCAMS2009, pp. 1-2, 2009.

Luminance Distribution Control based on the Separation of Direct and Indirect Components
  We propose a method to control the luminance distribution of a scene by modeling the light propagation with direct and indirect components separately. To reduce the measurement time and the amount of data, we incorporate the geometric locality of the direct component and the narrow spatial bandwidth of the indirect component into the light transport model. Since the luminance distribution of the scene for a given illumination pattern is reproduced quickly and precisely, we can compensate the illumination pattern to generate the required luminance distribution of the scene without actual projection.
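The compensation step above amounts to inverting a light transport model. The following is a minimal sketch under assumed conditions: a toy transport matrix over 16 patches in which the direct component is diagonal (geometric locality) and the indirect component is a weak, spatially smooth coupling (narrow spatial bandwidth); the sizes and constants are illustrative, not from the paper.

```python
import numpy as np

# Toy light-transport model: reflected luminance l = (T_direct + T_indirect) @ p
# for an illumination pattern p over n surface patches.
rng = np.random.default_rng(0)
n = 16
T_direct = np.diag(0.8 + 0.1 * rng.random(n))           # local direct reflection
smooth = np.exp(-np.subtract.outer(np.arange(n), np.arange(n)) ** 2 / 50.0)
T_indirect = 0.05 * smooth                               # weak, smooth inter-reflection
T = T_direct + T_indirect

# Compensation: solve for the pattern p that reproduces the required
# luminance distribution, then clip to the projector's physical range.
target = np.full(n, 0.5)
p = np.clip(np.linalg.solve(T, target), 0.0, 1.0)
residual = float(np.abs(T @ p - target).max())
```

Since `T` is measured once, any target distribution can be previewed as `T @ p` without actual projection, which is the point of the model-based approach.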
  

Return To The Top Page

Superimposing Dynamic Range

  • Oliver Bimber and Daisuke Iwai, "Superimposing Dynamic Range", ACM Transactions on Graphics, vol.27, no.5, pp.150:1-150:8 (2008) (Proceedings of SIGGRAPH ASIA)

Superimposing Dynamic Range   It was only recently that high dynamic range (HDR) displays were introduced which could present content over several orders of magnitude between minimum and maximum luminance (e.g., [Seetzen et al. 2004]). All of the approaches share three common properties. First, they apply a transmissive image modulation (either through transparencies or LCD/LCoS panels) and consequently suffer from a relatively low light throughput (e.g., regular color / monochrome LCD panels transmit less than 3-6% / 15-30% of light) and therefore require exceptionally bright backlights. Second, one of the two modulation images is of low resolution and blurred, in order to avoid artifacts such as moiré patterns due to the misalignment of the two modulators, as well as to realize acceptable frame rates. Thus, high contrast values can only be achieved at the resolution of the low-frequency image. Third, since one of the two images is monochrome (mainly to reach a high peak luminance), only luminance is modulated, while chrominance modulation for extending the color space is in some cases considered future work. We present a simple and low-cost method of viewing static HDR content based on reflective image modulation.
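The dual-modulation principle underlying such displays (whether transmissive or, as here, reflective) can be sketched as follows. This is a generic square-root split, not the paper's specific algorithm; the box-blur width and the 100:1 toy scene are our assumptions, with the blur standing in for the low-resolution modulator.

```python
import numpy as np

def split_hdr(image, blur=5):
    """Dual-modulation split: displayed luminance = base * detail.
    The square root balances dynamic range between the two layers;
    the base layer is low-pass filtered to mimic a low-resolution,
    blurred modulator, and the detail layer corrects the residual."""
    root = np.sqrt(image)
    k = np.ones(blur) / blur                       # separable box blur
    base = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, root)
    base = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, base)
    base = np.clip(base, 1e-4, None)
    detail = np.clip(image / base, 0.0, None)      # high-resolution compensation
    return base, detail

hdr = np.ones((32, 32))
hdr[8:24, 8:24] = 100.0                            # 100:1 contrast toy scene
base, detail = split_hdr(hdr)
recon = base * detail
```

Because each layer carries roughly the square root of the total contrast, neither modulator alone needs the full dynamic range of the target image.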

Return To The Top Page

Limpid Desk: See-Through Access to Disorderly Desktop in Projection-Based Mixed Reality

  • Daisuke Iwai and Kosuke Sato, "Limpid Desk: See-Through Access to Disorderly Desktop in Projection-Based Mixed Reality", In Proc. of VRST'06, ACM, pp.112-115, 2006.

Limpid Desk: See-Through Access to Disorderly Desktop in Projection-Based Mixed Reality
  In computer vision, the background subtraction method is widely used to extract a changing region in a scene. However, it is difficult to simply apply this method to a scene with a moving background object, because such an object may be extracted as a changing region. Therefore, a method has been proposed to estimate both the current background image and the occluding object region simultaneously by using an eigenspace-based background representation. On the other hand, image completion methods using an eigenspace have been extended to a non-linear subspace using the kernel trick; however, such existing methods incur a large computational cost. Therefore, in this paper, we propose a method for rapid simultaneous estimation of a background image and an occluded region in a non-linear space, using the kernel trick and iterative projection.
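The simultaneous background/occlusion estimation by iterative projection can be sketched as below. Note this sketch uses linear PCA as a simple stand-in for the paper's kernel non-linear subspace, and the scene, thresholds, and subspace size are our illustrative assumptions.

```python
import numpy as np

def estimate_background(frames, test, n_iter=10, thresh=30.0):
    """Iteratively project onto an eigenspace learned from clean frames:
    pixels far from the current estimate are treated as occluded and
    filled in from the estimate before the next projection."""
    mean = frames.mean(axis=0)
    U, s, _ = np.linalg.svd((frames - mean).T, full_matrices=False)
    k = max(1, int((s > 1e-6 * s[0]).sum()))        # keep significant modes
    basis = U[:, :k]
    est = mean.copy()
    for _ in range(n_iter):
        mask = np.abs(test - est) < thresh           # likely-background pixels
        filled = np.where(mask, test, est)           # fill occlusion with estimate
        est = mean + basis @ (basis.T @ (filled - mean))
    occluded = np.abs(test - est) >= thresh
    return est, occluded

# Toy scene: the background varies along one illumination mode.
rng = np.random.default_rng(1)
pattern = rng.normal(0.0, 10.0, 100)
frames = np.array([100.0 + a * pattern for a in np.linspace(-1, 1, 20)])
bg = 100.0 + 0.3 * pattern
test = bg.copy()
test[:10] += 200.0                                   # bright occluding object
est, occluded = estimate_background(frames, test)
```

Each iteration alternates between classifying pixels and re-projecting, so the background estimate is not corrupted by the occluder even though both are unknown at the start.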
  

Return To The Top Page

PALMbit: A Interface using Own Palms

  • Goshiro Yamamoto and Kosuke Sato, "PALMbit: A Interface using Own Palms" , World Haptics 2007, HOD12, 2007.

PALMbit: A Interface using Own Palms
  The PALMbit, which provides haptic feedback on the user's palm, enables natural operation without wearing or grasping any device, by utilizing the palm as an I/O interface. Operation and information display on the palm are realized with a camera and a tracking projector. PALMbit recognizes fingertip-to-fingertip contact actions as input operations. The user is able to operate intuitively according to his or her own mental body image and the natural haptic feedback. A person has not only a physical body, but also a mental body image and refined senses. In this research, a user interface has been designed in view of these features. Applications, an image viewer and a controller of appliances, demonstrate intuitive hand operations without special gloves.
  

Return To The Top Page

PALMbit: A PALM Interface with a Projector-Camera System

  • Goshiro Yamamoto and Kosuke Sato,"PALMbit: A PALM Interface with Projector-Camera System", UbiComp 2007 Adjunct Proceedings, pp.276-279, Innsbruck, Austria, September 2007

PALMbit: A PALM Interface with a Projector-Camera System
  We propose a ubiquitous projector-camera system, "PALMbit", that enables the user to input several control events using the palm of his or her hand and to output graphic information. In the proposed system, we developed an I/O method for operating digital appliances, i.e., an optical projection onto the palm of the user, which is registered by image processing, and a gesture interface based on finger movements. By projecting CG images of a virtual remote controller onto the palm of the user, the user is able to operate digital equipment intuitively using his/her own mental body image and haptic feedback.
  

Return To The Top Page

Assistance system for designing mirrored surface using projector

  • Daisuke Nakamura, Shinsaku Hiura and Kosuke Sato : Assistance system for designing mirrored surface using projector, proc. SICE2007, pp. 1489-1492, 2007

Assistance system for designing mirrored surface using projector
  We propose a system with a camera and a projector that shows the specular reflection of the surrounding environment on a real object. First, active shape measurement with pattern light projection is performed to rapidly obtain the precise shape of the object. Then, a specular pattern rendered using the shape information is projected onto the real object. Our system also recognizes the status of the work performed by the designer, and shape measurement and visualization are done only when the worker wants to evaluate the shape. The viewpoint of the evaluator is measured using an LED marker and a spherical mirror, and the projected reflection pattern is adjusted for that viewpoint adequately in real time.
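The viewpoint-dependent specular rendering at the heart of such a system can be illustrated with a classic Phong-style specular term; this is a generic stand-in for the paper's environment rendering, and the `shininess` parameter is our assumption.

```python
import numpy as np

def specular_intensity(point, normal, light, eye, shininess=32.0):
    """Phong-style specular term: reflect the light direction about the
    surface normal and compare it with the view direction, so the
    projected highlight can follow the evaluator's measured viewpoint."""
    l = light - point
    l = l / np.linalg.norm(l)
    v = eye - point
    v = v / np.linalg.norm(v)
    r = 2.0 * np.dot(normal, l) * normal - l      # mirror reflection of l
    return max(0.0, float(np.dot(r, v))) ** shininess
```

Re-evaluating this term whenever the tracked viewpoint moves is what lets the projected highlight stay consistent with a real mirrored surface.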
  

Return To The Top Page

User Interface by Virtual Shadow Projection

  • Huichuan Xu, Daisuke Iwai, Shinsaku Hiura and Kosuke Sato : User Interface by Virtual Shadow Projection, SICE-ICASE International Joint Conference 2006, SA14-1, A0897 (2006)

User Interface by Virtual Shadow Projection
  This paper introduces a user interface based on a virtual shadow produced by a projector for a Spatial Augmented Reality (SAR) environment. The shadow is an everyday phenomenon and may help to build an effective and intuitive connection between the physical and digital worlds. Taking advantage of the spatial and optical characteristics of shadows, the user can perform remote interaction through a ubiquitous interface. A prototype system is implemented, and adaptive image processing algorithms for the shadow are proposed. The authors demonstrate in this paper that the shadow has the potential to realize a simple but effective interface between humans and computer systems, which may yield many useful applications without massive device resources.
  

Return To The Top Page

User Interface by Real and Artificial Shadow

  • Huichuan Xu, Ichi Kanaya, Shinsaku Hiura and Kosuke Sato : User Interface by Real and Artificial Shadow, SIGGRAPH2006

User Interface by Real and Artificial Shadow 
  This poster proposes the concept and a prototype of an intuitive user interface based on shadow for indoor environments. The shadow is a common phenomenon in our daily life wherever there is a light source. It always exists, but its potential for connecting the digital and physical worlds has been overlooked. There are several merits of a shadow interface: first, the shadow is familiar from daily life and builds a natural bridge between the digital and physical worlds; second, a shadow-based interaction system is simple and does not require expensive devices; third, the shadow itself is a strong and natural visual feedback cue that helps the user take good command of applications.
  

Return To The Top Page

The HYPERREAL Design System

  • Masaru Hisada, Kazuhiko Takase, Keiko Yamamoto, Ichiroh Kanaya, Kosuke Sato: The HYPERREAL Design System; IEEE VR, 2006.

The HYPERREAL Design System
  This paper presents a novel mixed reality (MR) system for virtually modifying (e.g., denting, engraving, swelling) the shape of real objects by projecting computer-generated shading. Users of this system, which we call HYPERREAL, perceive the real object as if it were actually being deformed when they operate the system to modify the object's shape, while only the illumination pattern on the real object changes. The authors aim to apply this technology to the product design field: designers would be able to evaluate and modify the form of their product more efficiently, effectively, and intuitively with HYPERREAL than with the conventional design process (typically, computer-aided design (CAD) systems and solid mock-ups), since the system provides users with the actuality/presence of a real mock-up and the flexibility of shape data on a computer system, such as a CAD system, all at once.
  

Return To The Top Page

Free-form Shape Design System using Stereoscopic Projector - HYPERREAL 2.0 -

  • Masaru Hisada, Keiko Yamamoto, Ichiroh Kanaya and Kosuke Sato : Free-form Shape Design System using Stereoscopic Projector - HYPERREAL 2.0 -, SICE-ICASE International Joint Conference 2006

Free-form Shape Design System using Stereoscopic Projector - HYPERREAL 2.0 -
  This paper presents a novel mixed reality (MR) system for visual shape modification (e.g., denting, engraving, swelling) of physical objects by projecting computer-generated shading. Users of this system, which we call HYPERREAL 2.0, perceive the real object as if it were actually being deformed when they operate the system to modify the object's shape, while only the illumination pattern on the real object changes. The authors aim to apply this technology to the product design field: designers would be able to evaluate and modify the form of their product more efficiently, effectively, and intuitively with HYPERREAL 2.0 than with the conventional design process (typically, computer-aided design (CAD) systems and solid mock-ups), since the system provides users with the actuality/presence of a physical mock-up and the flexibility of shape data on a computer system, such as a CAD system, all at once.
  

Return To The Top Page

Heat Sensation in Image Creation with Thermal Vision

  • Daisuke Iwai and Kosuke Sato, Heat Sensation in Image Creation with Thermal Vision, ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACE2005), pp. 213-216 (2005).

Heat Sensation in Image Creation with Thermal Vision
  We introduce how to involve the heat sensation in image creation by using thermal vision. We develop "ThermoTablet", which can detect the touch regions of physical input objects on a sensing surface from changes in the temperature distribution of the surface when those objects are hotter or colder than the surface. Image creation applications for painting and image modification are implemented on it. In the painting application, users can directly use physical paintbrushes and airbrushes with hot water instead of paint, and can even use their own fingers, hands, and breath directly. "ThermoInk" transforms the temperature information into a variation of the painting. The image modification application aims at an intuitive user interface for color saturation modification, hue modification, shape modification, and CAD model deformation, which are strongly related to heat sensations.
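The thermal touch detection can be sketched as a simple comparison against a baseline temperature map; the grid size, temperatures, and the `delta` threshold here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def detect_touch(baseline, current, delta=2.0):
    """Touch regions appear where the surface temperature deviates from
    the baseline by more than `delta` degrees in either direction,
    since touching objects may be hotter or colder than the surface."""
    return np.abs(current - baseline) > delta

baseline = np.full((8, 8), 25.0)      # idle surface at 25 degrees C
current = baseline.copy()
current[2:4, 2:4] = 32.0              # warm fingertip contact
current[6, 6] = 10.0                  # cold object contact
mask = detect_touch(baseline, current)
```

Detecting deviation in both directions is what lets the same sensing surface respond to a warm finger and to a cold brush alike.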
  

Return To The Top Page

A Wearable Mixed Reality with an On-board Projector

  • Toshikazu Karitsuka, Kosuke Sato, A Wearable Mixed Reality with an On-board Projector, The 2nd International Symposium on Mixed and Augmented Reality (ISMAR 2003), pp. 321-322 (2003).

A Wearable Mixed Reality with an On-board Projector
  One of the methods for achieving Mixed Reality (MR) displays is the texture projection method using projectors. Another kind of emerging information environment is the wearable information device, which realizes ubiquitous computing. It is very promising to integrate these technologies. Using this kind of fusion system, two or more users can share the same MR environment at the same moment without using HMDs. In this demonstration, we propose a wearable MR system with an on-board projector and introduce some applications of this system.
  

Return To The Top Page