Research

Extended Depth-of-Field Projector by Fast Focal Sweep Projection

  • Daisuke Iwai, Shoichiro Mihara, and Kosuke Sato : Extended Depth-of-Field Projector by Fast Focal Sweep Projection, IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE Virtual Reality 2015), Vol. 21, No. 4, pp. 462-470, 2015.

Extended Depth-of-Field Projector by Fast Focal Sweep Projection   A simple and cost-efficient method for extending a projector's depth-of-field (DOF) is proposed. By leveraging liquid lens technology, we can periodically modulate the focal length of a projector at a frequency that is higher than the critical flicker fusion (CFF) frequency. Fast periodic focal length modulation results in forward and backward sweeping of the focusing distance. Fast focal sweep projection makes the point spread function (PSF) of each projected pixel, integrated over a sweep period (IPSF; integrated PSF), nearly invariant to the distance from the projector to the projection surface as long as the surface is positioned within the sweep range. This modulation is not perceivable by human observers. Once we compensate projection images for the IPSF, the projected results can be focused at any point within the range. Consequently, the proposed method requires only a single offline PSF measurement; thus, it is an open-loop process. We have proved the approximate invariance of the projector's IPSF both numerically and experimentally. Through experiments using a prototype system, we have confirmed that the image quality of the proposed method is superior to that of normal projection with a fixed focal length. In addition, we demonstrate that a structured light pattern projection technique using the proposed method can measure the shape of an object with large depth variances more accurately than normal projection techniques.
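
A minimal numerical sketch of the offline compensation step follows, assuming a single IPSF kernel measured in advance; the names (ipsf, target) and the Wiener-style inverse filter are illustrative, not the authors' implementation:

    # Sketch only: deconvolve the target image with the measured IPSF.
    # `target` and `ipsf` are hypothetical numpy arrays in [0, 1].
    import numpy as np

    def compensate(target, ipsf, snr=100.0):
        """Wiener-style inverse filtering of the target image by the IPSF."""
        H = np.fft.fft2(ipsf, s=target.shape)           # IPSF transfer function
        G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # regularized inverse filter
        comp = np.real(np.fft.ifft2(np.fft.fft2(target) * G))
        return np.clip(comp, 0.0, 1.0)                  # stay in the projector's input range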

Shadow Removal of Projected Imagery by Occluder Shape Measurement in a Multiple Overlapping Projection System

  • Daisuke Iwai, Momoyo Nagase, and Kosuke Sato : Shadow Removal of Projected Imagery by Occluder Shape Measurement in a Multiple Overlapping Projection System, Virtual Reality, Vol. 18, No. 4, pp. 245-254, 2014.

Shadow Removal of Projected Imagery by Occluder Shape Measurement in a Multiple Overlapping Projection System   This paper presents a shadow removal technique for a multiple overlapping projection system. In particular, this paper deals with situations where cameras cannot be placed between the occluder and projection surface. We apply a synthetic aperture capturing technique to estimate the appearance of the projection surface, and a visual hull reconstruction technique to measure the shape of the occluder. Once the shape is acquired, shadow regions on the surface can be estimated. The proposed shadow removal technique allows users to balance between the following two criteria: the likelihood of new shadow emergence and the spatial resolution of the projected results. Through a real projection experiment, we evaluate the proposed shadow removal technique.

Combining colour and temperature: A blue object is more likely to be judged as warm than a red object

  • Hsin-Ni Ho, Daisuke Iwai, Yuki Yoshikawa, Junji Watanabe, and Shin'ya Nishida : Combining colour and temperature: A blue object is more likely to be judged as warm than a red object, Scientific Reports, Vol. 4, Article No. 5527, 2014.

Combining colour and temperature: A blue object is more likely to be judged as warm than a red object   It is commonly believed that reddish colour induces warm feelings while bluish colour induces cold feelings. We, however, demonstrate an opposite effect when the temperature information is acquired by direct touch. Experiment 1 found that a red object, relative to a blue object, raises the lowest temperature required for an object to feel warm, indicating that a blue object is more likely to be judged as warm than a red object of the same physical temperature. Experiment 2 showed that hand colour also affects temperature judgment, with the direction of the effect opposite to object colours. This study provides the first demonstration that colour can modulate temperature judgments when the temperature information is acquired by direct touch. The effects apparently oppose the common conception of red-hot/blue-cold association. We interpret this phenomenon in terms of “Anti-Bayesian” integration, which suggests that the brain integrates direct temperature input with prior expectations about temperature relationship between object and hand in a way that emphasizes the contrast between the two.   

Projection Screen Reflectance Control for High Contrast Display using Photochromic Compounds and UV LEDs

  • Daisuke Iwai, Shoichi Takeda, Naoto Hino, and Kosuke Sato : Projection Screen Reflectance Control for High Contrast Display using Photochromic Compounds and UV LEDs, Optics Express, Vol. 22, No. 11, pp. 13492-13506, 2014.

Projection Screen Reflectance Control for High Contrast Display using Photochromic Compounds and UV LEDs   This paper presents the first proof-of-concept implementation and the principle of a projection display whose contrast does not decrease even in the presence of inter-reflection of projection light or environmental light. We propose the use of photochromic compounds (PhC) to control the reflectance of a projection surface. A PhC changes color chemically when exposed to UV light. A PhC is applied to a surface, and its reflectance is controlled by radiating UV light from a UV-LED array. An image is projected from a visible-light projector onto the surface to boost the contrast. The proof-of-concept experiment shows that the prototype system achieves approximately three times higher contrast than a projection-only system under natural light.

Tracking People with Active Cameras Using Variable Time-Step Decisions

  • Alparslan Yildiz, Noriko Takemura, Maiya Hori, Yoshio Iwai, and Kosuke Sato : Tracking People with Active Cameras Using Variable Time-Step Decisions, IEICE TRANSACTIONS on Information and Systems, Vol.E97-D, No.8, pp. 2124-2130, 2014.

Tracking People with Active Cameras Using Variable Time-Step Decisions   In this study, we introduce a system for tracking multiple people using multiple active cameras. Our main objective is to observe as many targets as possible, at any time, using a limited number of active cameras. In our context, an active camera is a statically located pan-tilt-zoom camera. In this research, we aim to optimize the camera configuration to achieve maximum coverage of the targets. We first devise a method for efficient tracking and estimation of target locations in the environment. Our tracking method is able to track an unknown number of targets and easily estimate multiple future time-steps, which is a requirement for active cameras. Next, we present a variable time-step optimization of the camera configuration that is optimal given the estimated object likelihoods for multiple future frames. We confirmed our results using simulation and real videos, and show that, without introducing any significant computational complexity, active cameras can be used to track and observe multiple targets very effectively.
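
The configuration search can be sketched as a brute-force maximization of expected target coverage over the candidate settings of all cameras; configs, likelihood, and coverage below are hypothetical names, and the exhaustive search is only meant to illustrate the objective:

    # Sketch only: pick one pan-tilt-zoom setting per camera so that the
    # expected number of covered targets is maximal.
    import itertools

    def best_configuration(configs, likelihood, coverage):
        """configs: list (per camera) of candidate settings;
        likelihood: dict mapping a grid cell to the probability a target is there;
        coverage(cfg, cell): hypothetical visibility test returning True/False."""
        best, best_score = None, -1.0
        for combo in itertools.product(*configs):        # joint setting of all cameras
            score = sum(p for cell, p in likelihood.items()
                        if any(coverage(cfg, cell) for cfg in combo))
            if score > best_score:
                best, best_score = combo, score
        return best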

Making Graphical Information Visible in Real Shadows on Interactive Tabletops

  • Mariko Isogawa, Daisuke Iwai, and Kosuke Sato : Making Graphical Information Visible in Real Shadows on Interactive Tabletops, IEEE Transactions on Visualization and Computer Graphics, Vol. 20, No. 9, pp. 1293-1302, 2014.

Making Graphical Information Visible in Real Shadows on Interactive Tabletops   We introduce a shadow-based interface for interactive tabletops. The proposed interface allows a user to browse graphical information by casting the shadow of his/her body, such as a hand, on a tabletop surface. Central to our technique is a new optical design that utilizes polarization in addition to the additive nature of light so that the desired graphical information is displayed only in a shadow area on a tabletop surface. In other words, our technique conceals the graphical information on surfaces other than the shadow area, such as the surface of the occluder and non-shadow areas on the tabletop surface. We combine the proposed shadow-based interface with a multi-touch detection technique to realize a novel interaction technique for interactive tabletops. We implemented a prototype system and conducted proof-of-concept experiments along with a quantitative evaluation to assess the feasibility of the proposed optical design. Finally, we show application systems implemented with the proposed shadow-based interface.

Artifact Reduction in Radiometric Compensation of Projector-Camera Systems for Steep Reflectance Variations

  • Shoichiro Mihara, Daisuke Iwai, and Kosuke Sato : Artifact Reduction in Radiometric Compensation of Projector-Camera Systems for Steep Reflectance Variations, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 24, No. 9, pp. 1631-1638, 2014.

Artifact Reduction in Radiometric Compensation of Projector-Camera Systems for Steep Reflectance Variations   In this paper, we propose a novel radiometric compensation method that applies a high-spatial-resolution camera to a projector-camera system to reduce the artifacts around the regions where the reflectance of the projection surface changes steeply. The proposed method measures the reflection in the region of a single projector pixel on a projection surface with multiple camera pixels. From the measurement, it computes multiple color-mixing matrices, each of which represents a color space conversion between one camera pixel and the projector pixel. Using these matrices, we calculate the optimal projection color by applying the linear least squares method, so that the displayed color in the projector pixel region is as close as possible to the target appearance. Through projection experiments, we confirm that our proposed method reduces the artifacts around the regions where the reflectance changes steeply, when compared with other conventional compensation methods.
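
A minimal sketch of the per-pixel least-squares step, assuming the color-mixing matrices and the target camera colors for one projector pixel have already been measured; all names are illustrative:

    # Sketch only: stack the constraints from every camera pixel covering one
    # projector pixel and solve for the projector color jointly.
    import numpy as np

    def optimal_projection_color(mixing_matrices, target_colors):
        """mixing_matrices: list of 3x3 arrays V_i; target_colors: list of RGB targets t_i.
        Minimizes sum_i ||V_i p - t_i||^2 over the projector input color p."""
        A = np.vstack(mixing_matrices)            # (3k, 3) stacked matrices
        b = np.concatenate(target_colors)         # (3k,)  stacked targets
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.clip(p, 0.0, 1.0)               # feasible projector input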

Tracking People With Active Cameras via Bayesian Risk Formulation

  • Alparslan Yildiz, Noriko Takemura, Yoshio Iwai, and Kosuke Sato : Tracking People With Active Cameras via Bayesian Risk Formulation, IEEJ Transactions on Electronics, Information and Systems (C), Vol. 134, No. 6, pp. 870-877, 2014.

Tracking People With Active Cameras via Bayesian Risk Formulation   In this study, we introduce a system for tracking multiple people using multiple active cameras. Our main objective is to capture as many targets as possible at any time, using a limited number of active cameras. In our context, an active camera is a statically located pan-tilt-zoom camera. The use of active cameras for tracking has not been thoroughly researched, because it is relatively easier to set up and use static cameras. However, there are many properties of active cameras that we can exploit. Our results show that an approximately two-fold increase in relative accuracy can be achieved without any significant increases in computational costs. Our main contributions include removing the necessity for the individual detection of each tracked target, estimating the future states of the system using a simplified fluid simulation, and finally unifying the active camera tracking method using a minimum risk formulation. We also improved the accuracy by developing an efficient method for attracting cameras towards targets located far away from the present camera configuration.

Augmenting Physical Avatars Using Projector Based Illumination

  • Amit Bermano, Philipp Bruschweiler, Anselm Grundhofer, Daisuke Iwai, Bernd Bickel, and Markus Gross : Augmenting Physical Avatars Using Projector Based Illumination, ACM Transactions on Graphics, Vol. 32, No. 6, Article 189, 10 pages, 2013. (Proceedings of SIGGRAPH Asia)

Augmenting Physical Avatars Using Projector Based Illumination   Animated animatronic figures are a unique way to give physical presence to a character. However, their movement and expressions are often limited due to mechanical constraints. In this paper, we propose a complete process for augmenting physical avatars using projector-based illumination, significantly increasing their expressiveness. Given an input animation, the system decomposes the motion into low-frequency motion that can be physically reproduced by the animatronic head and high-frequency details that are added using projected shading. At the core is a spatio-temporal optimization process that compresses the motion in gradient space, ensuring faithful motion replay while respecting the physical limitations of the system. We also propose a complete multi-camera and projection system, including a novel defocused projection and subsurface scattering compensation scheme. The result of our system is a highly expressive physical avatar that features facial details and motion otherwise unattainable due to physical constraints.   

View Management of Projected Labels on Nonplanar and Textured Surfaces

  • Daisuke Iwai, Tatsunori Yabiki, and Kosuke Sato : View Management of Projected Labels on Nonplanar and Textured Surfaces, IEEE Transactions on Visualization and Computer Graphics, Vol. 19, No. 8, pp. 1415-1424, Aug. 2013.

View Management of Projected Labels on Nonplanar and Textured Surfaces   This paper presents a new label layout technique for projection-based augmented reality (AR) that determines the placement of each label directly projected onto an associated physical object with a surface that is normally inappropriate for projection (i.e., nonplanar and textured). Central to our technique is a new legibility estimation method that evaluates how easily people can read projected characters from arbitrary viewpoints. The estimation method relies on the results of a psychophysical study that we conducted to investigate the legibility of projected characters on various types of surfaces that deform their shapes, decrease their contrasts, or cast shadows on them. Our technique computes a label layout by minimizing the energy function using a genetic algorithm (GA). The terms in the function quantitatively evaluate different aspects of the layout quality. Conventional label layout solvers evaluate anchor regions and leader lines. In addition to these evaluations, we design our energy function to deal with the following unique factors, which are inherent in projection-based AR applications: the estimated legibility value and the disconnection of the projected leader line. The results of our subjective experiment showed that the proposed technique could significantly improve the projected label layout.   
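
The layout optimization can be sketched as a standard genetic algorithm over candidate placements for each label; energy() stands in for the energy function described above (legibility, anchor, and leader-line terms), and all parameters below are illustrative:

    # Sketch only: GA minimization of a layout energy; assumes at least two labels.
    import random

    def ga_layout(candidates, energy, pop=40, gens=100, mut=0.1):
        """candidates: list (per label) of candidate placements;
        energy(layout): hypothetical cost combining legibility and leader-line terms."""
        population = [[random.choice(c) for c in candidates] for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=energy)                      # lower energy = better layout
            survivors = population[:pop // 2]
            children = []
            while len(children) < pop - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(candidates))   # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mut:                    # random mutation of one label
                    i = random.randrange(len(candidates))
                    child[i] = random.choice(candidates[i])
                children.append(child)
            population = survivors + children
        return min(population, key=energy)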

Recognizing the ID of Modulated LED Tube Lights by Using Camera Motion Blur

  • Li Chang, Daisuke Iwai, and Kosuke Sato : Recognizing the ID of Modulated LED Tube Lights by Using Camera Motion Blur, IEEJ Transactions on Electrical and Electronic Engineering, Vol. 7, No. S1, pp. S96-S104, 2012.

Recognizing the ID of Modulated LED Tube Lights by Using Camera Motion Blur   In this paper, we present an image‐based ID recognition method for ID‐modulated light‐emitting diode (LED) tube lights by using the motion blur captured by a moving camera. This method is applied to a novel camera‐based indoor positioning system, which can provide exact location for mobile users. In this system, high‐intensity LED tubes are used concurrently as the illumination devices and optical markers. The flashing of each LED lamp is modulated, and the entire tube expresses an ID message, which can be captured by a normal camera installed in a mobile terminal. The flashing occurs at a high frequency and without degrading the illumination function. However, when the exposure time of the camera is longer than the flicker period of the LED lamps, it is difficult to capture the ID pattern. We propose a method that uses motion blur to overcome this limitation. During the period of exposure, if the user manually shakes the camera in the proper direction, a streaked pattern is developed on the captured image frame, which can be used for retrieving an ID number. Moreover, we can also obtain position estimation of the terminal from motion‐blurred images. Experimental results show that with careful operation it is feasible to recognize the ID of LED tubes successfully.
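
The decoding step can be sketched as sampling intensities along the streak left by the flickering tube and thresholding them into bits; the streak endpoints and the mean-based threshold below are hypothetical simplifications:

    # Sketch only: read an ID pattern from a motion-blurred streak.
    import numpy as np

    def decode_streak(gray, p0, p1, bits):
        """gray: grayscale frame as a 2-D array; p0, p1: (row, col) streak endpoints;
        bits: number of samples (ID length) to take along the streak."""
        ts = np.linspace(0.0, 1.0, bits)
        rows = (p0[0] + ts * (p1[0] - p0[0])).astype(int)
        cols = (p0[1] + ts * (p1[1] - p0[1])).astype(int)
        samples = gray[rows, cols]
        return (samples > samples.mean()).astype(int)    # bright segment = 1, dark = 0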

Indoor Navigation System using ID Modulated LED Tube Lights

  • Li Chang and Kosuke Sato : Indoor Navigation System using ID Modulated LED Tube Lights, IEEJ Transactions on Electrical and Electronic Engineering, Vol. 7, No. 5, pp. 514-520, 2012.

Indoor Navigation System using ID Modulated LED Tube Lights   This paper presents the design, implementation, and evaluation of a novel camera-based information transmission system for indoor positioning and navigation. This system avoids the use of any expensive equipment, requires little additional infrastructure, and is completely portable using a handheld terminal. A high-intensity light-emitting diode (LED) tube, which is expected to be the main illumination source for the next generation due to its lower power cost and longer lifetime, is utilized as an optical beacon in the system. LED tubes are encoded to transmit ID information, which is received by a single normal camera. The handheld computing terminal, a mobile phone for example, can query a database after performing ID recognition and, finally, the position and orientation of the terminal can be obtained. Experimental results confirm the accuracy of the system in terms of both position and orientation, and show that it is suitable for many indoor navigation applications.
  

Estimation of Subjective Difficulty and Psychological Stress by Ambient Sensing of Desk Panel Vibrations

  • Nana Hamaguchi, Keiko Yamamoto, Daisuke Iwai, and Kosuke Sato : Estimation of Subjective Difficulty and Psychological Stress by Ambient Sensing of Desk Panel Vibrations, SICE Journal of Control, Measurement, and System Integration (JCMSI), Vol.5, No.1, pp.2-7, 2012.

Estimation of Subjective Difficulty and Psychological Stress by Ambient Sensing of Desk Panel Vibrations   We investigate ambient sensing techniques that recognize a writer's psychological states by measuring vibrations of handwriting on a desk panel using a piezoelectric contact sensor attached to its underside. In particular, we describe a technique for estimating the subjective difficulty of a question for a student as the ratio of the time duration of thinking to the total amount of time spent on the question. Through experiments, we confirm that our technique correctly recognizes whether or not a person writes something down on paper from the measured vibration data with an accuracy of over 80%, and that the order of the computed subjective difficulties of three questions is coincident with that reported by the subject in 60% of the experiments. We also propose a technique to estimate a writer's psychological stress by using the standard deviation of the spectrum of the measured vibration. Results of a proof-of-concept experiment show that the proposed technique correctly estimates whether or not the subject feels stress at least 90% of the time.
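
Both estimates reduce to simple signal statistics; the sketch below assumes the raw vibration samples and a per-sample writing/non-writing mask produced by the classifier, with hypothetical names:

    # Sketch only: subjective difficulty as the thinking-time ratio, and a stress
    # indicator as the standard deviation of the vibration spectrum.
    import numpy as np

    def subjective_difficulty(writing_mask):
        """writing_mask: boolean array, True where the person is writing."""
        return 1.0 - writing_mask.mean()          # fraction of time spent thinking

    def stress_indicator(vibration):
        """vibration: 1-D array of desk-panel vibration samples."""
        spectrum = np.abs(np.fft.rfft(vibration)) # magnitude spectrum
        return spectrum.std()                     # larger spread used as a stress cue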
  

Robust Estimation of Light Directions and Albedo Map of an Object of Known Shape

  • Takehiro Tachikawa, Shinsaku Hiura and Kosuke Sato : Robust Estimation of Light Directions and Albedo Map of an Object of Known Shape, IPSJ Transactions on Computer Vision and Applications, Vol. 3, pp. 172-185 (Dec. 2011)

  We propose a method to determine the light direction and diffuse reflectance property from two images taken under different lighting conditions. In our method, it is assumed that the shape of the target object is given. Using the relationships between light direction and diffuse reflectance, we can estimate both of them simultaneously from more than five points on the two images. While speculars and shadows affect the estimation as outliers and cause errors, we can avoid these outliers by robust estimation using Random Sample Consensus (RANSAC). The feasibility and stability of our method are demonstrated in experiments with both simulated and real images.
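
A minimal RANSAC sketch under a Lambertian model I = n · s, where s absorbs the light direction and a common albedo scale; this is a single-image simplification of the paper's joint two-image estimation, with illustrative names:

    # Sketch only: robust light estimation; shadowed or specular points become outliers.
    import numpy as np

    def ransac_light(normals, intensities, iters=200, thresh=0.05):
        """normals: (k, 3) surface normals; intensities: (k,) observed intensities."""
        rng = np.random.default_rng(0)
        best_s, best_inliers = None, -1
        for _ in range(iters):
            idx = rng.choice(len(intensities), size=3, replace=False)  # minimal sample
            s, *_ = np.linalg.lstsq(normals[idx], intensities[idx], rcond=None)
            inliers = np.sum(np.abs(normals @ s - intensities) < thresh)
            if inliers > best_inliers:
                best_s, best_inliers = s, inliers
        direction = best_s / np.linalg.norm(best_s)
        return direction, np.linalg.norm(best_s)   # light direction, albedo scale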
  

Dynamic Defocus and Occlusion Compensation of Projected Imagery by Model-Based Optimal Projector Selection in Multi-projection Environment

  • Momoyo Nagase, Daisuke Iwai, and Kosuke Sato : Dynamic Defocus and Occlusion Compensation of Projected Imagery by Model-Based Optimal Projector Selection in Multi-projection Environment, Virtual Reality, Springer-Verlag London Limited, Vol.15, No.2, pp.119-132, 2011. (Online First, 2010)

Dynamic Defocus and Occlusion Compensation of Projected Imagery by Model-Based Optimal Projector Selection in Multi-projection Environment   This paper presents a novel model-based method for dynamic defocus and occlusion compensation in a multi-projection environment. Conventional defocus compensation research applies appearance-based methods, which need a point spread function (PSF) calibration whenever either the position or orientation of the projection object changes, and thus cannot be applied to interactive applications in which the object moves dynamically. In contrast, we propose a model-based method in which PSF and geometric calibrations are required only once in advance, and the projector's PSF is computed online based on the geometric relationship between the projector and the object without any additional calibrations. We propose to distinguish the oblique blur (loss of high-spatial-frequency components according to the incidence angle of the projection light) from the defocus blur and to introduce it into the PSF computation. For each part of the object surface, we select the optimal projector that preserves the largest amount of high-spatial-frequency components of the original image to realize defocus-free projection. The geometric relationship can also be used to eliminate the cast shadows of the projection images in the multi-projection environment. Our method is particularly useful in interactive systems because the movement of the object (and consequently the geometric relationship between each projector and the object) is usually measured by an attached tracking sensor. This paper describes details of the proposed approach and a prototype implementation. We performed two proof-of-concept experiments to show the feasibility of our approach.
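
The per-patch projector selection can be sketched as choosing the projector whose geometry-predicted PSF preserves the most high-frequency energy; psf_from_geometry() and the frequency cutoff below are hypothetical stand-ins for the paper's model:

    # Sketch only: score each projector by the high-frequency energy its PSF keeps.
    import numpy as np

    def select_projector(projectors, patch_pose, psf_from_geometry, cutoff=0.25):
        """psf_from_geometry(projector, pose): hypothetical model combining
        defocus blur and oblique blur for the current geometric relationship."""
        def high_freq_energy(psf):
            mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
            axes = [np.linspace(-0.5, 0.5, n) for n in psf.shape]
            r = np.hypot(*np.meshgrid(*axes, indexing="ij"))   # radial frequency
            return mtf[r > cutoff].sum()                       # energy above the cutoff
        scores = [high_freq_energy(psf_from_geometry(p, patch_pose)) for p in projectors]
        return int(np.argmax(scores))                          # index of the best projector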
  

Document Search Support by Making Physical Documents Transparent in Projection-Based Mixed Reality

  • Daisuke Iwai and Kosuke Sato : Document Search Support by Making Physical Documents Transparent in Projection-Based Mixed Reality, Virtual Reality, Springer-Verlag London Limited, Vol.15, No.2, pp.147-160, 2011. (Online First, 2010)

Document Search Support by Making Physical Documents Transparent in Projection-Based Mixed Reality   This paper presents Limpid Desk that supports document search on a physical desktop by making the upper layer of a document stack transparent in a projection-based mixed reality environment. A user can visually access a lower-layer document without physically removing the upper documents. This is accomplished by superimposition of cover textures of lower-layer documents on the upper documents by projected imagery. This paper introduces a method of generating projection images that make physical documents transparent. Furthermore, a touch sensing method based on thermal image processing is proposed for the system's input interface. Areas touched by a user on physical documents can be detected without any user-worn or handheld devices. This interface allows a user to select a stack to be made transparent by a simple touch gesture. Three document search support techniques are realized using the system. User studies are conducted, and the results show the effectiveness of the proposed techniques.
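
The thermal touch sensing can be sketched as detecting the heat residue a fingertip leaves on the paper surface; the frame names and the temperature threshold below are hypothetical:

    # Sketch only: pixels that stay warmer after the finger lifts are treated as touched.
    import numpy as np

    def touched_region(before, after, delta=0.5):
        """before, after: thermal frames (degrees) captured around a candidate touch."""
        heat = after.astype(float) - before.astype(float)
        return heat > delta                        # boolean mask of the touched area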
  

Identification of Motion Features Affecting Perceived Rhythmic Sense of Virtual Characters through Comparison of Latin American and Japanese Dances

  • Daisuke Iwai, Toro Felipe, Noriko Nagata and Seiji Inokuchi : Identification of Motion Features Affecting Perceived Rhythmic Sense of Virtual Characters through Comparison of Latin American and Japanese Dances, The Journal of The Institute of Image Information and Television Engineers, Vol.65, No.2, pp.203-210, 2011.

Identification of Motion Features Affecting Perceived Rhythmic Sense of Virtual Characters through Comparison of Latin American and Japanese Dances   Physical motion features that cause a difference in the perceived rhythmic sense of a dancer were identified. We compared the rhythmical movements of Latin American and Japanese people to find such features. A 2-D motion capture system was used to measure rhythmical movement, and two motion features were identified. The first was the phase shift of the rotational angles between the hips and chest, and the second was the phase shift between the hips' rotational angle and horizontal position. A psychophysical study examined whether these features affected the perceived rhythmic sense of a dancer. The results showed that both motion features significantly affected the perceived rhythmic sense, and one of them also affected how a viewer guessed the home country or area of a dancer. Consequently, the rhythmic sense of a virtual character can be controlled easily by adding and removing features to and from the character's synthesized motion.
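
The phase-shift features can be sketched as the lag at the peak of a cross-correlation between two joint-angle signals (for example, hip and chest rotation); the signal names and sampling rate are illustrative:

    # Sketch only: signed lag between two angle time series via cross-correlation.
    import numpy as np

    def phase_shift(hip_angle, chest_angle, fs):
        """hip_angle, chest_angle: 1-D arrays of equal length; fs: sampling rate (Hz)."""
        a = hip_angle - hip_angle.mean()
        b = chest_angle - chest_angle.mean()
        corr = np.correlate(b, a, mode="full")
        lag = np.argmax(corr) - (len(a) - 1)       # peak offset in samples
        return lag / fs                            # signed phase shift in seconds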
  

Luminance Distribution Control based on the Separation of Direct and Indirect Components

  • Osamu Nasu, Shinsaku Hiura, and Kosuke Sato : Luminance Distribution Control based on the Separation of Direct and Indirect Components, Proc. PROCAMS 2009, pp. 1-2, 2009.

Luminance Distribution Control based on the Separation of Direct and Indirect Components
  We propose a method to control the luminance distribution on a scene by modeling the light propagation with direct and indirect components separately. To reduce the measurement time and the amount of data, we incorporate the geometric locality of the direct component and the narrow spatial bandwidth of the indirect component into the light transport model. Since the luminance distribution of the scene for a given illumination pattern is reproduced quickly and precisely, we can compensate the illumination pattern to generate the required luminance distribution of the scene without actual projection.
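
A minimal sketch of the compensation idea, assuming the direct and indirect components of light transport have been measured as matrices Td and Ti (scene pixels by projector pixels); the names are hypothetical:

    # Sketch only: predict scene luminance from the transport model, and invert it
    # to find the illumination pattern for a desired luminance distribution.
    import numpy as np

    def predict_luminance(Td, Ti, pattern):
        return Td @ pattern + Ti @ pattern        # simulate without actual projection

    def compensate_pattern(Td, Ti, target):
        T = Td + Ti                               # full light transport
        p, *_ = np.linalg.lstsq(T, target, rcond=None)
        return np.clip(p, 0.0, 1.0)               # valid projector input range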
  

Assistance system for designing mirrored surface using projector

  • Daisuke Nakamura, Shinsaku Hiura, and Kosuke Sato : Assistance system for designing mirrored surface using projector, Proc. SICE 2007, pp. 1489-1492, 2007.

Assistance system for designing mirrored surface using projector
  We propose a system with a camera and a projector to show the specular reflection of the surrounding environment on a real object. First, active shape measurement with pattern light projection is performed to obtain a precise shape of the object rapidly. Then, a specular pattern rendered using the shape information is projected onto the real object. Our system also recognizes the status of the work performed by the designer, and shape measurement and visualization are done only when the worker wants to evaluate the shape. The viewpoint of the evaluator is measured using an LED marker and a spherical mirror, and the projected reflection pattern is adjusted for the viewpoint adequately in real time.
  

Limpid Desk: See-Through Access to Disorderly Desktop in Projection-Based Mixed Reality

  • Daisuke Iwai and Kosuke Sato : Limpid Desk: See-Through Access to Disorderly Desktop in Projection-Based Mixed Reality, In Proc. of VRST'06, ACM, pp. 112-115, 2006.

Limpid Desk: See-Through Access to Disorderly Desktop in Projection-Based Mixed Reality
  In computer vision, the background subtraction method is widely used to extract a changing region in a scene. However, it is difficult to simply apply this method to a scene with a moving background object, because such an object may be extracted as a changing region. Therefore, a method has been proposed to estimate both the current background image and the occluding object region simultaneously by using an eigenspace-based background representation. On the other hand, image completion methods using eigenspaces have been extended to non-linear subspaces using the kernel trick; however, such existing methods incur a large computational cost. Therefore, in this paper, we propose a method for rapid simultaneous estimation of a background image and occluded region in a non-linear space, using the kernel trick and iterative projection.
  

User Interface by Real and Artificial Shadow

  • Huichuan Xu, Ichi Kanaya, Shinsaku Hiura, and Kosuke Sato : User Interface by Real and Artificial Shadow, SIGGRAPH 2006.

User Interface by Real and Artificial Shadow 
  This poster proposes the concept and a prototype of an intuitive user interface based on shadow for indoor environments. Shadow is a common phenomenon in our daily life wherever there is a light source. It always exists, but we have ignored the potential of shadow for connecting the digital and physical worlds. There are several merits of a shadow interface: first, shadow is a daily-life familiarity and it builds a natural bridge between the digital and physical worlds; second, the shadow-based interaction system is simple and does not require expensive devices; third, shadow itself is a strong and natural visual feedback cue for the user to take good command of applications.
  

Free-form Shape Design System using Stereoscopic Projector - HYPERREAL 2.0 -

  • Masaru Hisada, Keiko Yamamoto, Ichiroh Kanaya, and Kosuke Sato : Free-form Shape Design System using Stereoscopic Projector - HYPERREAL 2.0 -, SICE-ICASE International Joint Conference 2006.

Free-form Shape Design System using Stereoscopic Projector - HYPERREAL 2.0 -
  This paper presents a novel mixed reality (MR) system for visual shape modification (e.g., denting, engraving, swelling, etc.) of physical objects by using projection of computer-generated shade. Users of this system, which we call HYPERREAL 2.0, perceive the real object as if it were actually being deformed when they operate the system to modify the shape of the object, while only the illumination pattern on the real object has been changed. The authors aim to apply this technology to the product design field: designers would be able to evaluate and modify the form of their product more efficiently and effectively in an intuitive manner using HYPERREAL 2.0 than with a conventional design process (typically, computer-aided design, or CAD, systems and solid mock-ups), since the system is able to provide users with the actuality/presence of a physical mock-up and the flexibility of shape data on a computer system, such as a CAD system, all at once.
  

A Wearable Mixed Reality with an On-board Projector

  • Toshikazu Karitsuka and Kosuke Sato : A Wearable Mixed Reality with an On-board Projector, The 2nd International Symposium on Mixed and Augmented Reality (ISMAR 2003), pp. 321-322, 2003.

A Wearable Mixed Reality with an On-board Projector
  One of the methods for achieving Mixed Reality (MR) displays is the texture projection method using projectors. Another kind of emerging information environment is the wearable information device, which realizes ubiquitous computing. It is very promising to integrate these technologies. Using this kind of fusion system, two or more users can share the same MR environment at the same moment without using HMDs. In this demonstration, we propose a wearable MR system with an on-board projector and introduce some applications with this system.
  
