AUGMENTED REALITY WITH KINECT PDF


PDF | This project was created to provide a spectacular and interactive way of teaching geography. Keywords: Microsoft Kinect, augmented reality, geography. Technologies: two Kinect systems for motion capture, depth mapping and real-time capture. In this context we have developed an Augmented Reality (AR) application. If you know C/C++ programming, then this book will give you the ability to develop augmented reality applications with Microsoft's Kinect.




Abstract: In this paper an Augmented Reality system for teaching key developmental abilities to individuals with ASD is described. The system is a reality-based system using Kinect to build a virtual augmented mirror ([11], online: raudone.info~battiato/CVision/Kinect.pdf). Theory: Augmented reality is a process by which data from the real world (for example, video) is combined with computer-generated data. The Kinect is a very useful tool for working with augmented reality.

Figure — Small circles show the estimates of their centres, whereas the large circle is the particle with the largest weight.

Augmenting participants

This section describes the use of the skeleton tracking features of Kinect to augment a participant with clothes from ancient times. Figure 6 shows the skeleton joints identified by OpenNI. As each joint is recognized by OpenNI, its position and orientation are returned. However, when using video see-through displays or a camera as the input source, the images of the real environment must also be rendered together with the other graphics in order to generate the final image.
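As a rough illustration of how these values are obtained, the sketch below queries one joint through the OpenNI 1.x C++ wrapper; the user generator, the tracked user ID and the choice of the torso joint are assumptions made for this sketch.

// Sketch: query one skeleton joint with OpenNI 1.x. Assumes skeleton
// tracking was already started for the given user.
#include <XnCppWrapper.h>
#include <cstdio>

void printTorso(xn::UserGenerator& userGen, XnUserID user) {
    if (!userGen.GetSkeletonCap().IsTracking(user)) return;

    XnSkeletonJointPosition pos;        // 3D position plus confidence
    userGen.GetSkeletonCap().GetSkeletonJointPosition(user, XN_SKEL_TORSO, pos);

    XnSkeletonJointOrientation orient;  // 3x3 rotation plus confidence
    userGen.GetSkeletonCap().GetSkeletonJointOrientation(user, XN_SKEL_TORSO, orient);

    // Positions are reported in millimeters; confidence lies in [0, 1].
    printf("torso at (%.0f, %.0f, %.0f), confidence %.2f\n",
           pos.position.X, pos.position.Y, pos.position.Z, pos.fConfidence);
}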

The approach commonly adopted is to convert camera images to the texture format of the rendering engine (Irrlicht in our case); then, at each call to the function that draws the whole scene, the camera image is first rendered and the computer models are then superimposed on it.
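A minimal sketch of this idea with Irrlicht is given below; the 640x480 BGRA frame buffer, the texture name and the initialized device are assumptions of the sketch, and error handling is omitted.

// Sketch: upload the newest camera frame into a texture, draw it as a
// 2D background, then draw the 3D scene on top of it.
#include <irrlicht.h>
#include <cstring>
using namespace irr;

void renderFrame(IrrlichtDevice* device, const void* cameraFrame) {
    video::IVideoDriver* driver = device->getVideoDriver();
    scene::ISceneManager* smgr = device->getSceneManager();

    // Create the texture once, matching the camera resolution.
    video::ITexture* tex = driver->findTexture("camera");
    if (!tex)
        tex = driver->addTexture(core::dimension2d<u32>(640, 480),
                                 "camera", video::ECF_A8R8G8B8);

    // Copy the camera image into the texture (A8R8G8B8, 4 bytes/pixel).
    void* pixels = tex->lock();
    memcpy(pixels, cameraFrame, 640 * 480 * 4);
    tex->unlock();

    driver->beginScene(true, true);
    driver->draw2DImage(tex, core::position2d<s32>(0, 0)); // background first
    smgr->drawAll();                                       // models on top
    driver->endScene();
}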

Creating 3D models

The approach followed for creating the models used for AR applications is presented briefly in this section.

These profiles were later exported to 3ds Max. Because models with a high face count are expensive to render in real time, the created models were optimized using the MultiRes modifier, which works by first computing the number of vertices and faces in a model and then allowing the user to eliminate some of them manually.

This method proved to be effective for 3D models consisting of thousands of faces. There are several ways to bake several materials into a single texture; the following method was found to produce the best results. To produce a single texture from several materials applied to a model, one first uses an Unwrap UVW modifier to store the current material map; this defines how the texture must wrap around an object that has a complex structure.

Figure 8 — Unwrapping the faces of a model for texture baking.

When rendering, a diffuse map was selected instead of the complete map mode; the latter included maps for lighting and surface normals and failed to create the texture for the invisible side of the model.

Later, this single texture was applied to the model again, replacing the previous material.

These projects identify the aforementioned problems and aim to calculate the intrinsic and extrinsic parameters of the projector; in the case of the Kinect, its IR and RGB cameras are both calibrated. This concept defines the outcome attained when calibrating a scenario without measuring it, thus working for whichever objects are placed in the scene, regardless of whether they were added later.

The initial part of this research consisted of studying the four calibration methods above in order to better understand their differences, pros and cons. After this, it was necessary to create a solution to generate a dynamic mapping, thus validating the obtained calibration. Previously, in order to receive a proper projection, the objects had to be known in advance or had to go through a reconstruction phase. In the context of this research, a framework was developed to take the calibration data into account and achieve a dynamic mapping using the Microsoft Kinect.

A chessboard with known dimensions, as seen in Figure 1, is used, and the calibration estimates the positions and rotation of the chessboard. The most used algorithms to perform this processing are the ones developed by Zhang [22] and Tsai [23].
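As a hedged illustration of this procedure, the OpenCV sketch below finds the inner corners of a chessboard like the one in Figure 1 and runs Zhang-style calibration; the grayscale view list, the 25 mm square size and the 8x6 inner-corner count are assumptions for the sketch.

// Sketch: Zhang-style camera calibration with OpenCV 2.x.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

void calibrate(const std::vector<cv::Mat>& views, cv::Size imageSize) {
    const cv::Size corners(8, 6);      // inner corners of a 9x7 board
    const float square = 25.0f;        // assumed square side, in mm

    // Reference corner positions on the planar board (Z = 0).
    std::vector<cv::Point3f> board;
    for (int y = 0; y < corners.height; ++y)
        for (int x = 0; x < corners.width; ++x)
            board.push_back(cv::Point3f(x * square, y * square, 0));

    std::vector<std::vector<cv::Point2f> > imagePoints;
    std::vector<std::vector<cv::Point3f> > objectPoints;
    for (size_t i = 0; i < views.size(); ++i) {
        std::vector<cv::Point2f> found;
        if (!cv::findChessboardCorners(views[i], corners, found)) continue;
        cv::cornerSubPix(views[i], found, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        imagePoints.push_back(found);
        objectPoints.push_back(board);
    }

    cv::Mat K, dist;                   // intrinsic matrix and distortion
    std::vector<cv::Mat> rvecs, tvecs; // per-view extrinsics
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     K, dist, rvecs, tvecs);
    printf("reprojection RMS: %.3f px\n", rms);
}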

One of the most traditional works on image projection over irregular surfaces is iLamps [18], where the authors present techniques for adaptive projection on non-planar surfaces using a textured projection algorithm.

With a different approach, the work developed at the Sunnybrook Health Sciences Center uses markerless tracking devices in order to view and navigate tomography and magnetic resonance imaging data without the need for a keyboard and mouse [19], using hand gestures to browse the data layers to be visualized. The masseur uses a projector to project flow lines on a human body. In this system, a spatially augmented reality surgical environment is constructed directly on the patient's body. The registration method proposed in that paper uses fiducial markers attached to the patient's skin in order to produce an accurate AR display on the physical body of the patient.

In spite of the variety of proposals in the area of visualization integrated with projection, we found only very few systems able to solve the problem of dynamic projection mapping with Kinect and with the flexibility to be used with most of the existing calibration methods.

Camera calibration, whose purpose is to extract the extrinsic and intrinsic parameters of a camera, is a major difficulty in Computer Vision. The obtained parameters define the 3D position, rotation, focal length, center of image, and distortion coefficient of a camera. These data are necessary to insert virtual models in a real environment captured by a camera.

Figure 1 — Standard chessboard pattern 9x7

The camera transformation can be defined as the composition of the matrices A, B and C below (Equation 1 — Affine transformation):

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \underbrace{\begin{bmatrix} f s_x & 0 & u_c \\ 0 & f s_y & v_c \\ 0 & 0 & 1 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}}_{B} \underbrace{\begin{bmatrix} R & T \\ \mathbf{0} & 1 \end{bmatrix}}_{C} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$

Where in the equation:

f is the focal length;
sx and sy are the amount of pixels per unit length on both axes;
uc and vc are the coordinates of the orthogonal projection that the optical axis makes with the perpendicular image plane (the principal point);
R and T correspond to the rotation matrix and the translation vector, respectively.

Equation 1 defines the following transformation: multiplication by the RT matrix (C) maps points from the world coordinate system into the camera coordinate system, B projects them onto the image plane, and A applies the intrinsic parameters to yield pixel coordinates.

Single camera calibration

The first calibration method is that of a single camera, which requires estimating, in the image obtained from the camera, the positions of the pattern's inner corners. Once the points are found, it is possible to calculate the intrinsic and extrinsic parameters. In Figure 2, it is possible to note that the virtual points (the colored circles and lines) are overlapped onto the inner pattern corners. The two cameras, or the camera-projector pair, demand to be calibrated regarding a common reference system in the world.

The calibration allows correlating the points in the world to the captured points; therefore, both cameras are positioned to capture the same scene. The difference between calibrating two cameras and calibrating a camera-projector pair is that the projector does not capture points in the WCS, so the idea is to project a known image and determine the points found in the scene, as seen in Figure 3. A convenient way to do this is by using a virtual checkered pattern projected over the same plane where the physical pattern lies.

Figure 2 — Photo of calibration on a standard chessboard.

The inverse transformation of the projector is applied to the transformation of the camera to obtain the coordinates in the WCS from a point visualized by the projector. In spite of working for a simple configuration, this approach is not enough for the goal of the present research.
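In matrix terms this inversion is just the transpose of the rotation applied to the translated point; a minimal sketch follows, assuming extrinsics R and T that map world points into projector coordinates.

// Sketch: world coordinates of a point given in the projector's
// coordinate system, with Xp = R*Xw + T known from calibration.
#include <opencv2/opencv.hpp>

cv::Mat projectorToWorld(const cv::Mat& R, const cv::Mat& T, const cv::Mat& Xp) {
    // Rigid-transform inverse: Xw = R^T * (Xp - T).
    return cv::Mat(R.t() * (Xp - T));
}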

A dynamic-mapping system consists primarily of a camera, used to acquire the correspondence between the points in the WCS and the PCS, but it is still necessary to present the user with the virtual data overlapped on the calibrated scene.

To achieve such a task, it is necessary to utilize a camera-projector pair, where the projector can be interpreted as the dual of a camera, thus requiring the calibration of two cameras.


However, to truthfully reproduce a 2D image over a real 3D object, it is necessary to compute the texture extracted from the real-world scene, then apply the 2D image over the texture, and finally project the result, already at the desired position. To simplify the process, the Kinect was chosen as the camera, since it already has a system to capture the depth of an image and a low cost in comparison to commercial RGB-D systems (where D means depth).

Based on that, in the same way that a camera calibration is made by acquiring the points of the CCS in relation to the WCS, the calibration of two cameras is made by acquiring the points of the second camera's coordinate system (CCS2) in relation to the first. Obtaining the extrinsic parameters of the second camera amounts to finding the rotation and translation matrix of one coordinate system in relation to the other.
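A hedged sketch of that step with OpenCV's stereo calibration is shown below; it assumes each camera was calibrated alone first (so the intrinsics are fixed) and that chessboard corners were collected in both views simultaneously.

// Sketch: rotation R and translation T of the second camera's
// coordinate system (CCS2) in relation to the first (CCS1).
#include <opencv2/opencv.hpp>
#include <vector>

void calibratePair(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                   const std::vector<std::vector<cv::Point2f> >& points1,
                   const std::vector<std::vector<cv::Point2f> >& points2,
                   cv::Mat K1, cv::Mat d1, cv::Mat K2, cv::Mat d2,
                   cv::Size imageSize) {
    cv::Mat R, T, E, F;
    // The default flags keep the previously estimated intrinsics fixed.
    cv::stereoCalibrate(objectPoints, points1, points2,
                        K1, d1, K2, d2, imageSize, R, T, E, F);
    // R and T now express CCS2 in relation to CCS1.
}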


Through Light Coding [24], the Kinect is capable of detecting its distance from a point, turning it into a 3D runtime measuring device. Light Coding, presented in Figure 4, encodes information in light patterns, leaving the IR projected image over any surface. This offers the necessary information to calculate the distance for the 3D image.

Calibration of a Camera-Projector

The internal optics of a camera are close to those of a projector, thus both can be modeled in the same way. The camera-projector calibration is achieved in a similar way to a calibration between two cameras, in which both devices observe the same scene; regardless of the device, it follows the same logic of calibrating two cameras.

Aiming to achieve this goal, a traditional point cloud visualization was adopted, shown in Figure 6, consisting of a collection of 3D points that are not connected, providing a sparse visualization. In order to use OpenGL routines, the following coordinate spaces need to be defined. The interval of values gets normalized between -1 and 1 on the 3 axes.
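A minimal sketch of that sparse rendering is shown below, assuming depth-camera intrinsics fx, fy, cx, cy obtained from the calibration and raw Kinect depth in millimeters.

// Sketch: back-project a depth map into 3D points and draw them as an
// OpenGL point cloud (pinhole model, depth in millimeters).
#include <GL/glut.h>
#include <stdint.h>

void drawPointCloud(const uint16_t* depth, int w, int h,
                    float fx, float fy, float cx, float cy) {
    glBegin(GL_POINTS);
    for (int v = 0; v < h; ++v)
        for (int u = 0; u < w; ++u) {
            uint16_t d = depth[v * w + u];
            if (d == 0) continue;          // 0 marks "no reading"
            float Z = d / 1000.0f;         // mm -> m
            float X = (u - cx) * Z / fx;   // pinhole back-projection
            float Y = (v - cy) * Z / fy;
            glVertex3f(X, -Y, -Z);         // flip to OpenGL axes
        }
    glEnd();
}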

Figure 6 — Different viewing angles of the same point cloud.

The m15 element is a homogeneous coordinate used for projective transformations. This is the matrix found before any transformation, such as glScalef, is combined with the elements applied over this object.
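A small sketch makes the layout concrete: OpenGL stores the modelview matrix in column-major order, so the element the text calls m15 is m[15] below.

// Sketch: read back the current modelview matrix and inspect m15.
#include <GL/glut.h>
#include <cstdio>

void inspectModelView() {
    GLfloat m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    // For an identity matrix (before glScalef and friends), m[15] == 1.
    printf("m15 = %f\n", m[15]);
}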

Projection Matrix

This frustum determines which objects will be cut from the scene. It also determines how the 3D scene is projected on the screen. Dividing the Clipping Coordinate System by the w component yields the Normalized Coordinate System, an operation known as perspective division, as observed in Equation 3:

$$ \begin{bmatrix} x_{ndc} \\ y_{ndc} \\ z_{ndc} \end{bmatrix} = \begin{bmatrix} x_c / w_c \\ y_c / w_c \\ z_c / w_c \end{bmatrix} \quad \text{(Equation 3 — Perspective division)} $$
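A minimal sketch of such a frustum setup follows; the plane values are arbitrary assumptions. Clip coordinates produced by this projection are divided by w, as in Equation 3, to reach the normalized interval.

// Sketch: define the viewing frustum with explicit clipping planes.
#include <GL/glut.h>

void setupProjection() {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // left, right, bottom, top, near, far
    glFrustum(-0.32, 0.32, -0.24, 0.24, 0.5, 10.0);
    glMatrixMode(GL_MODELVIEW);
}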


After this division, the interval of coordinates in the X axis changes from [left, right] to [-1, 1], in the Y axis from [bottom, top] to [-1, 1], and in the Z axis from [near, far] to [-1, 1] (Figure 8).

The estimated time to make a calibration was between 5 and 15 minutes for a set of 30 images, which did not always produce a coherent result, making it necessary to repeat the process. By the end of the whole process, the consumed time could vary from 40 to 90 minutes, where the time taken by the algorithm to process all the images and generate the calibration matrices was about 10 minutes, while the rest of the process was used for the positioning of the chessboard.

CameraProjectorCalibration

This method [17] aims to be fully automated, not requiring the user to provide commands through the keyboard to select an image, and taking less time than the RGBDemo. The total calibration time varies from 25 to 50 minutes, and the method achieved a coherent result on the first or second attempt, allowing a depth variation of almost two meters. Despite the total calibration time being shorter than the RGBDemo's, each attempt presents higher chances of bad outcomes.

In spite of the differences between RGBDemo and CameraProjectorCalibration, the following conclusions were drawn. The Microsoft Kinect SDK is the most modular library to be used in the development of new applications, presenting well-defined modules, a range of available examples and full documentation. The Open Kinect [28], despite being one of the first libraries to provide support to the Kinect, does not have a large community, with only a small number of active users, and its advances depend on the few users interested in improving it.

RGBDemo

RGBDemo [14] provides, among other functionalities, the Kinect camera calibration and the camera-projector system calibration, taking the following procedures to generate new images: the calibration pattern is moved in front of the cameras, at different positions and angulations.

The driver used to calibrate the Kinect was the libfreenect, while the driver used to calibrate the Kinect-Projector system was the OpenNI.

Figure 10 — Different positions used for the Kinect-Projector System.

This program offers the main functions developed for Kinect use, such as visualization of the RGB and IR cameras, different depth maps, multiple visualizations, and the possibility of audio and video recording, in addition to their management.

Despite the visualization not being in three dimensions, it is possible to notice how the images under Depth Mapping present a better idea of depth than the one offered by the RGB Camera.
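A rough sketch of how such a single-scale depth visualization might be produced is given below; the 0.5 m to 4 m working range and the closer-is-brighter convention are assumptions of the sketch.

// Sketch: map raw Kinect depth (mm, CV_16UC1) to an 8-bit image where
// closer points receive brighter, higher-contrast values.
#include <opencv2/opencv.hpp>
#include <stdint.h>

cv::Mat depthToGray(const cv::Mat& depthMm) {
    cv::Mat gray(depthMm.size(), CV_8UC1);
    for (int v = 0; v < depthMm.rows; ++v)
        for (int u = 0; u < depthMm.cols; ++u) {
            uint16_t z = depthMm.at<uint16_t>(v, u);
            if (z < 500 || z > 4000) { gray.at<uchar>(v, u) = 0; continue; }
            // 500 mm -> 255 (bright), 4000 mm -> 0 (dark).
            gray.at<uchar>(v, u) = (uchar)(255 - (z - 500) * 255 / 3500);
        }
    return gray;
}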


Figure 9 — Different positions and angulations used during the calibration.

Figure 11, on the left, shows a Depth Mapping made with a single color scale, in which the more contrast the color presents, the closer the point is to the Kinect device. To reassure that the result of the calibration was consistent, different cases were tested.

It was important to test these cases to prove the correctness of the calibration methods because the first case generates a system of equations more complex to be solved; therefore, it is more prone to failure.

Point Cloud Library

The point cloud visualization was achieved with the Point Cloud Library. As dependencies, Glut version 3.x and OpenCV [34] version 2.x (including its Core module) were used.
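For reference, displaying such a cloud with the Point Cloud Library can be as small as the sketch below; the cloud construction itself is assumed to have been done elsewhere.

// Sketch: show an already-built colored cloud with PCL's CloudViewer.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/visualization/cloud_viewer.h>

void showCloud(pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr cloud) {
    pcl::visualization::CloudViewer viewer("Kinect point cloud");
    viewer.showCloud(cloud);
    while (!viewer.wasStopped()) {}  // block until the window closes
}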


Difficulties encountered

Almost all of the problems found by the end of the system development are part of the calibration process. A further complication is that the visualization program uses the OpenNI driver while the calibration program uses the Microsoft Kinect SDK.



After a calibration procedure on the color camera and the infrared camera, depth information obtained by the infrared camera can thus be mapped to each pixel on the color image. By using a Kinect sensor to acquire the surface structure of the patient, image-to-patient registration is accomplished automatically by an Enhanced Iterative Closest Point (EICP) algorithm.
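A hedged sketch of that mapping is shown below; the intrinsic matrices Kd and Kc and the extrinsics R, T between the two cameras are assumed to come from the calibration described above.

// Sketch: project one depth pixel (u, v, depth in meters) onto the
// color image using calibrated intrinsics and extrinsics.
#include <opencv2/opencv.hpp>

cv::Point2f depthPixelToColor(int u, int v, double z,
                              const cv::Mat& Kd, const cv::Mat& Kc,
                              const cv::Mat& R, const cv::Mat& T) {
    // Back-project the depth pixel into the IR camera's 3D frame.
    double X = (u - Kd.at<double>(0, 2)) * z / Kd.at<double>(0, 0);
    double Y = (v - Kd.at<double>(1, 2)) * z / Kd.at<double>(1, 1);
    cv::Mat P = (cv::Mat_<double>(3, 1) << X, Y, z);

    // Move the point into the color camera's frame and project it.
    cv::Mat Q = R * P + T;
    double uc = Kc.at<double>(0, 0) * Q.at<double>(0) / Q.at<double>(2) + Kc.at<double>(0, 2);
    double vc = Kc.at<double>(1, 1) * Q.at<double>(1) / Q.at<double>(2) + Kc.at<double>(1, 2);
    return cv::Point2f((float)uc, (float)vc);
}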

