
Title EXAM Revision FOR Mixed Reality
Course Software Project Management
Institution Brunel University London

Summary

Exam revision notes for advanced computer topics...


Description

EXAM REVISION FOR MIXED REALITY

Virtual reality: you are fully immersed in an artificial world / total virtual environment and blocked from the real world. You may act completely differently in this different world.

Augmented reality: extends the real world and makes it better. Examples: Snapchat, Pokemon Go. One disadvantage of AR is that you cannot interact with visual elements or 3D objects with your bare hands.

Mixed reality: a mix between augmented reality and virtual reality. Virtual objects and virtual content are placed inside the real world and you are able to interact seamlessly with them: total immersiveness with virtual interactivity while being in a real-world environment. Mixed reality is an extension of augmented reality; unlike AR, you can interact with 3D objects with your bare hands.

Marker-based augmented reality

Why is it easy to identify marker images or barcodes? Because they have high contrast, they need only simple computation, relying on edge-detection and corner-detection algorithms to process the image. The Harris detector finds both edges and corners: it detects discontinuities in surface colour and in illumination. It computes the gradient in the x and y directions (the two dimensions of an image) to obtain the gradient magnitude.

Advantages of marker-based augmented reality:
1. It is easy to use and implement.
2. It is efficient and works in real time.
3. Its feature-based tracking is very stable.

Disadvantages:
1. Virtual content disappears when the camera moves away from the marker.
2. Markers must have strong contrast and borders.
3. Detection of markers is difficult when there is reflected light.
4. Does not work with occlusion.

Understand the main theory behind marker-based augmented reality.

Image-based augmented reality

Feature detection algorithm: the most robust algorithm used in image-based augmented reality is SURF (Speeded-Up Robust Features), as it can extract a huge number of features in real time. It also provides continuous tracking and tracking stability.
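The Harris response described above (gradients in the two image dimensions, corners vs edges) can be sketched in plain NumPy. This is a minimal sketch with an assumed 3x3 window and k = 0.05; real implementations smooth with a Gaussian and apply non-maximum suppression.

```python
import numpy as np

def window_sum(a):
    """Sum each value's 3x3 neighbourhood (zero-padded at the borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def harris_response(img, k=0.05):
    """Harris corner/edge response: positive at corners, negative on
    edges, near zero in flat regions."""
    # Gradients along the two image dimensions (y = rows, x = columns)
    Iy, Ix = np.gradient(img.astype(float))
    # Structure tensor entries, summed over a local window
    Sxx = window_sum(Ix * Ix)
    Syy = window_sum(Iy * Iy)
    Sxy = window_sum(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# A white square on a black background: its corners have gradients in
# both directions, its sides in only one direction.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
# R[5, 5] (a corner) is positive; R[5, 10] (an edge) is negative;
# R[0, 0] (a flat region) is zero.
```

This is why high-contrast markers are easy to process: the strong, clean gradients make the corner responses stand out.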

What are the challenges of using image-based augmented reality?
1. Outliers.
2. Camera quality is very important.
3. Keeping continuous track of the feature points in each frame with respect to the next frame.
4. Keeping continuous track of the image pose over time, and thereby detecting outliers (pose calculation / pose estimation).
5. The frame rate should be slow when rotating or translating the image, to allow the system time to detect the features; otherwise the pose may change significantly between frames (the augmentation jumps).

Degrees of freedom: the amount of freedom you have to perform internal transformations.

Having a problem with rotating the image, or the augmented object does not appear or its proportions are distorted? There is a problem with the position estimation: the computation or the metric is not correct.

This is a non-linear problem. Why? There are many unknowns and there is no way to estimate them all in one go; the equation has to be solved iteratively in order to estimate the values of the variables. In order to solve this equation you need to:
1. Compute the target image position relative to the camera.
2. Estimate an initial guess of the matrix; it could be a guess taken directly from a homography matrix.
3. Use iterative refinement such as the Gauss-Newton method: an iterative method that tries to find the parameters. There are 6 parameters (3 for position, 3 for rotation) to refine. At each iteration we optimise on the error: it iteratively searches for the correct translation and rotation in order to minimise the error.

Marker-less augmented reality
It is based on location. It relies on mixed and hybrid tracking technology, which lets us avoid working with marker-based or image-based augmented reality. It tracks your location and tries to understand the scene by using optical tracking from the device camera (feature extraction).

Active tracking: mechanical, magnetic, GPS, Wi-Fi, cell location. These actively detect location.
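Returning to the iterative pose refinement above, the Gauss-Newton scheme can be illustrated with a toy problem. This is a hypothetical 2-D localisation from distance measurements, not the full 6-parameter camera pose, but the loop is the same: start from an initial guess, linearise the residuals, and repeatedly solve for a correction that reduces the error.

```python
import numpy as np

# Three known anchor points and one unknown position (all invented for
# illustration); "measured" plays the role of the observations.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([1.0, 1.0])
measured = np.linalg.norm(anchors - true_pos, axis=1)

def gauss_newton(p0, n_iters=20):
    """Refine an initial guess p0 by repeated linearised corrections."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iters):
        diffs = p - anchors                     # shape (3, 2)
        dists = np.linalg.norm(diffs, axis=1)   # predicted distances
        residuals = dists - measured            # error at the current guess
        J = diffs / dists[:, None]              # Jacobian of the residuals
        # Gauss-Newton step: solve J @ delta = -residuals (least squares)
        delta, *_ = np.linalg.lstsq(J, -residuals, rcond=None)
        p += delta
    return p

estimate = gauss_newton([2.0, 2.0])  # converges to the true position
```

For camera pose the structure is identical, only with 6 parameters and reprojection residuals instead of distance residuals.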
Passive tracking: inertial sensors in the device (either acceleration or movement), and computer vision (optical tracking based on markers or natural feature tracking, i.e. opening a device camera to detect features).

Visual SLAM (Simultaneous Localization and Mapping): it tracks and extracts features in an unknown environment, that is, localizing and mapping the position using an optical camera to scan the environment. What happens in visual SLAM:

Step 1: track a set of points through the camera frames, following them over time and using an algorithm to match them.

Step 2: use the tracks to triangulate the points' 3D positions and obtain a 3D mesh.

Step 3: simultaneously use the estimated point locations to calculate the camera pose that could have observed them.

When using visual SLAM you need to extract as many points as possible: the more points you extract, the easier it is to solve structure and motion (camera path and scene structure). When working with a mixed reality headset it is important to have Wi-Fi access for location tracking and an embedded camera, especially a depth camera, which allows you to reconstruct the structure of the scene.

Challenges for visual SLAM:
1. The camera must move through an unchanged scene: it always has to rescan the scene before it starts working whenever positions and scenes change.
2. Not suitable for person tracking or gesture tracking: your environment has to be static.
3. It is difficult to perform tracking outdoors due to elements such as lighting and the movement of people.

Mixed Reality - Spatial Mapping
The visual part of mixed reality; very similar to marker-less augmented reality.

Spatial mapping: the process of a mixed reality device mapping the real space, so that the device can create an understanding of it. A mesh is created that lies over the real environment. A mesh looks like a series of triangles placed together, like a fishing net.

Usefulness of spatial mapping:
1. Visualisation and navigation: it lets virtual objects be positioned and displayed correctly and gives a digital object the ability to navigate around the real-world space.
2. Scene understanding: virtual objects can understand the real-world environment, which helps them perform realistic movements based on their understanding of the scene; this also helps with collision avoidance.
3. Physics and occlusion: vital for depth perception, i.e. how the virtual content is placed according to the environment, taking into account shadows, collision detection, and objects bouncing across the real-world scene, which makes the experience more realistic.

Mapping recognition: the process of mapping, registration and recognition of non-static elements of the real world, which allows communication between the real world and virtual objects. It allows one to interact with virtual objects, mostly starting with the hands. How it happens: the user's hands are recognised and interpreted as left-hand and right-hand skeletal models (a model of the hand; the system understands that your hands perform a form of rotation and transformation).
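The skeletal-model idea above can be sketched as a rigid rotation-plus-translation applied to joint coordinates. The three joints and their coordinates here are invented for illustration; real hand-tracking APIs expose a full set of joints per hand.

```python
import numpy as np

# Hypothetical hand skeletal model: a few joint positions (metres).
joints = np.array([
    [0.00, 0.00, 0.00],   # wrist
    [0.03, 0.08, 0.00],   # index knuckle
    [0.03, 0.12, 0.00],   # index fingertip
])

def transform_joints(joints, R, t):
    """Apply a rigid transform (rotation R, then translation t) to each joint."""
    return joints @ R.T + t

# A 90-degree rotation about the z axis, then a 10 cm shift along x,
# as the tracked hand rotates and moves.
theta = np.pi / 2
R = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
t = np.array([0.10, 0.0, 0.0])
moved = transform_joints(joints, R, t)
```

The system continually re-estimates such a transform per frame so virtual content can follow the hand.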

Five colliders are attached to the five fingertips of each hand skeletal model. The collider is a sphere collider, which can be visually rendered to provide better cues for near targeting. The sphere's diameter should match the thickness of the index finger to increase touch accuracy.

Augmented Reality: A class of displays on the reality-virtuality continuum (notes)

Topic 3: Mixed reality 2020/2021, updated March 2021

Seminar questions:

In the article [Milgram, Paul, et al. "Augmented reality: A class of displays on the reality-virtuality continuum." Telemanipulator and Telepresence Technologies. Vol. 2351. International Society for Optics and Photonics, 1995.], the authors presented a three-dimensional taxonomy of mixed reality. This taxonomy is based on three factors. Name these three factors, and describe each one in one sentence.
1. Reality: some environments are virtual, that is digitally created by a computer, whilst others are not virtual and can be classified as real-world environments.
2. Immersion: virtual or artificial environments and real-world environments can be viewed without an observer being fully immersed in the environment.
3. Directness: whether the real-world environment is viewed directly or through an artificial process.

The taxonomy's three dimensions are: 1. Extent of World Knowledge (EWK), 2. Reproduction Fidelity (RF), 3. Extent of Presence Metaphor (EPM). Describe each one of these dimensions and explain how each dimension impacts the mixed reality experience.

Extent of World Knowledge (EWK): how much knowledge we have about the virtual objects and the environment in which they are shown or positioned. It is a scale that varies from left to right. The extreme left means there is little or no knowledge of the displayed environment. The extreme right of the scale shows a controlled environment where a computer creates a virtual world on the basis of full knowledge of the objects and their positions in the virtual world, the viewpoint, and the actions of the observer of objects within the virtual world.

Reproduction Fidelity (RF): the quality with which the synthesised display or image reproduces, or closely resembles, the intended image or object.

Extent of Presence Metaphor (EPM): the extent to which an observer is intended to feel present or positioned in a displayed scene.

Explain what a "see-through" mixed reality display is.
See-through is a class of display where an observer is able to view the world around them through a display medium, which makes the observer feel present in the world or environment they are seeing. Examples of devices used to achieve this effect are head-mounted displays. It gives full exposure to reality, and the observer's view is not blocked.

Based on your reading of the article presented by Ernst et al., i.e. [Kruijff, Ernst, Swan, J. Edward, and Feiner, Steven. "Perceptual issues in augmented reality revisited." 2010 IEEE International Symposium on Mixed and Augmented Reality. IEEE, 2010.]: What is a registration error? How does latency affect mixed reality, and how does realism impact latency?

Registration error: how accurate the localisation and orientation are. Drifting can cause registration error. It affects how virtual content is placed in the real-world environment.

How latency affects mixed reality: when the GPU and display are out of sync, they cause discomfort and ghosting (a blurry effect) for the user. The frame rate has to synchronise with the display in real time at above 60 frames per second, which is the rate at which your eyes can follow the display. You will have to optimise graphics and rendering.
For instance, resolution may be changed in a user's settings, so you will have to compromise on how much can be rendered in each frame. Virtual objects are made of polygonal meshes, so you will have to reduce the polygon count and level of detail in order to avoid latency. The texturing resolution of a material may need to be reduced and kept at medium or low as a compromise between latency and quality. Lighting and shading are computationally difficult, so they have to be kept to a minimum while still accounting for reflection and shadow; you will have to compromise on shading as well. You will need an approximation of physics that requires minimal computation.

Use Cases

YourMindXR can enable one-to-one therapy, where the patient meets with a virtual / computer-generated human character (the therapist) in the patient's living room at home. This artificial therapist must look and move as believably as possible.

1. What are the main concerns when designing and engineering the virtual human, in terms of appearance and body movement?
1. Avoid unrealistic movements and appearance, such as robotic movements; it should resemble human movement.
2. It should mirror real interaction; you might need to employ motion-capture data, feeding it into a neural-network algorithm so that it learns what believable human movements look like in these situations.
3. Hair and clothes movements are secondary motions that need to be achieved; you will need simulation/physics to achieve this effect. The concern that comes with physics is latency, so you may need to compromise on resolution to provide a believable appearance.

2. Other important aspects are facial expression and non-verbal communication. What are the main considerations with respect to these aspects for the design of the interaction with the virtual character? See the video below for inspiration (https://www.youtube.com/watch?v=XNb42Lw0lBU).
1. In therapy it is important to incorporate nodding and eye contact, which are non-verbal communication, into the design of the system.
2. A form of AI, specifically speech recognition, will be helpful in the design to analyse the dialogue between the artificial therapist and the human to achieve this effect.

3. One of the most computationally demanding processes in this XR pipeline is real-time rendering. Explain why this is the case and explain how light reflection affects rendering computations.
Rendering is the bottleneck of 3D graphics because of the layers you compute when rendering: the texture, the light, the shadow, how the light bounces around. A room exposed to a lot of lighting will impact the mixed reality experience: reflections in the room can cause the virtual objects to appear distorted.
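As a small illustration of how light reflection enters rendering cost: even the simplest diffuse (Lambertian) model evaluates a normalised dot product per pixel per light, and every additional light or bounce multiplies that work. This is a minimal sketch with invented values; real engines use far richer shading models.

```python
import numpy as np

def lambert(normal, light_dir, albedo=0.8):
    """Diffuse (Lambertian) reflection: brightness is proportional to the
    cosine of the angle between the surface normal and the light direction."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n = n / np.linalg.norm(n)
    l = l / np.linalg.norm(l)
    # Clamp at zero: light arriving from behind the surface contributes nothing
    return albedo * max(float(np.dot(n, l)), 0.0)

# Light hitting the surface head-on gives full brightness...
head_on = lambert([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])
# ...a grazing light gives less, and light from behind gives none.
grazing = lambert([0.0, 0.0, 1.0], [1.0, 0.0, 1.0])
behind = lambert([0.0, 0.0, 1.0], [0.0, 0.0, -1.0])
```

Reflections and shadows require evaluating terms like this for indirect light paths as well, which is why a brightly lit, reflective room increases the rendering load.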

