As of now, we do not know exactly what shape the Metaverse will take. That does not matter, either. What matters is that someday, a global network of spatially organized, predominantly 3D content will be available to all without restriction, for use in all human endeavors — a new and profoundly transformational medium, enabled by major innovations in hardware, human-computer interface, network infrastructure, creator tools and digital economies.
Please read the article The Seven Rules of the Metaverse to learn the basic concepts and language.
Learn Mixed Reality development using Azure Mixed Reality Services
Mixed Reality Curriculum: aka.ms/MixedRealityCurriculum
WebXR Lessons: www.learnwebxr.dev
Unity Lessons: aka.ms/MixedRealityUnityLessons
AI Lessons: www.learnaiml.dev
Unreal Lessons: aka.ms/MixedRealityUnrealLessons
This book is designed as a collection of classes that starts from basic concepts and builds a project over time. Each lesson can also be used as an individual workshop. Each class follows this structure:
Core concepts and discussion points.
Project step-by-step walk-through.
What could go wrong: a section discussing common mistakes and issues.
Further reading resources.
Each class has questions as sections and builds the corresponding part of the project. If you feel you can correctly answer a question, feel free to move on to the next question or the next class.
If you have any questions, suggestions or improvements, please submit an issue here: https://github.com/Yonet/AzureMixedRealityDocs/issues.
We welcome your contributions. If you would like to contribute, check out how in the contributing section.
We hope you enjoy developing your mixed reality application!
Lesson 1: Introduction to Mixed Reality Applications and Development.
Lesson 2: Introduction to Mixed Reality Developer Tools and 3D Concepts.
Lesson 3: Working with Hand Interactions.
Lesson 4: Eye and Head Gaze Tracking.
Lesson 5: Spatial Visualization using Bing Maps.
Lesson 6: Working with REST APIs.
Lesson 7: Azure Spatial Anchors and Backend Services.
Lesson 8: Displaying Spatial Anchors on a map.
Lesson 9: Working with QR codes.
Lesson 10: Working with Spatial Awareness and Scene Understanding.
Lesson 11: Getting Started with AI.
Lesson 12: Project Discussion and Case Studies.
Short link: aka.ms/MixedRealityCurriculum
Mixed Reality Curriculum Playlist: https://aka.ms/MixedRealityCurriculumVideos.
Code Samples: https://aka.ms/MixedRealityUnitySamples.
Github: https://github.com/Yonet/AzureMixedRealityDocs
Slack Channel: https://holodevelopers.slack.com/archives/G012X50UVML
Developing for Mixed Reality using Unity3D
Short link: aka.ms/MixedRealityUnityLessons
Lesson 1: Introduction to Mixed Reality Applications and Development.
Lesson 2: Introduction to Mixed Reality Developer Tools and 3D Concepts.
Lesson 3: Working with Hand Interactions and Controllers.
Lesson 4: Eye and Head Gaze Tracking.
Lesson 5: Spatial Visualization using Bing Maps.
Lesson 6: Working with REST APIs.
Lesson 7: Azure Spatial Anchors and Backend Services.
Lesson 8: Displaying Spatial Anchors on a map.
Lesson 9: Working with QR codes.
Lesson 10: Working with Scene Understanding.
Lesson 11: Getting Started with AI.
Lesson 12: Project Discussion and Case Studies.
Introduction to Mixed Reality Applications and Development
Short link:
You can jump directly into setting up your first project on the .
In this lesson, you will learn about the basic concepts of Mixed Reality and explore the applications of Mixed Reality in different industries.
Read through the questions below. If you feel comfortable with the answers, feel free to skip to the project section or the next chapters.
Augmented Reality (AR) is defined as a technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view. Augmented Reality experiences are not limited to visual additions to our world. You can create augmented experiences that add only audio to your physical world, or both audio and visuals.
Augmented Reality experiences are also not limited to headsets like HoloLens. Today, millions of mobile devices have depth-sensing capabilities to augment your real world with digital information.
Virtual Reality (VR) is when you are fully immersed in a virtual world by wearing a headset. In Virtual Reality you lose visual connection to the real world. Virtual Reality applications are great for training and for simulations where users would benefit from total immersion to replicate a real-life situation. Some examples include training for firefighters, emergency room healthcare providers and flight simulations.
Mixed reality is the merging of real and virtual worlds to produce new environments and visualizations, where physical and digital objects co-exist and interact in real time.
We think of Mixed Reality as a spectrum from the physical world to an augmented world to a fully immersive virtual world, and all the possibilities in between.
The first revolution in computing happened with the creation of mainframe computers: computers that, at times, occupied a whole room. Mainframes were used by large organizations such as NASA for critical data-processing applications.
The second wave of computing is defined by personal computers (PCs) becoming widely available.
We believe the third wave of computing will include many devices that manage data, including IoT sensors and Mixed Reality devices.
We have more data than ever before. To be able to process the data and make informed decisions, we need to have access to the data at the right time and in the right place. Mixed Reality is able to bring that data into our context: the real world.
Design & Prototyping: Enables real-time collaborative iteration of 3D physical and virtual models across cross-functional teams and stakeholders.
Training & Development: Provides instructors with better tools to facilitate teaching and coaching sessions. It offers trainees an enhanced and engaging learning experience through 3D visualizations and interactivity.
Geospatial Planning: Enables the assessment and planning of indoor and outdoor environments (e.g. future construction sites, new store locations, interior designs), removing the need for manual execution.
Sales Assistance: Improves the effectiveness of individuals in sales-oriented roles by providing tools such as 3D catalogs and virtual product experiences that increase customer engagement and strengthen buyer confidence.
Field Service: Improves the first-visit resolution and customer satisfaction of customer support issues. It is typically used for complex products that would otherwise require a field visit. It can serve as a platform for targeted up-sell opportunities, as well.
Productivity & Collaboration: Transforms the space around you into a shared augmented workplace. Remote users can collaborate, search, brainstorm and share content as if they were in the same room.
Medical
Museums and Libraries
Mixed Reality Toolkit (MRTK) provides a set of components and features to accelerate cross-platform Mixed Reality application development in Unity. MRTK includes:
UI and interaction building blocks.
Tools.
Example Scenes.
You can learn more about the components at: aka.ms/MRTKGuides.
In this project, we will set up our development environment for Mixed Reality development with Unity3D.
Check your knowledge by answering the questions below before you move into the project. Feel free to skip sections you feel comfortable with, but make sure you read through the first download section to confirm you have all the necessary modules.
Before you get started with developing for Mixed Reality for Unity, make sure to check everything in the below list and follow the instructions for each download.
Not following the instructions for a specific download might result in errors while developing or building your application. Before you try to debug, check the list and the detailed instructions.
Install the most recent version of Windows 10 Education or Pro Education so your PC's operating system matches the platform for which you are building mixed reality applications.
You can check your Windows version by typing "about" in the Windows search bar and selecting About your PC, as shown in the image below.
You can learn more about upgrading your Windows 10 Home to Pro at aka.ms/WinHome2Pro.
We need to install and enable Hyper-V, which does not work on Windows Home. Make sure to upgrade to Education, Pro Education, Pro or Enterprise versions.
Go to the https://unity3d.com/get-unity/download page and download Unity Hub instead of the Unity Editor.
In general, do not use beta software until you feel very comfortable with debugging, the software itself, and finding your way around GitHub issues and Stack Overflow. Don't learn this lesson the hard way! I have tried it for you, for your benefit and/or out of optimism.
Unity Hub allows you to download multiple Unity Editors and organize your projects in one place. Since Unity upgrades are not backward compatible, you should open a project with the same Unity version it was created with. You can update a project to the latest Unity version, but that usually requires a lot of debugging. The easiest way to get going with a project is to keep the same version. I will show you how to update and debug your projects later in this chapter.
You will need to download the Windows development modules along with your Unity Editor. Make sure Universal Windows Platform Build Support and Windows Build Support are checked while downloading the Unity Editor through Unity Hub, or add them afterwards by modifying the install.
You can add modules, or check whether you already have them, by clicking the hamburger button for the Unity Editor version and checking the module check-boxes shown above.
If you would like to build for an Android or iOS mobile device, make sure the related modules are checked as well.
You can download Visual Studio by adding the Microsoft Visual Studio 2019 module to your Unity Editor, as shown in the previous step, or download it at aka.ms/VSDownloads.
Make sure to download Mixed Reality related modules along with Visual Studio.
You can always add the necessary workloads to Visual Studio after download:
In this section, you will learn the Unity3D interface, tools and keyboard shortcuts.
The Unity Editor has four main sections:
This is where you can edit the current Scene by selecting and moving objects in the 3D space for the game. In this kit, the game level is contained in one Scene.
This is a list of all the GameObjects in a Scene. Every object in your game is a GameObject. These can be placed in a parent-child hierarchy, which lets you group objects — this means that when the parent object is moved, all of its children will move at the same time.
This displays all settings related to the currently selected object. You will explore this window more during the walkthrough.
This is where you manage your Project Assets. Assets are the media files used in a Project (for example, images, 3D models and sound files). The Project window acts like a file explorer, and it can be used to explore and create folders on your computer. When the walkthrough asks you to find an Asset at a given file path, use this window.
TIP: If your Editor layout doesn’t match the image above, use the layout drop-down menu at the top right of the toolbar to select Default.
The toolbar includes a range of useful tool buttons to help you design and test your game.
Play is used to test the Scene which is currently loaded in the Hierarchy window, and enables you to try out your game live in the Editor.
Pause, as you have probably guessed, allows you to pause the game playing in the Game window. This helps you spot visual problems or gameplay issues that you wouldn’t otherwise see.
Step is used to walk through the paused Scene frame by frame. This works really well when you're looking for changes in the game world that are hard to catch while the game runs in real time.
These tools move and manipulate the GameObjects in the Scene view. You can click on the buttons to activate them, or use a shortcut key.
You can use this tool to move your Scene around in the window. You can also use middle click with the mouse to access the tool.
This tool enables you to select items and move them individually.
Select items and rotate them with this tool.
Tool to scale your GameObjects up and down.
This tool does lots of things. Essentially, it combines moving, scaling and rotation into a single tool that’s specialized for 2D and UI.
This tool enables you to move, rotate, or scale GameObjects, but is more specialized for 3D.
Another useful shortcut is the F key, which enables you to focus on a selected object. If you forget where a GameObject is in your Scene, select it in the Hierarchy. Then, move your cursor over the Scene view and press F to center it.
When you’re in the Scene view, you can also do the following:
Left click to select your GameObject in the Scene.
Middle click and drag to move the Scene view’s camera using the hand tool.
For more advice on moving GameObjects in the Scene view, see the Manual.
Let’s review some key concepts, which will help you as you begin to explore editing scripts for mixed reality development.
In Unity, areas of the game that a player can interact with are generally made up of one or more Scenes. Small games may only use one Scene; large ones could have hundreds.
Every Unity project you create comes with a SampleScene that has a light and a camera.
You can create a new scene by right-clicking in the Assets tab and selecting Create > Scene. Organizing scenes under a Scenes folder is purely for organizational purposes.
You can use scenes to organize navigation inside your application or to add different levels to a game.
Every object in the game world exists as a GameObject in Unity. GameObjects are given specific features by giving them appropriate components which provide a wide range of different functionality.
When you create a new GameObject, it comes with a Transform component already attached. This component controls the GameObject’s positional properties in the 3D (or 2D) gamespace. You need to add all other components manually in the Inspector.
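As a sketch of how this looks in code (standard Unity API; the class and object names here are illustrative):

```csharp
using UnityEngine;

// Illustrative sketch: every new GameObject already carries a
// Transform; all other components are added explicitly.
public class ComponentSetup : MonoBehaviour
{
    void Start()
    {
        // CreatePrimitive returns a GameObject with a Transform,
        // MeshFilter, MeshRenderer and Collider already attached.
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);

        // The Transform controls position, rotation and scale.
        cube.transform.position = new Vector3(0f, 1f, 2f);

        // Any other component has to be added manually,
        // in the Inspector or from code:
        Rigidbody body = cube.AddComponent<Rigidbody>();
        body.useGravity = false;
    }
}
```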
Prefabs are a great way to configure and store GameObjects for re-use in your game. They act as templates, storing the components and properties of a specific GameObject and enabling you to create multiple instances of it within a Scene.
All copies of the Prefab template in a Scene are linked. This means that if you change the object values for the health potion Prefab, for example, each copy of that Prefab within the Scene will change to match it. However, you can also make specific instances of the GameObject different from the default Prefab settings.
Go to Edit > Preferences.
Change the color scheme under General, if it is available.
You can change the default script editor under External Tools; the External Script Editor drop-down lists the editors currently available on your computer.
Unity Introduction.
The HoloLens Seed project is a GitHub repository configured for Windows Mixed Reality development. The repo includes the Mixed Reality Toolkit and .gitignore files.
You can create a new project from the seed instead of downloading the different assets and setting up your git project yourself. To use the seed project, you can get a GitHub account and set up your development environment, or directly download the repository content.
You can clone this repository, delete its history and start a new git project by running the script below. You need to create your own GitHub repo first. Replace the placeholder with your own GitHub project URL.
Or by running the git commands below:
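The original script is not included in this text; a minimal sketch of the clone-and-reset workflow looks like the following. The seed repo URL and your repo URL are placeholders you must replace.

```shell
# Clone the seed project into a folder for your new project
git clone <seed-repo-url> MyProject
cd MyProject

# Throw away the seed repo's history
rm -rf .git

# Start a fresh repository pointing at your own GitHub repo
git init
git add .
git commit -m "Initial commit from HoloLens seed"
git remote add origin <your-github-repo-url>
git push -u origin master
```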
Whenever there is a new update for the Mixed Reality Toolkit or Azure Spatial Anchors packages, this repo will be updated with the latest version. You can automatically get the latest packages by adding the seed repo as your upstream and pulling from it.
You can check your remote origin and upstream by copying and pasting this into your terminal:
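These are standard git commands; the upstream URL below is a placeholder you must replace with the seed repo's URL.

```shell
# List configured remotes; you should see your own repo as "origin"
# and, if configured, the seed repo as "upstream".
git remote -v

# If "upstream" is missing, add it (replace the placeholder URL):
git remote add upstream <seed-repo-url>
```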
You can remove the upstream at any time by running:
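For example, assuming the seed remote was added under the name upstream:

```shell
git remote remove upstream
```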
If you are using the HoloLens Seed project, you do not need to follow this step; the seed project already comes with MRTK. Still, it's good to know how to import the MRTK assets for your future projects.
First, download MRTK by going to its GitHub page, aka.ms/MRTKGithub, and navigating to the Releases tab. Scroll down to the Assets section and download the packages:
Examples
Extensions
Foundation
Tools
In your Unity project, select the Assets tab and choose Import Package > Custom Package from the drop-down.
Navigate to the downloaded MRTK packages to select and import them into your project.
Once you have the MRTK assets imported, a new tab called Mixed Reality Toolkit will appear in your Unity editor. Navigate to the new tab and select Add Scene and Configure from the drop-down menu. New MixedRealityToolkit and MixedRealityPlayspace objects will appear in your Scene Hierarchy.
MixedRealityPlayspace now includes your Main Camera, and the camera is configured for Mixed Reality applications. The camera background is set to black so it renders as transparent, and the MixedRealityInputModule, EventSystem and GazeProvider components are now added to your camera.
You can create a new scene to compare the camera settings changed by MRTK.
You might be prompted to select a configuration. You can choose the default MRTK configuration or, if you are developing for a HoloLens device, the configuration for the appropriate version.
On your Project panel select Assets > MixedRealityToolkit.Examples > Demos.
Select the folder you want to see an example of, e.g. HandTracking, EyeTracking...
Open the Scenes folder and select a scene and double click to open.
You can press play to try out the scene in your editor window.
Turn on your HoloLens device.
Tap your wrist (HoloLens 2) or make a bloom gesture (HoloLens 1) to open the Start menu.
Open the Settings > Update & Security.
Select For Developers tab on the right hand panel.
The Settings app on Android includes a screen called Developer options that lets you configure system behaviors that help you profile and debug your app performance. For example, you can enable debugging over USB, capture a bug report, enable visual feedback for taps, flash window surfaces when they update, use the GPU for 2D graphics rendering, and more.
On Android 4.1 and lower, the Developer options screen is available by default. On Android 4.2 and higher, you must enable this screen. To enable developer options, tap the Build Number option 7 times. You can find this option in one of the following locations, depending on your Android version:
Android 9 (API level 28) and higher: Settings > About Phone > Build Number
Android 8.0.0 (API level 26) and Android 8.1.0 (API level 27): Settings > System > About Phone > Build Number
Android 7.1 (API level 25) and lower: Settings > About Phone > Build Number
At the top of the Developer options screen, you can toggle the options on and off (figure 1). You probably want to keep this on. When off, most options are disabled except those that don't require communication between the device and your development computer.
Before you can use the debugger and other tools, you need to enable USB debugging, which allows Android Studio and other SDK tools to recognize your device when connected via USB. To enable USB debugging, toggle the USB debugging option in the Developer Options menu. You can find this option in one of the following locations, depending on your Android version:
Android 9 (API level 28) and higher: Settings > System > Advanced > Developer Options > USB debugging
Android 8.0.0 (API level 26) and Android 8.1.0 (API level 27): Settings > System > Developer Options > USB debugging
Android 7.1 (API level 25) and lower: Settings > Developer Options > USB debugging
The rest of this page describes some of the other options available on this screen.
In the Unity menu, select File > Build Settings... to open the Build Settings window.
In the Build Settings window, select Universal Windows Platform and click the Switch Platform button.
Click on Project Settings in the Build Settings window, or in the Unity menu select Edit > Project Settings..., to open the Project Settings window.
In the Project Settings window, select Player > XR Settings to expand the XR Settings.
In the XR Settings, check the Virtual Reality Supported checkbox to enable virtual reality, then click the + icon and select Windows Mixed Reality to add the Windows Mixed Reality SDK.
Your projects settings might have been configured by Mixed Reality Toolkit.
Optimize the XR Settings as follows:
Set Windows Mixed Reality Depth Format to 16-bit depth.
Check the Windows Mixed Reality Enable Depth Sharing checkbox.
Set Stereo Rendering Mode to Single Pass Instanced.
In the Project Settings window, select Player > Publishing Settings to expand the Publishing Settings. Scroll down to the Capabilities section and check the SpatialPerception checkbox.
Save your project and open the Build Settings window. Click on the Build button, not Build and Run. When prompted, create a new folder (e.g. HoloLensBuild) and select it as the folder to build your files into.
When your build is done, your file explorer will automatically open to the build folder you just created.
1. Make sure you have imported Microsoft.MixedReality.Toolkit.Unity.Foundation as a custom asset or through NuGet.
2. In the Unity Package Manager (UPM), install the following packages:
3. Enable the Unity AR camera settings provider.
The following steps presume use of the MixedRealityToolkit object. Steps required for other service registrars may be different.
1. Select the MixedRealityToolkit object in the scene hierarchy.
2. Select Copy and Customize to Clone the MRTK Profile to enable custom configuration.
3. Select Clone next to the Camera Profile.
4. Navigate the Inspector panel to the camera system section and expand the Camera Settings Providers section.
5. Click Add Camera Settings Provider and expand the newly added New camera settings entry.
6. Select the Unity AR Camera Settings provider from the Type drop down.
Android:
AR Foundation Version: 2.1.4
ARCore XR Plugin Version: 2.1.2
iOS:
AR Foundation Version: 2.1.4
ARKit XR Plugin Version: 2.1.2
There are no additional steps after switching the platform for Android.
Unchecking Strip Engine Code is the short-term solution to an error in Xcode #6646. We are working on a long-term solution.
Common issues to consider while developing for Mixed Reality
Since a Mixed Reality application might have access to the user's video stream, developers might be able to save or share private information about the user. Be careful not to save any sensitive data or images anywhere other than the user's device. Never send sensitive information to any backend.
An iris scan is a more accurate identification method than a fingerprint. Since iris scan data can be used to identify and sign in a user, it should never leave the user's device. HoloLens 2 does not send the iris scan to the cloud and does not give access to the data.
Eye tracking, while a very useful tool for making your application more accessible, can also be used to collect data about the user's attention, and might be used to manipulate it.
Unity versions are not backward compatible. If you decide to open a project in a newer version, Unity will try to update your project automatically, but it is not guaranteed that the newer version will work with your imported assets. There might be incompatibilities between your assets or your code and the new version.
Let's take the latest version in the image below, 2019.3.5f:
2019: the year the Unity version was developed. Major releases are issued once a year; if there are major changes, they will break your application. For now, stick to the same year version unless you are creating a new application from scratch. We will talk about how to update your project to the latest version in the following lessons.
3: the 3rd iteration in 2019. When a version updates from 2 to 3, there may be minor breaking changes. Make sure to read the changelog before updating your project from 2 to 3.
.5f: bug-fix releases. These usually contain small fixes that do not break your code or the APIs being used. Feel free to update your project from 2019.3.4f to 2019.3.5f.
In Unity Hub, under the Projects tab, you can open the Unity version drop-down for your application and select a newer version of Unity. Unity will confirm your choice before updating your project. It is a good idea to save a version of your project as a new branch on GitHub, in case you need to revert.
Here is a detailed article about the subject: https://www.what-could-possibly-go-wrong.com/unity-and-nuget/
Mixed Reality getting started resources
Windows Mixed Reality Docs: aka.ms/MixedRealityDocs
Mixed Reality Curriculum Youtube Playlist: aka.ms/MixedRealityCurriculumVideos
Mixed Reality Resources Repository: aka.ms/MixedRealityResourcesRepository
HoloLens Seed Project Repository: aka.ms/HoloLensSeedProject
Code Samples: aka.ms/MixedRealityUnitySamples
Mixed Reality Development Tools to install: https://aka.ms/HoloLensToolInstalls
Eliminate Texture Confusion: Bump, Normal and Displacement Maps: https://www.pluralsight.com/blog/film-games/bump-normal-and-displacement-maps
Normal vs. Displacement Mapping & Why Games Use Normals: https://cgcookie.com/articles/normal-vs-displacement-mapping-why-games-use-normals
Live editing WebGL shaders with Firefox Developer Tools: https://hacks.mozilla.org/2013/11/live-editing-webgl-shaders-with-firefox-developer-tools/
Introduction to Mixed Reality Developer Tools and 3D Concepts
Short link: aka.ms/UnityMixedRealityDeveloperTools
In this section, we will go through the developer tools and how to get started with debugging our applications.
The second part of the course focuses on creating and using 3D assets in your applications.
Debugging is the process of finding and resolving defects or problems within a computer program that prevent correct operation of the software.
Debugging tactics can involve:
Interactive debugging.
Control flow analysis.
Unit testing.
Integration testing.
Log file analysis.
Monitoring at the application or system level.
Memory dumps.
Profiling.
A 3D model is a digital representation of a real-world object. Representing a 3D object requires you to get to know the parts that make up the object.
Polygonal modeling is an approach for modeling objects by representing or approximating their surfaces using polygon meshes.
Objects created with polygon meshes must store different types of elements. These include vertices, edges, faces, polygons and surfaces.
The more edges and faces a model has, the more detailed it becomes. On the other hand, a high polygon count will reduce the performance of your app, because the calculations needed to render the model are expensive.
Reuse a model instance instead of creating a new model wherever you can.
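To see how heavy a model is, you can log its vertex and triangle counts with Unity's standard Mesh API. A minimal sketch (attach it to a GameObject that has a MeshFilter; the class name is illustrative):

```csharp
using UnityEngine;

// Illustrative sketch: logs how heavy a model is.
public class PolyCount : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;

        // The triangle list stores three vertex indices per triangle.
        int triangleCount = mesh.triangles.Length / 3;
        Debug.Log($"Vertices: {mesh.vertexCount}, Triangles: {triangleCount}");
    }
}
```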
In this section, we will install the Windows Mixed Reality developer tools and learn how to use them.
Mixed Reality Toolkit (MRTK) supports in-editor input simulation. Simply run your scene by clicking Unity's Play button, and use these keys to simulate input.
Press W, A, S, D keys to move the camera.
Hold the right mouse button and move the mouse to look around.
To bring up the simulated hands, press the Space bar (right hand) or the left Shift key (left hand).
To keep the simulated hands in view, press the T or Y key.
To rotate the simulated hands, press Q or E (horizontal) / R or F (vertical).
The HoloLens Emulator lets you test holographic applications on your PC without a physical HoloLens. It also includes the HoloLens development toolset.
You can download the latest HoloLens Emulator update here: bit.ly/emulator2.
Before installing the emulator, make sure your PC meets the following hardware requirements:
Windows 10 Home Edition does not support Hyper-V or the HoloLens Emulator. The HoloLens 2 Emulator requires the Windows 10 October 2018 update or later.
The Windows Device Portal for HoloLens lets you configure and manage your device remotely over Wi-Fi or USB. The Device Portal is a web server on your HoloLens that you can connect to from a web browser on your PC. The Device Portal includes many tools that will help you manage your HoloLens and debug and optimize your apps.
Turn on your HoloLens device.
Tap your wrist (HoloLens 2) or make a bloom gesture (HoloLens 1) to open the Start menu.
Open the Settings > Update & Security.
Select For Developers tab on the right hand panel.
Enable "use developer features" with the toggle button.
Scroll down in the For Developers settings and enable "Device Portal".
Go back to all settings page by clicking "Home" on the left hand panel and select "Network & Internet" settings.
Select "Wifi" tab on the left, if it is not already selected.
Select the Wi-Fi network you are connected to and click on "Advanced Options".
Scroll down and write down the IPv4 address.
You will type this IP address into your browser to reach your device portal.
You might see a connection Alert as shown below:
Go ahead and click the Advanced button, then click Proceed to <your IP address> (unsafe).
Congrats, you made it to your device portal.
Click Views on the right hand panel and select "Live Preview" to see the camera view of your HoloLens.
You can turn off the PV camera if you would like to share or record what you are seeing through your HoloLens but do not want to capture your environment.
If you enabled voice commands, you can see the videos you recorded or the screenshots you snapped by asking Cortana, here in the Videos and Photos section.
Developer Tools and 3D assets resources
Asset creation tools: https://github.com/Yonet/MixedRealityResources#asset-creation-tools.
Asset Libraries: https://github.com/Yonet/MixedRealityResources#asset-libraries.
Debugging C# code in Unity: https://docs.unity3d.com/Manual/ManagedCodeDebugging.html
Unity IL2CPP debugging: https://aka.ms/AA7qap4.
Working with Hand Interactions.
Short link: aka.ms/UnityHandInteractions
In this section, we will look into the hand interactions as an input in our application.
Hand interactions are currently available only for HoloLens 2 and Oculus devices.
In the project section, we will create our first hand interactions to scale, move and rotate objects.
Hand interaction is a very natural way to interact with 3D models. Since we interact with and modify real objects with our hands, a new user can start interacting with your application without having to learn its interface first.
Gestures are input events based on human hands.
There are two types of devices that raise gesture input events in Mixed Reality Toolkit(MRTK):
Windows Mixed Reality devices such as HoloLens. This describes pinching motions ("Air Tap") and tap-and-hold gestures. WindowsMixedRealityDeviceManager wraps the Unity XR.WSA.Input.GestureRecognizer to consume Unity's gesture events from HoloLens devices.
Touch screen devices. UnityTouchController wraps the Unity Touch class that supports physical touch screens.
Both of these input sources use the Gesture Settings profile to translate Unity's Touch and Gesture events respectively into MRTK's Input Actions. This profile can be found under the Input System Settings profile.
The HandInteractionExamples.unity example scene contains various types of interactions and UI controls that highlight articulated hand input.
To try the hand interaction scene, first open the HandInteractionExamples scene under Assets\MixedRealityToolkit.Examples\Demos\HandTracking\Scenes\HandInteractionExamples.
This example scene uses TextMesh Pro. If you receive a prompt asking you to import TMP Essentials, select the Import TMP Essentials button. Some of the MRTK examples use TMP Essentials for improved text rendering. After you select Import TMP Essentials, Unity will then import the package.
After Unity completes the import, close the TMP Importer window and reload the scene. You can reload the scene by double-clicking the scene in the Project window.
After the scene is reloaded, press the Play button.
You can organize any objects in Unity into a grid by using an Object collection script. In this example, you will learn how to organize nine 3D objects into a 3 x 3 grid.
First, configure your Unity scene for the Mixed Reality Toolkit. Next, in the Hierarchy window, right click in an empty space and select Create Empty. This will create an empty GameObject. Name the object CubeCollection.
In the Inspector window, position CubeCollection so that the collection displays in front of the user (for example, X = 0, Y = -0.2, Z = 2).
With CubeCollection still selected, in the Hierarchy window, create a child Cube object. Change the scale of the object to X = 0.25, Y = 0.25, Z = 0.25.
Duplicate the child Cube object 8 times so that there is a total of 9 Cube child objects within the CubeCollection object.
In the Hierarchy window, select CubeCollection. In the Inspector window, click Add Component and search for the Grid Object Collection (Script). Once found, select the component to add to the object.
Configure the Grid Object Collection (Script) component by changing the Sort Type property to Child Order. This will ensure that the child objects (the 9 Cube objects) are sorted in the order you placed them under the parent object.
Click Update Collection to apply the new configuration.
You can adjust the parameters within the Grid Object Collection (Script) component to further customize the grid. For example, you could change the number of rows to 2 by changing the value in the Num Rows property. Be sure to click Update Collection to apply the new configuration.
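The Inspector steps above can also be done from code. The following is a minimal sketch, assuming MRTK v2's GridObjectCollection API (SortType, Rows, CellWidth/CellHeight, UpdateCollection); attach it to the CubeCollection parent that already has the nine child cubes:

```csharp
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Sketch: configure the grid layout from code instead of the Inspector.
public class CubeGridSetup : MonoBehaviour
{
    private void Start()
    {
        var grid = gameObject.AddComponent<GridObjectCollection>();
        grid.SortType = CollationOrder.ChildOrder; // sort by the order children were added
        grid.Rows = 3;                             // 3 x 3 grid for the 9 cubes
        grid.CellWidth = 0.3f;                     // spacing in meters
        grid.CellHeight = 0.3f;
        grid.UpdateCollection();                   // same effect as the Update Collection button
    }
}
```

Calling UpdateCollection() again after changing any parameter re-lays-out the children, just as clicking the Update Collection button does.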
To grab and move an object, first ensure that the Manipulation Handler (Script) and Near Interaction Grabbable (Script) components are added to the object. The Manipulation Handler (Script) allows you to manipulate an object, while the Near Interaction Grabbable (Script) allows the object to respond to near hand interactions.
To add the scripts to the object, first select the object in the Hierarchy window. In the Inspector window, click Add Component and search for each script. Once found, select the script to add to the object.
With the object selected, in the Inspector window, navigate to the Manipulation Handler (Script) component to modify the component's parameters.
You can move an object using one or two hands. This setting is dependent on the Manipulation Type parameter. The Manipulation Type can be limited to either:
One Handed Only
Two Handed Only
One and Two Handed
Select the preferred Manipulation Type to restrict the user to one of the available manipulation types.
You can now test grabbing and moving the object using the in-editor simulation. Press the Play button to enter Game mode. Once in Game mode, hold the space bar to bring up the hand and use the mouse to grab and move the object.
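The same setup can be sketched in code. This assumes MRTK v2's component and enum names (NearInteractionGrabbable, ManipulationHandler, HandMovementType) and that the object already has a Collider:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Sketch: make an object grabbable and movable with one or two hands.
public class MakeGrabbable : MonoBehaviour
{
    private void Start()
    {
        // Respond to near (touching) hand grabs.
        gameObject.AddComponent<NearInteractionGrabbable>();

        // Allow the object to be moved; restrict hands via ManipulationType.
        var handler = gameObject.AddComponent<ManipulationHandler>();
        handler.ManipulationType = ManipulationHandler.HandMovementType.OneAndTwoHanded;
    }
}
```

Swapping OneAndTwoHanded for OneHandedOnly or TwoHandedOnly corresponds to the Manipulation Type choices listed above.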
To rotate and scale an object, first ensure that the Manipulation Handler (Script) and Near Interaction Grabbable (Script) components are added to the object. The Manipulation Handler (Script) allows you to manipulate an object, while the Near Interaction Grabbable (Script) allows the object to respond to near hand interactions.
To add the scripts to the object, first select the object in the Hierarchy window. In the Inspector window, click Add Component and search for each script. Once found, select the script to add to the object.
With the object selected, in the Inspector window, navigate to the Manipulation Handler (Script) component to modify the component's parameters.
You can rotate an object using one or two hands. This setting is dependent on the Manipulation Type parameter. The Manipulation Type can be limited to either:
One Handed Only
Two Handed Only
One and Two Handed
Select Two Handed Only for Manipulation Type so that the user can only manipulate the object with two hands.
To limit the two handed manipulation to rotating and scaling, change Two Handed Manipulation Type to Rotate Scale.
To limit whether the object can be rotated on the x, y or z axis, change Constraint on Rotation to your preferred axis.
You can now test rotating and scaling the object using the in-editor simulation. Press the Play button to enter Game mode. Once in Game mode, press T and Y on the keyboard to toggle both hands. This keeps both hands visible in Game mode. Press the space bar to move the right hand, and use left mouse click + Shift to move the left hand. While controlling either hand, use the mouse to rotate and scale the object.
Bounding boxes make it easier and more intuitive to manipulate objects with one hand for both near and far interaction by providing handles that can be used for scaling and rotating. A bounding box will show a cube around the hologram to indicate that it can be interacted with. The bounding box also reacts to user input.
You can add a bounding box to an object by adding the BoundingBox.cs script as a component of the object.
To add the Bounding Box (Script) component to an object, first select the object in the Hierarchy window. In the Inspector window, click Add Component and search for Bounding Box.
Select the Bounding Box script to apply the component to the object. The bounding box is only visible in Game mode. Press play to view the bounding box. By default, the HoloLens 1st gen style is used.
To reflect the MRTK bounding box style, you need to change the parameters inside the Handles section of the Bounding Box (Script) component.
You can change the color of the handles by assigning a material to the Handle Material property.
In the Handles section, click the circle icon to open the Select Material window.
In the Select Material window, search for BoundingBoxHandleWhite. Once found, select to assign the color to the handle material.
When you press play, the handle colors for the bounding box will be white.
You can change the color of the handles when an object is grabbed by assigning a material to the Handle Grabbed Material property.
In the Handles section, click the circle icon to open the Select Material window.
In the Select Material window, search for BoundingBoxHandleBlueGrabbed. Once found, select to assign the color to the handle material.
When you press play, grab one of the handles of the bounding box. The color of the handle will change to blue.
You can change the scale handles in the corners by assigning prefabs to the Scale Handle Prefab and Scale Handle Slate Prefab (for 2D slates) parameters.
First, assign a prefab to the Scale Handle Prefab. In the Handles section, click the circle icon to open the Select GameObject window.
In the Select GameObject window, switch to the Assets tab and search for MRTK_BoundingBox_ScaleHandle. Once found, select to assign the prefab to the scale handle.
Next, assign a prefab to the Scale Handle Slate Prefab. In the Handles section, click the circle icon to open the Select GameObject window.
In the Select GameObject window, switch to the Assets tab and search for MRTK_BoundingBox_ScaleHandle_Slate. Once found, select to assign the prefab to the slate scale handle.
When you press play, grab one of the handles of the bounding box to see how the scale handles have changed.
You can change the rotation handles by assigning a rotation handle prefab in the Rotation Handle Prefab parameter.
In the Handles section, click the circle icon to open the Select GameObject window.
In the Select GameObject window, switch to the Assets tab and search for MRTK_BoundingBox_RotateHandle. Once found, select to assign the prefab to the rotation handle.
When you press play, grab one of the handles of the bounding box to see how the rotation handles have changed.
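The handle styling above can also be applied from code. This is a sketch assuming MRTK v2's BoundingBox property names (HandleMaterial, HandleGrabbedMaterial, ScaleHandlePrefab, RotationHandlePrefab); the materials and prefabs are assigned in the Inspector:

```csharp
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Sketch: add a bounding box and style its handles from code.
public class StyledBoundingBox : MonoBehaviour
{
    [SerializeField] private Material handleMaterial;        // e.g. BoundingBoxHandleWhite
    [SerializeField] private Material handleGrabbedMaterial; // e.g. BoundingBoxHandleBlueGrabbed
    [SerializeField] private GameObject scaleHandlePrefab;   // e.g. MRTK_BoundingBox_ScaleHandle
    [SerializeField] private GameObject rotationHandlePrefab; // e.g. MRTK_BoundingBox_RotateHandle

    private void Start()
    {
        var box = gameObject.AddComponent<BoundingBox>();
        box.HandleMaterial = handleMaterial;
        box.HandleGrabbedMaterial = handleGrabbedMaterial;
        box.ScaleHandlePrefab = scaleHandlePrefab;
        box.RotationHandlePrefab = rotationHandlePrefab;
    }
}
```

As in the Inspector workflow, the bounding box is only visible once you enter Game mode.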
You can configure an object to play a sound when the user touches it by adding a touch event trigger to the object.
To be able to trigger touch events, the object must have the following components:
Collider component, preferably a Box Collider
Near Interaction Touchable (Script) component
Hand Interaction Touch (Script) component
To add audio feedback, first add an Audio Source component to the object. The audio source component enables you to play audio back in the scene. In the Hierarchy window, select the object and click Add Component in the Inspector window. Search for Audio Source to add the Audio Source component.
Once the Audio Source component has been added to the object, in the Inspector window, change the Spatial Blend property to 1 to enable spatial audio.
Next, with the object still selected, click Add Component and search for the Near Interaction Touchable (Script). Once found, select the component to add to the object. Near interactions come in the form of touches and grabs, which occur when the user is in close proximity to an object and uses hand interaction.
After the Near Interaction Touchable (Script) is added to the object, click the Fix Bounds and Fix Center buttons. This will update the Local Center and Bounds properties of the Near Interaction Touchable (Script) to match the BoxCollider.
With the object still selected, click Add Component and search for the Hand Interaction Touch (Script). Once found, select the component to add to the object.
To make audio play when the object is touched, you will need to add an On Touch Started event to the Hand Interaction Touch (Script) component. In the Inspector window, navigate to the Hand Interaction Touch (Script) component and click the small + icon to create a new On Touch Started () event.
Drag the object that will receive the event into the field, and define AudioSource.PlayOneShot as the action to be triggered. PlayOneShot plays the assigned audio clip.
Next, assign an audio clip to the trigger. You can find audio clips provided by MRTK by navigating to Assets > MixedRealityToolkit.SDK > StandardAssets > Audio. Once you've found a suitable audio clip, assign the audio clip to the Audio Clip field.
You can now test the touch interaction using the in-editor simulation. Press the Play button to enter Game mode. Once in Game mode, hold the spacebar to bring up the hand and use the mouse to touch the object and trigger the sound effect.
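The same behavior can be wired up in a script instead of through the On Touch Started () event. This sketch assumes MRTK v2's IMixedRealityTouchHandler interface and that the object has a BoxCollider, an AudioSource, and a Near Interaction Touchable component as described above:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Sketch: play a one-shot clip whenever the object is touched.
[RequireComponent(typeof(AudioSource))]
public class TouchSound : MonoBehaviour, IMixedRealityTouchHandler
{
    [SerializeField] private AudioClip touchClip; // assign one of the MRTK audio clips

    public void OnTouchStarted(HandTrackingInputEventData eventData)
    {
        // Equivalent to the On Touch Started () -> AudioSource.PlayOneShot wiring.
        GetComponent<AudioSource>().PlayOneShot(touchClip);
    }

    // Required by the interface; unused in this example.
    public void OnTouchCompleted(HandTrackingInputEventData eventData) { }
    public void OnTouchUpdated(HandTrackingInputEventData eventData) { }
}
```

Keeping Spatial Blend at 1 on the AudioSource, as configured above, makes the one-shot clip play as spatial audio from the object's position.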
Mixed Reality Toolkit is equipped with a variety of button prefabs that you can add to your project. A prefab is a pre-configured GameObject stored as a Unity Asset that can be reused throughout your project.
You can find button prefabs available in MRTK by navigating to MixedRealityToolkit.SDK > Features > UX > Interactable > Prefabs.
In this project, you will learn how to change the color of a cube when a button is pressed.
First, select the button of your choice from the Project window and drag into the Hierarchy window.
Change the button's Transform Position so that it's positioned in front of the camera, for example at x = 0, y = 0, z = 0.5.
Next, right click on an empty spot in the Hierarchy window and click 3D Object > Cube.
With the Cube object still selected, in the Inspector window, change the Transform Position so that the cube is located near but not overlapping the button. In addition, resize the cube by changing the Transform Scale.
In the Hierarchy window, select the button. In the Inspector window, navigate to the Interactable (Script) component.
In the Events section, expand the Receivers section.
Click the Add Event button to create a new event receiver of Event Receiver Type InteractableOnPressReceiver.
For the newly created InteractableOnPressReceiver event, change the Interaction Filter to Near and Far.
From the Hierarchy window, click and drag the Cube GameObject into the Event Properties object field for the On Press() event to assign the Cube as a receiver of the On Press () event.
Next, click the action dropdown (currently assigned No Function) and select MeshRenderer > Material material. This action will set the Cube's material property to change when the button is pressed.
Now, assign a color for the Cube to change to when the button is pressed. Click the small circle icon next to the Material field (currently assigned None (Material)) to open the Select Material window.
MRTK provides a variety of materials that can be used in your projects. In the search bar, search for MRTK_Standard and select your color of choice.
Now that the event for the button press is configured, you need to configure an event that occurs when the button is released. For the On Release () event, click and drag the Cube GameObject into the Event Properties object field.
Next, click the action dropdown (currently assigned No Function) and select MeshRenderer > Material material. This action will set the Cube's material property to change when the button is released.
Now, assign a color for the Cube to change to when the button is released. Click the small circle icon next to the Material field (currently assigned None (Material)) to open the Select Material window and search for MRTK_Standard. Select your choice of color.
Now that both the On Press () and On Release () events are configured for the button, press Play to enter Game mode and test the button in the in-editor simulator.
To press the button, press the space bar + mouse scroll forward.
To release the button, press the space bar + mouse scroll backward.
MRTK uses what are known as Solvers to allow UI elements to follow the user or other game objects in the scene. The Radial View solver is a tag-along component that keeps a particular portion of a GameObject within the user's view.
You can make a button follow your hand by adding the Radial View (Script) component to the object.
First, drag a button prefab from MixedRealityToolkit.SDK > Features > UX > Interactable > Prefabs to the Hierarchy window.
In the Hierarchy window, select the button prefab. In the Inspector window, click Add Component. Search for Radial View. Once found, select to add the component to the button.
When you add the Radial View (Script) component to the button, the Solver Handler (Script) component is added as well because it is required by the Radial View (Script).
The Solver Handler (Script) component needs to be configured so that the button follows the user's hand. First, change Tracked Target Type to Hand Joint. This will enable you to define which hand joint the button follows.
Next, for the Solver Handler (Script) component, change Tracked Handness to Right. This setting determines which hand is tracked.
There are over 20 hand joints available for tracking. Still inside the Solver Handler (Script) component, change Tracked Hand Joint to Wrist so that the button tracks the user's wrist.
Now that the hand tracking is configured, you need to configure the Radial View (Script) component to further define where the button is located and how it is viewed in relation to the user. First, change Reference Direction to Facing World Up. This parameter determines which direction the button faces.
Next, in the Radial View (Script) component, change the Min Distance and Max Distance to 0. The Min and Max Distance parameters determine how far the button should be kept from the user. As a reminder, the unit of measurement in Unity is meters. Therefore, a Min Distance of 1 would push the button away to ensure it is never closer than 1 meter to the user.
Now that the button is configured to follow your right wrist, press Play to enter Game mode and test the solver in the in-editor simulator. Press and hold the space bar to bring up the hand. Move the mouse cursor around to move the hand, and click and hold the left mouse button to rotate the hand:
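The solver configuration above can be sketched in code as well, assuming MRTK v2's SolverHandler and RadialView property names (note MRTK spells the property TrackedHandness):

```csharp
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

// Sketch: make this object tag along with the user's right wrist.
public class FollowWrist : MonoBehaviour
{
    private void Start()
    {
        // SolverHandler chooses WHAT is tracked: the right hand's wrist joint.
        var handler = gameObject.AddComponent<SolverHandler>();
        handler.TrackedTargetType = TrackedObjectType.HandJoint;
        handler.TrackedHandness = Handedness.Right;
        handler.TrackedHandJoint = TrackedHandJoint.Wrist;

        // RadialView chooses HOW the object follows the tracked target.
        var radial = gameObject.AddComponent<RadialView>();
        radial.MinDistance = 0f; // keep the button pinned to the wrist
        radial.MaxDistance = 0f;
    }
}
```

Adding RadialView in the Inspector pulls in SolverHandler automatically; in code the order above makes the dependency explicit.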
Eye and Head Gaze Tracking.
Code Samples: https://aka.ms/MixedRealityUnitySamples
Spatial Visualization using Bing Map using HoloLens 2 and Windows Mixed Reality Headsets.
Shortlink: aka.ms/UnityBingMapsVisualizationLesson
This project is for HoloLens 2 and Windows Mixed Reality Headsets.
In this project, we will create a 3D Map visualization using Bing Maps Unity SDK: aka.ms/BingMapsUnitySDK.
Outings, a sample app created with the Bing Maps SDK, can be found in the Microsoft Store for PC and HoloLens 1: aka.ms/OutingsHoloLens1
We will build the app shown in the video below for HoloLens 2. You can also build it for a Windows Mixed Reality headset and use hand controllers instead of hand gestures.
With spatial data you can discover growth insights, manage facilities and networks, and provide location information to customers. If you don't consider spatial components and how they relate to your business, you increase your risk of poor results.
Spatial analysis allows you to solve complex location-oriented problems and better understand where and what is occurring in your world. It goes beyond mere mapping to let you study the characteristics of places and the relationships between them. Spatial analysis lends new perspectives to your decision-making.
A good visualization allows users to understand data better by seeing the data points in the right context. Check out some of the examples below to see what a visualization conveys that would be hard to grasp from the raw data points alone.
Small arms and ammunition import and export interactive visualization: https://armsglobe.chromeexperiments.com/
Compare Covid-19 Data tab and map tab to see the difference it makes in your perception: https://ncov2019.live/
Wind and weather visualizations: https://www.windy.com/
Chrome experiments with Globe: https://experiments.withgoogle.com/chrome/globe
Maps SDK, a Microsoft Garage project, provides a control to visualize a 3D map in Unity. The map control handles streaming and rendering of 3D terrain data with world-wide coverage. Select cities are rendered at a very high level of detail. Data is provided by Bing Maps.
The map control has been optimized for mixed reality applications and devices including the HoloLens, HoloLens 2, Windows Immersive headsets, HTC Vive, and Oculus Rift. Soon the SDK will also be provided as an extension to the Mixed Reality Toolkit (MRTK).
HoloLens 2 and Windows Mixed Reality Headset project using Bing Maps SDK
In this project we will create a 3D map visualization as shown in the video below:
Follow along the next steps or answer the questions below to see if you can skip some of the steps.
A Bing Maps developer key is required to enable the mapping functionality of the SDK.
Sign in to the Bing Maps Dev Center.
For new accounts, follow the instructions at Creating a Bing Maps Account.
Select My keys under My Account, and select the option to create a new key.
Provide the following required information to create a key:
Application name: The name of the application.
Key type: Basic or Enterprise. Key types are explained here.
Application type: Select Other Public Mobile App.
Click the Create button. The new key displays in the list of available keys. This key will be used later when setting up the Unity project.
See the Understanding transactions page for more details about transaction accounting.
After importing the SDK, to add a map to the scene...
Create a new GameObject.
Add a MapRenderer component to the GameObject: Add Component -> Scripts -> Microsoft.Maps.Unity -> MapRenderer.
In the MapRenderer component, provide the Bing Maps developer key.
For the sample scenes, a Bing Maps developer key will also need to be provided in the MapRenderer.
Once a valid key is provided, the map will render at runtime, and in the editor as well unless the Show Map Data in Editor option has been disabled.
The view of the map can be configured in the Location foldout.
The Center is the geolocation where the map is currently focused, represented as a latitude and longitude in degrees.
The ZoomLevel is the area of the map that is visible. Lower zoom levels correspond to zooming out; higher zoom levels correspond to zooming in.
The map uses a web Mercator projection.
The shape and dimension of the map can be configured in the layout section.
The shape of the map can be a block or a cylinder. The default is block.
Dimensions are specified in local space. For convenience, the sizes scaled to Unity's world space are displayed in the editor as well.
Larger map dimensions will require more data to be downloaded and rendered. This will affect the overall performance of the app. It is recommended to stay with the default settings or smaller, or only increase the map dimensions on devices that are capable. Regardless, the map dimensions are clamped to a maximum size.
The type of terrain rendered by the map can be modified with the MapTerrainType.
Default: The map terrain consists of either elevation data or high resolution 3D models.
Elevated: The map terrain consists only of elevation data. No high resolution 3D models are used.
Flat: Both elevation and high resolution 3D models are disabled. The map terrain surface will be flat.
If the scenario does not require the higher resolution data, disabling the terrain can improve performance. The Flat type requires the least performance overhead.
There are two ways to add pins to your map. First, you can directly attach a MapPin component to a GameObject that you want the MapRenderer to place at a specific latitude and longitude. When attached, the MapRenderer will take over positioning the MapPin's Transform component.
The second way is to add a MapPinLayer, covered in the next section.
This approach is better suited for large data sets where clustering may be required:
Add a MapPinLayer component to the MapRenderer's GameObject. If clustering is needed, enable this setting on the layer and attach a prefab that has a ClusterMapPin component.
In a script, get a reference to the MapPinLayer and add MapPin instances to the layer's MapPins collection.
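The script step can be sketched as follows, assuming the Bing Maps Unity SDK's Microsoft.Maps.Unity namespace and its LatLon type; the layer and pin prefab references, and the example coordinates, are illustrative:

```csharp
using Microsoft.Geospatial;
using Microsoft.Maps.Unity;
using UnityEngine;

// Sketch: add a MapPin to a MapPinLayer from code.
public class AddPins : MonoBehaviour
{
    [SerializeField] private MapPinLayer mapPinLayer; // layer on the MapRenderer's GameObject
    [SerializeField] private MapPin mapPinPrefab;     // prefab with a MapPin component

    private void Start()
    {
        var pin = Instantiate(mapPinPrefab);
        pin.Location = new LatLon(47.6062, -122.3321); // example coordinates (Seattle)
        mapPinLayer.MapPins.Add(pin);                  // the layer now positions the pin
    }
}
```

Once added to the layer's MapPins collection, the pin participates in clustering automatically if a ClusterMapPin prefab is set on the layer.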
The Flat map terrain type requires the least performance overhead: both elevation and high resolution 3D models are disabled, and the map terrain surface will be flat.
It is important to note that the level of detail offset can have a large impact on performance. The trade-off is that higher quality comes with a higher performance impact, and the cache size will grow more quickly. Lowering the quality may be beneficial on devices that are performance constrained.
It is possible to replace the material used for either the terrain or the clipping volume wall.
When doing this, it is recommended to make a copy of the base shaders, which are imported as part of the NuGet package under lib\unity\map\Resources. Use the copies made of these shaders as the starting point for new materials.
Importantly, the ENABLE_ELEVATION_TEXTURE keyword used by the shaders will need to be maintained. Certain draw calls for the terrain require an elevation texture, while others do not.
An advantage of using the MapPinLayer is that it supports clustering. If a ClusterMapPin prefab is specified on the layer, MapPins will be clustered automatically. When MapPins are clustered, the ClusterMapPin is shown in place of the many MapPins associated with it.
Clustering is highly recommended for large data sets as this will reduce the number of MapPin instances that need to be rendered for zoomed out views.
Besides this rendering performance benefit, clustering MapPins is often preferable from a usability perspective: dense, cluttered views make it more difficult for the user to interact with individual MapPins because the pins can overlap with each other.
Clusters are created at every zoom level, so as the zoom level of the MapRenderer changes, the visible clusters and MapPins may change as well.
Working with REST APIs
Representational state transfer (REST) is a software architectural style that defines a set of constraints to be used for creating Web services.
To save any user data and progression, we need a back-end system with storage.
How to sign up for Azure student account?
How to set up a web project in Azure?
How to create a REST end-point using Azure Functions?
How to make a call to your API endpoint?
How to test your API endpoint?
How to decide on which database you need for your application?
How to set-up your first database with Azure?
How to change your APIs to save the data to your database?
How to retrieve your data from the database?
How to reflect your data changes in your application?
How to handle errors?
What are the security concerns with REST APIs?
What kind of bugs are common related to REST APIs?
Creating and adding many MapPins at once, either to a MapPinLayer or as children of the MapRenderer, could be time consuming and thus cause a frame hitch. If the MapPins can be initialized and added all at startup, this may be an acceptable one time hit. However, if data is being streamed and converted to MapPins throughout the app's lifetime, consider spreading out the MapPin creation and addition over multiple frames, i.e. time slice the additions. This will help to maintain render performance.
UnityWebRequest provides a modular system for composing HTTP requests and handling HTTP responses. The primary goal of the UnityWebRequest system is to allow Unity games to interact with web browser back-ends. It also supports high-demand features such as chunked HTTP requests, streaming POST/PUT operations, and full control over HTTP headers and verbs.
The system consists of two layers:
A High-Level API (HLAPI) wraps the Low-Level API and provides a convenient interface for performing common operations
A Low-Level API (LLAPI) provides maximum flexibility for more advanced users
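A minimal HLAPI example looks like the following sketch. The URL is a placeholder for your own REST endpoint, and `request.result` assumes Unity 2020.2 or later (older versions use `isNetworkError`/`isHttpError`):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: a simple GET request against a REST endpoint using UnityWebRequest.
public class ApiClient : MonoBehaviour
{
    private IEnumerator Start()
    {
        // Placeholder endpoint; replace with your own API URL.
        using (UnityWebRequest request = UnityWebRequest.Get("https://example.com/api/progress"))
        {
            yield return request.SendWebRequest(); // runs asynchronously in a coroutine

            if (request.result != UnityWebRequest.Result.Success)
            {
                Debug.LogError(request.error);
            }
            else
            {
                Debug.Log(request.downloadHandler.text); // response body as text
            }
        }
    }
}
```

POST/PUT requests follow the same shape via UnityWebRequest.Post and UnityWebRequest.Put, which is how you would send user data and progression to your Azure Functions endpoint.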
Code Samples: https://aka.ms/MixedRealityUnitySamples
Bing Maps Account: https://aka.ms/NewBingMapsAccount
BingMaps-SDK Repository: https://aka.ms/BingMapsUnitySDK
BingMaps SDK releases: https://aka.ms/BingMapsSDKReleases
How to create Bing Maps Account: https://aka.ms/BingMapsAccount
Code Samples:
Spatial Anchors allow you to place virtual objects at specific points in your real world. You can think of them as a more accurate GPS that works indoors as well as outdoors. Spatial Anchors are available for iOS and Android mobile devices and HoloLens headsets. Azure Spatial Anchors gives you a way to save and share anchor points, so that you can share virtual objects or information between multiple devices and persist them over time.
In this lesson, you will learn about Spatial Anchors and the use case scenarios of Spatial Anchors:
When creating or locating anchors, pictures of the environment are processed on the device into a derived format. This derived format is transmitted to and stored on the service.
To provide transparency, below is an image of an environment and the derived sparse point cloud. The point cloud shows the geometric representation of the environment that is transmitted and stored on the service. For each point in the sparse point cloud, we transmit and store a hash of the visual characteristics of that point. The hash is derived from, but does not contain, any pixel data.
Using Spatial Anchors allows you to share information in a specific context, time, and space. Some use cases include user guides for machinery, inventory information, and educational applications. The evolution of smartphones and near-universal access to GPS data changed the apps we build and enabled ride-sharing and location-based recommendation applications. Developing with Azure Spatial Anchors will help you deliver contextual data at the right time and place, and will open up new possibilities indoors.
To share Azure Spatial Anchors, the SDK translates the local spatial anchor data into the Azure Spatial Anchors format and saves it. Similarly, when a different platform asks for the same spatial anchor, the device receives the anchor in that platform's format.
By using anchor relationships, you can create connected anchors in a space and then ask questions like these:
Are there anchors nearby?
How far away are they?
You could use connected anchors in cases like these:
A worker needs to complete a task that involves visiting various locations in an industrial factory. The factory has spatial anchors at each location. A HoloLens or mobile app helps guide the worker from one location to the next. The app first asks for the nearby spatial anchors; it then guides the worker to the next location. The app visually shows the general direction and distance to the next location.
A museum creates spatial anchors at public displays. Together, these anchors form a one-hour tour of the museum's essential public displays. At a public display, visitors can open the museum's mixed reality app on their mobile device. Then, they point their phone camera around the space to see the general direction and distance to the other public displays on the tour. As a user walks toward a public display, the app updates the general direction and distance to help guide the user.
Start your project from UnitySeedProject.
Work on main scene or create a new scene and configure your scene with MRTK.
In the Publishing Settings Configuration section, check InternetClientServer and SpatialPerception.
Add Spatial Mapping Collider component to your camera.
How to integrate ARCore for Android?
Go to Azure For Students page: bit.ly/AzureStudentCredit or scan the below QR code.
Follow the Activate Now link to sign up.
Go to Azure Portal: portal.azure.com.
In the left navigation pane in the Azure portal, select Create a resource.
Use the search box to search for Spatial Anchors.
Select Spatial Anchors. In the dialog box, select Create.
In the Spatial Anchors Account dialog box:
Enter a unique resource name, using regular alphanumeric characters.
Select the subscription that you want to attach the resource to.
Create a resource group by selecting Create new. Name it myResourceGroup and select OK. A resource group is a logical container into which Azure resources like web apps, databases, and storage accounts are deployed and managed. For example, you can choose to delete the entire resource group in one simple step later.
Select a location (region) in which to place the resource.
Select New to begin creating the resource.
After the resource is created, Azure Portal will show that your deployment is complete. Click Go to resource.
Then, you can view the resource properties. Copy the resource's Account ID value into a text editor because you'll need it later.
Under Settings, select Key. Copy the Primary key value into a text editor. This value is the Account Key. You'll need it later.
Scroll down to the assets section and click AzureSpatialAnchors.unitypackage to download it.
In your Unity project select Assets > Import package > custom package and find the downloaded AzureSpatialAnchors.unitypackage and import all.
Create a new script called AzureSpatialAnchorsScript.
Add imports:
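The exact imports depend on your SDK version; assuming the AzureSpatialAnchors.unitypackage downloaded above, a typical set looks like this:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.SpatialAnchors; // from AzureSpatialAnchors.unitypackage
using UnityEngine;
```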
Add the following members variables into your AzureSpatialAnchorsScript class:
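The member names below are illustrative (the sphere we place in the world and two materials used later to signal save success or failure); adapt them to your own scene objects:

```csharp
// The sphere we place in the real world (assigned in the Inspector).
public GameObject sphere;

// Materials used later to signal whether the anchor saved successfully.
public Material savedMaterial;
public Material failedMaterial;
```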
When working with Unity, all Unity APIs (for example, the APIs you use to update UI) must be called on the main thread. The code we'll write, however, receives callbacks on other threads, and we want to update the UI from those callbacks, so we need a way to get from a background thread back onto the main thread. To execute code on the main thread from a background thread, we'll use the dispatcher pattern.
Let's add a member variable, dispatchQueue, which is a Queue of Actions. We will push Actions onto the queue, and then dequeue and run the Actions on the main thread.
Next, let's add a method that adds an Action to the queue. Add QueueOnUpdate() right after Update():
Let's now use the Update() loop to check if there is an Action queued. If so, we will dequeue the action and run it.
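The three dispatcher pieces described above (the queue of Actions, QueueOnUpdate(), and the Update() drain) can be sketched as follows; the lock is there because QueueOnUpdate() may be called from background threads:

```csharp
// Queue of Actions pushed from background threads and run on the main thread.
private readonly Queue<Action> dispatchQueue = new Queue<Action>();

void Update()
{
    lock (dispatchQueue)
    {
        // Dequeue and run one queued Action per frame, if any.
        if (dispatchQueue.Count > 0)
        {
            dispatchQueue.Dequeue()();
        }
    }
}

// Call this from any thread to run an Action on the next Update() pass.
void QueueOnUpdate(Action updateAction)
{
    lock (dispatchQueue)
    {
        dispatchQueue.Enqueue(updateAction);
    }
}
```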
Import the AzureSpatialAnchors asset into your script.
Add the CloudSpatialAnchorSession and CloudSpatialAnchor member variables to your AzureSpatialAnchorsScript class:
Initialize Session:
Add methods to handle delegate calls.
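A sketch of InitializeSession() together with the delegate handlers, assuming the CloudSpatialAnchorSession API from the imported package; replace the placeholder strings with the Account ID and Account Key you copied from the Azure portal, and note that the handlers use QueueOnUpdate() because these callbacks arrive on background threads:

```csharp
protected CloudSpatialAnchorSession cloudSession;
protected CloudSpatialAnchor cloudAnchor;

void InitializeSession()
{
    cloudSession = new CloudSpatialAnchorSession();

    // Credentials copied from the Azure portal earlier in this tutorial.
    cloudSession.Configuration.AccountId = "<your-account-id>";
    cloudSession.Configuration.AccountKey = "<your-account-key>";

    // Delegate handlers for errors and debug logging.
    cloudSession.Error += CloudSession_Error;
    cloudSession.OnLogDebug += CloudSession_OnLogDebug;

    cloudSession.Start();
}

private void CloudSession_Error(object sender, SessionErrorEventArgs args)
{
    // Marshal back to the main thread before touching Unity APIs.
    QueueOnUpdate(() => Debug.LogError("ASA error: " + args.ErrorMessage));
}

private void CloudSession_OnLogDebug(object sender, OnLogDebugEventArgs args)
{
    QueueOnUpdate(() => Debug.Log("ASA log: " + args.Message));
}
```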
Call the InitializeSession() method inside the Start() function:
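For example:

```csharp
void Start()
{
    // Start the Azure Spatial Anchors session as soon as the script loads.
    InitializeSession();
}
```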
After you run your application, you can check to see the recently created anchors by navigating to the Azure Portal > Spatial Anchor resource you have created for this tutorial.
Attach a local Azure Spatial Anchor to the sphere that we're placing in the real world.
Sign in to your Azure Portal
Create a resource by selecting Databases > Azure Cosmos DB.
Select the subscription and resource group you are using for this project.
Enter a unique name to identify your Azure Cosmos DB account.
Select "Azure Table" as the API.
Select a geographic location to host your Azure Cosmos DB account. Use the location that's closest to your users to give them the fastest access to data.
You can leave the Geo-Redundancy and Multi-region Writes options at their default values (Disable) to avoid additional RU charges. You can skip the Network and Tags sections.
Select Review+Create. After the validation is complete, select Create to create the account.
It takes a few minutes to create the account. You'll see a message that states Your deployment is underway. Wait for the deployment to finish and then select Go to resource.
Copy the Connection String for later use.
Open SharingService\Startup.cs.
Locate #define INMEMORY_DEMO at the top of the file and comment that line out. Save the file.
Open SharingService\appsettings.json.
Locate the StorageConnectionString property and set its value to the Connection String you copied earlier.
Add a Task.Run call (at line 26) to your CreateAndSaveAnchor function. We will change the color of the sphere to indicate whether the anchor was saved or failed to save.
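A sketch of what that save step could look like, assuming the cloudSession/cloudAnchor members from earlier and the illustrative sphere and material fields; the color change is queued back onto the main thread because the task runs in the background:

```csharp
Task.Run(async () =>
{
    try
    {
        // Upload the local anchor to the Azure Spatial Anchors service.
        await cloudSession.CreateAnchorAsync(cloudAnchor);

        // Success: recolor the sphere on the main thread.
        QueueOnUpdate(() =>
            sphere.GetComponent<MeshRenderer>().material = savedMaterial);
    }
    catch (Exception ex)
    {
        // Failure: log and recolor the sphere on the main thread.
        QueueOnUpdate(() =>
        {
            Debug.LogError("Failed to save anchor: " + ex.Message);
            sphere.GetComponent<MeshRenderer>().material = failedMaterial;
        });
    }
});
```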
At this point, your AzureSpatialAnchorsScript.cs combines all of the pieces above: the dispatcher queue, the session initialization and delegate handlers, and the anchor creation and save logic.
Now you can follow the steps above and start creating anchors in your environment.
Displaying Spatial Anchors on a map.
That's a tough question, but thankfully our team is on it. Please bear with us while we investigate.
Yes, after a few months we finally found the answer. Sadly, Mike is on vacation right now, so I'm afraid we are not able to provide the answer at this point.
Yes, you can prevent that from happening: implement authentication in your application and check for permission before you respond to an API call for anchors. Read more...
Lighting, people moving around, and a changing environment can all reduce the retrievability of anchors. Finding a contrasting-colored, unchanging object and starting the session from there helps.
You can retrieve a nearby anchor and then query for the other anchors within a radius of it.
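A sketch of such a radius query, assuming the SDK's near-anchor locate criteria and an anchor you have already located as the starting point:

```csharp
// Locate other anchors near an already-located anchor.
AnchorLocateCriteria criteria = new AnchorLocateCriteria();
criteria.NearAnchor = new NearAnchorCriteria
{
    SourceAnchor = cloudAnchor,  // an anchor you've already located
    DistanceInMeters = 10,
    MaxResultCount = 20
};

// The session raises AnchorLocated events as matching anchors are found.
CloudSpatialAnchorWatcher watcher = cloudSession.CreateWatcher(criteria);
```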
You can store your cloud spatial anchors as local anchors on your device.
Code Samples: https://aka.ms/MixedRealityUnitySamples
Azure Spatial Anchors Documentation: bit.ly/AzureSpatialAnchors
Azure Spatial Anchors API: bit.ly/asa-api
ASA Samples Repo: bit.ly/AzureSpatialAnchorsSamples
ASA SDK Releases: bit.ly/ASAReleases