21 May 2018

Developing in Unity for Windows Mixed Reality without having to unplug your device all the time

A very simple, but potentially quite big, time-saving tip this time.

Typically, when you are developing for Windows Mixed Reality, you spend a lot of time in the Unity editor getting things just right. Unity and Mixed Reality integration is awesome - you can just hit play and your scene will show up directly in your headset.

But sometimes, that is just not what you want. If you are fiddling with a tiny bit of interaction or animation code (and we all know that can take a lot of time in Unity) you often just want to hit play, observe the results in the Game window, and maybe move around a bit with the WASD keys or an Xbox Controller. In that workflow, having to put on a Mixed Reality device - and take it off again - for every little iteration can be a cumbersome process.

There are two solutions for this. First, unplug your device when you don't need it. Duh. But if you use a big desktop development box, this means reaching around to the back of the machine. And for laptops and desktop boxes both, you will need to fiddle with plugs, which in time might wear out or damage the plugs and/or the sockets of your PC. I am unfortunately speaking from experience here.

So what to use? Good ol' Device Manager. Simply type "Device Manager" in your start menu:

image

You will find a node "Mixed Reality Devices". Find yours (mine is a Samsung Odyssey) and simply right-click it:

image

Hit "Disable device", click "Yes" on the rather ominous following warning, and your headset is now an expensive paperweight, no longer paying attention to what Unity does. You can now use Unity Play mode the 'old' way.

When you are done fiddling in Unity, you can simply enable the device again using the Device Manager, and your awesome headset wakes up again.
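
If you would rather not click through Device Manager every time, the same on/off toggle can also be scripted. Below is a minimal PowerShell sketch (my own, not something the Mixed Reality tooling provides) - it assumes your headset's friendly name contains "Odyssey", so adjust the filter to your own device, and run it from an elevated PowerShell prompt:

# Find the headset by (part of) its friendly name - adjust for your own device
$headset = Get-PnpDevice -FriendlyName "*Odyssey*"

# Disable it, so Unity's Play mode ignores it...
$headset | Disable-PnpDevice -Confirm:$false

# ...and enable it again when you want the headset back
$headset | Enable-PnpDevice -Confirm:$false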

14 May 2018

Fixing the "...Unity\Tools\AssemblyConverter.exe exited with code 1" error when you are building your Mixed Reality app

It's not easy being green - I mean, a Mixed Reality developer

This is one that has annoyed me for quite some time. When developing a Mixed Reality app, my workflow usually goes like this:

  • Change something in Unity
  • Build your Mixed Reality app from Unity
  • Open Visual Studio
  • Build your app and run it on your HoloLens or immersive headset
  • Be not entirely satisfied with the result
  • Change something more in Unity
  • Build your Mixed Reality app again

And then something like this happens:

image

An extremely unhelpful error message. If you copy the whole command into a command prompt and run it yourself, you get a little more info:

System.Exception: Unclean directory. Some assemblies are already converted, others are not.
    at Unity.SanityCheckStep.Execute()
    at Unity.Step.Execute(OperationContext operationContext, IStepContext previousStepContext)
    at Unity.Operation.Execute()
    at Unity.Program.Main(String[] args)

So here's a clue. Apparently some cruft stays behind, preventing Unity from rebuilding the app the second time around. This is because Unity does not always overwrite all the files, presumably to speed up the build process the second time. Only apparently it messes up sometimes.

So then you go and delete your generated app - but don't forget to retain some files you might want to keep (like your manifest file). You might add those to your repo - but then don't forget to revert them after deleting. And oh yes, if you are testing on your local machine (and not your HoloLens or a different machine) you might find you can't even delete everything because files are locked, so you need to uninstall the app first. And this happens every second time you compile the app. And yes, this also happens when you use the Build Window.

Meh indeed

I never thought I would ever write this sentence, but here it goes:

PowerShell to the rescue

This first step you only need to do when you are testing on your development machine, and not on a HoloLens.

We need to find out what the actual package name of your app is. There are two ways to do this. The simplest one is going over to your package.appxmanifest, double-clicking it, and selecting the "Packaging" tab:

Untitled

Or you can just run a PowerShell command like this to find the name:

Get-AppxPackage | Where-Object {$_.Name -like "*Walk*"}

Anyway, then you can make a script, call it "CleanApp.ps1" (or whatever you want to call it) and add the following commands:

# Uninstall the locally deployed app first - only needed when testing on your dev machine
Get-AppxPackage 99999LocalJoost.ThisIsMyApp | Remove-AppxPackage

# Delete the generated binaries and resources from the App output folder
$PSScriptRoot = Split-Path -Parent -Path $MyInvocation.MyCommand.Definition
Get-ChildItem -Path ${PSScriptRoot}\App\ `
    -include *.dll,*.pdb,*.winmd,*.pri,*.resw,Appx -Recurse | `
     Remove-Item -Force -Recurse

This assumes your generated app is sitting in a directory "App" that is a subdirectory of the directory the script itself is sitting in. I typically place the script in the root of my (Unity) project, with the App folder as a subdirectory of that.
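
To make that concrete, a hypothetical layout and invocation could look like this (the project and folder names are just examples):

# C:\Projects\WalkTheWorld\CleanApp.ps1   <- the script above
# C:\Projects\WalkTheWorld\App\           <- the UWP solution generated by Unity
cd C:\Projects\WalkTheWorld
.\CleanApp.ps1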

So your typical workflow now becomes:

  • Change something in Unity
  • Run this script
  • And then continue building the Mixed Reality app as you see fit (either from Unity to Visual Studio, or directly from the Build Window)

I deem you all capable of copying these lines of code from this blog post, so I won't put any code online to go with this article.

05 May 2018

Running Mixed Reality apps on Windows 10 on ARM PCs–get ready for a surprise

Intro

I was planning to write a step-by-step procedure of the things you would need to do to get the Mixed Reality app I created in my previous post to work on a Windows 10 on ARM PC. After all, when I tried to do that on a Raspberry PI2 quite some time ago, there was some creative slashing necessary.

Life is what happens while you are busy making other plans

Turns out that what I needed to do was exactly nothing. Well, I had to compile and deploy it for ARM. And that worked. Just like that, just like on an Intel-based PC as I described in my previous post. When I deployed Walk the World to the Windows 10 on ARM PC some posts ago, I still had to remove some parts of the Mixed Reality Toolkit to make the ARM tools swallow the sources. Apparently, that’s no longer necessary.

And then I tested it. And I learned I had to make some changes to my code after all. I think Microsoft likes to hear that you can run code on Windows 10 on ARM PCs unchanged, but in this case I don’t think they will mind me saying that I actually needed to make some changes - because it was running too bloody fast on a Windows 10 on ARM PC. Yeah, you read that right. The code I wrote for controlling the app via an Xbox One Controller responded so fast it was actually nearly impossible to control the view point, especially when rotating. Even when I compiled it for x86 and CHPE had to do the translation, it still ran too fast for reasonable control.

It actually ran faster than on my i7 Surface Pro 4. That was one serious WTF, I can tell you that.

One trigger, two triggers

You might remember that in the previous post I used the right trigger to make moving and rotating go faster. Well, we sure don’t need to go faster here, so I adapted the code that calculates the speed-up factor:

var speed = 1.0f + TriggerAccerationFactor * eventData.XboxRightTriggerAxis;

to also use the left trigger to slow the speed down:

var speed = (1.0f + TriggerAccerationFactor * eventData.XboxRightTriggerAxis) - 
            (eventData.XboxLeftTriggerAxis * 0.9f);

And that works reasonably well.
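
One thing to be aware of (my own observation, not code from the repository): with both triggers in play the factor can get close to zero, so if you start tweaking the numbers it may be worth clamping it, for instance like this:

// Same calculation as above, but clamped so the speed factor never drops below 0.1
// or exceeds the full right-trigger boost (a sketch, not part of the actual project)
var speed = (1.0f + TriggerAccerationFactor * eventData.XboxRightTriggerAxis) -
            (eventData.XboxLeftTriggerAxis * 0.9f);
speed = Mathf.Clamp(speed, 0.1f, 1.0f + TriggerAccerationFactor);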

Now I made a setup quite comparable to the one in my previous post on Windows 10 on ARM, only now the x86/ARM versions are not only compared with each other, but also with an x64 version running on my Surface Pro 4.

IMG_6845_2

The Surface Pro 4 is running on its own screen and is connected to the right Xbox Controller and the gray ArcMouse; the Windows 10 on ARM PC, once again missing from this picture, is connected to the black ArcMouse, the Dell monitor and the left Xbox Controller via the Continuum dock that you can see just in front of the MVP thermos.

So here’s a little video of the three versions:

You can clearly see that the Windows 10 on ARM PC is quite a bit faster than the Surface Pro 4, and that even the x86 CHPE-fied version is faster - so that rotating indeed needs the left trigger to slow it down to get some semblance of control. At the end, you can actually see all three of them together.

IMG_6852_3

The difference between the x86 and the ARM version is mainly in startup time here and a wee bit of general performance (although you mainly notice that when you actually operate the app – if you just watch it’s less obvious). Last time I wrote about Windows 10 on ARM I already concluded that CHPE does an amazing job as far as graphics performance goes, and it shows here again.

image

Interesting detail – the Windows 10 on ARM PC does not show this popup at the end, while the Surface Pro 4 does. Now this may be because Windows x64 actually has the optional “Windows Mixed Reality” component (although this particular hardware doesn’t support it), and Windows 10 on ARM does not have that particular component. Also, the latter still runs the Fall Creators Update, while the Surface Pro 4 runs the April 2018 update. Both may be a factor. I have no way to test this now.

Two versions of the same app?

You might have noticed this before - I sometimes run two versions of the same app together on one PC. That's normally not possible - if you deploy one version, it gets overwritten by the other, even when you change the target architecture in Visual Studio. To get two versions of the same app to run on one computer, you will need to fiddle a bit in the Package.appxmanifest. Open it as an XML file (not via the beautiful GUI editor provided by Visual Studio) and change the Name in the Identity element (3rd line in the file):

<?xml version="1.0" encoding="utf-8"?>
<Package xmlns:....
   <Identity Name="XBoxControllerDemo" Publisher="CN=DefaultCompany" Version="1.0.0.0" />

and change XBoxControllerDemo to, for instance, XBoxControllerDemoARM.

Then look a bit lower for the VisualElements tag

<uap:VisualElements DisplayName="XBoxControllerDemo"

And change that to, for instance, "XBoxController ARM Version" - to make sure the apps also have separate icon labels.
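
Put together, the relevant bits of the ARM flavor's manifest then look something like this (a sketch - attribute values other than the two names are whatever your project already has, and the elided attributes stay untouched):

<?xml version="1.0" encoding="utf-8"?>
<Package xmlns:....
   <!-- Name changed so this deployment does not overwrite the x86/x64 version -->
   <Identity Name="XBoxControllerDemoARM" Publisher="CN=DefaultCompany" Version="1.0.0.0" />
   ...
   <!-- DisplayName changed so the two apps get distinguishable icon labels -->
   <uap:VisualElements DisplayName="XBoxController ARM Version" ....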

Do not ever do this on production apps, but if you want to do your own kind of crazy A/B testing like me, it can be useful.

Conclusion

This article is quite a bit shorter than I anticipated, but that’s because Mixed Reality apps seem to run amazingly well on Windows 10 on ARM PCs with very little work. This platform is a serious candidate for Unity-generated UWP apps.

I am now seriously considering rebuilding my Mixed Reality apps with this new MRTK and the newest applicable version of Unity, and including an ARM package in the Store. Why not? It runs fine. Let's see if users like it.

No (new) project this time. You can find the project with the updated XBoxControllerAppControl.cs (still) here.

02 May 2018

Running your Mixed Reality app on an ‘ordinary’ PC–using an Xbox One Controller

Intro

Let’s face it – although Windows Mixed Reality has a steady uptick (at least I think I can draw that conclusion from the increasing download numbers of my two Mixed Reality apps in the Windows Store) – not everyone has a Mixed Reality headset, or even a PC capable of supporting one. Time will take care of that soon enough. In the meantime, as a Mixed Reality developer, you might want to show all 700 million Windows 10 users a glimpse of your app, instead of ‘only’ the HoloLens and Mixed Reality headset owners out there. Even in a reduced state, it gives you eyeballs, and maybe entices them to get themselves a headset after all. It’s not like they are expensive these days.

This sounds familiar?

Well it should. This is far from original. I have been down this road before, describing how to run a HoloLens app on a Raspberry PI2. That’s the U in UWP for you. Only now we are going to run on a full PC – in my case, a Surface Pro 4. That’s a sufficiently high end device for a nice experience, but it predates the Windows Mixed Reality era by almost two years and does not support it. But you can’t walk around without a headset, so we will need another means to change our view point.

Parts list

  • One reasonably well-performing PC not capable of supporting Mixed Reality – or at least with the Mixed Reality portal not installed
  • Unity 2017.2.1p2
  • The Mixed Reality Toolkit 
  • One XBox One controller

The first point is important – for if you have the portal installed, your PC will launch it like a good boy trying to do the logical thing - and you won’t see the effect I am trying to show you.

Setting up the project

I created a new project in Unity, copied in the latest Mixed Reality Toolkit, then clicked the three menu options under Mixed Reality Toolkit/Configure.

Then I added my standard empty game objects “Managers” (with nothing in it)  and “HologramCollection” with a cube and a sphere, to have something to see:

image

There is more to those two objects than meets the eye, but we will get to that later.

Control the view point using an XBox Controller

There’s a simple class for that in my ever-growing HoloToolkitExtensions, which starts like this:

using HoloToolkit.Unity.InputModule;
using UnityEngine;

namespace HoloToolkitExtensions.Utilities
{
    public class XBoxControllerAppControl : MonoBehaviour, IXboxControllerHandler
    {
        public float Rotatespeed = 0.6f;
        public float MoveSpeed = 0.05f;
        public float TriggerAccerationFactor = 2f;

        private Quaternion _initialRotation;
        private Vector3 _initialPosition;

        private readonly DoubleClickPreventer _doubleClickPreventer = 
                                                new DoubleClickPreventer();
        void Start()
        {
            _initialRotation = gameObject.transform.rotation;
            _initialPosition = gameObject.transform.position;
        }
    }
}

I tend to offer settings to the Unity editor as much as possible – to make it easy to reuse this class and adapt its behavior without code changes. Here I offer some speed settings. You can set the maximum rotation speed, the maximum speed the camera moves, and the ‘speed up factor’ that is applied to all values when the right trigger is pressed. Be advised these are all analog values between 0 and 1, so you can also control the speed by varying the amount of pressure you apply to the sticks and the D-pad. But sometimes you just wanna go fast, hence the trigger. Also notice how the initial rotation and position are retained.

The main routine is of course OnXboxInputUpdate, as the IXboxControllerHandler mandates its presence.

public void OnXboxInputUpdate(XboxControllerEventData eventData)
{
    if (!UnityEngine.XR.XRDevice.isPresent)
    {
        var speed = 1.0f + TriggerAccerationFactor * eventData.XboxRightTriggerAxis;

        gameObject.transform.position += eventData.XboxLeftStickHorizontalAxis * 
                                         gameObject.transform.right * MoveSpeed * speed;
        gameObject.transform.position += eventData.XboxLeftStickVerticalAxis * 
                                         gameObject.transform.forward * MoveSpeed * speed;

        gameObject.transform.RotateAround(gameObject.transform.position, 
            gameObject.transform.up, 
            eventData.XboxRightStickHorizontalAxis * Rotatespeed * speed);
        gameObject.transform.RotateAround(gameObject.transform.position, 
            gameObject.transform.right, 
            -eventData.XboxRightStickVerticalAxis * Rotatespeed * speed);

        gameObject.transform.RotateAround(gameObject.transform.position, 
            gameObject.transform.forward, 
            eventData.XboxDpadHorizontalAxis * Rotatespeed * speed);

        var delta = Mathf.Sign(eventData.XboxDpadVerticalAxis) * 
                    gameObject.transform.up * MoveSpeed * speed;
        if (Mathf.Abs(eventData.XboxDpadVerticalAxis) > 0.0001f)
        {
            gameObject.transform.position += delta;
        }

        if (eventData.XboxB_Pressed)
        {
            if (!_doubleClickPreventer.CanClick()) return;
            gameObject.transform.position = _initialPosition;
            gameObject.transform.rotation = _initialRotation;
        }

        HandleCustomAction(eventData);
    }
}

Let’s unpack that a little.

Important is the if (!UnityEngine.XR.XRDevice.isPresent). We only want this behaviour to do its work when there is no headset present whatsoever – no Mixed Reality headset, no HoloLens.

  • First we calculate a possible ‘speed up factor’ to be applied when the trigger is used. If it is not, it’s simply 1 and has no effect to the actual movement or rotation.
  • The left stick is used for movement in the ‘horizontal’ plane – forward, backward, left, right. Be aware the axes are relative. So if you are rotated 45 degrees left and you move left, you will move 45 degrees left. It’s actually logical – your frame of reference is always yourself, not some random rotation that happened to be in place when you got somewhere.
  • The right stick is used for rotation around your top axis and your horizontal (left to right) axis. Moving it to the right will make you spin to the right (I negate the actual value coming from the stick, as you can only rotate a game object around its left axis), pushing it forward will make you look at the floor.
  • That leaves moving up and down, and rotating left and right. The D-pad fills the voids: pushing it left or right will make you rotate sideways (like you are falling to the left or right), pushing it up or down will make your viewpoint move up or down.

This is exactly the way it works when you use an Xbox Controller to steer the Unity editor in play mode. The D-pad feels a bit counter-intuitive to me, but when you try to move in three dimensions using sticks that move both in only two dimensions, you will need something extra, and the D-pad is the only thing left. It feels odd to me, but it works.

Then finally the B button – when you press that, you get back to your initial position. This is very useful for if you have messed around a bit too much and completely lost track of where you are. And that is mostly all of it.

A tiny bit of SOLID

protected virtual void HandleCustomAction(XboxControllerEventData eventData)
{
}

Hardly worth mentioning, but should you want to add your own logic for handling controller buttons or triggers, you can make a child class of this XBoxControllerAppControl and override this method. It’s a hook that makes the class open for extension, but keeps its own logic intact. That’s better than making OnXboxInputUpdate virtual, because that would enable you to interfere with the existing logic by not calling the base OnXboxInputUpdate. It’s the O of SOLID.
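
As an illustration – a sketch of my own, not something in the toolkit or the demo project – a child class could map an extra button to its own logic like this (assuming XboxControllerEventData also exposes an XboxY_Pressed property, analogous to the XboxB_Pressed used above):

using HoloToolkit.Unity.InputModule;
using UnityEngine;

namespace HoloToolkitExtensions.Utilities
{
    // Hypothetical child class: inherits all movement logic from XBoxControllerAppControl
    // and only adds behavior for the Y button via the HandleCustomAction hook
    public class MyXboxControllerAppControl : XBoxControllerAppControl
    {
        protected override void HandleCustomAction(XboxControllerEventData eventData)
        {
            if (eventData.XboxY_Pressed)
            {
                Debug.Log("Y button pressed - put your own action here");
            }
        }
    }
}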

How to use it

Simply drag it onto the MixedRealityCameraParent, change the settings to your liking and you are done. I think I chose some reasonable default settings.

But wait, there’s more!

I have found that the Xbox Controller buttons tend to stutter – that is, they sometimes fire repeatedly, and rapid-fire events can make a bit of a mess.

So I created this little helper DoubleClickPreventer that is not exactly rocket science, but very useful

using UnityEngine;

namespace HoloToolkitExtensions.Utilities
{
    public class DoubleClickPreventer
    {
        private readonly float _clickTimeOut;

        private float _lastClick;

        public DoubleClickPreventer(float clickTimeOut = 0.1f)
        {
            _clickTimeOut = clickTimeOut;
        }

        public bool CanClick()
        {
            if (!(Time.time - _lastClick > _clickTimeOut))
            {
                return false;
            }
            _lastClick = Time.time;
            return true;
        }
    }
}

It’s rather simple: whenever the method CanClick is called, a timestamp is recorded. If the method is called twice within 0.1 seconds it returns false, otherwise it returns true. It’s actually used twice within this sample: it’s also used in the little helper class “SelectorDemo” that makes the sphere and the cube go “plonk” and flash blue when you click them using the Xbox “A” button. I won’t go into that – you can find it in the demo project, and its inner workings are left as an exercise to the reader.

And it looks like…

There are a few things you might notice. First of all, I apparently am able to select something, but I never coded for it. That’s courtesy of the Mixed Reality Toolkit – your Xbox Controller’s “A” button acts the same as saying “Select” in a Mixed Reality app while you are gazing at something, air tapping while using a HoloLens, or pointing your Mixed Reality controller at an object and pressing the trigger.

Also, you might notice this at the end of the video:

image

A clear sign Windows is not really content with this. It figures – because if nothing prevented you from downloading an app that simply does not work on your machine, that might disgruntle users. But still – the app launches and seems to work.

Some other things to notice and take into consideration

  • Use the right Unity version: 2017.2.1p2. That’s the one that goes with this release of the Mixed Reality Toolkit. Using newer versions of Unity or the toolkit (like the development branch) I got results varying from the app not wanting to compile, to crashing, to simply not starting. I also got just this “Can’t open app” dialog and nothing else.
  • You can also see a (very small) “Development build” text in the lower right corner. There’s a check box in Unity that everyone tells you to use, and then that text will go away. The trouble is, that does not work. What will make it go away is building the app with the Master configuration. That and only that. For Mixed Reality apps, this check box apparently is only there for show. At least as far as this text is concerned, and as far as I can see ;).

buildmaster

  • And finally, when making these apps run on an ordinary PC, you might want to rethink the UI a bit in places. Floating menus, which are very cool in real Mixed Reality environments, can be really hard to use on a flat screen, for instance. Also, placing things on top of the ‘floor’ might be a bit of a challenge without a floor – even a virtual one.

Concluding words

I am not sure if this will continue to work going forward with the MRTK and Unity, how useful this will be in the real world, or whether the Mixed Reality team even appreciates this approach. I am simply showing you what’s possible and one possible way to tackle this. Your mileage may vary, very much in fact. Have fun!

Once again – demo project here

22 April 2018

Downloading holograms from Azure and using them in your Mixed Reality application

Intro

In my previous post I explained two ways of preparing holograms and uploading them to Azure: using a whole scene, and using just a prefab. In this post I am going to explain how you can download those holograms in a running Mixed Reality or HoloLens app and show them.

image

Setting the stage

We uploaded two things: a single prefab of an airplane, with a behavior attached, and a scene containing a prefab – a house, also with a behavior attached. The house rotates, the airplane follows quite an erratic path. To access both we created a Shared Access Signature using the Azure Storage Explorer.

In the demo code there’s a Unity project called RemoteAssets. We have used that before in earlier posts. The third scene (Assets/App/Demo3/3RemoteScenes) is the scene that actually tries to load the holograms

If you open that scene, you will see two buttons: “Load House” and “Load Plane”

image

I nicked these buttons from the Mixed Reality Toolkit Examples. The left button, “Load House”, actually loads the house. It does so because its Interactive script’s OnDownEvent calls SceneLoader.StartLoading:

image 

Loading a remote scene

This SceneLoader is not a lot of code and does a lot more than is strictly necessary:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.SceneManagement;

public class SceneLoader : MonoBehaviour
{
    public string SceneUrl;

    public GameObject Container;

    private bool _sceneIsLoaded;

    public void StartLoading()
    {
        if (!_sceneIsLoaded)
        {
            StartCoroutine(LoadScene(SceneUrl));
            _sceneIsLoaded = true;
        }
    }

    private IEnumerator LoadScene(string url)
    {
        var request = UnityWebRequest.GetAssetBundle(url, 0);
        yield return request.SendWebRequest();
        var bundle = DownloadHandlerAssetBundle.GetContent(request);
        var paths = bundle.GetAllScenePaths();
        if (paths.Length > 0)
        {
            var path = paths[0];
            yield return SceneManager.LoadSceneAsync(path, LoadSceneMode.Additive);
            var sceneHolder = GameObject.Find("SceneHolder");
            foreach (Transform child in sceneHolder.transform)
            {
                child.parent = Container.transform;
            }
            SceneManager.UnloadSceneAsync(path);
        }
    }
}

It downloads the actual AssetBundle using a GetAssetBundle request, then proceeds to extract that bundle using a DownloadHandlerAssetBundle. I already wrote about the whole rigmarole of specialized requests and accompanying handlers in an earlier post. Then it proceeds to find all scenes in the bundle, picks the first one, and loads it additively into the current scene. If you comment out all lines after the LoadSceneAsync and run the code, you will actually be able to see what’s happening – a second scene is created inside the current scene.

If you run the full code, however, the Suburb house will appear inside the HologramCollection and no trace of the BuildScene will remain. That’s because the code tries to find a “SceneHolder” object (and you can see that’s the first object in the BuildScene), moves all its children (one, the house) to the Container object, and once that is done, the additional scene is unloaded again. But the hologram that we nicked from it still remains, and it even rotates. If you look very carefully in the editor when you click the button, you can actually see the scene appear, see the house being moved out of it, and then see the scene disappear again.

The result: when you click “Load House” you will see the rotating house, 2 meters before you.

image

Success. Now on to the airplane.

Loading a remote prefab

This is actually less code, or at least – less code than the way I chose to handle scenes:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class PrefabLoader : MonoBehaviour
{

    public string AssetUrl;

    public GameObject Container;

    private bool _isLoaded;

    public void StartLoading()
    {
        if (!_isLoaded)
        {
            StartCoroutine(LoadPrefab(AssetUrl));
            _isLoaded = true;
        }
    }

    private IEnumerator LoadPrefab(string url)
    {
        var request = UnityWebRequest.GetAssetBundle(url, 0);
        yield return request.SendWebRequest();
        var bundle = DownloadHandlerAssetBundle.GetContent(request);
        var asset = bundle.LoadAsset<GameObject>(bundle.GetAllAssetNames()[0]);
        Instantiate(asset, Container.transform);
    }
}

The first part is the same; then we proceed to use bundle.LoadAsset to extract the first asset by name from the bundle as a game object (there is only one, so that’s always correct for this bundle). And then we instantiate the asset – a prefab, which is a game object – into the hologram collection.
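
As an aside: the coroutine above assumes the download always succeeds. A slightly more defensive variant – my own sketch, not what is in the demo project – could check the request and the bundle before using them, as a drop-in replacement for LoadPrefab:

private IEnumerator LoadPrefab(string url)
{
    var request = UnityWebRequest.GetAssetBundle(url, 0);
    yield return request.SendWebRequest();
    // Bail out if the download itself failed
    if (request.isNetworkError || request.isHttpError)
    {
        Debug.LogError("Could not download asset bundle: " + request.error);
        yield break;
    }
    var bundle = DownloadHandlerAssetBundle.GetContent(request);
    // Bail out if the downloaded data is not a valid asset bundle
    if (bundle == null)
    {
        Debug.LogError("Downloaded data is not a valid asset bundle");
        yield break;
    }
    var asset = bundle.LoadAsset<GameObject>(bundle.GetAllAssetNames()[0]);
    Instantiate(asset, Container.transform);
}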

If you click the “Load Plane” button the result is not what you might expect:

image

Uhm, what? We basically did the same as before – actually less. It turns out the only reason the house rotated fine was because I used the Horizontal Animator from my own HoloToolkitExtensions. That script is present in both the SceneBuilder project (that I used to create the bundles uploaded to Azure) and the target app, RemoteAssets, that downloads the asset bundles and tries to use them.

But for the airplane to move around, I created a script “MoveAround” that is only present in the SceneBuilder. It does not happen often, dear reader, but I intentionally checked in code that fails, to hammer home the following very important concept:

In an Asset Bundle you can put about anything – complete scenes, prefabs, images, materials and whatnot – everything but scripts.

In order to get this to work, the script and its meta file need to be copied to the target project. Manually.

Untitled

It does not matter much where in the target project you put it; Unity will pick it up, resolve the script reference, and the bundle will load successfully when you press the “Load Plane” button. I tend to place it next to the place where it’s used.

And lo and behold: an airplane moving once again like a drunken magpie.

Concluding words

I have shown you various ways to upload various assets – JSON data, images, videos and finally holograms – to Azure, how to download them from your app, and what limitations you will need to consider.

Important takeaways from this and previous posts:

  • Yes, I could have downloaded the earlier assets using a Unity Asset Bundle as well, instead of downloading images etc. via a direct URL. The drawback of using an Asset Bundle is that you will always need Unity to build it. If you are building an app for a customer that wants to update training images or videos, it’s a big plus if they can just upload those to Azure using the Storage Explorer or some custom (web) app. The more the customer can change or maintain themselves, the better.
  • You can’t download dynamic behavior, only (static) assets. The most ‘dynamics’ a downloaded asset can have is a reference to a script that exists in both the building project and the target app. I have seen complex frameworks that tried to achieve downloadable behavior by storing properties in the project file, but that is usually a lot of work to achieve only basic functionality (like moving some parts or changing colors and stuff). While that may work for simple applications, approaches like that are complex to maintain, a lot of work to ‘program’ into your asset, brittle, and hard to transfer between projects. Plus, it still needs Unity, and your customer is not going to use that.
    Rule of thumb is always: if you want to change the way things look, you can download assets dynamically. If you need new behavior, you will need to update the app.

I hope you enjoyed this brain dump. The project can (still) be found here.

18 April 2018

Preparing and uploading Holograms into Azure for use in Mixed Reality apps

Intro

In my series about remote assets, there’s one major thing left: how to use remote assets as Holograms in your Mixed Reality app. I will do this in two posts:

  1. Prepare for usage and actual upload
  2. Use and download.

And this is, as you guessed, the first post.

Asset bundles

A Unity asset bundle is basically a package that can contain almost any piece of a Unity app. It’s usually used to create downloadable content for games, or assets that are specific to a platform or game level, so the user does not have to download everything at once. This improves startup time, but also saves on (initial) bandwidth and local storage. Also, you can update part of the app without actually requiring the user to download a complete new app. There are some limitations, but we will get to that.

General set up

We want to create an app that downloads assets that are not initially included in it. But a Unity project is what we need to actually build the asset bundles. So we need two projects: the actual app that will run, and a ‘builder’ project. That project – which I, not completely correctly, called “SceneBuilder” – you can find here. I will show you how to prepare an asset bundle that contains a complete scene, and one that only contains a single prefab.

The SceneBuilder contains everything a normal Unity app does: scenes, assets, scripts, the works. And something extra. But first: our runtime app is going to use the Mixed Reality Toolkit, some stuff from the Mixed Reality Toolkit Samples, LeanTween, and my own extensions. I kind of threw it all in, because why not. It also contains three scenes in the root:

  • Main (which is empty and not used)
  • BuildScene (a rather dark scene which contains a house from the Low Poly Buildings Lite package, and nothing else)
  • PrefabScene (which contains what seems to be a normal Mixed Reality app, as well as an airplane called “Nathalie aguilera Boing 747” – I assume “Boing” should be “Boeing”, and the model is actually a Boeing 737, but whatever – it’s a nice low poly model)

imageimage

The most important thing is the AssetBundle Browser. This is a piece of code provided by Unity to make building asset bundles a lot easier. You can find the AssetBundle Browser in the folder “UnityEngine.AssetBundles” under “Assets”. You can download the latest version from Unity’s GitHub repo. It comes with a whole load of stuff, but basically you only need everything that’s under UnityEngine.AssetBundles/Editor/AssetBundleBrowser. This you plonk into your SceneBuilder project’s Assets folder.

The BuildScene

If you open the BuildScene first, you will see there’s actually very little in it: an empty ‘SceneHolder’ object, with a “Suburb House 1” prefab in it. The lack of lighting also makes the scene rather dark. This has a reason: we are going to add this scene to another scene later, and we don’t want all kinds of duplicate stuff – like lighting, Mixed Reality Toolkit managers, etc. – coming into our existing app. So almost everything is deleted. But to that prefab, one thing is added: the Horizontal Animator behavior (which debuted in this article). This is a very simple behavior that will make the house spin around every five seconds. Nothing special – this is to prove a point later.

If you play the scene, you will actually see the house spinning in the Scene window. The Game window will be black, only saying “Display 1 No cameras rendering”, which is correct, as I deleted everything but the asset we are going to build and upload – so there is not even a camera to show anything.

Build the buildscene asset bundle

First order of business – build the UWP app, as if you were going to create an actual Mixed Reality app for HoloLens and/or an immersive headset out of the SceneBuilder. For if you don’t, the AssetBundle Browser will kick off that process, and if there’s a syntax error or whatever – its modal build window will get stuck, blocking Unity – and the only way out is killing it in the Task Manager.

Click Window/AssetBundle Browser and you will see this:

image

I already pre-defined some stuff for you. What I basically did was, in an empty AssetBundle Browser, drag the BuildScene into the empty screen:

Untitled

and the AssetBundle Browser then adds everything it needs automatically. Click the tab “Build” and make sure the settings are the same as what you see here. Pay particular attention to the build target. That should be WSAPlayer, because that’s the target we use in the app that will download the assets.

Untitled

Hit the ugly wide “Build” button and wait till the build completes. If all works out, you will see four files in SceneBuilder\AssetBundles\WSAPlayer:

  • buildscene
  • buildscene.manifest
  • WSAPlayer
  • WSAPlayer.manifest

Only, those were there all along, as I checked the folder with these files into GitHub. But go ahead and delete the files manually – you will see they are re-created. You will also see they are all only 1 KB, except for buildscene – that one is 59 KB.

The PrefabScene

This looks more like a normal scene: it has everything in it you would expect, plus the airplane.

image

If you hit the play button in this scene, the airplane will move around in a way that makes you happy you are not aboard it, due to a behaviour called “MoveAround”. This script sits in “Assets/App/Scripts” and is nothing more than a Start method that creates a group of points relative to the start position and moves the airplane along them using LeanTween:

void Start()
{
    var points = new List<Vector3>();
    points.Add(gameObject.transform.position);
    points.Add(gameObject.transform.position);
    points.Add(gameObject.transform.position + new Vector3(0,0,1));
    points.Add(gameObject.transform.position + new Vector3(1,0,1));
    points.Add(gameObject.transform.position + new Vector3(-1,0,1));
    points.Add(gameObject.transform.position + new Vector3(-1,1,1));
    points.Add(gameObject.transform.position);
    points.Add(gameObject.transform.position);

    LeanTween.moveSpline(gameObject, points.ToArray(), 3).setLoopClamp();
}

But this script will later cause us some trouble, as we will see in the next post (also to prove a point). I created a prefab of this airplane with its attached behaviour in Assets/App/Prefabs. Open the AssetBundle Browser again and drag this prefab onto the left pane of the dialog like this:

Untitled

Only this creates an impossibly long name, so simply right-click on the name and rename it to “airplane”. Select the “Build” tab again, hit the wide Build button again, and wait a while.

If everything went according to plan, SceneBuilder\AssetBundles\WSAPlayer should now contain two extra files:

  • airplane
  • airplane.manifest

The manifest file is once again 1 KB; airplane itself should be 87 KB.

Uploading to Azure

This is the easiest part. Use the Azure Storage Explorer:

Untitled

Simply drag the airplane and buildscene files (you don’t need anything else) into a blob container of any old storage account. Then right-click a file, select “Get Shared Access Signature” and click “Create” – but not before paying close attention to the date/time range, particularly the start date and time.

image

The Azure Storage Explorer tends to generate a start time some time (1.5 hours or so) in the future, and if you use the resulting URL right away, it won’t work. Believe me – been there, done that, and I am glad no one could hear me ;). So dial the start time back to the present or a little bit in the past, then create the URL and save it. Do this for both the airplane and the buildscene file. We will need them in the next post to actually download and use them.

Conclusion

We have built two assets and uploaded them to Azure without writing any code – well, only a little code to make the airplane move, but that was not part of the build/upload procedure. Of course, you can write the code to build asset bundles yourself, but I am lazy smart and tend to use what I can steal borrow from the internet. This was almost all point-and-click; next time we will see more code. For the impatient, the whole finished project can be found here.
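
For those who do want to script it: a minimal editor script using Unity's BuildPipeline could look something like this (a sketch – it assumes the asset bundle names, such as “buildscene” and “airplane”, have already been assigned, and it writes to the same folder layout the AssetBundle Browser uses):

// Editor-only sketch: builds all named asset bundles for the WSAPlayer (UWP) target.
// Place this in an "Editor" folder of the SceneBuilder project.
using UnityEditor;

public static class AssetBundleBuildMenu
{
    [MenuItem("Tools/Build AssetBundles (WSAPlayer)")]
    public static void BuildBundles()
    {
        var outputPath = "AssetBundles/WSAPlayer";
        System.IO.Directory.CreateDirectory(outputPath);
        BuildPipeline.BuildAssetBundles(outputPath,
            BuildAssetBundleOptions.None,
            BuildTarget.WSAPlayer);
    }
}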

04 April 2018

For crying out loud, compile for native ARM!

CHPE works its magic…

Let me get this straight first: Microsoft did an amazing job with CHPE and x86 emulation, which allows x86 apps to run, unmodified, on the new Windows 10 on ARM PCs. Considering what happens under the hood, the performance and reliability x86 apps get is nothing short of stunning. And you still get the amazing battery life we are used to from ARM – but now on full-fledged PCs.

Yet, there are still some things to consider. If you are a UWP developer, by now you have probably stopped providing ARM packages for your UWP apps. After all, Windows 10 Mobile is, alas, fading away, and these ARM PCs run x86 apps, so why bother?

… but magic comes at a price

Well, this is why. UWP can compile to native ARM code, and now we have these ARM-based PCs. Native ARM code can run on the Windows 10 on ARM PCs without using CHPE. Although CHPE is awesome, it still comes at a price – it uses CPU cycles to convert x86 instructions to ARM instructions and then executes those. Skip one step, and you gain performance. And depending on what you do, you can gain a lot of performance.

To show you I am not talking nonsense, I actually compiled my HoloLens/Mixed Reality app “Walk the World” not only for x86 (which I need to do for HoloLens anyway) but also for native ARM. I made two videos of the app, running as an x86 UWP and as a native ARM UWP. Since I don’t use a headset in this demo, I created a special Unity behaviour to control the viewpoint using an Xbox One Controller. I keep the actual PC out of the videos again, but you can clearly see the Continuum dock I wrote about before – I connected the Dell monitor using DisplayPort, the Xbox One Controller using USB, and a USB-to-Ethernet dongle to rule out any variations in Wi-Fi signal.

First, watch the x86 version

Then, the ARM version.

You can clearly see: although the x86 version works really well, the native ARM version starts faster, and also downloads the map faster – considerably so. Still, CHPE amazed me again with the fact that the graphics performance was nearly identical – once the map was loaded, panning over it happened at nearly identical speeds. But apparently startup and network code take more time. So there’s your win!

Note: any flickering or artifacts you see are the result of the external camera I used, to prevent any suggestion of this being faked. I also did not want to use a screen capture program, as that might interfere with the app’s performance.

Message clear? CHPE is to be used for either x86 apps from the Store that were converted using the Desktop Bridge, or non-Store apps that are downloaded ‘elsewhere’ – think of Chrome, Notepad++ – anything that has not yet been converted to UWP or processed via the Desktop Bridge.

One extra checkbox to rule them all

I did not have to change anything in my code to add the native ARM version. Basically all I had to do was tick a check box:

image

…and the Store will do the rest, selecting the optimal package for your user. That one little checkbox gives your app a significant performance boost on these Windows 10 on ARM PCs.

Now one might wonder – why on Earth would you convert an app that started on HoloLens to a UWP desktop app to run on a Windows 10 on ARM PC, and how do you make that work? Well, stay tuned.

And incidentally, this was the 300th post on this blog since I started in October 2007. And rest assured – I am far from done ;)