22 April 2018

Downloading holograms from Azure and using them in your Mixed Reality application

Intro

In my previous post I explained two ways of preparing holograms and uploading them to Azure: using a whole scene, and using just a prefab. In this post I am going to explain how you can download those holograms in a running Mixed Reality or HoloLens app and show them.

image

Setting the stage

We uploaded two things: a single prefab of an airplane, with a behavior attached, and a scene containing a prefab – a house, also with a behavior attached. The house rotates, the airplane follows quite an erratic path. To access both we created a Shared Access Signature using the Azure Storage Explorer.

In the demo code there’s a Unity project called RemoteAssets. We have used that before in earlier posts. The third scene (Assets/App/Demo3/3RemoteScenes) is the scene that actually tries to load the holograms.

If you open that scene, you will see two buttons: “Load House” and “Load Plane”.

image

I nicked these buttons from the Mixed Reality Toolkit Examples. The left button, “Load House”, actually loads the house. It does so because its Interactive script’s OnDownEvent calls SceneLoader.StartLoading.

image 

Loading a remote scene

This SceneLoader is not a lot of code, yet it does quite a bit more than is strictly necessary:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.SceneManagement;

public class SceneLoader : MonoBehaviour
{
    public string SceneUrl;

    public GameObject Container;

    private bool _sceneIsLoaded;

    public void StartLoading()
    {
        if (!_sceneIsLoaded)
        {
            StartCoroutine(LoadScene(SceneUrl));
            _sceneIsLoaded = true;
        }
    }

    private IEnumerator LoadScene(string url)
    {
        var request = UnityWebRequest.GetAssetBundle(url, 0);
        yield return request.SendWebRequest();
        var bundle = DownloadHandlerAssetBundle.GetContent(request);
        var paths = bundle.GetAllScenePaths();
        if (paths.Length > 0)
        {
            var path = paths[0];
            yield return SceneManager.LoadSceneAsync(path, LoadSceneMode.Additive);
            var sceneHolder = GameObject.Find("SceneHolder");
            foreach (Transform child in sceneHolder.transform)
            {
                child.parent = Container.transform;
            }
            SceneManager.UnloadSceneAsync(path);
        }
    }
}

It downloads the actual AssetBundle using a GetAssetBundle request, then proceeds to extract that bundle using a DownloadHandlerAssetBundle. I already wrote about the whole rigmarole of specialized requests and accompanying handlers in an earlier post. Then it proceeds to find all scenes in the bundle, picks the first one, and loads it additively into the current scene. If you comment out all lines after the LoadSceneAsync and run the code, you will actually be able to see what’s happening – a second scene inside the current scene is created.

If you run the full code, however, the Suburb house will appear inside the HologramCollection and no trace of the BuildScene will remain. That’s because the code tries to find a “SceneHolder” object (and you can see that’s the first object in the BuildScene), moves all children (one, the house) to the Container object, and once that is done, the additional scene is unloaded again. But the hologram that we nicked from it still remains, and it even rotates. If you look very carefully in the editor when you click the button, you can actually see the scene appear, see the house being moved out of it, and then disappear again.

The result: when you click “Load House” you will see the rotating house, two meters in front of you.

image

Success. Now on to the airplane.

Loading a remote prefab

This is actually less code, or at least – less code than the way I chose to handle scenes:

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class PrefabLoader : MonoBehaviour
{

    public string AssetUrl;

    public GameObject Container;

    private bool _isLoaded;

    public void StartLoading()
    {
        if (!_isLoaded)
        {
            StartCoroutine(LoadPrefab(AssetUrl));
            _isLoaded = true;
        }
    }

    private IEnumerator LoadPrefab(string url)
    {
        var request = UnityWebRequest.GetAssetBundle(url, 0);
        yield return request.SendWebRequest();
        var bundle = DownloadHandlerAssetBundle.GetContent(request);
        var asset = bundle.LoadAsset<GameObject>(bundle.GetAllAssetNames()[0]);
        Instantiate(asset, Container.transform);
    }
}

The first part is the same; then we use bundle.LoadAsset to extract the first asset by name from the bundle as a GameObject (there is only one asset in this bundle, so that’s always correct here). And then we instantiate the asset – a prefab, which is a game object – into the hologram collection.

If you click the “Load Plane” button the result is not what you might expect:

image

Uhm, what? We basically did the same as before, actually less. It turns out the only reason the house rotated fine was that I used the Horizontal Animator from my own HoloToolkitExtensions. That script is present in both the SceneBuilder project (that I used to create the bundles uploaded to Azure) and the target app, RemoteAssets, that downloads the asset bundles and tries to use them.

But for the airplane to move around, I created a script “MoveAround” that is only present in the SceneBuilder. It does not happen often, dear reader, but I intentionally checked in code that fails, to hammer home the following very important concept:

In an Asset Bundle you can put just about anything – complete scenes, prefabs, images, materials and whatnot – everything but scripts.

In order to get this to work, the script and its meta file need to be copied to the target project. Manually.

Untitled

It does not matter much where in the target project it ends up; Unity will pick it up and resolve the script reference, and the bundle will load successfully when you press the “Load Plane” button. I tend to place it next to the place where it’s used.

And lo and behold: an airplane moving once again like a drunken magpie.

Concluding words

I have shown you various ways to upload various assets – JSON data, images, videos and finally holograms – to Azure, how to download them from your app, and what limitations you will need to consider.

Important takeaways from this and previous posts:

  • Yes, I could have downloaded the earlier assets using a Unity Asset Bundle as well, instead of downloading images etc. via a direct URL. The drawback of using an Asset Bundle is that you will always need Unity to build it. If you are building an app for a customer that wants to update training images or videos, it’s a big plus if you can just have them uploaded to Azure using the Storage Explorer or some custom (web) app. The more the customer can change or maintain themselves, the better.
  • You can’t download dynamic behavior, only (static) assets. The most ‘dynamics’ a downloaded asset can have is referring to a script that exists in both the building and the target app. I have seen complex frameworks that tried to achieve downloadable behavior by storing properties in the project file, but that usually is a lot of work to achieve only basic functionality (like moving some parts or changing colors). While that may work for simple applications, approaches like that are complex to maintain, a lot of work to ‘program’ into your asset, brittle, and hard to transfer between projects. Plus, it still needs Unity, and your customer is not going to use that.
    The rule of thumb is always: if you want to change the way things look, you can download assets dynamically. If you need new behavior, you will need to update the app.

I hope you enjoyed this brain dump. The project can (still) be found here.

18 April 2018

Preparing and uploading Holograms into Azure for use in Mixed Reality apps

Intro

In my series about remote assets, there’s one major thing left: how to use remote assets as Holograms in your Mixed Reality app. I will do this in two posts:

  1. Prepare for usage and the actual upload
  2. Download and use.

And this is, as you guessed, the first post.

Asset bundles

A Unity asset bundle is basically a package that can contain almost any piece of a Unity app. It’s usually used to create downloadable content for games, or assets that are specific to a platform or game level, so the user does not have to download everything at once. This improves startup time, but also saves on (initial) bandwidth and local storage. Also, you can update part of the app without actually requiring the user to download a complete new app. There are some limitations, but we will get to that.

General set up

We want to create an app that downloads assets it does not have included initially. But a Unity project is what we need to actually build the asset bundles. So we need two projects: the actual app that will run, and a ‘builder’ project. That project – which I, not entirely correctly, called “SceneBuilder” – you can find here. I will show you how to prepare an asset bundle that contains a complete scene, and one that only contains a single prefab.

The SceneBuilder contains everything a normal Unity app does: scenes, assets, scripts, the works. And something extra. But first: our runtime app is going to use the Mixed Reality Toolkit, some stuff from the Mixed Reality Toolkit Examples, LeanTween, and my own extensions. I kind of threw it all in because why not. It also contains three scenes in the root:

  • Main (which is empty and not used)
  • BuildScene (a rather dark scene which contains a house from the Low Poly Buildings Lite package, and nothing else)
  • PrefabScene (which contains what seems to be a normal Mixed Reality app, as well as an airplane called “Nathalie aguilera Boing 747” – I assume “Boing” should be “Boeing”, and the model is actually a Boeing 737, but whatever – it’s a nice low poly model)

image

The most important thing is the AssetBundle Browser. This is a piece of code provided by Unity to make building asset bundles a lot easier. You can find the AssetBundle Browser in the folder “UnityEngine.AssetBundles” under “Assets”. You can download the latest version from Unity’s GitHub repo. It comes with a whole load of stuff, but basically you only need everything that’s under UnityEngine.AssetBundles/Editor/AssetBundleBrowser. This you plonk in your SceneBuilder project’s Assets folder.

The BuildScene

If you open the BuildScene first, you will see there’s actually very little in it: an empty “SceneHolder” object, with a “Suburb House 1” prefab in it. The lack of lighting also makes the scene rather dark. This has a reason: we are going to add this scene to another scene later, and we don’t want all kinds of duplicate stuff – lighting, Mixed Reality Toolkit managers, etc. – coming into our existing app. So almost everything is deleted. But to that prefab, one thing is added: the Horizontal Animator behavior (that debuted in this article). This is a very simple behavior that will make the house spin around every five seconds. Nothing special – this is to prove a point later.
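
For illustration only, a minimal behaviour with the same effect could look like the sketch below. This is not the actual Horizontal Animator from HoloToolkitExtensions; the class and field names are mine.

using UnityEngine;

// Minimal sketch, NOT the actual Horizontal Animator: spins the object
// around its vertical axis so one full turn takes SecondsPerRotation.
public class SimpleSpinner : MonoBehaviour
{
    public float SecondsPerRotation = 5f;

    void Update()
    {
        transform.Rotate(0, 360f / SecondsPerRotation * Time.deltaTime, 0);
    }
}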

If you play the scene, you will actually see the house spinning in the Scene window. The Game window will be black, only saying “Display 1 No cameras rendering”, which is correct, as I deleted everything but the asset we are going to build and upload, so there’s not even a camera to show anything.

Build the buildscene asset bundle

First order of business: build the UWP app, as if you are going to create an actual Mixed Reality app for HoloLens and/or an immersive headset out of the SceneBuilder. If you don’t, the AssetBundle Browser will kick off that process itself, and if there’s a syntax error or whatever, its modal build window will get stuck, blocking Unity – and the only way out is killing it in the Task Manager.

Click Window/AssetBundle Browser and you will see this:

image

I already pre-defined some stuff for you. What I basically did was, in an empty AssetBundle Browser, drag the BuildScene into the empty screen:

Untitled

and the AssetBundle Browser then adds everything it needs automatically. Click the tab “Build” and make sure the settings are the same as what you see here. Pay particular attention to the build target. That should be WSAPlayer, because that’s the target we use in the app that will download the assets.

Untitled

Hit the ugly wide “Build” button and wait till the build completes. If all works out, you will see four files in SceneBuilder\AssetBundles\WSAPlayer:

  • buildscene
  • buildscene.manifest
  • WSAPlayer
  • WSAPlayer.manifest

Only, those were there all along, as I checked the folder with these files into GitHub. But go ahead, delete the files manually, and you will see they are re-created. You will also see they are all only 1 KB, except for buildscene – that’s 59 KB.

The PrefabScene

This looks more like a normal scene: it has everything in it you would expect in a normal Mixed Reality app, plus the airplane.

image

If you hit the play button in this scene, the airplane will move around in a way that makes you happy you are not aboard it, due to a behaviour called “Move Around”. This script sits in “Assets/App/Scripts” and is nothing more than a Start method that creates a group of points relative to the start position and moves the game object along them using LeanTween:

void Start()
{
    var points = new List<Vector3>();
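    // The first and last points are duplicated on purpose; LeanTween's moveSpline
    // appears to treat the outermost points as spline control points.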
    points.Add(gameObject.transform.position);
    points.Add(gameObject.transform.position);
    points.Add(gameObject.transform.position + new Vector3(0,0,1));
    points.Add(gameObject.transform.position + new Vector3(1,0,1));
    points.Add(gameObject.transform.position + new Vector3(-1,0,1));
    points.Add(gameObject.transform.position + new Vector3(-1,1,1));
    points.Add(gameObject.transform.position);
    points.Add(gameObject.transform.position);

    LeanTween.moveSpline(gameObject, points.ToArray(), 3).setLoopClamp();
}

But this script will later cause us some trouble, as we will see in the next post (also to prove a point). I created a prefab of this airplane with its attached behaviour in Assets/App/Prefabs. Open the AssetBundle Browser again and drag this prefab onto the left pane of the dialog, like this:

Untitled

Only, this creates an impossibly long name, so simply right-click the name and rename it to “airplane”. Select the “Build” tab again, hit the wide Build button again, and wait a while.

If everything went according to plan, SceneBuilder\AssetBundles\WSAPlayer should now contain two extra files:

  • airplane
  • airplane.manifest

The manifest file is once again 1 KB; airplane itself should be 87 KB.

Uploading to Azure

This is the easiest part. Use the Azure Storage Explorer:

Untitled

Simply drag the airplane and buildscene files (you don’t need anything else) into a blob container of any old storage account. Then right-click a file, select “Get Shared Access Signature” and click “Create”, but not before paying close attention to the date/time range, particularly the start date and time.

image

The Azure Storage Explorer tends to generate a start time some time (1.5 hours or so) in the future, and if you use the resulting URL right away, it won’t work. Believe me – been there, done that, and I am glad no-one could hear me ;). So dial back the start time to the present or a little bit in the past, then create the URL, and save it. Do this for both the airplane and the buildscene file. We will need them in the next post to actually download and use them.

Conclusion

We have built two assets and uploaded them to Azure without writing any code – only a little code to make the airplane move, but that was not part of the build/upload procedure. Of course, you can write the code to build asset bundles yourself, but I am lazy smart and tend to use what I can steal borrow from the internet. This was almost all point-and-click; next time we will see more code. For the impatient, the whole finished project can be found here.
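
For completeness: if you do prefer code over the AssetBundle Browser, a minimal editor script could look roughly like the sketch below. It is not part of the demo project – the menu path, class name and output folder are assumptions of mine; it just calls Unity’s BuildPipeline.BuildAssetBundles for the same WSAPlayer target.

using System.IO;
using UnityEditor;

// Editor-only sketch (place it in an Editor folder); not part of the SceneBuilder project.
public static class AssetBundleBuildMenu
{
    [MenuItem("Tools/Build AssetBundles (WSAPlayer)")]
    public static void BuildAllAssetBundles()
    {
        // Same output folder the AssetBundle Browser uses in this post
        var outputPath = Path.Combine("AssetBundles", "WSAPlayer");
        Directory.CreateDirectory(outputPath);

        // Builds every asset and scene that has an asset bundle name assigned,
        // for the target the downloading app uses
        BuildPipeline.BuildAssetBundles(outputPath,
            BuildAssetBundleOptions.None, BuildTarget.WSAPlayer);
    }
}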

04 April 2018

For crying out loud, compile for native ARM!

CHPE works its magic…

Let me get this straight first: Microsoft did an amazing job with CHPE and x86 emulation, which allows x86 apps to run unmodified on the new Windows 10 on ARM PCs. Considering what happens under the hood, the performance and reliability x86 apps get is nothing short of stunning. And you still get the amazing battery life we are used to from ARM – but now on full-fledged PCs.

Yet, there are still some things to consider. If you are a UWP developer, you have by now probably stopped providing ARM packages for your UWP apps. After all, Windows 10 Mobile is, alas, fading away, and these ARM PCs run x86 apps, so why bother?

… but magic comes at a price

Well, this is why. UWP apps can be compiled to native ARM code, and now we have these ARM-based PCs. Native ARM code can run on the Windows 10 on ARM PCs without using CHPE. Although CHPE is awesome, it still comes at a price – it uses CPU cycles to convert x86 instructions to ARM instructions and then executes those. Skip one step, and you gain performance. And depending on what you do, you can gain a lot of performance.

To show you I am not talking nonsense, I actually compiled my HoloLens/Mixed Reality app “Walk the World” not only for x86 (which I need to do for HoloLens anyway) but also for native ARM. I made two videos: one of the app running as an x86 UWP, and one as a native ARM UWP. Since I don’t use a headset in this demo, I created a special Unity behaviour to control the viewpoint using an Xbox One Controller (a sketch of the idea follows below). I keep the actual PC out of the videos again, but you can clearly see the Continuum dock I wrote about before – I connected the Dell monitor using a DisplayPort cable, the Xbox One controller using USB, and a USB-to-Ethernet dongle to rule out any variations in Wi-Fi signal.
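
I won’t show the actual script from Walk the World here, but a minimal sketch of such a controller-driven viewpoint could look like this – the class name, field name and speed value are assumptions, and it only uses Unity’s default “Horizontal”/“Vertical” input axes, which map to a controller’s left thumbstick:

using UnityEngine;

// Minimal sketch of a gamepad-controlled viewpoint; NOT the behaviour used in the videos.
public class ControllerViewpoint : MonoBehaviour
{
    public float MoveSpeed = 2f;

    void Update()
    {
        // Left thumbstick (default "Horizontal"/"Vertical" axes) moves the viewpoint
        var move = new Vector3(Input.GetAxis("Horizontal"), 0, Input.GetAxis("Vertical"));
        transform.Translate(move * MoveSpeed * Time.deltaTime, Space.Self);
    }
}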

First, watch the x86 version

Then, the ARM version.

You can clearly see: although the x86 version works really well, the native ARM version starts faster, and also downloads the map faster – considerably so. Still, CHPE amazed me again by the fact that the graphics performance was nearly identical – once the map was loaded, panning over it happened at nearly identical speeds. But apparently startup and network code take more time. So there’s your win!

Note: any flickering or artifacts you see are the result of the external camera I used, to prevent any suggestion of this being faked. I also did not want to use a screen capture program, as this might interfere with the app’s performance.

Message clear? CHPE is meant for either x86 apps from the Store that are converted using the Desktop Bridge, or non-Store apps that are downloaded ‘elsewhere’ – think of Chrome, Notepad++ – anything that has not yet been converted to UWP or processed via the Desktop Bridge.

One extra checkbox to rule them all

I did not have to change anything in my code to add the native ARM version. Basically all I had to do was tick a checkbox:

image

…and the Store will do the rest, selecting the optimal package for your user. That one little checkbox gives your app a significant performance boost on these Windows 10 on ARM PCs.

Now one might wonder – why on Earth would you convert an app that started on HoloLens to a UWP desktop app to run on a Windows 10 on ARM PC, and how do you make it work? Well, stay tuned.

And incidentally, this was the 300th post on this blog since I started in October 2007. And rest assured – I am far from done ;)

02 April 2018

How to detect your Windows Mixed Reality app is actually running on a head set

Very short and small tip – Windows Mixed Reality can run on a HoloLens or on an immersive headset, but although the Store warns about the hardware necessary to run the app, there’s basically nothing stopping users from installing the app on a regular Windows PC. We can discuss whether or not that is a smart thing to do, but I approach it from the other side – if a user runs my app on a device that has no immersive capabilities, I can either quit the app, or offer her or him some basic functionality, show menus that can easily be controlled by a mouse click, whatever.

It is actually hilariously simple to detect that: just check

UnityEngine.XR.XRDevice.isPresent

This returns true if your app is running on either an immersive headset or a HoloLens, and false if the user has started it on some other Windows device.
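
A minimal sketch of how you might use this – the component and field names are mine, only the XRDevice.isPresent check itself comes from this tip:

using UnityEngine;
using UnityEngine.XR;

// Hypothetical example: switch to a mouse-friendly fallback UI when no headset is present.
public class HeadsetChecker : MonoBehaviour
{
    public GameObject FlatScreenMenu; // assumed fallback UI object

    void Start()
    {
        if (!XRDevice.isPresent)
        {
            // No HoloLens or immersive headset: show the basic, mouse-controlled menu
            FlatScreenMenu.SetActive(true);
        }
    }
}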

Thanks to Peter Nolen for the tip!

22 March 2018

Windows 10 on ARM and devices–hang on to your continuum dock!

Drumroll

I have been asked to evaluate a prototype Windows 10 on ARM PC. You might have seen people talk about it earlier, like my friend Lance who wrote about his one-day developer experience, Daren May has something to say about remote debugging with these devices, and wouldn’t you know – on the first day of spring, one sprang up at Paul Thurrott’s site. I am not sure if that’s exactly the same model as I have – it looks pretty similar, but that’s actually not important. As far as Windows goes, the platform and what it can do is more interesting to me than the actual underlying hardware. Windows goes ARM – yet again, one might say.

Wait, haven’t we seen this before?

Windows has been running on ARM before, on tablets, phones and IoT devices like a Raspberry Pi. Windows RT was a Windows 8 variant, Windows Mobile actually made it to Windows 10, and IoT devices run a super compact version of Windows 10 and UWP apps. In all cases, apps on Windows versions that run on ARM devices could only be native (ARM) apps. For a number of use cases, the lack of backward compatibility with the vast library of Windows apps that have been created over the years posed a bit of a challenge. And that’s where some brand new tech comes in. The new Windows 10 on ARM runs actual x86 code, made for the ‘conventional’ Intel chips, converting it on the fly. It uses a technology called CHPE (pronounced “chip-pee”) to work the magic. Lance’s article has a nice in-depth explanation of it. I talked to people working on CHPE during the last MVP Summit. Modest and quiet people they are, but by golly, I felt like a Neanderthal getting a quantum physics 101 lecture from the late professor Hawking when they casually talked about a few of the things they had to overcome. Really impressive.

I installed some x86 programs on the PC, downloaded from various sources, and some Desktop Bridge programs from the Windows Store. It’s very much a case of Your Mileage May Vary, but let’s just put it this way – I put a resource hog like Chrome on it – the x86 version – and it ran just fine, even the first time, when it’s supposed to be slower while CHPE works its magic. I still prefer Edge, as I like to keep my memory and battery power for other things than just web pages – but it runs Chrome just fine. I also tried TeamViewer – it also just works fine – case in point, I made the screenshots in this blog post using it. For all intents and purposes, this is just Windows. So much so, that you actually have to dig to see there’s another heart beating beneath its metal. The most obvious place is the File Explorer – see the image on the right:


And of course, there’s this.

image

Also, fun fact: because my good old Map Mania app still has an ARM package, intended for phones, it gets the native ARM package from the Store, and runs very fast on the device. So pay attention kids, and by all means submit an ARM package when you put your app in the Windows Store.

So if this is just Windows… how about devices?

One of the most awesome things I like about Windows is that whatever device you plug into it, it works, and nearly instantly. If it does not, you actually have a better chance of having a defective device than of Windows not at least eking the basic functionality out of it. I have had… let’s say, other and utterly frustrating experiences with other operating systems. However, the device I have has just one port – a USB-C port. It charges fine with the accompanying charger, but what about other devices?

This is where the fun starts. As a former Windows Phone MVP, I went all the way to the Lumia 950XL, scoring a free Continuum Dock with the phone. Remember this one? Connect a keyboard, a mouse and a monitor to it, plug the other end into your Lumia, and you basically had a kind of PC-from-your-pocket. Turns out Microsoft did not use some proprietary tricks, but apparently just a standard protocol.

I plugged the dock into the device, power in the other end:

Score one – it charged. Then I went a bit… overboard…

IMG_6803

I connected this entire pile of hardware to it. And all of it worked. What you see here, connected simultaneously:

  • A Dell monitor connected via DisplayPort (tried the HMDI port too – worked as well)
  • Two USB hubs, because I have only 3 USB ports on the dock ;)
  • A generic USB key
  • A Microsoft Basic Mouse V2
  • A Microsoft Natural Ergonomic Keyboard 4000 v1
  • A Xiaomi Mi 5 Android phone
  • A Microsoft LifeChat LX-3000 headset
  • A Microsoft XBox One controller
  • A Microsoft Sculpt ergonomic keyboard and accompanying mouse set (via a wireless dongle)

Not on this picture, but successfully tried:

  • A HoloLens – it got set up, but I could not connect to the portal via localhost:10080. I have to look into that a little bit more. Also other things work but that’s outside the scope of this article.
  • A fairly new Canon DSLR, but I needed that one to take the picture so it’s obviously not in it ;)

I also found the PC actually wants to charge from a Lizone QC series battery that I originally bought to extend my Surface Pro 4’s battery life on long transatlantic flights. The Windows 10 on ARM PC itself is missing from the picture – that’s because it’s a pre-release device and I don’t want pictures of it roaming around the internet.

Did I find stuff that did not work? In fact, I did:

  • I could not get a fingerprint reader that I got for free to work. This is some pre-release device that I got on the summit from a fellow MVP – 1.5 or maybe 2.5 years ago. Although it is set up and recognized, I cannot activate it in the settings screen. Maybe this has something to do with the built-in Windows-Hello-compatible camera of the PC getting priority.
  • A wireless dongle for XBox One controllers. Remember the original XBox One controllers did not have Bluetooth in them? This gadget allows you to connect them to PCs anyway. It connects, but nothing is set up. It’s not a big deal, as a controller plugged in via a USB cable works just fine. I suppose this dongle was not sold in large volumes, and probably not at all anymore, as all newer XBox One controllers can be connected via Bluetooth. Only people hanging on to old hardware (guilty as charged) would run into this.

General conclusion

I feel like a broken record, because I keep getting back to this simple fact – it’s just Windows, it will run your apps pretty nicely, it will connect to nearly all of your hardware, and give you a very long battery life. Although, I can imagine battery life might degrade a little if you add this many devices to its USB port. But then again, if you need this many devices connected to your PC you might want to rethink what kind of PC you want to buy anyway ;). The point is, you can, and everything but very obscure devices will work.

Now if you would excuse me, I have to clean up an enormous pile of stuff – my study looks like a minor explosion took place in the miscellaneous hardware box.

17 March 2018

Loading remote video stored in Azure blob storage into a floating gaze activated video player in a Mixed Reality app

Intro

The title of this blog post kind of gives away that this is actually two blog posts in one:

  • How to prepare and load videos into Azure
  • How to load these videos back from Azure, and show them in a floating video player that is activated when looked at.

The basic idea

The UI to demo loading and playing the video is a simple Plane that gets a MovieTexture applied to it. When you look at the Plane (i.e. the gaze strikes the Plane), the MovieTexture’s “Play” method is called, and the video starts playing. When you don’t look at it for a couple of seconds, the MovieTexture’s “Pause” method is called. It’s not rocket science.

Two posts ago, I introduced BaseMediaLoader as a simple base class for downloading media. We are going to re-use that in this post, as loading video – as you will see – is not that different from loading audio.

Prepare and upload the video

If you have read my post about loading audio you might have guessed – you can’t just upload an MP4 file to blob storage, download it and play it. Unity seems to have a preference for off-center open source formats. You will need to convert your movie to Ogg Theora, and you can do this with the command-line tool “ffmpeg”. The documentation on it is not very clear, and the default conversion yields a very low quality movie (think early years YouTube). I have found the following parameters give a quite reasonable conversion result:

ffmpeg.exe -i .\Fireworks.mp4 -q:v 8 fireworks.ogv

-q:v 8 gives a nice video quality. Also, the original 121,605 KB movie is compressed to about 40,000 KB. The resulting .ogv needs to be uploaded to Azure blob storage. I used the Storage Explorer for that. That also makes it easy to get a Shared Access Signature URL.

Video player components

The video player itself is pretty simple – a Plane to display the movie on, a Text to tell the user to start playing it by looking at the Plane, and an AudioSource you can just about see in the image below, depicted by a very vague loudspeaker icon.

image

image

Note the video player is about 3 meters from the user, and a bit off-center to the left – preventing it from auto-starting immediately, which it would do if it appeared right ahead. The video plane is rotated 90/90/270° to make it appear upright and facing the user.

The VideoPlayer script

The VideoPlayer script actually does all the work – downloading the video, playing it when gaze hits, and pausing the playback after a timeout of 2 seconds (‘Focus Lost Timeout’). It starts pretty simply:

using System.Collections;
using HoloToolkit.Unity.InputModule;
using UnityEngine;
using UnityEngine.Networking;

public class VideoPlayer : BaseMediaLoader, IFocusable
{
    public GameObject VideoPlane;

    public AudioSource Audio;

    public GameObject LookText;

    public float FocusLostTimeout = 2f;

    private MovieTexture _movieTexture;

    private bool _isFocusExit;

    protected void Start()
    {
        VideoPlane.SetActive(false);
        LookText.SetActive(false);
    }
}

Notice all components are explicitly defined; that is, although they are within one prefab, you still have to drag the Plane, the Text and the AudioSource into the script’s fields. Initially it turns off everything – if there’s nothing downloaded (yet), show nothing. If you are on a slow network, you will see the player disappear for a while, then reappear.

The most important part of this script consists of these two methods:

protected override IEnumerator StartLoadMedia()
{
    VideoPlane.SetActive(false);
    LookText.SetActive(false);
    yield return LoadMediaFromUrl(MediaUrl);
}

private IEnumerator LoadMediaFromUrl(string url)
{
    var handler = new DownloadHandlerMovieTexture();

    yield return ExecuteRequest(url, handler);

    _movieTexture = handler.movieTexture;
    _movieTexture.loop = true;
    Audio.loop = true;

    VideoPlane.GetComponent<Renderer>().material.mainTexture = _movieTexture;
    Audio.clip = handler.movieTexture.audioClip;
    VideoPlane.SetActive(true);
    LookText.SetActive(true);
}

Remember, from BaseMediaLoader, that StartLoadMedia is called as soon as MediaUrl changes. That turns off the UI again (in case it was already turned on because a different file was loaded previously). Then we need a DownloadHandlerMovieTexture. I think the person who came up with that naming scheme should be nominated for an originality award ;)

Then we set the loop property of both the movie texture and the AudioSource to true, and after that we apply the movie texture to the VideoPlane’s Renderer material texture so it will indeed show the movie. Since that will only play a silent movie, we need to extract the movie texture’s audioClip property value and put that in our AudioSource. Finally, we make both the plane and the text visible, inviting the user to have a look.

Then we have these two simple methods to actually start and pause playing. Notice that to start, you have to call both the movie texture’s Play method and the AudioSource’s Play method, but for pausing it’s enough to call just the movie texture’s Pause. One of those weird Unity idiosyncrasies.

private void StartPlaying()
{
    if (_movieTexture == null)
    {
        return;
    }
    _isFocusExit = false;
    if (!_movieTexture.isPlaying)
    {
        LookText.SetActive(false);
        _movieTexture.Play();
        Audio.Play();
    }
}

private void PausePlaying()
{
    if (_movieTexture == null)
    {
        return;
    }
    LookText.SetActive(true);
    _movieTexture.Pause();
}

Notice the setting of _isFocusExit to false in StartPlaying. We need that later. Finally, the methods that are actually fired when you look at or away from the plane, as defined by IFocusable:

public void OnFocusEnter()
{
   StartPlaying();
}

public void OnFocusExit()
{
    _isFocusExit = true;
    StartCoroutine(PausePlayingAfterTimeout());
}

IEnumerator PausePlayingAfterTimeout()
{
    yield return new WaitForSeconds(FocusLostTimeout);
    if (_isFocusExit)
    {
        PausePlaying();
    }
}

If the user stops looking at the plane, _isFocusExit is set to true and a coroutine starts that first waits for the defined time. If that time has passed and the user still does not look at the plane, the video playback will actually be paused. This way you prevent small head movements, which make the gaze cursor wander off the plane for a short period of time, from stopping and starting the movie repeatedly – which is a bad user experience.

No controls?

The floating audio player I described earlier has a fancy slider that shows progress and makes it possible to jump to any part of the audio. Unfortunately, a movie texture does not support a time property that you can get and set to randomly access parts of the movie and jump to a specific point. You can only move forward, and only by setting the loop property to true do you actually end up at the start again, because moving to the start does not work either. I don’t know why this is, but that’s the way it seems to be.

Conclusion

Showing video is almost as easy as playing audio, and the two are in many ways similar. The default Unity capabilities only allow for somewhat limited control, but it’s a nice way to – for instance – show instructional videos. Be aware that playing videos on a resource-constrained device (read: HoloLens) might ask for a lot of resources. Consider smaller, low-res videos in that case. Testing is always key.

The demo project, containing more stuff by the way, can be found here.

07 February 2018

Building a floating audio player in Mixed Reality

Intro

As promised in my previous blog post, I am going to write about how I created the floating audio player, designed to easily demonstrate how to download and play audio files in Mixed Reality (or actually, just Unity, because that code is not MR specific). I kind of skipped over the UI side there. In this post I am going to talk a little more about the floating audio player itself. This code uses the Mixed Reality Toolkit and so actually is Mixed Reality specific.

Dissecting the AudioPlayer prefab

The main game object

The AudioPlayer consists of two other prefabs, a SquareButton and a Slider. I have talked about this button before, so I won’t go over that one in detail again. The main game object of the AudioPlayer has an AudioSource and two extra scripts. The simple version of the Sound Playback Controller was already described in the previous blog post; the extended version will be handled in detail here. The other script is a standard Billboard script from the Mixed Reality Toolkit. It essentially keeps the object rotated towards the camera, so you will never see it from the side or the back, where it’s hard to read and operate. Note I have restricted the pivot axis to Y, so it only rotates over a vertical axis.

The button

It’s a fairly standard SquareButton, and I have set the text and icon as I described here. Now that text and icon only show in the editor; at runtime the text and the icon are set by a simple script that toggles them, so that the button cycles between being a “Play” and a “Pause” button. That script is pretty easy:

using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class IconToggler : MonoBehaviour, IInputClickHandler
{
    public Texture2D Icon1;

    public Texture2D Icon2;

    public string Text1;

    public string Text2;

    private TextMesh _textMesh;

    private GameObject _buttonFace;

    void Awake ()
    {
        _buttonFace = gameObject.transform.
           Find("UIButtonSquare/UIButtonSquareIcon").gameObject;
        var text = gameObject.transform.Find("UIButtonSquare/Text").gameObject;
        _textMesh = text.GetComponent<TextMesh>();
        SetBaseState();
    }

    public void SetBaseState()
    {
       _textMesh.text = Text1;
       _buttonFace.GetComponent<Renderer>().sharedMaterial.mainTexture = Icon1;
    }

    private float _lastClick;

    public void OnInputClicked(InputClickedEventData eventData)
    {
        if (Time.time - _lastClick > 0.1)
        {
            _lastClick = Time.time;
            Toggle();
        }
    }

    public void Toggle()
    {
        var material = _buttonFace.GetComponent<Renderer>().sharedMaterial;
        material.mainTexture = material.mainTexture == Icon1 ? Icon2 : Icon1;
       _textMesh.text = _textMesh.text == Text1 ? Text2 : Text1;
    }
}

It has four public properties, as is already visible in the image: Icon1 and Text1 for the default icon and text (“Play”), Icon2 and Text2 for the alternate icon and text (“Pause”). The Awake method grabs some objects within the button itself, then sets the base state – that is, the default icon and text.

It also implements IInputClickHandler, so the user can tap it. In OnInputClicked it calls the Toggle method. That then toggles both text and image. Notice there’s a simple time-based guard in OnInputClicked. This is to prevent the button from processing a burst of click events. In the Unity editor, I mostly get two clicks every time I press the XBox controller’s A button, so the button toggles twice and nothing appears to happen. Annoying, but easily mitigated this way.

The Slider

I can be short about that one. I did not create that, but simply nicked it from the Mixed Reality Toolkit Examples. It sits in HoloToolkit-Examples\UX\Prefabs. I like making stuff, but I like stealing reusing stuff even better.

The extended Sound Playback Controller

Let’s start at Start ;). Note: the BaseMediaLoader was handled in the previous blog post.

public class SoundPlaybackController : BaseMediaLoader
{
    public AudioSource Audio;

    public GameObject Slider;

    public GameObject Button;

    private SliderGestureControl _sliderControl;

    private IconToggler _iconToggler;

    public AudioType TypeAudio = AudioType.OGGVORBIS;

    void Start()
    {
        _sliderControl = Slider.GetComponent<SliderGestureControl>();
        _sliderControl.OnUpdateEvent.AddListener(ValueUpdated);
        Slider.SetActive(false);
        Button.SetActive(false);
        _iconToggler = Button.GetComponent<IconToggler>();
    }
}

In the Start method, we first grab a bunch of stuff. Note that we not only turn off the slider control, but also attach an event handler to it.

We continue with StartLoadMedia and LoadMediaFromUrl:

protected override IEnumerator StartLoadMedia()
{
    Slider.SetActive(false);
    Button.SetActive(false);
    yield return LoadMediaFromUrl(MediaUrl);
}

private IEnumerator LoadMediaFromUrl(string url)
{
    var handler = new DownloadHandlerAudioClip(url, TypeAudio);
    yield return ExecuteRequest(url, handler);
    if (handler.audioClip.length > 0)
    {
        Audio.clip = handler.audioClip;
        _sliderControl.SetSpan(0, Audio.clip.length);
        Slider.SetActive(true);
        Button.SetActive(true);
        _iconToggler.SetBaseState();
    }
}

The override of StartLoadMedia in this version turns off the whole UI while we are actually loading data, and turns it on again when we are done loading. Since that fails when we load MP3, the MP3 player in the demo project disappears on startup. The other ones disappear too, in fact, but immediately appear again since we are loading small clips. This goes so fast you can’t even see it.

LoadMediaFromUrl not only executes the request and assigns the downloaded clip to the AudioSource, as we saw before, but also sets the span of the Slider Control between 0 and the length of the AudioClip in seconds. Easy, right?

Now the Update method, which as you know is called every frame, does the trick of keeping the slider in sync with the current time of the clip that’s playing:

protected override void Update()
{
    base.Update();
    if (Audio.isPlaying)
    {
        _sliderControl.SetSliderValue(Audio.time);
    }
    if (Mathf.Abs(Audio.time - _sliderControl.MaxSliderValue) < 0.1f)
    {
        Audio.Stop();
        Audio.time = 0;
        _iconToggler.SetBaseState();
        _sliderControl.SetSliderValue(0);
    }
}

Thus if the audio clip plays, the slider moves along. It’s not quite rocket science. If the clip has nearly finished playing, it is stopped and everything is set to the base state: the icon is reset, and the time of the audio clip and the slider value are set to 0 again.

And finally – remember that event handler we added to the OnUpdateEvent of the slider? Guess what:

private void ValueUpdated()
{
    Audio.time = _sliderControl.SliderValue;
}

It’s the opposite of what Update does – now we set the Audio time to the slider value.

Conclusion

And that’s it. You can simply use some out-of-the-box components from the Mixed Reality Toolkit and/or its examples to build a simple but effective control to play audio. You can grab the demo project (it’s still the same) from here.