IoTivity.NET – A Cross-platform .NET Wrapper

Since I’m a .NET developer who can’t be bothered with spending too much time in C++, what am I to do if I want to use the IoTivity library to expose and interact with devices over the OCF protocols? The SDKs provided are C++, C, Java (Android) and Objective-C (iOS). .NET is sorely missing (hint hint, OCF!).

Well, the key here is the C SDK. It exports the methods needed to import them into C# using P/Invoke. Even better, this approach works not just in .NET and UWP, but also in Xamarin Android and Xamarin iOS (and probably more). So why not create a set of C# classes in a .NET Standard library that imports the methods and calls into the native library?

Yeah I couldn’t come up with a reason why not, so I did: https://github.com/dotMorten/IotivityDotNet

This is still a work in progress, but it already allows you to create, discover, and interact with devices over the IoTivity protocol. Compared to the AllJoyn APIs, the IoTivity API is A LOT simpler. In fact, you can create a device in a single line of code.

To use it, first initialize the IoTivity service:

 IotivityDotNet.Service.Initialize(IotivityDotNet.ServiceMode.Client);

You can then search for device resources using Device Discovery:

var svc = new IotivityDotNet.DiscoverResource("/oic/res");
svc.ResourceDiscovered += (s, e) =>
{
    Console.WriteLine($"Device Discovered @ {e.Response.DeviceAddress}");
};
svc.Start();

When you're done, shut down the IoTivity service:

await IotivityDotNet.Service.Shutdown();

Look at the sample apps for examples of how to create devices that can be discovered and respond to requests from clients.
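
As a small taste of the server side, here’s a rough sketch of what registering a discoverable resource could look like. The class and parameter names below are illustrative guesses modeled on the discovery API above – check the sample apps for the library’s actual server-side API:

    // Hypothetical sketch – DeviceResource and its parameters are illustrative;
    // see the IotivityDotNet sample apps for the real server-side API.
    IotivityDotNet.Service.Initialize(IotivityDotNet.ServiceMode.Server);

    // Register a binary-switch resource at a URI with an OCF resource type.
    var light = new IotivityDotNet.DeviceResource("/light/1", "oic.r.switch.binary");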

 

Note 1: Currently the SDK builds off a fork of mine that has a set of bug fixes for IoTivity, rather than using the official builds. All of these bugs are logged and waiting to be fixed; the IoTivity maintainers have been pretty responsive to this, and one has already been fixed recently.

 

Note 2: Xamarin doesn’t work at this point, as I have not been able to compile the native libraries for those platforms. However, it should “just work” once that’s done. If you’re interested and have experience building native iOS and Android libraries, I would love a hand with it.

Is AllJoyn dead?

I’ve spent a lot of time over the past two years building various AllJoyn-related libraries and services. In fact, every single smart device in my home is accessible via AllJoyn, and most of that work is available as a set of repos on GitHub as well. So did I waste my time doing this (apart from the fun it was)? Here’s my take on all of it. It’s written from my personal point of view, focused on building a smart home. I might be wrong, or my arguments might not apply in other scenarios, so please do use the comment section to add additional viewpoints.

 

First of all, what is AllJoyn? AllJoyn is a protocol and a set of standard interfaces for interacting with your smart devices on the local network. It is (was?) meant to be the standard that unifies all the other IoT standards, either by implementing AllJoyn directly in the device, or by providing Device Service Bridges (DSBs) that translate another protocol to AllJoyn. I created several DSBs to bridge my range of devices into one common set of AllJoyn interfaces (LIFX, Philips Hue, Ecobee, Z-Wave, ZigBee HA, ZigBee SE, etc.). This was neat because, for instance, I didn’t have to care what type of light I was controlling: the code to discover, flip the switch, dim, or change color would be the same. And if a new device/protocol came out tomorrow, my apps wouldn’t have to change, as long as a bridge was created.

However, AllJoyn isn’t the only standard trying to do this: the Open Connectivity Foundation, started by Intel, had similar goals, albeit some years behind on the implementation. It started to look like the Betamax/VHS wars all over again, but luckily they decided to join forces and merge into one organization.

However, it looks as if the direction going forward is betting on OCF’s protocol and its “IoTivity” open-source reference implementation, rather than the (slightly) more established AllJoyn protocol. So does that mean AllJoyn is dead? It’s a question people are very reluctant to give a clear answer to. Here’s why I think that is:

1. AllJoyn already ships as part of Windows 10, and Microsoft is pretty good at supporting its products and features for several years.

2. AllJoyn is still seeing development (regular commits are still coming in).

3. OCF will provide an AllJoyn-to-OCF bridge, bringing the two standards closer and taking the best of both.

 

Now, I don’t have any inside knowledge of what’s really happening with AllJoyn and its future, but here are my thoughts from an outside perspective:

Microsoft will often bring up #1 when asked about the future of AllJoyn. However, I don’t read much into this: their support lifecycle simply requires them to support it for a while. I find it doubtful that we’ll see Microsoft pushing AllJoyn support forward beyond what’s already there. They might update the binary to the latest version, but I doubt we’ll see anything beyond that. Judging from Microsoft’s AllJoyn repos on GitHub, they are very clear that they don’t have the resources to maintain or improve them – for instance, see this comment. And judging from the IoTivity committers and its developer mailing list, it’s the same people from Microsoft who also contributed to AllJoyn, so it’s fair to assume resources have been moved away from AllJoyn towards IoTivity.

OCF says they’ll provide a bridge, and it’s on the timeline for v2.0, due in April. However, looking at how different the specs are and how differently devices are exposed, I’m guessing the bridge will expose AllJoyn devices to IoTivity in such a generic way that it becomes virtually useless (I would love to be proved wrong). Yes, there might be a boolean exposed that flips the switch on a light, but if the object isn’t actually modeled as a device that looks like any other “proper” IoTivity light – just as a set of generic properties – it doesn’t really help us create a great user interface for controlling all our devices. We’ll have to see, but the bridge feels and smells like a temporary band-aid until everyone has moved over. And let’s be honest… there really aren’t that many AllJoyn devices to bridge.

I’ve also seen the merger cause a bit of a split among the AllSeen Alliance members. Some do not like the OCF licensing and find it too restrictive. Others find the merger more important, since it brings a lot of big players on board, like Samsung and Intel. This split doesn’t bode well for AllJoyn, but the licensing concerns don’t really affect you and me, who just want to talk to these devices with our apps, bots, dashboards, etc.

AllJoyn might be more mature, but the only wide-ranging consumer product that shipped with AllJoyn was the LIFX light bulb, and LIFX isn’t even betting much more on AllJoyn1. Yes, there are more products, like various music players, humidifiers, etc. (all with a horrible ecosystem to control them), but you’ll be hard pressed to find any other AllJoyn products selling in significant numbers, and none of them implement a standard set of usable interfaces. The AllSeen Alliance likes to bring up that over 325 million devices support AllJoyn, but that includes Windows 10, which, while it has an AllJoyn routing service installed, really shouldn’t be counted as an AllJoyn device. The fact is, AllJoyn sadly never really caught on.

 

So based on this, is AllJoyn dead, and should you switch your bet to IoTivity? YES! (Sorry guys, but someone had to say it.)

Will IoTivity be a waste of time and die off soon? I seriously doubt it. All the right players are part of this, and I don’t really see any other serious contender trying to solve this problem.

If you already have AllJoyn products shipped, you should be OK for a bit. However, creating a new product today based on AllJoyn seems really senseless. Personally, I’ve stopped all my AllJoyn development and started building a .NET Standard wrapper around the IoTivity APIs (help wanted!). Unless you control your entire internal ecosystem and use AllJoyn to communicate within it (it’s really great/simple for that), consider your AllJoyn work a sunk cost and just move on already.

 

Disagree or have additional insights to share? Please write in the comment section.

 

1) Quote from a LIFX VP: “We're dependent on chipset providers for our AJ [AllJoyn] support, and their focus has somewhat shifted elsewhere of late, as the market evolves and consolidates.” Also, long-reported AllJoyn bugs aren’t getting fixed by LIFX, indicating that they have no further commitment to AllJoyn.

IoT Series

Yes yes yes, I know… this blog has been pretty quiet lately. Sorry, I’m trying to fix that. I’ve been busy releasing a very large and awesome .NET product, spending some time on the UWP Community Toolkit, hacking away at various IoT projects, and most importantly spending time with my family. My IoT-related GitHub repos are getting a little out of hand, but I’m having a lot of fun creating all the building blocks for a smart home. My ultimate goal is to build a system that can interact with any protocol and is resilient against internet outages (your light switch should still work even if the internet is down – yes, I’m looking at you, SmartThings!).

Regarding my IoT projects, I’d like to start writing down my progress and making some notes, ramblings, etc. It’s a great way to get feedback, share your work, and gather your own thoughts, and having to explain something helps me really understand the topic. So I’d like to start my blog back up with various IoT topics. Those who have been following me on Twitter know I’ve been hacking away at AllJoyn, ZigBee, Z-Wave, IoTivity, etc., and I’m going to start writing some blogposts gathering my experiences, thoughts, and so on.

While this blog might have been a little quiet, I have actually been writing some articles on Hackster.io, so I’ll start with a few links to those before diving into a series of IoT-related blogposts.

Stay tuned….

Adding a Gaze Cursor to your HoloLens App

Adding a gaze cursor to your app is important to give the user feedback on what they’re looking at and whether they can interact with it.

First, make sure you’ve installed the HoloToolkit into your project.

Add a new empty GameObject and rename it to “Managers”.

Select the “Managers” object and in the Inspector click “Add Component” and add the “Gaze Manager” script.

In the added component’s “Raycast Layer Mask” dropdown, deselect “TransparentFX”.

From HoloToolkit\Prefabs\Input\, add the “Cursor” prefab to the Managers object.

Save the scene, build the app, and deploy. You now have a cursor at the center of your view that follows holograms and the spatially mapped mesh.
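
If you’re curious what the Gaze Manager does under the hood, the core idea boils down to ray-casting from the head position along the gaze direction and parking the cursor on whatever the ray hits. Here’s a minimal Unity script sketch of that idea (illustrative only – the HoloToolkit’s actual GazeManager and cursor scripts do quite a bit more):

    using UnityEngine;

    // Minimal gaze-cursor sketch: ray-cast from the head along the gaze
    // direction and place a cursor object on whatever surface is hit.
    public class SimpleGazeCursor : MonoBehaviour
    {
        public GameObject cursor; // assign a small cursor object in the Inspector

        void Update()
        {
            Vector3 headPosition = Camera.main.transform.position;
            Vector3 gazeDirection = Camera.main.transform.forward;

            RaycastHit hitInfo;
            if (Physics.Raycast(headPosition, gazeDirection, out hitInfo))
            {
                cursor.SetActive(true);
                cursor.transform.position = hitInfo.point;                           // snap to the surface
                cursor.transform.rotation = Quaternion.LookRotation(hitInfo.normal); // align with its normal
            }
            else
            {
                cursor.SetActive(false); // nothing under the gaze – hide the cursor
            }
        }
    }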

Rendering the Spatial Mapping Mesh

A lot of HoloLens apps render the mesh the device scans to show you which surfaces have been detected – it can give a really cool effect and helps you understand the play space.

Also, if you’re using the emulator, you won’t actually be able to see the virtual room you’re placing holograms in, so being able to render the spatially mapped mesh can be useful for building apps if you’re not among the lucky few who have an actual HoloLens yet.

So let’s use the spatial mapping mesh we get from the HoloLens sensors and render it inside the application.

First, make sure you’ve installed the HoloToolkit into your project.

From HoloToolkit\Prefabs\SpatialMapping drag the SpatialMapping prefab into the root of your Hierarchy.

Select the added “SpatialMapping” object and ensure “Draw Visual Meshes” is checked. The default material is “Wireframe”; feel free to experiment with other materials.

Build your app and deploy it. You should now see the mesh rendered on top of walls, floors etc.
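
If you’d rather toggle the mesh rendering from code – say, only while scanning – the HoloToolkit’s SpatialMappingManager exposes it. A small sketch (the namespace and property names are from the HoloToolkit version I’m using; verify against your copy):

    using HoloToolkit.Unity;
    using UnityEngine;

    // Sketch: flip the spatial mapping mesh rendering at runtime via the
    // SpatialMappingManager singleton that ships with the SpatialMapping prefab.
    public class MeshToggle : MonoBehaviour
    {
        void Update()
        {
            // Illustrative trigger – wire this up to a tap gesture, voice command, etc.
            if (Input.GetKeyDown(KeyCode.M))
            {
                SpatialMappingManager.Instance.DrawVisualMeshes =
                    !SpatialMappingManager.Instance.DrawVisualMeshes;
            }
        }
    }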

Using HoloLens’ Spatial Mapping to occlude objects

Spatial mapping is probably one of the most important aspects of the HoloLens. It’s what the device uses to know where it is in a room, and it’s how you can make holograms interact with the real world. It essentially scans your surroundings and builds a 3D model of them. Here’s an example of some of the mesh it has generated for a house:

[Image: spatial mapping mesh generated for a house]

Now we can use this to avoid seeing our holograms “through walls”. If we bring up the sample we built in the first blogpost, here’s what happens when a hologram goes behind a wall and ruins the illusion:

[Animation: the hologram stays visible through the wall]

 

First, enable the SpatialPerception capability. In Unity go to Edit –> Project Settings –> Player, click the “Windows Store” tab, and check the capability.

Note: If you have already generated a Visual Studio project, this checkbox doesn’t actually “work”. You can do one of two things: either delete the generated app and regenerate it, or go into Visual Studio and manually edit “Package.appxmanifest” in a text editor (the manifest designer doesn’t have this capability listed yet) by adding the following line to the manifest (if the uap2 XML namespace isn’t already declared on the Package element, you’ll need to add that as well):

    <uap2:Capability Name="spatialPerception" />

Next we’ll add the SpatialMapping component from the HoloToolkit. Make sure you’ve followed the steps from the previous blogpost to add the HoloToolkit to your project.

Select the object or collection you want spatial mapping to occlude. In this case I’ll select the HologramCollection, so all children of this collection will get occluded. Click “Add Component” in the Inspector and add the “Spatial Mapping Renderer”.

That’s it! Now save the scene, build your project again, and redeploy from Visual Studio.

[Animation: the hologram is now occluded by the wall]

The clipping is a little bit off – this is because the mixed reality capture webcam is offset compared to what you really see in the HoloLens, and the mesh scan wasn’t too good in this corner.

Installing HoloLens HoloToolkit into your Unity Project

Make sure you first read “Creating your very first holographic app in Unity” for setting up your HoloLens project.

The HoloLens team has created a useful “HoloToolkit” for use with Unity. It provides things like spatial mapping, a client/service for sharing holograms among multiple users, cursors, gesture handling, spatial sound, etc.

It’s pretty simple to install into your project, so here’s the simple step-by-step:

  1. Go to https://github.com/microsoft/HoloToolkit-Unity and click “Download ZIP” to download the toolkit.
  2. Right-click the downloaded zip, select Properties, check the “Unblock” checkbox, and click OK.
  3. Unzip the contents of “HoloToolkit-Unity-master\Assets” into the Assets folder of your Unity project.

Done!

You should now see all the HoloToolkit content in your Project view (Unity doesn’t even need to restart – it will auto-detect the new files and import them).

I’ll be blogging about using the Toolkit in upcoming blogposts as I figure out how to use the pieces.

Creating your very first holographic app in Unity

Most of the tutorials at the Holographic Academy start out with a starter project with a bunch of stuff already set up for you. Being new to Unity and holographic development, I found that a little bit like “cheating” and wanted to know how to do things from scratch, to properly understand it. I thought I would share my findings in a set of blogposts – they will serve as notes for myself, but might be useful for others as well. If something is wrong or you know a better way, please comment in the comment section.

I have all the steps recorded in a video at the bottom, but for those who like to read and understand the steps, I’ll go through them first. So let’s get started.

First launch Unity and create a new project. Name it whatever you’d like.

After launch, you’ll see a “Main Camera” and a “Directional Light” object in the Hierarchy view.

First we’ll configure the camera. Keep the name “Main Camera” – from my understanding, this is what automatically becomes the camera controlled by your HoloLens. We do, however, have to place it at the center of the world.

Select the camera in the Hierarchy, and in the Inspector set the position and rotation to all zeros.

Next we need to make the camera render “nothing” as black. By default it renders blue skies, but since black renders as transparent on the HoloLens and we want to see the real world around our holograms, we set “Clear Flags” to “Solid Color” and “Background” to “Black”.

It is also recommended to set the Near Clipping Plane to 0.85 m. This prevents users from getting “too close” to holograms and getting all cross-eyed from it, which can be very uncomfortable. Feel free to set it to 0.1 though, if you want to get really up close to your holograms.
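
For reference, here’s the same camera setup expressed as a small Unity script – purely illustrative; configuring it in the Inspector as described above is the normal approach:

    using UnityEngine;

    // Illustrative only: the same camera settings applied from code.
    public class HoloCameraSetup : MonoBehaviour
    {
        void Start()
        {
            Camera cam = Camera.main;
            cam.transform.position = Vector3.zero;        // center of the world
            cam.transform.rotation = Quaternion.identity;
            cam.clearFlags = CameraClearFlags.SolidColor; // don't render a skybox
            cam.backgroundColor = Color.black;            // black renders as transparent on HoloLens
            cam.nearClipPlane = 0.85f;                    // the recommended near clip distance
        }
    }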

 

Next we need to configure the app for virtual reality. Go to Edit –> Project Settings –> Player. Click the green “Store Logo” tab, expand “Options” and check “Virtual Reality Supported”. You should then see “Windows Holographic” listed under the “Virtual Reality SDKs” list.

Lastly, we configure the app to run with the fastest rendering possible. Go to Edit –> Project Settings –> Quality. Under the green “Store Logo” tab, click the little black dropdown triangle to the left of “Default” and select “Fastest”. You should now see “Fastest” shown in green in the column under the Store logo.

 

Now at this point we’ve done all the steps for configuring your Holographic app. To recap:

  1. Place camera at 0,0,0, name it “Main Camera” and set the background to solid color black.
  2. Enable the app for Virtual Reality
  3. Set quality settings to “Fastest”

At this point we could save the app and build a Windows Store Visual Studio project to deploy to the HoloLens, but since we haven’t added anything to the scene, it would be a boring app. So let’s add something first before creating the Visual Studio project.

Right-click inside the Hierarchy panel and select “Create Empty”. You should see a “GameObject” appear. Right-click it, click “Rename”, and name it something like “HologramCollection”. Double-check the transform settings and make sure the position is at 0,0,0.

Next right-click the HologramCollection object and select “3D Object –> Cube”. A new cube should be created under the collection in the Hierarchy panel.

Select the Cube, and set its position to 0, –0.5, 2. This means “place it 0.5 meters below the camera, and two meters in front”. When the app starts, this is where the hologram will be placed relative to the HoloLens. The cube is 1x1x1 meter – a little bit large for a hologram – so set the scale to 0.25 for all 3 values, to make it 0.25 m on each side.
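
For reference, the same cube can also be created and placed from code – again purely illustrative; the Inspector steps above are all you need:

    using UnityEngine;

    // Illustrative only: create and place the cube from code.
    // Attach this script to the HologramCollection object.
    public class CreateCube : MonoBehaviour
    {
        void Start()
        {
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.parent = transform;                         // child of HologramCollection
            cube.transform.localPosition = new Vector3(0f, -0.5f, 2f); // 0.5m below, 2m in front
            cube.transform.localScale = Vector3.one * 0.25f;           // 0.25m on each side
        }
    }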

That’s it for setting up our scene. Next, let’s save and build it. First go to “File –> Save Scene as…” and give it a name like “Main Scene”.

Next go to “File –> Build Settings…”. For Platform select “Windows Store”, and set SDK to “Windows 10” and UWP Build Type to “D3D”.

Click “Add Open Scenes” and ensure the scene you just saved got added to the list at the top, then click “Build”.

You’ll be asked for a folder to create it in. Create a folder in your project called “App”, and select that folder. When the project is done being created (it takes a while the first time), go into the App folder and open the Visual Studio solution.

Next, set the build architecture to x86 and choose a target: select the HoloLens emulator, or if you have a device, select “Remote Machine” and enter its IP (tip: from within the HoloLens, open the Start menu and ask Cortana “What is my IP?”), or plug the device in over USB and select “Device”.

Hit F5 and start your first holographic app!

All the steps are also shown in the video below. At the end of the video you’ll see the app running inside the HoloLens.

 

Enjoy!

The beginnings of an AllJoyn based Home Automation Controller

I’ve been working on building my own home automation controller to make my home smarter. I decided to build this around AllJoyn so I could avoid any kind of device-protocol lock-in and instead abstract everything with AllJoyn.

I’m currently at a stage where I have several lights, switches, temperature, humidity, and door/window sensors, as well as a way to directly read my house’s smart meter for real-time power consumption, all exposed via AllJoyn.

Since I want to build a controller that can pick up any AllJoyn device at runtime, without needing a preconfigured list of supported device types, I needed a way to discover any device without prior knowledge. Luckily there’s a great library with full source from Microsoft that does just this. I wrapped it all up in a little NuGet package and wrote an article on how to use it on Hackster here: https://www.hackster.io/dotMorten/discovering-and-interacting-with-any-alljoyn-device-0dbd86
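
From memory, using that library looks roughly like the sketch below. I’m writing this without the code in front of me, so treat the type and member names as assumptions and see the Hackster article for the exact API:

    using DeviceProviders; // Microsoft's AllJoyn "DeviceProviders" library

    // Rough sketch from memory – type and member names may differ slightly.
    var provider = new AllJoynProvider();
    provider.ServiceJoined += (s, e) =>
    {
        // Every AllJoyn service appearing on the network shows up here, with
        // its introspected interfaces, properties and methods – no prior
        // knowledge of the device type required.
        System.Diagnostics.Debug.WriteLine(e.Service.Name);
    };
    provider.Start();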

 

I’m excited to be going to CES 2016 for a few days this week, where I’ll be meeting with the AllSeen Alliance, which has a big presence there, to get some inspiration and hopefully get some questions answered before moving forward with my controller.

Here are a few photos of it all running on a Raspberry Pi with a little 5” display.

[Photo: Home screen]

 

[Photo: Tracking power consumption over time]

 

[Screenshot: Automation rule designer]

 

Just some of the AllJoyn devices in my house…