
Data Compression for the Kinect

Transmitting uncompressed Kinect depth and color data requires a network bandwidth of about 460Mbit/s. Using the RleCodec or the LZ4 library we achieve tremendous compression, a compression ratio of 10 or 22 respectively, at lightning speed: over 1600Mbyte/s. We achieve this not so much by the compression algorithms themselves, but by removing undesirable effects (jitter, by the DiscreteMedianFilter) and redundancy (already sent data, by taking the Delta) beforehand.

Introduction

From the start, one goal of the Kinect Client Server (KCS) project was to provide a version of the KCS viewer, called 3D-TV, from the Windows Store. Because of certification requirement 3.1 (V5.0)

“Windows Store apps must not communicate with local desktop applications or services via local mechanisms,..”

3D-TV has to connect to a KinectColorDepth server application on another PC. In practice, the network bandwidth that is required to transfer uncompressed Kinect depth and color data over Ethernet LAN using TCP is about 460Mbit/s, see e.g. the blog post on the jitter filter. This is a lot, and we would like to reduce it using data compression.

This is the final post in a series of three on the Kinect Client Server system, an Open Source project at CodePlex, where the source code of the software discussed here can be obtained.

What Do We Need?

Let’s first clarify the units of measurement we use.

Mega and Giga

There is some confusion regarding the terms Mega and Giga, as in Megabit.

  • For files, 1 Megabit = 2^20 bits, and 1 Gigabit = 2^30 bits.
  • For network bandwidth, 1 Megabit = 10^6 bits, and 1 Gigabit = 10^9 bits.

Here we will use the units for network bandwidth. The Kinect produces a color frame of 4 bytes per pixel and a depth frame of 2 bytes per pixel at 30 frames per second. This amounts to:

30 FPS x 640×480 resolution x (2 depth bytes + 4 color bytes) x 8 bits = 442.4 Mbit/s = 55.3 Mbyte/s.

We have seen before that the actual network bandwidth is about 460Mbit/s, so network transport creates about 18Mbit/s overhead, or about 4%.

Since we are dealing with data that is streamed at 30 FPS, time performance is more important than compression ratio. It turns out that a compression ratio of at least 2 (compressed size is at most 50% of uncompressed size), achieved in about 5ms, satisfies all requirements.

Data Compression: Background, Existing Solutions

Theory

If you are looking for an introduction to data compression, you might want to take a look at Rui del-Negro’s excellent 3 part introduction to data compression. In short: there are lossless compression techniques and lossy compression techniques. The lossy ones achieve better compression, but at the expense of some loss of the original data. This loss can be a nuisance, or irrelevant, e.g. because it concerns information that cannot be detected by our senses. Both types of compression are applied, often in combination, to images, video and sound.

The simplest compression technique is Run Length Encoding, a lossless compression technique. It simply replaces a sequence of identical tokens by one occurrence of the token and the count of occurrences. A very popular, somewhat more complex family of compression techniques is the LZ (Lempel-Ziv) family (e.g. LZ77, LZ78, LZW), which provides dictionary based, lossless compression. For video, the MPEG family of codecs is a well known solution.
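In code, the principle of RLE fits in a few lines. A minimal sketch of the textbook algorithm in C++ (just an illustration of the principle, not the RleCodec discussed below):

    #include <cstdint>
    #include <utility>
    #include <vector>

    // Textbook run length encoding: {a,a,a,a,b,b,c} -> {(a,4), (b,2), (c,1)}.
    std::vector<std::pair<uint8_t, uint32_t>> RleEncode(const std::vector<uint8_t>& input)
    {
        std::vector<std::pair<uint8_t, uint32_t>> runs;
        for (uint8_t token : input)
        {
            if (!runs.empty() && runs.back().first == token)
                ++runs.back().second;        // extend the current run
            else
                runs.emplace_back(token, 1); // start a new run
        }
        return runs;
    }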

Existing Solutions

There are many, many data compression libraries, see e.g. Stephan Busch’s Squeeze Chart for an overview and benchmarks. Before I decided to roll my own implementation of a compression algorithm, I checked out two other solutions: the Windows Media codecs in Windows Media Foundation (WMF), and LZ4. Consider first the following fragment from the Windows Media Codecs documentation:

It seems as if the codecs are way too slow: max 8Mbit/s where we need 442Mbit/s. The WMF codecs obviously serve a different purpose.

The compression used for higher compression levels seems primarily of the lossy type. Since we have only 640×480 pixels I’m not sure whether it is a good idea to go ‘lossy’. It also seems that not all versions of Windows 8 support the WMF.

The second solution is LZ4, an open source compression library. LZ4 is very fast, see the image below, from the website:

So, LZ4 can compress a file to 50% at over 400MByte/s. That is absolutely great! I’ve downloaded the source and tried it on Kinect color and depth data. Results are shown and compared in the Performance section.

The RleCodec

I decided to write my own data compression codec, and chose the Run Length Encoding algorithm as a starting point. Why?

Well, I expected that a custom algorithm, tailored to the situation at hand, would outperform the general purpose LZ4 library. And the assumption turned out to be correct: a prototype implementation of the RleCodec, supported by both the DiscreteMedianFilter and the creation of a Delta before compressing data, really outperformed the LZ4 reference implementation, as can be read from the performance data in the Performance section.

It only dawned on me much later that removing undesired effects (like jitter, by the DiscreteMedianFilter) and redundant information (already sent data, by taking the Delta) before compressing and transmitting data is not an improvement of just the RLE algorithm, but should be applied before any compression and transmission takes place. So, I adjusted my approach and in the performance comparison below, we compare the core RLE and LZ4 algorithms, and see that LZ4 is indeed the better algorithm.

However, I expect that in time the data compression codec will be implemented as a GPU program, because there might be other image pre-processing steps that will also have to be executed on the GPU at the server machine; copying the data to the GPU just for compression requires too much overhead. It seems to me that for GPU implementations the RLE algorithm will turn out to be the better choice. The LZ4 algorithm is a dictionary based algorithm, and creating and consulting a dictionary requires intensive interaction with global memory on the GPU, which is relatively expensive. An algorithm that can do its computations purely locally has an advantage.

Design

Lossless or Lossy

Shall we call the RleCodec a lossy or lossless compression codec? Of course, RLE is lossless, but when compressing the data, the KinectColorDepth server also applies the DiscreteMedianFilter and takes the Delta with the previous frame. Both reduce the information contained in the data. Since these steps are responsible for the enormous reduction in compressed size, I am inclined to consider the resulting library a lossy compression library, noting that it only loses information we would like to lose, i.e. the jitter and data we already sent over the wire.

Implementation

Algorithm

In compressing, transmitting, and decompressing data the KinectColorDepth server application takes the following steps:

  1. Apply the DiscreteMedianFilter.
  2. Take the Delta of the current input with the previous input.
  3. Compress the data.
  4. Transmit the data over Ethernet using TCP.
  5. Decompress the data at the client side.
  6. Update the previous frame with the Delta.

Since the first frame has no predecessor, it is its own Delta and is sent over the network as a whole.
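To illustrate steps 2 and 6: a simplified sketch of taking and applying a Delta (the function names are mine; the real implementation works per channel and in parallel):

    #include <cstddef>
    #include <vector>

    // Step 2 (server side): element-wise difference of the current frame
    // with the previous one. Unchanged pixels become 0, which compresses well.
    std::vector<unsigned short> TakeDelta(const std::vector<unsigned short>& current,
                                          const std::vector<unsigned short>& previous)
    {
        std::vector<unsigned short> delta(current.size());
        for (std::size_t i = 0; i != current.size(); ++i)
            delta[i] = current[i] - previous[i]; // unsigned arithmetic wraps, which is fine
        return delta;
    }

    // Step 6 (client side): reconstruct the current frame by adding the
    // decompressed delta to the previously received frame.
    void ApplyDelta(std::vector<unsigned short>& previous,
                    const std::vector<unsigned short>& delta)
    {
        for (std::size_t i = 0; i != previous.size(); ++i)
            previous[i] += delta[i];
    }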

Code

The RleCodec was implemented in C++ as a template class. As with the DiscreteMedianFilter, traits classes have been defined to inject the properties that are specific to color and depth data at compile time.

The interface consists of:

  • A declaration that takes the value type as the template argument.
  • A constructor that takes the number of elements (not the number of bytes) as an argument.
  • The size method that returns the byte size of the compressed data.
  • The data method that returns a shared_ptr to the compressed data.
  • The encode method that takes a vector of the data to compress, and stores the result in a private array.
  • The decode method that takes a vector, of sufficient size, to write the decompressed data into.

Like the DiscreteMedianFilter, the RleCodec uses channels and an offset to control the level of parallelism and to skip channels that do not contain information (specifically, the A (alpha or opacity) channel of the color data). Parallelism is implemented using concurrency::parallel_for from the PPL library.
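To give an idea of how the pieces fit together, below is a minimal, single-channel sketch of the interface just described. It is only a sketch: the actual template at CodePlex adds the traits classes, channels, offset, and PPL parallelism.

    #include <cstddef>
    #include <cstring>
    #include <memory>
    #include <vector>

    template<typename ValueType>
    class RleCodec
    {
    public:
        // Takes the number of elements, not the number of bytes.
        explicit RleCodec(std::size_t nrOfElements) : m_count(nrOfElements), m_size(0) {}

        // Compress the input into the private buffer as (count, value) pairs.
        void encode(const std::vector<ValueType>& input)
        {
            std::vector<unsigned char> buffer;
            for (std::size_t i = 0; i < input.size(); )
            {
                std::size_t run = 1; // count a run, capped so it fits in one byte
                while (i + run < input.size() && run < 255 && input[i + run] == input[i])
                    ++run;
                buffer.push_back(static_cast<unsigned char>(run));
                const unsigned char* p = reinterpret_cast<const unsigned char*>(&input[i]);
                buffer.insert(buffer.end(), p, p + sizeof(ValueType));
                i += run;
            }
            m_size = buffer.size();
            m_data = std::shared_ptr<unsigned char>(new unsigned char[m_size],
                                                    std::default_delete<unsigned char[]>());
            if (m_size != 0)
                std::memcpy(m_data.get(), buffer.data(), m_size);
        }

        // Decompress into output, which must be of sufficient size.
        void decode(std::vector<ValueType>& output) const
        {
            std::size_t o = 0;
            for (std::size_t i = 0; i < m_size; )
            {
                std::size_t run = m_data.get()[i++];
                ValueType value;
                std::memcpy(&value, m_data.get() + i, sizeof(ValueType));
                i += sizeof(ValueType);
                for (std::size_t r = 0; r != run && o < output.size(); ++r)
                    output[o++] = value;
            }
        }

        std::size_t size() const { return m_size; }                    // compressed byte size
        std::shared_ptr<unsigned char> data() const { return m_data; } // compressed data

    private:
        std::size_t m_count;
        std::size_t m_size;
        std::shared_ptr<unsigned char> m_data;
    };

A round trip is then: construct with the element count, encode a frame (or rather its Delta), ship data()/size() over the wire, and decode into a preallocated vector at the client side.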

Meta Programming

The RleCodec contains some meta programming in the form of template classes that unroll the loops over the channels and offset at compile time. The idea is that removing logic that loops over channels and checks whether a specific channel has to be processed or skipped will provide a performance gain. However, it turned out that this gain is only marginal (and a really thin margin 🙂 ). It seems as if the optimizing compiler obviates the need for meta programming, except perhaps in very complicated cases.

Performance

How does our custom RLE codec perform in a test environment and in the practice of transmitting Kinect data over a network? How does its performance compare to that of LZ4? Let’s find out.

In Vitro Performance

In vitro performance tests evaluate in controlled and comparable circumstances the speed at which the algorithms compress and decompress data.

Timing

In order to get comparable timings for the two codecs, we measure the performance within the context of a small test program I wrote; the same program is used for both codecs. This makes the results comparable and sheds light on the usefulness of the codecs in the context of transmitting Kinect data over a network.

Algorithm

In comparing the RleCodec and LZ4, both algorithms take advantage of working on the delta of the current input with the previous input. We use 3 subsequent depth frames and 3 subsequent color frames for a depth test and a color test. In a test the frames are processed following the sequence below:

  1. Compute the delta of the current frame with the previous frame.
  2. Compress the delta.
  3. Measure the time it takes to compress the delta.
  4. Decompress the delta.
  5. Measure the time it takes to decompress the delta.
  6. Update the previous frame with the delta.
  7. Check correctness, i.e. the equality of the current input with the updated previous input.

We run the sequence 1000 times and average the results. Averaging is useful for the processing times; the compression factor will be the same in each run. The frames we used were not filtered by the DiscreteMedianFilter.
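The timing itself is plain std::chrono work; a sketch of the kind of loop used (the actual test program may differ in details):

    #include <chrono>

    // Run one step (e.g. a compress or decompress call) 'runs' times and
    // return the average duration in milliseconds.
    template<typename Step>
    double AverageMilliseconds(Step step, int runs = 1000)
    {
        using clock = std::chrono::high_resolution_clock;
        const auto start = clock::now();
        for (int i = 0; i != runs; ++i)
            step(); // e.g. [&]{ codec.encode(delta); }
        const auto stop = clock::now();
        return std::chrono::duration<double, std::milli>(stop - start).count() / runs;
    }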

Let’s first establish that the performance of these libraries is of a very different order than the performance of the WMF codecs. Let’s also establish that both compression and decompression speed are much more than sufficient: as noted above, 50Mbyte/s would do.

For depth data, we see that the RleCodec delivers fastest compression. LZ4 delivers faster decompression, and obtains stronger compression.

The RleCodec was tested twice with the color data. In the first test we interpreted the color data as of type unsigned char. We used 3 channels with offset 4 to compress it. In the second test we interpreted the color data as unsigned int. We used 4 channels, with offset 4. We see that now the RleCodec runs about 4 times as fast as with char data. The compression strength is the same, however. So, for color data, the same picture arises as with depth data: times are of the same order, but LZ4 creates stronger compression.

The difference in naked compression ratios has limited relevance, however. We will see in the section on In Vivo testing that the effects of working with a Delta, and in particular of the DiscreteMedianFilter, dwarf these differences.

We note that the first depth frame yields lesser results both for time performance and compression ratio. The lesser (de)compression speed is due to the initialization of the PPL concurrency library. The lesser compression ratio is illustrative of the effect of processing a Delta: the first frame has no predecessor, hence there is no Delta and the full frame is compressed. The second frame does have a Delta, and the compression ratio improves by a factor of 2.4 – 2.5.

In Vivo Performance

In Vivo tests measure, in fact, the effect of the DiscreteMedianFilter on the data compression. In Vivo performance testing measures the required network bandwidth in various configurations:

  1. No compression, no filter (DiscreteMedianFilter).
  2. With compression, no filter.
  3. With compression, with filter, with noise.
  4. With compression, with filter, without noise.

We measure the use of the RleCodec and the LZ4 libraries. Measurements are made for a static scene. Measuring means here to read values from the Task Manager’s Performance tab (update speed = low).

Using a static scene means looking for rock-bottom results. Since activity in the scene (people moving around) can be expressed as a noise level, the resulting compression will always be somewhere between the noiseless level and the “no compression, no filter” level. The increase will be about linear, given the definition of noise as a percentage of changed pixels.

The measurements in the table below are in Megabits per second, whereas the table above shows measurements in Megabytes per second. So, in order to compare the numbers, if so required, the entries in the table below have to be divided by 8. Note that 460Mbit/s is 57.5Mbyte/s.

What we see is that (all percentages are relative to the original 460Mbit/s):

  • Compression of the delta reduces network bandwidth by 13% (RLE), or 33% (LZ4).
  • Application of the filter reduces it further by 53% (RLE), or 39% (LZ4).
  • Cancelling noise reduces it further by 24% (RLE), or 23% (LZ4).
  • We end up with a compression factor of about 10 (RLE), or 22 (LZ4).

:-).

What do we transmit at the no noise level? Just the jitter beyond the breadth of the DiscreteMedianFilter, a lot of zeroes, and some networking overhead.

As noted above, the differences between the core RleCodec and LZ4 are insignificant compared to the effects of the DiscreteMedianFilter and taking the Delta.

Conclusions

Using the RleCodec or the LZ4 library we achieve tremendous compression, a compression ratio of 10 or 22 respectively, at lightning speed: over 1600Mbyte/s. We achieve this not so much by the compression algorithms, but by removing undesirable effects (jitter, by the DiscreteMedianFilter) and redundancy (already sent data, by taking the Delta).

ToDo

There is always more to wish for, so what more do we want?

Navigation needs to be improved. At the moment it is somewhat jerky because of the reduction in depth information. If we periodically, say once a second, send a full (delta yes, filter no) frame, in order to have all information at the client’s end, we might remedy this effect.

A GPU implementation. Compute shader or C++ AMP based. But only in combination with other processing steps that really need the GPU.

Improve on RLE. RLE compresses only sequences of the same token. What would it take to store each literal only once, and insert a reference to the first occurrence upon re-encountering it? Or would that be reinventing LZ?


A Jitter Filter for the Kinect

This blog post introduces a filter for the jitter caused by the Kinect depth sensor. The filter works essentially by applying a dynamic threshold. Experience shows that a threshold works much better than averaging, which has the disadvantage of negatively influencing motion detection, and yields only moderate results. The presented DiscreteMedianFilter removes the jitter. A problem that remains to be solved is the manifestation of depth shadows. Performance of the filter is fine, and great in the absence of depth shadow countermeasures.

Introduction

Kinect depth images show considerable jitter, see e.g. the depth samples from the SDK. Jitter degrades image quality. But it also makes compression (Run Length Encoding) harder; compression for the Kinect Server System will be discussed in a separate blog post. For these reasons we want to reduce the jitter, if not eliminate it.

Kinect Depth Data

What are the characteristics of Kinect depth data?

Literature on Statistical Analysis of the Depth Sensor

Internet search delivers a number of papers reporting on thorough analysis of the depth sensor. In particular:

[1] A very extensive and accessible technical report by M.R. Andersen, T. Jensen, P. Lisouski, A.K. Mortensen, M.K. Hansen, T. Gregersen and P. Ahrendt: Kinect Depth Sensor Evaluation for Computer Vision Applications.

[2] An also well readable article by Kourosh Khoshelham and Sander Oude Elberink.

[3] A more technically oriented article by Jae-Han Park, Yong-Deuk Shin, Ji-Hun Bae and Moon-Hong Baeg.

[4] Of course, there is always Wikipedia.

These articles discuss the Kinect 360. I’ve not found any evidence that these results do not carry over to the Kinect for Windows, within the range (0.8m – 4m) of the Default mode.

Depth Data

We are interested in the depth properties of the 640×480 spatial image that the Kinect produces at 30 FPS in the Default range. From the SDK documentation we know that the Kinect provides depth measurements in millimeters. A depth value measures the distance between a coordinate in the spatial image and the corresponding coordinate in the parallel plane at the depth sensor, see the image below from the Kinect SDK Documentation.

Some characteristics:

1. Spatial resolution: at 0.8m the 640×480 (width × height) depth coordinates cover an approximately 87cm × 63cm plane. The resolution is inversely proportional to the square of the distance from the depth sensor. The sensor has an angular field of view of 57° horizontally and 43° vertically.

2. Depth resolution: the real world distance between 2 subsequent depth values the Kinect can deliver is about 2mm at 1m from the Kinect, about 2.5cm at 3m, and about 7cm at 5m. Resolution decreases quadratically as a function of the distance.

Jitter

The Kinect depth measurements are characterized by some uncertainty that is expressible as a random error. One can distinguish between errors in the x,y-plane on the one hand, and on the z-axis (depth values) on the other. It is the latter that is referred to as the depth jitter. The random error in the x,y-plane is much smaller than the depth jitter. I suspect it manifests itself as the color jitter in the KinectColorDepthServer through the mapping of color onto depth, but that still has to be sorted out. Nevertheless, the filter described here is also applied to the color data, after mapping onto depth.

The depth jitter has the following characteristics:

1. The error in depth measurements: the jitter ranges from a few millimeters at 0.5m up to 4cm at 5m, increasing quadratically with the distance from the camera.

2. The jitter is a walk over a small number of nearby discrete values. We have already seen this in a previous post on The Byte Kitchen Blog, where we found a variance over mainly 3 different values, and occasionally 4 different values. Of course, for very long measuring times, we may expect an increased variance.

3. The jitter is much larger at the boundaries of depth shadows. Visually, this is a pretty disturbing effect, but the explanation is simple. The Kinect emits an infrared beam for depth measurements which, of course, creates a shadow. The jitter at the edges of a depth shadow jumps from an object to its shadow on the background, which is usually much further away. We cannot remove this jitter without removing the difference between an object and its background, so for now, I’ve left it as is.

The miniature below is a link to a graph in [2] (page 1450, or 14 of 18) of the depth resolution (blue) and size of the theoretical depth measurement error (red).

 

A Kinect Produces a Limited Set of Discrete Depth Values

It is not the goal of the current project to correct the Kinect depth data, we just want to send it over an Ethernet network. What helps a lot is, and you could see this one coming:

The Kinect produces a limited set of depth values.

The Kinect for Windows produces 345 different depth values in the Default range, not counting the special values for unknown, and out of range measurements. The depth values for my Kinect for Windows are (divide by 8 to get the depth distance in mm):

This is the number of values for the Kinect for Windows. I also checked the number of unique values for my Kinect 360, and it has a larger but also stable set of unique depth values. The number of depth values is 781, due to the larger range.

The fact that a Kinect produces a limited and small set of depth values makes life a lot easier: we can use a lookup table where we would otherwise have to use function approximation. The availability of a lookup table is also good news for the time performance of the filter.

A question might be: can you use the depth table of an arbitrary Kinect to work with any other Kinect? I assume that each Kinect has a slightly different table, and this assumption is based on the fact that my Kinect for Windows has slightly different values than my Kinect 360, for the same sub range. However, if you use an upper bound search (this filter uses std::upper_bound), you will find the first value equal to or larger than a value from an arbitrary Kinect, which will usually be a working approximation (better than having the jitter). Of course, an adaptive table would be better, and it is on the ToDo list.
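A sketch of such an upper bound search (the function name is mine; the table is assumed sorted and non-empty):

    #include <algorithm>
    #include <vector>

    // Snap a raw depth value from an arbitrary Kinect onto the sorted table of
    // known discrete values: the first table value equal to or larger than the
    // measurement.
    unsigned short SnapToTable(const std::vector<unsigned short>& sortedTable,
                               unsigned short raw)
    {
        // upper_bound yields the first element strictly greater than raw.
        auto it = std::upper_bound(sortedTable.begin(), sortedTable.end(), raw);
        if (it != sortedTable.begin() && *(it - 1) == raw)
            return raw;                // exact match: raw is already a known value
        if (it == sortedTable.end())
            return sortedTable.back(); // beyond the table: clamp to the largest value
        return *it;                    // first known value larger than raw
    }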

Design

I’ve experimented with several approaches: a sliding window of temporal averages, and a Bilateral Filter. But these were unsatisfactory:

– Jitter reduction is much weaker than with a threshold.

– Movement detection is as much reduced as the jitter, which is an undesirable effect.

A simple threshold, of about the breadth of the error function, proved the best solution. As noted above, the jitter typically is limited to a few values above and below the ‘real’ value. We could call the ‘real’ value the median of the jitter range, and describe this jitter range not in terms of the depth values themselves but of the enumeration of the sorted list of discrete depth values (see table). We get a Discrete Median Filter if we map all discrete values within the range onto the median of that range (minding the asymmetry of the sub ranges at the boundaries of our sorted list).

The DiscreteMedianFilter Removes Jitter

In practice we see no jitter anymore when the filter is applied: The DiscreteMedianFilter ends the jitter (period). However, the filter is not applicable to (edges of) depth shadows.

Noise

It turned out that this filter is actually too good. If the Kinect registers a moving object, we get a moving depth shadow. The filter cannot deal with illegal depth values, so we are stuck with a depth shadow smear.

A modest level of noise solves this problem. In each frame, 10% of the pixels the filter skips are selected at random and updated. This works fine, but it should be regarded as a temporary solution: the real problem is, of course, the depth shadow, and that should be taken up.

Implementation

The Discrete Median Filter was implemented in C++ as a template class, with a traits template class (a struct, actually) that has one specialization for the depth value type and one for the color value type, to set the parameters that are typical for each data type, and with a policy template which holds the variant of the algorithm that is specific to depth and color data respectively. For evaluation purposes, I also implemented traits and policy classes for unsigned int.

So, the DiscreteMedianFilter class is a template that takes a value type argument. Its interface consists of a parameter free constructor and the Filter method that takes a pointer to an input array and a pointer to a state. The method changes the state where the input deviates from the state by more than a specified radius.
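A sketch of the core idea for depth data (class and member names are illustrative; the real class has a parameter free constructor and gets its parameters via traits and policies, and also supports channels, an offset, and noise — here the table is passed in to keep the sketch self-contained):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    class DiscreteMedianDepthFilter
    {
    public:
        explicit DiscreteMedianDepthFilter(std::vector<unsigned short> sortedDepthValues)
            : m_table(std::move(sortedDepthValues)) {}

        // Change the state only where the input deviates from it by more than
        // Radius steps through the sorted list of discrete depth values.
        void Filter(const unsigned short* input, unsigned short* state,
                    std::size_t count) const
        {
            for (std::size_t i = 0; i != count; ++i)
            {
                std::ptrdiff_t d = IndexOf(input[i]) - IndexOf(state[i]);
                if (d < 0) d = -d;
                if (d > Radius)
                    state[i] = input[i]; // a real change: update the state
                // else: jitter within the radius, keep the previous value
            }
        }

    private:
        static const std::ptrdiff_t Radius = 1; // breadth of the jitter range (illustrative)

        std::ptrdiff_t IndexOf(unsigned short v) const
        {
            // Position of the first table value not smaller than v.
            return std::lower_bound(m_table.begin(), m_table.end(), v) - m_table.begin();
        }

        std::vector<unsigned short> m_table;
    };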

Channels and Offset

Color data is made up of RGBA data channels: e.g. R is a channel. Working with channels is inspired by data compression. More on this subject in the blog post on data compression.

The advantages of working with channels for the DiscreteMedianFilter are:

1. We can skip the A channel, since it is all zeroes.

2. Per channel, changes of color due to sensor errors are usually expressible as small changes, which is not the case for the color as a whole, so we get better filtering results.

3. There are fewer values: 3 × 256 vs 2^32, so the probability of equal values is higher.

Note that the more values fall within the range of values that need no change, the better the time performance we get.

The individual channels are not processed in parallel (unlike in the compression library). We will show below that parallelism should not be required (but actually is, as long as we have noise).

Code

The code is complex at points, so it seems to me that printing the code here would raise more questions than it would answer. Interested people may download the code from The Byte Kitchen Open Sources at Codeplex. If you have a question about the code, please post a comment.

Performance

How much space and time do we need for filtering?

A small test program was built to run the filter on a number of generated arrays simulating subsequent depth and color frames. The program size never gets over 25.4 megabytes. The processing speed (without noise) is:

– About 0.77ms for an array of 1228800 bytes (640×480=307200 ARGB color values); 1530Mbyte/s

– About 0.36ms for an array of 640×480=307200 unsigned short depth values; 1615Mbyte/s.

So, this is all very fast, on a relatively small footprint: in a little more than 1ms we have filtered both the depth and the color data of a Kinect frame.

The simple test program and its synthetic data are not suitable for use with noise. So, we measured the time the KinectColorDepthServer needs for calls to the DiscreteMedianFilter. In this case the noise is set at 10% of the values that would otherwise be skipped. The times below are averaged over 25,000 calls:

– Depth: 6.7ms per call.

– Color: 19.3ms per call.

So, we may conclude that the noise is really a performance killer. Another reason to tackle the depth shadow issue. Nevertheless, we are still within the 33ms time window that is available for each frame at 30 FPS.

Does the DiscreteMedianFilter have any effect on the required network capacity? No: with compression off, a capacity of 460Mbit/s is required for a completely static scene in both cases. I do have the impression that the filter has a smoothing effect on instantaneous (as opposed to average) required network capacity.

To do

There is always more to wish for, so what more do we want?

– Resolve the need for the noise factor, i.e. do something about the depth shadows. This will increase performance greatly.

– Because of filtering, navigation through the scene is jerky. There are far fewer depth values in the image, so movement is not so smooth. This can be helped by sending a full depth image every now and then. After all, the filter only replaces values that differ more than a threshold; the rest of the image is retained. Refreshing the overall picture now and then retains the richness of the depth image, helps to make noise superfluous, and smooths navigation.

– Make the table of depth values adaptive. If a value is not present, we replace the nearest value. Of course, we would then also like to save the new table to file, and load it at subsequent program starts.

Kinect Client Server System V0.2

The Kinect Client Server System V0.2 adds to V0.1 the possibility to watch Kinect Color and Depth data over a network, and to navigate the rendered 3D scene.

To support data transfer over TCP, the Kinect Client Server System (KCS system) contains a custom-built implementation of Run Length Encoding compression.

To both maximize compression and improve image quality the KCS system uses a jitter filter.

Introduction

Version 0.1 of the KCS system allowed the display of Kinect data in a Windows Store app. This is a restricted scenario: for security reasons, a Windows Store app cannot make a network connection to the same PC it is running on, except in software development scenarios. Version 0.2 overcomes this restriction by:

1. Support for viewing Kinect data from another PC.

2. Providing the 3D-TV viewer from the Windows Store (free of charge).

Of course, V0.2 is an Open Source project; the code and binaries can be downloaded from The Byte Kitchen’s Open Source project at CodePlex.

Usage

The easiest way to start using the KCS system v0.2 is to download 3D-TV from the Windows Store, navigate to the Help-About screen (via the ‘Settings’ popup), click on the link to the online manual and follow the stepwise instructions.

The general usage schema is depicted below.

The PC called Kinect server runs the KinectColorDepthServer application. In order for the client PC running 3D-TV to find the server PC on the network, the server PC must be connected to gigabit Ethernet (a cabled computer network, say) with a network adapter carrying the IP address 192.168.0.20. The IP address of the Ethernet adapter of the PC running 3D-TV must satisfy 192.168.0.xxx, where 0 <= xxx <= 255. It is wise to expect that the required data capacity is well over 100Megabit/second.

The online manual also provides a complete description of the keyboard keys to be used for navigating around a Kinect scene.

More information

In a few more blog posts, I will discuss more technical details of the new features in the KCS system V0.2, specifically the jitter filter that is to stabilize the Kinect imagery and help in data compression, and the data compression itself.

CX Reconsidered [3]: Components on a Thread

This is a series of blog posts that explore programming tactics which ascertain ‘a thin layer of CX’, as advertised and advised by Microsoft, and that thus maximize the use of ISO C++11(+).

This installment is about different approaches to having a component do its own thread management, and starts off by looking at various ways an application can be started up – assuming that different startup mechanics will lead to different ownership relations between an application and a component, hence different approaches to thread management.

This installment also considers some idioms involving C++11 concurrency, as defined in <thread> and <mutex>.

Introduction

It has been argued earlier in this series that threads are a vehicle by which CX dependencies, hence CX code constructs, propagate through your code. To stop this propagation (and preserve portability) we would like the Core component (the ISO C++ only area) to manage its own threads, and to run its operations on the threads it manages. This implies, so it seems, that for starters a component should not be instantiated on a CX thread.

We will consider several alternatives for starting a Xaml + CX application and find that one alternative stands out. We then discuss two models for thread management, and select the best fit. Finally we will walk through an example program.

Alternative Main() Functions

Each CX application has, just like each C# application, a main() function. In CX applications, the main() function is located in <Project Directory>\Generated Files\App.g.hpp, and it is guarded by “#ifndef DISABLE_XAML_GENERATED_MAIN”. So, if you as a developer define DISABLE_XAML_GENERATED_MAIN, and provide a custom main() function, your main() function will be called.

Steps to move to a custom main() function:

1. Add “#define DISABLE_XAML_GENERATED_MAIN” to the pch.h file.

2. Copy the main function from the App.g.hpp, to have a starting point.

3. Add a file, e.g. Main.cpp, containing a main() function along the lines of the sketch following this list.

4. Then edit the main() function to suit your needs.
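A custom main() will look roughly like the sketch below, modeled on the generated one in App.g.hpp (reproduced from memory; check the generated file in your own project, since the namespace and details will differ):

    // Main.cpp - a custom main() for a Xaml + CX app. Assumes pch.h defines
    // DISABLE_XAML_GENERATED_MAIN (step 1) so the generated main is skipped.
    #include "pch.h"
    #include "App.xaml.h"

    [Platform::MTAThread]
    int main(Platform::Array<Platform::String^>^ args)
    {
        Windows::UI::Xaml::Application::Start(
            ref new Windows::UI::Xaml::ApplicationInitializationCallback(
                [](Windows::UI::Xaml::ApplicationInitializationCallbackParams^ params)
                {
                    // This is the point to edit: e.g. initialize a component
                    // before (or alongside) instantiating the App class.
                    auto app = ref new MyApp::App(); // MyApp: your project's namespace
                }));
        return 0;
    }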

The standard main() function starts the Xaml application and instantiates an App class.

It is a conceivable scenario that, alongside the App class, a component is initialized, and a handle is passed to the App. The other way round is conceivable as well: a handle to the App is passed to the Component.

Enumerating the alternatives systematically results in the following list, assuming that the component has an interface called ComponentClass:

1. Xaml owns all. The standard main() function instantiates the App class. In turn the App class instantiates the ComponentClass. Recall that App.g.hpp is generated; we cannot meaningfully edit it to instantiate the ComponentClass.

2. Xaml owns the App. The standard main() function instantiates the App class, and calls a factory method in the component which instantiates the ComponentClass on a new thread.

3. App owns all. A custom main() function instantiates the Xaml UI, the App class and the ComponentClass, preferably on two threads.

4. App owns Xaml. A custom main() function instantiates the Xaml UI and the App class, and calls a factory method in the component which instantiates the ComponentClass on a new thread.

To have ownership means (here) to control the life cycle.

So, to summarize: using the standard main() function implies the Xaml UI owns the App class. Using a custom main() function implies the application owns the Xaml UI. In both cases there are two alternatives concerning ownership of the component: the owner also owns the component, or the component owns itself.

Central ownership reduces the number of threads involved, which simplifies data exchange. Decentralized ownership improves process stability, portability, and it keeps the CX layer thin.

We are primarily interested in alternatives in which the component does its own thread management, i.e. alternatives 2 and 4. We conclude that a custom main() function does not add a relevant shift in ownership for our purposes and choose alternative 2, because it is simpler than alternative 4.

Active Object or Asynchronous Programming?

How do you implement a component that runs on its own thread(s)? Well, I did some research and contemplated a bit on what I found, and now think there are two major approaches: the Active Object Pattern (Douglas C. Schmidt et al.), and some form of an Asynchronous Programming Model.

The Active Object Pattern

Central in this pattern is a scheduler that dispatches methods corresponding to request messages, placed on a queue by external clients. Method execution can be asynchronous. (The image is a hyperlink to its source.)

The following example implementation of the scheduler is proposed in the article (image is hyperlink):

As the comment in the code above mentions: the scheduler runs on its own thread.

What to think of this? To me it seems the Active Object pattern is a very thorough but heavyweight solution. I think it is less suitable for the current challenge. Central in the pattern is a message processing loop that continuously consumes processor cycles. This is a solution for an environment that lacks just this kind of infrastructural facility. Windows, iOS, OSX, Linux, or Android GUI platforms are not such environments; they (still) have a message loop that continuously consumes processor cycles. To me it seems better to keep it at one such glutton.

Recently I also stumbled upon a criticism by Bjarne Stroustrup. In his keynote at Going Native 2013 he classified a central scheduler as a performance bottleneck; he sketched a scenario of a significant number of potent parallel processors waiting for this one scheduler to provide some work from a well stocked queue.

So, we would like a solution that is more of a flyweight, and inherently concurrent as well.

A Singleton of Asynchronous Methods

A Singleton class is a pretty perfect implementation of the concept of Component. In C++ it gets instantiated at first use, and gets destructed at program termination. If so required, a reset method can be implemented that returns the state of the object to startup values. In a sense you could say that a Singleton holds its own destiny, just like an Active Component eating away cycles. The Singleton does this by holding a private static handle to its single object. No other entity (except friends, if any) can get to it.

The advantage of a Singleton over an Active Component is in particular (in this discussion) that it does not consume any processor cycles when not executing any tasks. Moreover, it can be made concurrent to any required extent. Here we will propose an asynchronous programming (callback variant) approach. So, we cannot say that the component runs on its own thread; there is no ‘engine’ explicitly running in the component. But we can say that the component does its own thread management if it is capable of running its operations on threads it controls.

Some Details

Operations in the context of GUI driven programs generally are functions, properties (get, set methods) and callbacks (event handlers).

In implementing IoC with DI, we will implement an asynchronous method call as an interface method that delegates the work to be done to another thread, provides a callback to return results or error information (the caller has to provide the callback), and then immediately returns.

Events can be raised either on the component-managed thread that the event-raising operation is running on, or on a dedicated thread.

The get and set methods implementing properties should, I think, be simple enough not to warrant asynchronism. These simple methods, primarily just getting and setting private fields, should however synchronize access to the fields they interface to.

A component as described here may require a substantial number of threads in little time. In order to be able to provide these threads timely, the component may maintain a thread pool, or rather a thread queue, where it keeps a stock of ready-to-use threads.

The following example might seem an overkill of asynchronicity. In that case it is important to keep three statements in mind:

1. This is the era of asynchronous programming. So, programming constructs that provide asynchronicity will be visible.

2. The example is densely filled with asynchronous constructs due to its instructive nature.

3. The exemplified class is an interface class. It is responsible for thread management, hence a focal point of asynchronous constructs. It thus frees other classes, deeper in the component, from such constructs.

Example Application

To show the above described principles at work, we will present a demo application. This demo application centers around a Joke-of-the-Day component. You can:

– Request a random joke from the seemingly endless collection of jokes available to the component (property get() function).

– Add a joke to the collection (property set() function). When you add a joke to the collection, interested subscribers are immediately sent your joke (raising an event).

– Request a stand-up session. During a client specified period, jokes are presented at small random intervals (string callback). The time remaining for the session is reported at fixed intervals (primitive type callback).

The component is an ISO C++ static lib, suitable for use in a CX component or app.

The complete example application is available from here. In this section we explore a few highlights dealing with lifecycle management and thread management.

Singleton Pattern

As developed above, the component manages its own lifecycle, but the component does not run on a thread. Below is a sketch of the singleton class; just the parts relating to the singleton mechanics are shown. Since the ‘me’ pointer is private, the class’ object dies with the program.

The class implementation is as simple as can be.
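A sketch of the mechanics (the JokesComponent name and the use of std::call_once are illustrative; the actual code in the demo application may differ in details):

    #include <memory>
    #include <mutex>

    class JokesComponent
    {
    public:
        // Instantiated at first use; the unique_ptr destructs the object at
        // program termination.
        static JokesComponent& Instance()
        {
            std::call_once(m_onceFlag, [] { m_me.reset(new JokesComponent()); });
            return *m_me;
        }

        // Optional: return the state of the object to startup values.
        void Reset() {}

    private:
        JokesComponent() {}                          // no public construction
        JokesComponent(const JokesComponent&);       // not copyable (C++11: = delete)
        JokesComponent& operator=(const JokesComponent&);

        // The private static handle to the single object.
        static std::unique_ptr<JokesComponent> m_me;
        static std::once_flag m_onceFlag;
    };

    // In the .cpp file:
    std::unique_ptr<JokesComponent> JokesComponent::m_me;
    std::once_flag JokesComponent::m_onceFlag;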

Asynchronous Method with Callback (IoC with DI)

Consider the following method, sketched in code after the description below:

The method receives three arguments:

– Duration of the session.

– Callback for returning jokes to caller.

– Callback to report progress to caller.

First the method defines a lambda expression f. It is this lambda that does the work. Since every call of this method creates a new lambda object, we don’t need to synchronize access to this code.

The lambda:

– puts the current thread to sleep for a little while.

– Returns a joke (from the seemingly endless collection) using the inserted callback.

– Reports progress by setting a property. Access to this property has to be synchronized, as will be shown below.

– Returns the progress using the inserted callback.

Next the method creates a new thread for the lambda to run on.

Finally the method detaches the thread and returns to the caller. Now the thread runs in the background and is cleaned up by the system after termination. The thread will call on the caller with results, when it has them, using callbacks inserted by the caller. This is how IoC by DI is implemented here.
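Putting these steps together, a minimal sketch of the method (names, parameter types and the fixed sleep interval are illustrative; the real code sleeps for small random intervals and also reports progress via a synchronized property):

    #include <chrono>
    #include <functional>
    #include <string>
    #include <thread>

    // Assumed helper: draws a joke from the endless collection (illustrative).
    std::string NextJoke() { return "A joke from the endless collection."; }

    void StartStandUpSession(int seconds,
                             std::function<void(const std::string&)> jokeCallback,
                             std::function<void(int)> progressCallback)
    {
        // The lambda does the work. Every call creates a new lambda object,
        // so access to this code needs no synchronization.
        auto f = [=]()
        {
            for (int remaining = seconds; remaining > 0; --remaining)
            {
                std::this_thread::sleep_for(std::chrono::seconds(1));
                jokeCallback(NextJoke());        // return a joke via the callback
                progressCallback(remaining - 1); // report the time remaining
            }
        };

        // Run the lambda on a new thread, detach, and return immediately.
        // Results come back on the caller-provided callbacks: IoC with DI.
        std::thread t(f);
        t.detach();
    }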

No provisions for error handling have been added here, but the lambda could contain a try-catch construct. In addition, we could define a callback output parameter of type std::exception that the callback could throw if it is not void at return. The lambda can then just send a caught exception to the caller using the callback.

Property with Synchronized Access to Field

The STL contains a *very* elegant mechanism to synchronize access.
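Consider a progress property along these lines (a sketch; the class and method names are illustrative, m_progress is the field from the original code):

    #include <mutex>

    class Session
    {
    public:
        int GetProgress()
        {
            std::lock_guard<std::mutex> lock(m_progressMutex); // extra line 2
            return m_progress;
        }

        void SetProgress(int progress)
        {
            std::lock_guard<std::mutex> lock(m_progressMutex); // extra line 3
            m_progress = progress;
        }

    private:
        std::mutex m_progressMutex;                            // extra line 1
        int m_progress = 0;
    };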

We need only three extra lines of code to completely synchronize access to the m_progress field: one line to declare the mutex, and two lines to lock the field, once when setting it and once when reading it.

The great thing about a lock_guard is that it releases its mutex when it goes out of scope. Most elegant!

Events

The same techniques return in our event implementation.

We have a vector of type newJokeEventHandler, which is a typedef of a std::function object that wraps a pointer to a void function taking a string argument. Access to the vector is synchronized with a mutex. When an event handler is added, it is added to the end of the vector. When one is removed, we take care not to disrupt the order in the vector, because the token we returned to the caller is an index into the vector by which we remove it.
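A sketch of this mechanism (names are illustrative; the demo code differs in details):

    #include <cstddef>
    #include <functional>
    #include <mutex>
    #include <string>
    #include <vector>

    typedef std::function<void(const std::string&)> newJokeEventHandler;

    class NewJokeEvent
    {
    public:
        // Add a handler; the returned token is its index in the vector.
        std::size_t Add(newJokeEventHandler handler)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_handlers.push_back(handler);
            return m_handlers.size() - 1;
        }

        // Remove by token: clear the slot instead of erasing it, so the
        // tokens (indices) of the other handlers are not disrupted.
        void Remove(std::size_t token)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            if (token < m_handlers.size())
                m_handlers[token] = nullptr;
        }

        // Raise the event: call every registered handler with the new joke.
        void Raise(const std::string& joke)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            for (const auto& handler : m_handlers)
                if (handler)
                    handler(joke);
        }

    private:
        std::mutex m_mutex;
        std::vector<newJokeEventHandler> m_handlers;
    };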

An event is raised, for example, in the setter of the ‘JokeOfTheDay’ property.

The m_raiseNewJokeEvent returns immediately since it is asynchronous, thus releasing the joke for reading and further updates.

In Conclusion

We have seen that a C++ component that holds its own lifetime and manages its threads can easily be developed. We do not need CX for this, and the resulting code is portable. The resulting component is indeed flyweight, because of the elegant constructs provided by the STL for managing threads, and also because the component doesn’t consume any processor cycles when not processing anything. Indeed, applying the sizeof operator to a thread, mutex, or lock_guard yields 4 (times the size of a char) in each case, i.e. they all are minimum size handles to system resources.

CX Reconsidered [2]: MVVM to the Rescue

Tactical implementation of the MVVM pattern will stop CX constructs bleeding through all of your code. In the first installment of this series, I argued that CX data and threading structures tend to proliferate throughout your program, and that this is both contrary to what Microsoft advises & advertises and undesirable because it drives out the far better developed C++ constructs. What we need are development tactics that keep the CX layer as thin as possible. This blog post presents first steps in the development of such tactics, aka software development patterns or practices.

The goal is that:

  1. CX is used within specific layers of the design of a program.
  2. Each CX layer has a very limited set of responsibilities.
  3. CX can be put to good use for the assigned responsibilities.

This is the second blog post in a series of n about my experiences with CX, and how I intend to use it in working with C++ and Xaml in the context of Windows 8. The table of contents for the series can be found in the first article in this series.

Ok, at the end of part [1] I wrote that next up would be a review of a number of (heated) discussions around the introduction of CX. But I changed my mind. It seems to me that although there is definitely value in a well argued position, there is more value in a working solution. So, let’s take a look at a way to put CX at its proper place.

Advised & Advertised CX Usage Policy

The usage policy for CX is, according to MS employees and documentation, to limit the use of CX to a thin layer at the ABI boundary. See e.g. the first response of Herb Sutter in the discussion after this Build 2011 talk.

[ABI: At the lowest level, the Windows Runtime consists of an application binary interface (ABI). The ABI is a binary contract that makes Windows Runtime APIs accessible to multiple programming languages such as JavaScript, the .NET languages, and Microsoft Visual C++. (from the Hilo documentation)]

So, when we are interacting with the environment of a Windows Store application or component, we have to deal with the ABI.

However, there is nothing inherent to CX to enforce the advised & advertised usage policy. In part [1] of this series, it has been argued that it is very hard to ‘escape’ from CX and to restrict CX to a thin ABI interface layer.

The main reason it is hard to escape CX is the approach to developing a native code program that is natural to Visual Studio. This approach is a copy of the approach to developing .Net applications: you choose a project template, which assigns a central position to the user interface, then you add functionality to the program, extending, so to say, the capabilities of the user interface. For Windows 8, MS has introduced this approach also for native code, and they call it working with C++/CX. Point is, it is not C++ at all. Note also that CX is far, far behind .Net in its development.

Nonetheless, if you start development with a CX Visual Studio project, it is CX that is used to interact with WinRT, and it thus defines the interface to the environment of the program. Because CX defines the periphery of an application, a tendency arises to define the main data structures in CX as well as considering its execution thread, the UI thread, as the main flow of control. We tend to consider the UI thread as the main flow of control because today’s apps are typically architected to react to events in the application’s environment. A consequence of this design is that CX language constructs and data structures tend to proliferate throughout a program, to bleed through all of your code. This proliferation generates a number of problems:

  1. Non-portable code. The code cannot be compiled with a non-MS compiler, hence is not fit for use on e.g. iOS or Android platforms.
  2. It drives out the far more developed and richer C++11 language constructs, idioms and data structures.
  3. It drives out the far better .Net developer experience, if we consider CX to be positioned as a native alternative to C# .Net.

So, since CX does not itself enforce the Advised and Advertised Usage Policy of confining CX to a thin ABI interface layer, it is the CX user that carries the burden.

As a CX user, you will need a software pattern that restricts CX to what it is good at (yes, it does have its strengths), and to locations where it is useful. Such restrictions can, of course, be realized by disciplined application of conventions, but here we strive to have structural constructs that support the desired restriction: structural constructs that put a definite end to CX proliferation.

In restricting the use of CX we take on the task of not defining the main data structures in CX, and of not running the principal flow of control in CX. We are constructing a generally applicable, patterned approach to developing programs involving one thin ABI interface layer of CX.

Overview of the Solution

In this blog post we propose to implement MVVM as a double layered structure, as depicted in Diagram 1. Yes, layers can be expressed as rectangles (with rounded corners, even) as well.

Diagram 1: Double layered design

As you can see, the core is considered the most important part of an application :-).

Components

In terms of physical components, or types of Visual Studio projects, or types of MS technologies, the proposal is to implement the core in C++, as a static library (or several static libraries); to implement the Interface as a WinRT Runtime Component written in CX; and to define the User Interface in Xaml, with ‘code behind’ and other environmental interactions preferably in C# or, if the situation necessitates the use of native code, in CX.

MVVM

The Model – View – ViewModel pattern will be used to stop the bleeding of CX constructs. Diagram 2 shows an image from the PRISM documentation that provides a very clear idea of the MVVM pattern.

Diagram 2: PRISM interpretation of MVVM

In this article we will use a slight variation of MVVM: We consider the View to cover all of the environmental interactions, not just the GUI.

The MVVM pattern is tactically implemented as follows:

  • The View is realized in the Peripheral layer.
  • The ViewModel is realized in the Core layer.
  • The Model is also realized in the Core layer.

The Interface in Diagram 1 doesn’t have a specific role in the MVVM pattern, it has an implementational role.

Should you like to review the MVVM pattern, you might like to take a look at PRISM or the MVVM Light Toolkit (the historical roots of MVVM are really interesting as well).

The Peripheral Layer

Conforming to MVVM, we keep the Peripheral Layer as thin as possible. There are many different types of environmental interactions, which for now will be conveniently categorized as “The Xaml UI”, and “Other types of Environmental Interactions”.

The Xaml UI

There is always the discussion of how much code to allow in the code behind of an MVVM implementation. Since we really want layers that could contain CX to be thin, we decide two things:

  1. We use as little code in the user interface as possible. We limit the View to presenting data to the user, sending Commands and data to the ViewModel, and responding to events (callbacks) coming from the ViewModel. On the other hand, we allow code that defines interactions between user interface elements only. An example of the latter is opening a file picker when a user has clicked a button.
  2. We make the boundary between the Views on the one hand, and the ViewModels and Models on the other hand an ABI boundary.

Why an ABI boundary between View and ViewModel?

Well:

  1. For data binding and commanding. Any object exposed by a Windows Runtime Component across the ABI to a C# Xaml UI can be a source for data binding (this holds for C#, but not for CX).
  2. As a containment barrier in case a Xaml + CX GUI is used.

CX is Not a Good Choice For Xaml UI Code Behind

But that may change over time, of course, so let’s pin it down to “in August 2013”. So what’s wrong with the use of CX in the code behind of a Xaml UI?

  1. Data binding support is rather crude. Data binding in CX requires data binding source classes either to be decorated with the Bindable attribute or implement ICustomPropertyProvider, and have the bindable properties registered as ICustomProperties (see Nish Sivakumar’s implementation). Either requirement makes it extremely impractical (I would like to have written ‘impossible’ here) to data bind to properties exposed by a Windows Runtime Component. So, note that by requiring an ABI barrier between the UI and the ViewModel, we virtually ruled out CX as a possible language for Xaml code behind.
  2. MVVM support is unstable. I have defined several (non trivial) Xaml GUIs with CX as code behind platform, and seen the Xaml designer crash when a ViewModel locator class was inserted as a global resource to provide the DataContext and data templates were also provided to it as resources. On occasional beautifully sunny days the designer would provide an error message saying it could not instantiate some resource.
  3. Asynchronism: the PPL tasks library seems to have a special version for Windows Store applications, and it is rather hard to handle. It also frequently seems to operate not according to its documentation.

An argument in favor of using native code is performance. But we intend to keep the Views layer a thin layer, with an absolute minimum of functionality, so the performance of this layer will not easily affect general program performance. This is both because it is a minimal layer, and because it is the UI, i.e. it is about sending data about user actions to the core.

So, the performance argument has relatively little weight, and I think we are better off using C# .Net in the Views layer. Just because it supports development so much better. Think e.g. of the support for MVVM itself; there are several, and leading, MVVM frameworks to support you using the pattern for applications of arbitrary complexity.

When using C# in the code behind, one thing we do have to pay attention to, though, is marshalling data across the ABI. We want data that crosses the ABI to be copied only if unavoidable, or when we would like to have a copy instead of the original. In general we want to have pointers (references) copied across the ABI. As we will see (elsewhere) this requires the use of write-only data structures, even if we only want to read the data with which the write-only data structure is initialized.

A possibly less urgent consideration is that the combination of a Xaml + C# GUI and native Windows Runtime Components is also a way to go on the Windows Phone platform.

Other Types of Environmental Interaction

The above section discusses the case for the Xaml UI – the View. How about the other types of environmental interaction mentioned, like database access, networking, file access, etc. Will you do that in .Net as well?

As a first go, yes. The peripheral layer should be minimal, so in the case of e.g. incoming network data you would like to stream incoming bytes as directly as possible into a buffer controlled by the core, as an unstructured stream of bytes. I think that we can set this up so that C# is used to control the work, but the system (written in native code) is used to do the work, hence performance will not be an issue.

If performance does turn out to be an issue (after measurements and analysis), I would use a native solution. Think of the C++ framework Casablanca, or even a custom solution in CX (indeed!).

The Core Layer

This is where we want to write static libraries of ISO C++11(+) only. Why?

C++11

Personally I happen to like C++ (and the STL), and version 11 more than earlier versions. Apart from that, maximum performance against minimal footprint gets you the most out of available hardware, which enriches user experience and hopefully also reduces (environmentally relevant) power consumption.

ISO: Portable Code

In the second quarter of 2013, some 44.4 million tablets were sold running either iOS, Android or Windows (8), of which 1.8 million are running Windows 8. In the same period, 227.3 million phones were sold running either Android, iOS or Windows Phone (8), of which 8.7 million are running Windows. So, we want to port our precious code to Android and iOS, thus reaching a market of say 271 million devices sold in the previous quarter alone; that’s over a billion in a year :-). And then there is also the PC market, of course, of about 500 million PCs running Windows, and coming to Windows 8 sooner or later.

Library

Putting code in a separate library allows you, among other things, to specify the compile switches you need for a specific piece of source code. Using a library will allow us to specify that the compiler must not compile CX: we will not set the /ZW switch (or rather, we will set it to /ZW:nostdlib). So, CX constructs cannot bleed into such a library.

Static Library vs. Dynamic Load Library

Static libraries link at compile time, not at run time, hence have a relative performance advantage. Also if you export activatable classes (COM Components such as CX classes) from a static library, they cannot be activated. From a dynamic library, they can, see here. So, CX classes cannot be run from a static library.

Structured Data

We will make sure all main data structures are part of the Core Library. The use of system facilities, such as data transport, will be defined inside the core by C++ constructs, such as ‘pointer to stream’, that are used by the Interface layer to import and export the required data – as streams of primitive types. So, no CX owned main data structures.

Inversion of Control and Dependency Injection

We will expose any functionality only as Inversion of Control (IoC), also known as ‘The Hollywood Principle’: Don’t call us, we’ll call you, either by Dependency Injection (DI) or a Locator Service, see e.g. the articles by Martin Fowler here and here.

If the Core runs on its own thread(s), it is not susceptible to threading issues created by interactions with the Peripheral or Interface Layers (although it might have its own threading issues). We are also in a position to use STL threading; the bleeding of Microsoft threading technology into code we wish to be portable can be halted. So, no CX owned threads in the Core.

The Core and the UI: Ownership issues

We would like the Core library to be as independent as possible. The rationale behind this tactic is that independence from CX precludes having to incorporate CX constructs, either with respect to data or with respect to control. Another advantage may be that the Core’s lifecycle is not controlled from the UI thread, hence no freezing, throttling or killing. Of course, there is also no freezing of the UI.

The core library is already really independent by incorporating the program’s main data structures, by managing its own threads, and by utilizing the IoC pattern. Nevertheless, we can take independence a step further by looking at the ownership of the Core library. Who is the owner, that is: who controls its lifecycle?

The system starts up a Xaml application by calling the main method defined in App.g.hpp (CX) or App.g.i.cs (C#), which then starts up the UI. Usually you then instantiate further classes from the principal UI classes, like the App or MainPage class.

Alternatively, you could define your own main method. The generated Xaml main method is guarded by the DISABLE_XAML_GENERATED_MAIN symbol: if that symbol has been defined, the generated main function will not be compiled (surprise!). Your own main method could instantiate the Core library and provide the UI with a handle, while holding a reference itself in order to control the lifecycle of the Core. The Core and the UI are now completely independent. An example of a system with an alternative main function is the demo application in the WinRT-Wrapper library by Tomas Pecholt. See here (the comment by Tomas) for an introduction to the WinRT-Wrapper.
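A hedged sketch of this tactic (the Core class and the internal SetCore method are hypothetical): define DISABLE_XAML_GENERATED_MAIN in the project settings, and supply a main that mirrors the generated one, but creates the Core first.

    #include "App.xaml.h"   // the App ref class (namespace omitted here)
    #include "Core.h"       // hypothetical core library header
    #include <memory>

    // Compiled with /ZW, with DISABLE_XAML_GENERATED_MAIN defined project-wide.
    int __cdecl main(Platform::Array<Platform::String^>^ /*args*/)
    {
        auto core = std::make_shared<Core>();   // main owns the Core, not the UI
        Windows::UI::Xaml::Application::Start(
            ref new Windows::UI::Xaml::ApplicationInitializationCallback(
                [core](Windows::UI::Xaml::ApplicationInitializationCallbackParams^)
                {
                    auto app = ref new App();
                    // SetCore must be an internal (not public) method: std
                    // types may not appear on the public ABI of a ref class.
                    app->SetCore(core);
                }));
        return 0;   // main still holds its reference: the Core outlives the UI
    }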

A less invasive tactic (which I like better) may be to provide the library with a factory that creates the Core, holds ownership, and provides the UI with a ref-counted handle. So, there is no ownership of the Core by the UI or the Interface.
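A minimal sketch of such a factory, reusing the hypothetical Core class from the sketches above:

    #include <memory>

    class CoreFactory
    {
    public:
        // Creates the Core on first use and keeps the owning reference; the
        // UI and the Interface layer only ever get a ref counted handle.
        static std::shared_ptr<Core> Instance()
        {
            static std::shared_ptr<Core> core = std::make_shared<Core>();
            return core;
        }
    };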

First the Core

Where C# .Net applications start development at the UI, I think development of C++ and Xaml applications should start at the Core library.

The Interface Layer

Mediating between an application and the ABI, i.e. the system – that is where CX can be valuable. The strong point of CX is that it is ‘syntactic sugar’ over WRL constructs; CX reduces the amount of code markedly compared to the WRL. The WRL (the Windows Runtime C++ Template Library, an ATL analogue) is itself intended to make interactions with the Windows Runtime practical. It has been shown multiple times (see specifically the articles by James McNellis) that CX makes it much more comfortable to interact with the Windows Runtime. If so required, there is nevertheless always the possibility to bypass CX and insert some WRL code, as demonstrated by James McNellis (see the answer) and here, and by Kenny Kerr. As I understand it, CX code is the better choice for the bulk of WinRT interfacing code, but at times WRL is the better choice for getting the ultimate performance. See the talk and slides by Sridhar Madhugiri at Build 2013.

Since this is where CX is really useful, this is the first and foremost layer, and we want to keep it thin. The layer’s responsibilities are (only) to relay data and commands across the ABI, from the Peripheral Layer to the Core and vice versa – of course with a minimum of copy operations. We will use it, so to speak, to map the interface of the Core library onto the ABI.

CX Reconsidered [1]: A Thin Layer of CX?

This is the first blog post in a series of n. The posts in this series will discuss opinions about C++/CX (from here on referred to as CX, for reasons to be explained later), discuss pros and cons, and propose a meaningful way of working with C++ and Xaml in the context of Windows 8. The table of contents for the series will develop right below this sentence.

Part 1: A Thin Layer of CX? (this post).
Part 2: MVVM to the Rescue.
Part 3: Components on a thread.

Introduction

Before Windows 8, I used to develop software in C# .Net for ASP.Net, WPF, and Silverlight, and in C++ with some DirectX. What I needed was a better integration of C++, DirectX, and Xaml user interfaces. And lo and behold, with Windows 8 Microsoft introduced CX. According to MS spokespeople, CX is to be used as a thin layer around C++ programs; 99% of the code should be regular ISO C++. The value of the CX layer lies in the ability to cross the ABI (Application Binary Interface). The ABI is what makes it possible for programs written in language A to be used by programs written in language B. Via the ABI, CX code can also be provided with, among other things, a Xaml UI. Of course I jumped on it; this was exactly what I was looking for!

Now, I’ve been writing software in CX (among other languages) since May 2012. It is now June 2013, hence it is time to evaluate the experience; for myself, but hopefully this evaluation is of value to other developers too.

Conclusion

Let’s start with the conclusion, and then provide some analysis.

The conclusion is that I truly and intensely dislike CX. In the MS forums (fora?) someone wrote that programming the bare WinRT is tedious and rather painful. To me, the same holds for CX as well – although it is meant to relieve just that experience. My aversion to CX brought me to the point that I could not see where CX does come in handy, but thankfully I was able to set myself straight on that point.

What I dislike about CX can be summarized in 3 statements:

1. CX is supposed to be used as a thin layer, but escaping from CX is very hard. Once you start developing your program in it, you are likely forced to keep using it.

2. CX is not C++. That is, practices and idioms that you use with ISO C++11 are, as a rule, not valid for CX. This is why it is called CX here: it is native code, but it is not C++.

3. CX is not C#. The same as above holds for C#; moreover, the developer experience with VS2012 is strongly inferior, as is the support for Xaml interfaces. Community contributions (such as MVVM Light for .Net) are minimal.

That is, CX doesn’t meet your expectations as either a C++ developer or a C# developer, nor does it support ‘a thin layer of CX’ – it doesn’t let you go.

So, if you are a .Net developer who would like to author C++, doing CX is not the way to go.

Then there is this nagging question: if CX is meant to be used as only a thin layer around C++ code, then why has MS created the Hilo example, a full-blown CX program? To show that CX should not be restricted to a thin layer?

Analysis

Why the aversion? Let’s go over the points in the Conclusion above.

A Thin layer of CX

The intention of a thin layer of CX is OK: it saves you the trouble of having to write code using the WRL (say, COM). However, by the way MS has set up the application templates that are based on CX, it is hard to restrain CX to a thin layer. A CX application has two powerful assets that carry it everywhere:

1. It defines the outer periphery of the application; that is, all contact of the application with the world outside the application runs via CX.

2. It runs on the main application thread, the UI thread, so it constitutes the main flow of control of the application.

This is a powerful combination in an application whose architecture is to respond to environmental input and some system events. Anything coming from the environment – user input, network data, and data from files and databases – comes into the system in CX data structures, and on a CX thread.

Restrictions that apply to CX data structures tend to proliferate into user-defined data types, and threading restrictions tend to proliferate into user-spawned threads. Thus, the CX layer tends to expand. It will not be a thin layer: you will not work in C++, but in CX.
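A small illustration of the data side (the Measurement type is a hypothetical example): as soon as a type must cross the ABI, it has to be a ref class, and its public members may use WinRT types only, so the STL types get pushed out of your data structures.

    // Compiled with /ZW. The public surface of a ref class may only use
    // WinRT types: no std::wstring, no std::vector here.
    public ref class Measurement sealed
    {
    public:
        property Platform::String^ Name;                                  // not std::wstring
        property Windows::Foundation::Collections::IVector<int>^ Samples; // not std::vector<int>
    };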

CX is not C++

If you are a C++ developer, you want to code in ISO C++11, not in CX, and you certainly do not want to use CX types instead of e.g. STL types. The developer experience of CX is strongly inferior to the C++ experience.

CX is not C#

C# .Net developers are used to a comfortable developer experience. Things tend to ‘just work’ (as they should). With CX, things don’t ‘just work’. Give a C# .Net developer the choice to switch to CX, and he/she will walk away smiling, if not grinning, after a short trial.

So, what happens is that CX hijacks your program, and you will have a hard time escaping. Once caught within CX, you will be frustrated because you are deprived of the developer experience of both C++ and C# .Net.

Not all is lost, however. With some architectural maneuvers, we will definitely put CX back in its cage! (But that will be in another installment in this series 🙂 ).

Next up: How CX was received in the forums (fora).

Viewing Kinect Data in the New Windows 8 UI

Introduction

The Kinect SDK is not compatible with WinRT in the sense that software developed using the SDK cannot have a WinRT (Windows Runtime) UI. The reason is that the Kinect SDK is .Net software, and you cannot run (full) managed code on the WinRT.

Nevertheless, I want to create software that can show Kinect data in a WinRT UI – for multiple reasons, one being that software written for the WinRT can run on a PC, a tablet, very large screens (now called a Surface), and a Windows Phone. A survey of other solutions, see below, reveals that solutions to this problem are based on networking. Networking allows us to deliver Kinect data anywhere. This, then, is another reason to work on separating the source of Kinect data from its presentation.

The Solution

The general solution is to make a client-server system. The server lives in the classic Windows environment, the client is a WinRT app. Communication between client and server is realized using networking technology, preferably the fastest available. The server receives the data from the Kinect and does any processing that involves the Kinect SDK. The client prepares the data for presentation on the screen. If multiple servers are involved, it integrates and time-synchronizes the data from the several servers. Since I’m a C++, DirectX guy, the server and client are built on just these platforms.

Other Solutions

Several other solutions already exist. Without pretending to be exhaustive, and in no particular order:

– The KinectMetro App by the WiseTeam

– ‘Using Kinect in a Windows 8 / Metro App’ by InterKnowlogy

– The Kinect Service from Coding4Fun

The KinectMetro App by the WiseTeam

The application by the WiseTeam is described in this blog post. The software is available at CodePlex. The software was written for the Windows 8 Consumer Preview as part of an MS Imagine Cup participation. I’ve downloaded the software, but couldn’t get it to run on the Windows 8 RTM version. The application is based on event aggregation, as found in PRISM, and on WebSockets.

‘Using Kinect in a Windows 8 / Metro App’ by InterKnowlogy

The approach InterKnowlogy took is blogged here. This is the entry point to several blog posts, some videos (Vimeo and Youtube), and a little bit of code. This solution is also written in C# .Net, using WebSockets.

The Kinect Service by Coding4Fun

This software is available from CodePlex. It is not aimed at the WinRT; it aims at distributing Kinect data to a wider spectrum of clients. Hence it can also be used as a base to target the WinRT. Apart from the server, it consists of a WPF client and a phone client. This looks like a high-standard, well-written solution. Neat! Data transport uses WinSockets (not WebSockets). The code is available in both C# and VB.

Evaluation

In theory, WebSockets are slower than WinSockets. There can be much discussion about what would be the fastest solution under which circumstances. I expect WinSockets to be the fastest solution, therefore I prefer WinSockets.

Also, in theory, a C++ program is faster, and smaller, than an equivalent C# program. There can be much discussion … , therefore I prefer a program written in C++.

Of course, we should do asynchronously, or in parallel, whatever can be done more quickly that way.

Approach

So, what’s a smart way to develop a client-server system that shows Kinect color and depth data in a WinRT app? For one, we start from SDK samples:

– A sample from the Kinect SDK that elaborates processing depth and color data together.

– A sample that shows how to use WinSockets (in C++).

– A sample that shows how to use the WinRT StreamSocket using PPL tasks (yes, we will exploit parallelism extensively 🙂).

– Windows Service (optional, see below).

The use of a Windows service is an option for later. Working with a service instead of a simple console application requires that the server is capable of handling all kinds of exceptional situations, if only by resetting itself. Consider e.g. the case that no Kinect is connected, or that the Kinect is malfunctioning. And apart from that, I expect the Kinect SDK to be made available for WinRT applications in due time.

Architecture

Server side architecture

The general software architecture looks like this:

The test application instantiates the KinectColorDepthServer DLL. The idea is that, in case the DLL is run by a service, the DLL can be loaded and dropped easily and frequently, so as to prevent the problems that relate to long-running processes. So, every time the client closes the WinSock connection, the application (or service) drops the DLL and creates a new instance.

The KinectColorDepthServer has a simple interface: you can Run it, Stop it and Destroy it. The interface has this neutral character so we can use the same interface for other data sources, like a stereoscopic camera. The server instantiates a Kinect DataSource on a separate thread, and waits until the connection is closed. The server also creates two WinSock servers and hands references to these servers to the Kinect DataSource. The WinSock servers are created at a relatively high level, so we can configure them at a high level in the call chain. Lifecycle management of the WinSock servers happens in parallel as well.
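A minimal sketch of that neutral interface (the exact signatures are my assumption, not the project’s actual declarations):

    // Any data source - Kinect, stereoscopic camera - can sit behind this.
    struct IDataSourceServer
    {
        virtual void Run() = 0;      // start serving data to a connected client
        virtual void Stop() = 0;     // stop the data source, close the connection
        virtual void Destroy() = 0;  // release all resources
    protected:
        ~IDataSourceServer() {}      // lifetime is managed through Destroy()
    };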

The Kinect DataSource contains those parts of the Kinect sample that contain, or refer to, Kinect SDK code (which cannot be run in the WinRT client). The Kinect DataSource sends pairs of a depth image and a color image in parallel to the client. The main method in the Kinect DataSource deals with mapping the color data onto the depth data.

The WinSock server is just the basic WinSock server sample from the Windows SDK documentation.

Client Side Architecture

The general software architecture looks like this:

The WinRT UI application class manages the lifecycle of the application. The MainPage manages the state of the user interface.

The MainPage references the Scene1 class, which inherits from the Scene class in My DirectX Framework. This framework organizes standard WinRT DirectX 11.1 code in a structure that is similar to the XNA architecture. The latter architecture supports easy creation and management of graphical components very well. So, it keeps my code clean and well organized as the number of components grows. I like that, because I like to have oversight.

The Scene1 class refers to the KinectColorDepthClient class, which provides the data, and to the KinectImage class, which contains the DirectX code (a WinRT port) from the Kinect SDK sample and uses it to display the Kinect data on the screen, using a SwapChainBackgroundPanel. The Scene1 class also references a Camera class (not shown in the diagram) that allows the user to navigate through the 3D scene.

The KinectColorDepthClient creates two StreamSockets, one for depth data, and one for color data. Reception of depth and color images is parallel, then synchronized so as to keep matching color and depth images together. The resulting data is handed over to the KinectImage.

One goal of this architecture is that the KinectColorDepthClient class can be easily replaced by another class, e.g. when Microsoft decides to release a version of the Kinect SDK that is compatible with WinRT. For this reason it has a limited and general interface.

Parallelism is coded making extensive use of PPL task parallelism. PPL tasks really are a pleasure to use in code.
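To give the flavor, here is a hedged sketch (the frame types and helper functions are hypothetical stand-ins, not the actual project code) of receiving a depth frame and a color frame in parallel and handing over only matched pairs:

    #include <ppltasks.h>

    struct DepthFrame { /* 640 x 480 depth values, 2 bytes each */ };
    struct ColorFrame { /* 640 x 480 color pixels, 4 bytes each */ };

    // Stand-ins for the real StreamSocket reads, each wrapped in a PPL task.
    concurrency::task<DepthFrame> ReadDepthFrameAsync()
    {
        return concurrency::create_task([] { return DepthFrame(); });
    }
    concurrency::task<ColorFrame> ReadColorFrameAsync()
    {
        return concurrency::create_task([] { return ColorFrame(); });
    }
    void Present(const DepthFrame&, const ColorFrame&) { /* hand to KinectImage */ }

    // Receive both frames in parallel; only when both frames of a pair have
    // arrived is the matched pair handed over for display.
    concurrency::task<void> ReceivePair()
    {
        auto depthTask = ReadDepthFrameAsync();
        auto colorTask = ReadColorFrameAsync();
        return depthTask.then([colorTask](DepthFrame depth)
        {
            return colorTask.then([depth](ColorFrame color)
            {
                Present(depth, color);
            });
        });
    }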

WinSock2 sockets cannot be used in the WinRT, as it turns out. The alternative at hand is the StreamSocket. However, the StreamSocket still contains a bug. Closing a StreamSocket is done in C++ by calling delete on the StreamSocket object. This, however, raises an unhandled exception (that I did not succeed in catching, by the way). It does not only do this in my code, but also in the StreamSocket sample that can be downloaded from MSDN (12 October 2012). A bug report has been filed.

Performance

So, now that we have this nice software, just what is its performance? That is, how quick is it, and how large are the programs involved?

Dry testing the transmission speed

To gain an idea of the speed with which data can be transported from one process to another, I sent a 1Mbyte blob from a WinSock2 server to a WinSock2 client 10,000 times, and averaged the transmission time.

Clocking was done using the QueryPerformanceCounter function, which is quite high-res. The performance counter was queried just before the start of transmission at the server, and just after the arrival of the last blob at the client. The difference between the tick counts is then divided by the value from QueryPerformanceFrequency, which gives you the result in seconds; so multiply by 1000 (ms) and divide by the number of cycles (10,000). This shows that transmission of 1Mbyte takes about 1.5 ms (release build).
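A minimal sketch of that measurement, condensed into a single process for brevity (SendBlob stands in for the actual WinSock2 round trip):

    #include <windows.h>
    #include <cstdio>

    void SendBlob() { /* elided: send the 1 Mbyte blob over WinSock2 */ }

    int main()
    {
        const int cycles = 10000;
        LARGE_INTEGER freq, start, stop;
        QueryPerformanceFrequency(&freq);

        QueryPerformanceCounter(&start);    // just before the first transmission
        for (int i = 0; i < cycles; ++i)
            SendBlob();
        QueryPerformanceCounter(&stop);     // just after the last arrival

        double seconds = double(stop.QuadPart - start.QuadPart) / double(freq.QuadPart);
        std::printf("average per blob: %.2f ms\n", seconds * 1000.0 / cycles);
        return 0;
    }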

Now, we are planning to send 640 x 480 pixels (4 bytes each), and an equal number of depth values (2 bytes each), over the line. This will take us about 1.5 × (1,843,200 / 1,048,576) ≈ 2.6 ms (wow!). The conclusion is that there will be no noticeable latency.

Visual Studio Performance Analysis

This tool is about finding bottlenecks in your code, so you can remove them. In an analysis run of the server, 5595 samples were taken. The CPU was found executing code I wrote / copied myself in 21.4% of the samples, all in one method. It is possible to examine which lines of code take the most time in that method. I measured the average processing time of these lines of code; they typically take 1.7 ms (release build, debugger attached) to execute. Well, what can I say? Although I suspect the 21.4% could be improved, we will just leave it as it is.

In a second analysis run, the client application was scrutinized. In this run, 2357 samples were taken – I guess it turned out harder to take samples. As little as 2.64% of the samples were in ‘my code’ (that is: 58 samples). Another 8.10% was taken up by DirectX – running shaders for my program, I think. So, in all, about 11%. Since the rest of the samples hit code whose source I cannot touch, and that we may assume is already well optimized, this is a very fine result.

Footprint

And how about the size, the footprint? The release build shows a client with a working set of around 40 Mbyte, and a server with a working set of about 95 Mbyte; together about 135 Mbyte. Well, that’s not small, but what should we compare it to? The Kinect Service by Coding4Fun, of course!

I downloaded and ran the WPF sample (pre-built). It turns out that the server usually stays under 130 Mbyte, and the client under 67 Mbyte; together slightly less than 200 Mbyte.

In conclusion: the footprint of the C++ application is smaller. Its size is 2/3 of the .Net application size, but it is not dramatically smaller.

Demo video

Below you’ll find a link (a picture) to a video demonstrating the Kinect client-server system. First the server is started in the Windows desktop environment, then the user (me 🙂) switches over to the Start screen to start up the client. You can see the client connect to the server – watch the log window at the lower left – and then see the Kinect data on the screen. The stream is stopped and then restarted. That is, in fact, all. The video was made using Microsoft Expression Encoder Screen Capture. The screen capture was processed with Encoder, with which we also made the snapshot that serves as the hyperlink to the download site (SkyDrive – Cloud!).

The jitter in the picture is caused by the depth stream. The depth stream consists of depth measurements, expressed as the distance (in mm) from the camera along the normal emanating from the camera. These measurements are subject to a certain error, or uncertainty, which causes fluctuations in the measurements, hence the jitter in the stream.

Filtering away the jitter is high on my agenda.

The Windows 8 Metro SwapChainBackgroundPanel

Microsoft has provided a nice facility for inter-operation between XAML user interface elements and DirectX graphics: the SwapChainBackgroundPanel. In fact, they have provided three alternatives, but here we focus on the one with the highest performance, which also leaves the most control to the developer.

Microsoft was kind enough to provide a sample program that shows how to use the SwapChainBackgroundPanel. However, this program also does a fairly large number of other things. So, I decided to create a small project in which the use of the SwapChainBackgroundPanel is central, but that can also be used as a starting point for a larger program.
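The heart of the interop is small. A hedged sketch (assuming the swap chain was created with CreateSwapChainForComposition, and panel is the SwapChainBackgroundPanel declared in the Xaml page):

    #include <dxgi1_2.h>
    #include <windows.ui.xaml.media.dxinterop.h>
    #include <wrl/client.h>

    void HookSwapChain(Windows::UI::Xaml::Controls::SwapChainBackgroundPanel^ panel,
                       Microsoft::WRL::ComPtr<IDXGISwapChain1> swapChain)
    {
        // Drop from CX down to the COM interface behind the panel...
        Microsoft::WRL::ComPtr<ISwapChainBackgroundPanelNative> panelNative;
        reinterpret_cast<IUnknown*>(panel)->QueryInterface(IID_PPV_ARGS(&panelNative));

        // ...and from here on DXGI presents straight into the Xaml panel.
        panelNative->SetSwapChain(swapChain.Get());
    }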

You can download the Visual Studio 2012 project from here. You will need Windows 8 (Release Preview) and Visual Studio 2012 (Release Candidate) to build and run the application.

The starter project combines elements from the XAML DirectX 3D shooting game sample (which exemplifies the use of the SwapChainBackgroundPanel) with elements of the standard Visual Studio Metro DirectX application template. All the application does is show a rotating colored cube.

Well, that is not entirely true. I couldn’t resist the temptation to add a slider (and a data-bound TextBox that shows the value) that controls the rotation speed and direction of the cube.

The behavior of the slider is not (yet) as desired, see this screen capture video; the slider moves uncontrollably back and forth (albeit once in each direction) when the setting has changed. I’ve filed a feedback item for this, and trust that this problem will be solved in the RTM version.

Some other controls also suffer from this type of problem concerning the sharing of screen ‘real estate’ between raw DirectX and the XAML render engine; try e.g. the ComboBox control.

The project setup follows a specific pattern. A Visual C++ project may collect files in filters – much like folders, but not physical. A blank Metro style project already contains the Assets and Common filters, for Metro-specific files. I found it is becoming standard practice to collect basic DirectX code under a DirectXBase filter. This filter hides all DirectX-related code that can easily be reused in other projects. The Precompiled Headers filter hides just what it says it will hide; it advances build performance (pretty much) to collect all standard and / or multiply used headers in pch.h. For application-specific rendering you create your own render engine – hidden by the Render Engines filter. Your render engine will use shaders – see the Shaders filter. Finally, application-specific DirectX render logic, like your standard Update method, is situated in the custom Controller class, hidden by the Controllers filter. Architecturally speaking, the Controller class inherits from RenderEngine, which in turn inherits from DirectXBase. The App class is responsible for application management, and the MainPage class is responsible for management of the visual state.

The intended architecture is also depicted in the UML class diagram below.

This setup is a copy of the shooting game sample. It seems more natural, however, to attach the controller to the MainPage, since the SwapChainBackgroundPanel, which provides the render surface for the DirectX code, is in the MainPage as well.

Of course, if you really want to do a clean job, you could separate off the DirectX part into a WinRT dll. This would allow for reuse and interop with C# code. Alternatively, the controller for the SwapChainBackgroundPanel could be attached to the MainPage, conforming to the MVVM pattern. At this point, however, I was happy to have a working first application, and left pimping up the project for another occasion.

PInvoking DirectX from Silverlight

Before moving on to Windows 8 development, I decided to write some legacy software. Well, actually, this legacy software is perfectly up-to-date Windows 7 level software; the tricks presented here will be useful for years to come. It’s just that Windows 8 (Consumer Preview) provides standard solutions to the problems solved here. This blog post discusses the use of a DirectX application, packaged as a DLL, by a Silverlight application, via PInvoke.

The problems tackled here stem from the desire to have Rich Internet Applications (RIAs) for Windows that use computational resources on the client computer – in particular DirectX for 3D-graphics, X3DAudio for 3D-audio, and also the GPU (Graphics Processing Unit – a powerful, highly parallel processor). Silverlight provides the facilities to write RIAs, but has a somewhat outdated 3D-graphics library: a subset of XNA, a managed wrapper for DirectX9 (but we want DirectX11, at least!). This Silverlight 3D-graphics library is not very extensive; it lacks e.g. 3D-audio.

On the other hand, Silverlight does provide facilities for interoperability with native code, e.g. by means of PInvoke: the invocation of native code in Dynamic Link Libraries (DLLs). PInvoke is here the bridge between Silverlight and DirectX code.

This blog post presents:

  • A sample DirectX11 application, and its transformation into a DLL to be used from Silverlight.
  • A Silverlight application that calls methods in the dll.
  • How to install and uninstall the DLL, and how to manage its lifetime explicitly, so the DLL may be uninstalled by the Silverlight application itself.
  • Performance aspects of the Silverlight-DirectX application, and a comparison with a Silverlight application that uses the Silverlight 3D-graphics library for the same task.
  • Concluding remarks; for one thing, that this application should have had 3D-audio to decisively mark the advantage of the approach presented here (but at some point, you just have to round up).

The DirectX 11 Sample Application

The DirectX 11 Tutorial05 sample application will serve as the application a user wants to run on his or her PC, and that uses resources already present on that PC. This DirectX application is the simplest application that contains some animation, and it also has a part – the small cube – that we can multiply in order to generate various performance loads.

To that end we transform it into a DLL, with as much unnecessary functionality as possible stripped, and an adequate interface added, including the code to transfer the data we need to the Silverlight application. Let’s take a look at the main changes.

Minimizing Window Management Code

For starters, we do not need a window; we use the DirectX application only to compute the 3D-graphics we present in the Silverlight application. The wWinMain (application entry point) function now looks like this:

Sample code like above is entered into the text as pictures. If you would like to have the code, just leave a comment on this blog with an e-mail address and I will ship it to you.

The function no longer has any “Windows” parameters, nor does it have a main message loop. The InitWindow function has been reduced to:

We do need to create a window in order to create a swap chain, and only for that reason, so we keep it as simple and small as possible. Note that wcex.lpfnWndProc is assigned the DefWindowProc. That is: the application has no WindowProc of its own.
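Since the original code is shown as an image, here is a hedged reconstruction of such a reduced InitWindow, based on the Tutorial05 sample:

    #include <windows.h>

    HWND g_hWnd = nullptr;

    // The window exists only so that a swap chain can be created; it is never
    // shown, and DefWindowProc handles whatever messages arrive.
    HRESULT InitWindow(HINSTANCE hInstance)
    {
        WNDCLASSEX wcex = { sizeof(WNDCLASSEX) };
        wcex.lpfnWndProc = DefWindowProc;     // no WindowProc of our own
        wcex.hInstance = hInstance;
        wcex.lpszClassName = L"TutorialWindowClass";
        if (!RegisterClassEx(&wcex))
            return E_FAIL;

        g_hWnd = CreateWindow(L"TutorialWindowClass", L"Tutorial05",
                              WS_OVERLAPPEDWINDOW, 0, 0, 640, 480,
                              nullptr, nullptr, hInstance, nullptr);
        return g_hWnd ? S_OK : E_FAIL;
    }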

Create Texture to be Used in Export

In order to export the 3D-graphics data, an additional texture (a texture is a pixel array) called g_pOutputImage is created in the InitDevice function:

This texture has usage “Staging”, so the CPU can access it, and we specify CPU access as “Read”. With these settings we can’t bind the texture to the DeviceContext anymore, so no BindFlags. Note that we cannot have a texture that the GPU writes to and the CPU reads from. If that had been possible, we could have had a data structure that both DirectX and Silverlight could use directly. Since this is impossible, we will have to perform expensive copy operations. Alas.
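A hedged reconstruction of that texture creation (the original is an image in the post; width and height are the back buffer dimensions):

    #include <d3d11.h>

    HRESULT CreateOutputImage(ID3D11Device* device, UINT width, UINT height,
                              ID3D11Texture2D** outputImage /* g_pOutputImage */)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;    // same format as the back buffer
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_STAGING;            // CPU accessible...
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ; // ...for reading
        desc.BindFlags = 0;                          // cannot be bound to the pipeline
        return device->CreateTexture2D(&desc, nullptr, outputImage);
    }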

A final change in this same function is that we do not release the pointer to the back buffer, but keep it alive in order to export the graphics data in the Render function.

Rendering 3D-Graphics

The Render function has a loop added so we can have multiple small cubes. The idea is to compute a World matrix for each additional small cube. That is, we have only one cube, but draw it multiple times, at different locations.
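Since the original code appears as images, here is a hedged reconstruction of the loop, following the orbit code of the Tutorial05 sample (g_nCubes is the number of small cubes; the other names come from the sample):

    for (int i = 1; i <= g_nCubes; ++i)
    {
        // One World matrix per small cube: same geometry, different location.
        XMMATRIX mSpin      = XMMatrixRotationZ(-t);
        XMMATRIX mOrbit     = XMMatrixRotationY(-t * 2.0f + i * (XM_2PI / g_nCubes));
        XMMATRIX mTranslate = XMMatrixTranslation(-4.0f, 0.0f, 0.0f);
        XMMATRIX mScale     = XMMatrixScaling(0.3f, 0.3f, 0.3f);
        XMMATRIX world      = mScale * mSpin * mTranslate * mOrbit;

        // Update the constant buffer with this cube's matrices and draw it.
        ConstantBuffer cb;
        cb.mWorld      = XMMatrixTranspose(world);
        cb.mView       = XMMatrixTranspose(g_View);
        cb.mProjection = XMMatrixTranspose(g_Projection);
        g_pImmediateContext->UpdateSubresource(g_pConstantBuffer, 0, nullptr, &cb, 0, 0);
        g_pImmediateContext->DrawIndexed(36, 0, 0);   // 36 indices = one cube
    }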

Converting and Exporting 3D-Graphics Data

Finally, we want to copy the 3D-graphics data into an array the Silverlight client has provided, so that the client can show it to the user. This is done like so:
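(A hedged reconstruction; the original is shown as an image, and for brevity this assumes the mapped row pitch equals width * 4.)

    // Copy the rendered frame from the back buffer into the staging texture,
    // map it for CPU reading, and convert it into the client-provided array.
    g_pImmediateContext->CopyResource(g_pOutputImage, g_pBackBuffer);

    D3D11_MAPPED_SUBRESOURCE mapped;
    HRESULT hr = g_pImmediateContext->Map(g_pOutputImage, 0, D3D11_MAP_READ, 0, &mapped);
    if (SUCCEEDED(hr))
    {
        ConvertToARGB(static_cast<const std::uint32_t*>(mapped.pData),
                      reinterpret_cast<std::uint32_t*>(clientArray),  // owned by Silverlight
                      std::size_t(width) * height);
        g_pImmediateContext->Unmap(g_pOutputImage, 0);
    }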

The above is standard code; I obtained it around here (the direct link seems broken). The ConvertToARGB function, however, is a custom addition, replacing the memcpy call (more about that in the section on performance). This ConvertToARGB converts the RGBA format of DirectX to the premultiplied (PM) ARGB format used in Silverlight. This PM ARGB format is considered legacy now. The conversion step is a real performance hit, as anyone can imagine. The function looks like this:
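(Again, a hedged reconstruction:)

    #include <cstddef>
    #include <cstdint>

    // Convert DirectX RGBA (R in the low byte) to Silverlight's ARGB layout.
    // No actual premultiplication is done, cf. the performance section below.
    void ConvertToARGB(const std::uint32_t* src, std::uint32_t* dst, std::size_t count)
    {
        for (std::size_t i = 0; i < count; ++i)
        {
            const std::uint32_t rgba = src[i];
            const std::uint32_t r =  rgba        & 0xff;
            const std::uint32_t g = (rgba >> 8)  & 0xff;
            const std::uint32_t b = (rgba >> 16) & 0xff;
            const std::uint32_t a = (rgba >> 24) & 0xff;
            dst[i] = (a << 24) | (r << 16) | (g << 8) | b;
        }
    }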

Essentially this ORs 4 integers: the first one is constructed by shifting the A (transparency) byte all the way to the left; then 3 integers are created by shifting the R, G and B bytes into place. This is a fast algorithm, since shifting is a quick operation. I found it here. After the conversion, the pixels are in the correct format, in an array that is owned by the Silverlight client application.

The DLL Interface

The interface has the following methods: Init, Render and Cleanup; and, for performance measurements, GetRenderTimerAv and GetTransferTimerAv.
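The original declarations are shown as images; what follows is a hedged reconstruction, inferred from the C# import declarations further below:

    extern "C" __declspec(dllexport) int  Init(int width, int height, const wchar_t* effectFilePath);
    extern "C" __declspec(dllexport) void Render(int* pixels);   // fills the client's pixel array
    extern "C" __declspec(dllexport) int  Cleanup();

    // Performance measurements:
    extern "C" __declspec(dllexport) void GetRenderTimerAv(double* pArOut);
    extern "C" __declspec(dllexport) void GetTransferTimerAv(double* pArOut);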

The two timer functions return an average time over the Render function, and an average time over the conversion and export, respectively. Details will be discussed below.

The extern "C" decoration results in a clean export of the function names. Without it, the C++ compiler will add a number of tokens (among which at least a few like @#$%^&*) to each function name in order to make it unique. The problem with this is that you’ll have a hard time retrieving the actual function name for use in the Silverlight client.

The Silverlight Client

General Architecture

The application has the following structure:

The App class is the application entry point (as usual). The Application_Startup event handler, depicted below,

first checks whether the application is running out-of-browser (OOB). Running OOB is the intended normal use of this application. If so, a MainPage control is instantiated, which will run the DirectX code. If the application is running in-browser, it still needs to be installed. Only after installation does the application have access to the file system – required to save and load the dll – and to the GPU. The application requires Windows 7 or higher, and bails out if a lower-level Windows version or an Apple OS is found.

The install page offers to install the application on the user’s PC, as depicted below,

or tells the user that the application is already installed, and hints at ways to uninstall the application if so desired.

If the user installs the application, it starts running out of browser and shows the MainPage with the DirectX animation.

Installing, Uninstalling, and Managing DLL Lifetimes

Installing includes saving the DirectX application in the DLL to a file on the user’s PC. The DLL is packaged with the Silverlight application as a resource. For execution, the DLL has to be loaded into memory, or be present on the PC as a file. Saving the DLL to file is done with code after an example from the NESL application. We store the application at “<SystemDrive>\ProgramData\RealManMonths\PInvokeDirectXTutorial05”.

Once the DLL is saved to file, we load it into memory using the LoadLibrary function from kernel32.dll. The reason we manage the dll’s lifetime explicitly, instead of implicitly by importing the dll and calling its functions, is that we need to be able to explicitly remove the dll from memory when exiting the application, see below. Loading it into memory requires a dll import declaration:

And a call of this function, in the MainPage_Loaded event handler:

Where DllPath is just the path specified above. Is that all? Yes, that’s all.

When the application exits, we use the handleToDll to release the library with repeated calls to FreeLibrary. The declaration:

Then we call it in the Application_Exit event handler as follows:

The point is that each method we import from the dll increases the reference count. As long as the reference count is larger than zero, we cannot unload the DLL, nor delete its file. Not being able to delete the file means we cannot properly uninstall the application – we would leave a mess. Once the ref count is zero, FreeLibrary unloads the library from memory.

The final question in this section is why we delete the dll file every time we exit the application, and create the file every time we start it up. The reason is that if we did not do that, and the user uninstalled the application from the InstallPage (running in-browser), the application would not have the permissions to access the file system, hence the DLL file would not be deleted. So, all these file manipulations are bound to the runtime of the application, in order to give the user a clean install and uninstall experience.

PInvoking the DirectX Functions

Now that the application can be installed, functions from the DirectX application interface can be declared and executed.

    [DllImport(DLL_NAME, SetLastError = true, CallingConvention = CallingConvention.Cdecl)]
    public extern static int Init(int width, int height,
        [MarshalAs(UnmanagedType.LPWStr)] String effectFilePath);

    [DllImport(DLL_NAME, SetLastError = true, CallingConvention = CallingConvention.Cdecl)]
    public extern static void Render([In, Out] int[] array);

    [DllImport(DLL_NAME, SetLastError = true, CallingConvention = CallingConvention.Cdecl)]
    public extern static int Cleanup();

    [DllImport(DLL_NAME, SetLastError = true, CallingConvention = CallingConvention.Cdecl)]
    public extern static void GetRenderTimerAv(ref double pArOut);

    [DllImport(DLL_NAME, SetLastError = true, CallingConvention = CallingConvention.Cdecl)]
    public extern static void GetTransferTimerAv(ref double pArOut);

We make a call to the Init function in the MainPage_Loaded event handler, calls to the dll’s Render function in the local Render method, and a call to Cleanup in the Application_Exit event handler.

Calls to the timer functions are made when the user clicks the “Get Timing Av” button on the MainPage.

Debugging PInvoke DLLs

At times you may want to trace the flow of control from the Silverlight client application into the native code of the DLL. This, however, is not possible in Silverlight: Silverlight projects have no option to enable debugging of native code, and manually editing the project file doesn’t help at this point. Now what?

A workaround is to create a Windows Presentation Foundation (WPF) client. I did this for the current application. This WPF application does not show the graphics data the DirectX library returns; it just gets an array of integers.

To trace the flow of control into the DLL, you need to uncheck “Enable Just My Code (Managed only)” at (in the menu bar) Tools | Options | Debugging, and to check “Enable native code debugging” in the project properties, at Properties | Debug | Enable Debuggers.

If you now set a breakpoint in the native code and start debugging from the WPF application, program execution stops at your breakpoint.

Reactive Extensions

In order to have a stable program execution, the calls to the dll’s Render method are made on a worker thread. We use two WriteableBitmaps: one is returned to the UI thread upon entering the Silverlight method that calls the dll’s Render method, while the DLL renders to the other WriteableBitmap. After rendering, the worker thread pauses to fill up a time slot of 16.67 ms (60 fps).

Thread management, and processing the indices that point into the WriteableBitmap array (implementation detail 🙂), is done using Reactive Extensions (RX). The idea is that the stream of indices the method returns is interpreted by RX as an Observable collection, and ‘observed’ such that it takes the latest index upon arrival and uses that index to render the corresponding WriteableBitmap to the screen. This results in elegant and clean code, as presented below.

The first statement creates an observable collection from a method that returns an IEnumerable. Note that ‘observing’ is done on the UI thread (referred to by the ‘DispatcherScheduler’).

The SubscribeOn(Scheduler.NewThread) clause creates a new thread for the render process. The lambda expression defines the action when a new int (index) is observed.

Rendering on the worker thread proceeds as follows:

To stop rendering, we just set IsRunning to false. And that’s it.

Performance

DirectX applications – by definition – have higher performance than .Net applications. However, if you pull the data out of a DirectX application and send it elsewhere, there is a performance penalty. You will be doing something like this:

CPU -> GPU -> CPU -> GPU -> Screen instead of CPU -> GPU -> Screen

The extra actions – copying data from the GPU to CPU-accessible memory, and converting to premultiplied ARGB – will take time. So the questions are:

  1. How much time is involved in these actions?
  2. Will the extra required time pose a problem?
  3. How does performance compare to the Silverlight 3D-graphics library?
  4. Are there space (footprint) consequences as well?

Before we dive into answering the questions, note that:

– The use of DirectX will primarily be motivated by the need for features that are not present in the Silverlight 3D-Graphics / -Audio library at all. In such cases comparative performance is not relevant; performance is relevant only if the use of DirectX becomes prohibitively slow.

For the measurements, I let the system run without a fixed frequency; usually you would let the system run at a frequency of 60Hz, since this is fast enough to make animations fluent. At top speed, the frequency is typically around 110Hz. I found no significant performance differences between debug builds and release builds.

Visual Studio 11CP Performance analysis: Sampling

If we run a sampling performance analysis – this involves the CPU only – the bottleneck in the process becomes clear immediately: the conversion from RGBA to premultiplied ARGB (and I’m not even pre-multiplying) takes 96.5% of the CPU time.

It is, of course, disturbing that the bulk of the time is spent in some stupid conversion. On the other hand, work done by the GPU is not considered here.

To investigate the contribution of the conversion further, I replaced the conversion by a memcpy call. Then we get a different color palette 🙂, like this:

But look, the frequency jumps up to 185 fps (80% more). The analysis then yields:

That is: much improved results, but shoving data around is still the main consumer of time. Note that the change of color palette caused by the crude reinterpretation of the pixel array is a problem we could solve at compile time, by proactively re-coloring the assets.

Compare to a Silverlight 5 3D-library application

Would the performance of our application hold up against the performance of a Silverlight application using the regular 3D-graphics library? To find out, I transformed the standard Silverlight 3D-graphics starter application into a functional equivalent of our Silverlight-DirectX application, as depicted below – one large cube and 5 small cubes orbiting around it (yes, one small cube is hidden behind the large one).

If we click the “Get Timing Av” button, we typically get a “Client Time Average” (the average time per Draw event handler call) of 16.7 ms, corresponding to the 60 fps. The time it takes to actually render the scene averages to 3.3 ms. This latter time is 0.8 ms without conversion, and 2.8 ms with conversion, for the Silverlight-DirectX application (if we let it run at max frequency). So, the Silverlight-DirectX alternative can be regarded as quicker.

If we look at the footprint, we see that the Silverlight-DirectX application uses 1,880K of video memory, and has an image of 50,048K in the Task Manager. The regular Silverlight application uses 5,883K of video memory, and has a 37,708K image. Both run in SLLauncher. So, the regular Silverlight application is smaller.

Concluding Remarks

For one, it is feasible to use DirectX from Silverlight; PInvoke is a useful way to bridge the gap. This opens up the road to the use of more, if not all, parts of the DirectX libraries. In the example studied here, the Silverlight-DirectX application is faster, but has a larger footprint.

We can provide the user with a clean install and uninstall experience that covers handling and lifetime management of the native dll.

Threading can be well covered with Reactive Extensions.

There is a demo application here. This application requires the installation of the DirectX 11 and Visual C++ 2010 SP1 runtime packages (links are provided at the demo application site). I’ve kept these prerequisites separate, instead of integrating their deployment into the demo application installation the NESL way, mainly because the DirectX runtime package has no uninstaller.

If you would like to have the source code for the example program, just create a comment on this blog to request for the source code, I’ll send it to you if you provide an e-mail address.

The Windows 8, HTML 5, and Silverlight Rumor Circus

In this blog post: an overview of the recent wave of fear and anger across the internet concerning the future of .Net in Windows 8 (which could be released Autumn 2012), and why it is all a storm in a teacup.

Where and when it started

It all started with the demo of Windows 8 at the AllThingsD conference (June 1) by Julie Larson-Green and Steven Sinofsky, who mentioned that the applications presented were, and further applications could be, written in Html5 and JavaScript (version 1.8.5; together with CSS 3.0 this is called the HTML technology stack). Throughout the demo there was no mention of Silverlight or WPF. This – what had not been said – tapped into already slumbering fears that HTML5 will out-compete Silverlight. Heated reactions followed in discussion groups, see e.g. this one, where the thread was closed by the moderator. Some people, clearly driven by a distinct dislike of the Microsoft company, stirred up the fire, as did a journalist of a respected medium.

What it is about

The fear mentioned above is the fear of many developers that costly investments of time and effort will become useless with the release of Windows 8. Of course, even if there were only the risk that .Net software would be legacy at the release of Windows 8, investments in .Net and Silverlight software would stop immediately. Not only is there fear, but also dislike: some developers express the opinion that the HTML5 tech(nology) stack is inferior to Silverlight, and also tedious to work with.

Does it seem justified?

I myself doubt this fear and uproar is justified. The HTML5 tech stack consists of ‘standards’ that are not finished, and that have implementations that diverge across browsers, thus forcing web application developers to provide multiple implementations of the same functionality – who would like to pay for that? I know people that sell over the web and implement their web shop in HTML version Long.Ago to guarantee broad accessibility; this is what the Browser Wars have accomplished – people do not like to invest in new versions of the Html tech stack. Microsoft will not make itself dependent on these ‘standards’ it does not control.

Furthermore, Silverlight can do things that the HTML5 tech stack in itself cannot do, or will never be allowed to do, if only for security reasons (see e.g. Microsoft’s refusal to support WebGL). Would Microsoft suddenly turn around, embrace this technology, and replace its own? Unlikely.

However, Microsoft also didn’t move to take away the fears; they have the Build conference in September 2011, at which they will tell more. Nevertheless, one might expect some indirect damage control, and it came quickly.

Damage control – differing opinions

First there was the blog post by David Burela, referring to an analysis of a leaked early Windows 8 build. The conclusion seems to be that software built on top of Windows 8 will be built with .Net and Xaml. A particular group of applications, called ‘immersive’ applications (running within the Windows Shell), can be built using Html5 and JavaScript, much like some types of applications for Vista.

Mary Jo Foley of ZDNet goes further and publishes parts of her correspondence with developers who have actually analyzed the early Windows 8 build. The picture that arises from that article is that an improved version of the .Net runtime (referred to as the Windows Runtime) will be central in Windows 8, and is programmable in Xaml in concert with a wide variety of programming languages, i.e. it is like Silverlight / WPF. Within Windows 8, XNA can be used for 3D graphics. Windows 8 apps built using ‘.Net’ should be easy to port to other devices – phones, tablets, etc. – say by recompiling and compensating for the form factor. The Html5 apps might depend on the Windows Runtime as well (MD: this would explain the little-understood remarks from Microsoft about native support of Html5). WPF and Silverlight may cease to exist as such in Windows 8, but the constituent technologies will be there.

In conclusion, it seems as if Microsoft is creating the facilities to build apps for Windows 8 using the HTML5 tech stack as an addition to the .Net framework, rather than as a replacement. The motivation to do so, by the way, is to attract more, new developers to the platform. It does not seem to be the case that ‘immersive’ applications can be built only in Html5. It seems that ‘Immersive’ is just a namespace, defining an API that is required to build applications that run within the Windows 8 shell.

C++

OK, so software can be built using a variety of .Net languages, among which C++, and the runtime seems to be closer to the metal, thus providing higher performance, because intermediate OS layers have been removed.

But there is more. C++ seems to be in what is called a ‘renaissance’: more developers use it in order to gain higher performance, a new specification (C++0x) is on its way, and C++ was recently declared by Google to be the best high-performance programming language.

For the next version of Visual Studio (also to be released in 2012), Microsoft announced the AMP (Accelerated Massive Parallelism) library at the AMD Fusion Developer Summit. AMP promises to provide full C++ access to a heterogeneous set of processors and their memory models. That is to say: you write one program that executes both on a computer with a GPU and on one without it. Note that GPUs are not considered to be restricted to rendering graphics; these people consider a GPU a broadly applicable parallel processor (and indeed, there exist graphics cards without a monitor connector). The demos shown by Sutter and Moth reflect awesome performance: over 1000 GFlops.

AMP aims at extreme scalability of single executables, from very simple hardware architectures of a single-core processor with dedicated RAM, up to extreme scaling out in Cloud configurations. Sutter showed the aimed-for heterogeneity by running an executable on a PC with a multi-core CPU with an onboard GPU, as well as two separate GPUs installed.

My guess is that all this nice stuff will also reflect on the ways software can be built with the evolution of .Net and Xaml on Windows 8.