Archive for February, 2012

Silverlight and COM

Silverlight supports interoperability with COM, and that is a good thing. It does not support registration-free COM, and that seems to me a missed opportunity. This lack of support for registration-free COM does not, however, diminish the importance of COM.


Many books have been written about COM, and none of them gives a short definition of COM that is also comprehensible to people who do not already know what COM is. So, here we go, taking the same route: COM is a software development discipline for building reusable components. This reusability is at the binary (not human-readable) level. There is a standard way of interacting with COM components. Interaction with COM components is programming-language and location independent. COM components may (depending on how they are built) support parallelism.


The relevance of COM is enormous. Microsoft has written a great deal of its software (Windows) as COM components. For instance, DirectX consists of a number of COM components. Windows 8 is also being built upon COM: the WinRT layer is an intermediate layer of components that are built on COM (are COM components). Each WinRT component implements the IInspectable interface, which inherits from IUnknown. However, working with WinRT is unlike working with classical COM; see this article for more. The message is that COM, in some form or another, will be with us for quite some time to come. Yes, we are talking about the Windows platform here. The market share of Windows among computers (including phones and tablets) that access the internet is, at the time of writing, about 84%. Some see this going down; I think it will go up with the introduction of Windows 8, its emphasis on tablets and phones, and its uniform user experience over a large range of devices.

The good thing for us is that it is far easier to use COM components than it is to write them. Using WinRT components is even easier. On the other hand, let's not dramatize writing COM. COM programming concerns only a very thin layer in your code: the interface with the outside world.

Writing COM

COM hinges on a few principles that are not very hard to understand. A good book on COM is Don Box's Essential COM. Programming COM is quite a bit tougher. In my opinion this has to do with the large amount of macros (#define, #ifdef), typedefs and odd naming conventions. I understand the typedefs and macros are there to make things simpler. The point is that there are so many simplifications that they themselves pose a problem: why can't I just use the C++ I already know?
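The principles themselves can be illustrated without any Windows headers. Every COM interface ultimately derives from IUnknown, which carries the identity (QueryInterface) and lifetime (AddRef/Release) contract. The sketch below mimics that contract with simplified stand-in types (an IID as a plain string, made-up names), not the real SDK definitions:

```cpp
#include <atomic>
#include <cstring>

// Simplified stand-in: in real COM an IID is a 128-bit GUID.
using IID = const char*;

struct IUnknownLike {
    virtual long QueryInterface(IID iid, void** out) = 0;
    virtual unsigned long AddRef() = 0;
    virtual unsigned long Release() = 0;
    virtual ~IUnknownLike() = default;
};

static IID IID_IUnknownLike = "IUnknownLike";

// A component managing its own lifetime through reference counting.
class Component : public IUnknownLike {
    std::atomic<unsigned long> refs_{1};  // creator owns one reference
public:
    long QueryInterface(IID iid, void** out) override {
        if (std::strcmp(iid, IID_IUnknownLike) == 0) {
            *out = this;
            AddRef();          // every handed-out pointer is a new reference
            return 0;          // plays the role of S_OK
        }
        *out = nullptr;
        return -1;             // plays the role of E_NOINTERFACE
    }
    unsigned long AddRef() override { return ++refs_; }
    unsigned long Release() override {
        unsigned long n = --refs_;
        if (n == 0) delete this;  // object destroys itself at refcount zero
        return n;
    }
};
```

The real IUnknown uses GUIDs and HRESULTs, but the shape of the contract is the same: every pointer handed out counts as a reference, and the object deletes itself when the count drops to zero.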


But then, if you really want to write COM, you will use ATL (Active Template Library). This makes writing COM much easier ('easier' as in "I don't know what I'm doing"); writing ATL is a science in itself. I find it very hard to recognize the COM in ATL. An online introduction to ATL is here. The latest version number of ATL is unknown (to me). Visual Studio 2005 carried ATL 8.0; I've found no version numbers specific to VS2008 or VS2010.

Attributed ATL

There is a special flavor of ATL called "Attributed ATL", or ATL and Attributes. The attributes were added to make programming ATL simpler, and I think they do. However, the general public doesn't seem to have adopted this style of programming, due to diminished control over the ATL / COM code – an attribute would typically lead to source code insertion. See also the introduction to ATL referred to above. For an introductory article on Attributed ATL see here. In community forums the dominant sentiment is that Attributed ATL is deprecated.

Registration-Free COM

Registration-free COM objects are a special case of Isolated Applications and Side-by-Side Assemblies, a deployment and servicing technology for both native and .Net applications. Introductions to registration-free COM are here and here. Isolated applications and side-by-side assemblies depend on the use of manifests; an assembly is an abstract entity defined by its manifest. Then, of course, people choke on manifest creation. A handy tool is Regsvr42.

How does registration-free COM activation work? When a client application loads a dll holding a COM component, the loader checks whether the application has a manifest before it checks the registry. If there is an application manifest, the loader searches for the library manifest(s). If found, the COM component gets loaded and executed. This approach makes registration of COM components unnecessary. The requirements for registration-free COM to work are that the client is an application (something that has a process) and that this application has either an embedded or an external manifest associated with it. An embedded manifest cannot be overridden by an external manifest (except on Windows XP).
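For illustration, such a setup could look like the pair of manifests below. All names and the CLSID are made up; real manifests need the attributes your component actually has.

```xml
<!-- MyApp.exe.manifest: the application manifest declaring a dependency -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="MyApp" version="1.0.0.0" type="win32"/>
  <dependency>
    <dependentAssembly>
      <assemblyIdentity name="MyComLib" version="1.0.0.0" type="win32"/>
    </dependentAssembly>
  </dependency>
</assembly>
```

```xml
<!-- MyComLib.manifest: the library manifest describing the COM class -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="MyComLib" version="1.0.0.0" type="win32"/>
  <file name="MyComLib.dll">
    <comClass clsid="{00000000-0000-0000-0000-000000000001}"
              threadingModel="Apartment"
              progid="MyComLib.Greeter"/>
  </file>
</assembly>
```

With these two files next to the executable and the dll, the loader can resolve the class without any registry lookup; tools like Regsvr42 can generate the library manifest from an existing registered component.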


Silverlight supports COM interoperability. Wouldn't it be nice, then, if we could also ship COM components with the Silverlight Xap over the internet for execution elsewhere – COM components that we made ourselves, for instance. Of course, executing a COM component that is received over the web needs some user-initiated installation, in order to maintain the necessary protection against evils from the internet. So, we do not expect to be able to execute our COM component within the context of our Silverlight application's resources.

The current mission is to use the DirectX runtimes that are installed on every Windows 7+ system from a Silverlight application. In this we can take the NESL approach: install native software, like COM components, when the user installs the Silverlight application (installation of the Silverlight application is, for all practical purposes, required for use of the GPU). The installed application is the custom DirectX application to be used for ultimate 3D graphics and media effects in Silverlight. We could improve the user experience by skipping the installation of COM components, by using registration-free COM. Silverlight would then not require a search in the registry for a ProgId; the loader could read the manifest and simply load the required dll-s.

However, registration-free COM activation doesn't work for OOB Silverlight applications. These applications run in the process of sllauncher.exe, which has an embedded manifest, as inspection with a simple text editor reveals. So, that ends the story of registration-free COM and Silverlight.

What can we have?

We can have Silverlight applications, for instance out-of-browser applications, that consume COM components. If we bring our own component, we have to install it; registration-free activation will not work. Installation can be done the NESL way.

In some community forum threads it is written that only interoperability with out-of-process COM servers is possible. This is not correct. The consumed COM component should have a ProgId in the registry; it is not a requirement that it also is an application (has its own process). I've tested this statement with a simple in-process COM server that returns a string appended to its input string parameter: "Marc" -> "Hello Marc". 100,000 calls take about 760 ms on my machine. By contrast, an out-of-process server that does the same takes about 19 seconds. Both debug builds, by the way.
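For reference, the logic of that in-process test server is trivial. In the real server the method is defined in IDL and exchanges BSTRs; its behavior amounts to this sketch, with std::string standing in for BSTR and a made-up name:

```cpp
#include <string>

// In the real COM server this is roughly
//   STDMETHODIMP Greet(BSTR input, BSTR* output);
// here std::string stands in for BSTR so the logic can be shown portably.
std::string Greet(const std::string& input) {
    return "Hello " + input;
}
```

The dramatic timing difference between the two servers is entirely in the call overhead: the in-process call is a virtual function call with marshalling, while the out-of-process call crosses a process boundary for every invocation.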

Since the Windows 8 Windows Runtime is built on COM, investment in COM expertise also seems to be future proof.

PInvoke with Silverlight

As indicated in the previous blog post: PInvoke is the next best thing if you want to use native C++ code from Silverlight. And the first thing that actually works.

What is PInvoke

PInvoke, or Platform Invoke, is about the invocation from .Net code of dynamic link libraries (dll-s), often written in C++, and frequently part of existing systems like the Windows OS. The idea is that the library in the dll exports a number of symbols representing methods (including properties) and hooks for callback functions (think: events).
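On the native side, such an export typically looks like the sketch below. The function name is made up, and __declspec(dllexport) is MSVC-specific, hence the guard:

```cpp
// mylib.cpp - a minimal PInvoke-friendly export (hypothetical example)
#ifdef _WIN32
  #define EXPORT extern "C" __declspec(dllexport)
#else
  #define EXPORT extern "C"
#endif

// extern "C" suppresses C++ name mangling, so the symbol can be found
// by name from the managed side, e.g.:
//   [DllImport("mylib.dll")] static extern int AddInts(int a, int b);
EXPORT int AddInts(int a, int b) {
    return a + b;
}
```

Simple value types like int cross the boundary as-is; it is strings, structs and callbacks that bring in the marshalling machinery discussed below.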

Silverlight supports PInvoke and therefore also unsafe code. Unsafe code requires manually adding the "AllowUnsafeBlocks" tag to the Silverlight project file. Of course, the use of PInvoke requires elevated trust, both in-browser and out-of-browser. The dll must be a file on the client computer. If you want to bring along your own dll with your Silverlight application, you will have to write your library to a file somewhere, and then load it into memory for invocation. NESL provides an excellent example of this technique.

The hard part of PInvoke is marshalling (found that out already). You also have to remember that the file location of the dll you want to invoke needs to be in the search path, or explicitly stated in the DllImport attribute. In the context of Silverlight applications, the 'current directory' is not such a fantastic location to search for a dll.
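A sketch of a marshalling-friendly signature, to show the kind of discipline involved: rather than returning natively allocated memory, let the managed caller supply a buffer (e.g. a StringBuilder of stated capacity), so that no memory ownership crosses the managed/native boundary. The names are made up:

```cpp
#include <cstring>

// PInvoke-friendly: the caller allocates the buffer and passes its size.
// Returns 0 on success; otherwise returns the required buffer size, so
// the managed caller can retry with a bigger buffer.
extern "C" int GetGreeting(const char* name, char* buffer, int bufferSize) {
    const char* prefix = "Hello ";
    int needed =
        static_cast<int>(std::strlen(prefix) + std::strlen(name)) + 1;
    if (bufferSize < needed) return needed;  // tell caller what is needed
    std::strcpy(buffer, prefix);
    std::strcat(buffer, name);
    return 0;                                // success
}
```

Signatures like this sidestep the most common gotcha: the native side freeing (or never freeing) memory that the managed marshaller thinks it owns.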

What could you do with PInvoke

The driver for using PInvoke is that you can use resources already present on the client. If you bring your own dll, it can make use of other resources already present on the client. For instance, you could bring your own DirectX application, and have it use the DirectX runtime code present on the client. You can also bring along a dll that explicitly uses the spectacular parallel computing power of the GPU.

For me the reason is that PInvoke provides access to the DirectX and media resources that a standard installation of Windows 7 brings along. I want to just write an application that may use anything the Windows SDK has to offer related to DirectX, other graphics, and audio & video, and combine it with the goodies of Silverlight – notably its UI facilities, its software distribution facilities and its in-browser capabilities. Going directly to the Windows SDK keeps me independent of intermediate frameworks like XNA, SlimDX or SharpDX, so I can keep my software up-to-date myself. It also seems to me that going in this direction anticipates the release of Windows 8, the increasing emphasis on performance and smaller footprint that initiated the renaissance of C++(11), concurrent and asynchronous programming, and the openness of more platforms (form factors) to C++ development.

Of course, PInvoke requires the invoked dll to be present on the client. This may seem a drawback, but scenarios that motivate the use of dll-s for PInvoke (running software on the GPU, mainly) usually require (in practice) the Silverlight application to be installed as an OOB application with elevated trust anyhow. Saving a dll to disk will not make the difference.

Pros and Cons of PInvoke

So, what are the pros and cons of PInvoke? Advantages of PInvoke are that it is supported by Silverlight – as opposed to C++ managed extensions mixed with native code – that it allows the use of C++ dll-s, and that these dll-s can be packaged and distributed along with a Silverlight application. Compared to the use of COM components, the advantages are the smaller overhead, and the fact that no formal installation is required, because no insertions in the registry are required.

Drawbacks of PInvoke are: well – it's not as easy as a managed C++ dll that has native code mixed in; there is the marshalling overhead, implying restrictions on possibilities and performance; and PInvoke is not always that easy to use: there are some hard-to-find gotchas. And then there is this question: if I invest time to learn about PInvoke, what will this knowledge be worth when Windows 8 comes? Is PInvoke something of the PC, or will it be supported on other platforms as well?

Debugging PInvoke DLL-s from Managed Code

Many, many questions, and great despair, can be found in the community forums concerning stepping through the native code – called from managed code – when debugging PInvoke scenarios.

Indeed, there seems to be no way to debug a PInvoke dll from Silverlight in Visual Studio 2010. The nearest thing is to create a Windows Presentation Foundation (WPF) client and debug from there (bring over code from your Silverlight application). Then, stepping through the C++ code from the .Net client is easy.

In the Properties of the project:

  1. Enable unmanaged code debugging.
  2. If required, rebuild your code, both client and dll.

Set a breakpoint in the native code, and see that process execution stops right there.

How to Work with C++ Managed Extensions from Silverlight

If you wanted to add a C++ project to your Silverlight solution, could that be done? Yes, it can. This article about using C++ with WPF opens the door. However, advantages that hold for Windows Presentation Foundation (WPF) do not hold for Silverlight. The conclusion of this article will be that there is little to no gain to be expected. This article reports on finished research, and is actually a step-up to its successor about PInvoke. But do read on to see what one can and cannot do with C++ in the context of Silverlight.

A Direct and Simple Path from Silverlight to Native Code

Why on earth would you want to use C++ with Silverlight? Well, actually, for various reasons. The driver behind the research is that I sometimes need top performance code, and also would like to use software resources that are already present on a client computer.

What I would like to do is add a native code dll to my Silverlight App. The Silverlight App will probably run out-of-browser on the client with elevated trust. The merit of Silverlight in this scenario is its software distribution and update facilities, and the fact that it delivers premium user interfaces, easy multimedia and connectedness – all in an effective, that is, well-organized and relatively compact framework. The native code dll will be downloaded along with the Silverlight App as a resource. After loading it into memory, we use its functionality.

C++ is known for its performance, small-footprint programs and, recently, its practical parallelism. Managed extensions of C++ allow you to interact with .Net from C++. It is also possible to build assemblies that contain both native and managed parts. So, the hope is that there can be a direct and simple line from a Silverlight application to C++ native code. Developing the software this way, as opposed to using PInvoke and COM black boxes, is a lot more transparent, more direct, and hence simpler. It probably also results in faster software, since marshalling is not such a big issue if the conversion or memory management problem is right there before your eyes.

The above scenario will be reality in Windows 8 (if that will be its name), where the WinRT layer will take the role of the CLR (in this scenario). But for Windows 7 it is not. Indeed, you can write a managed C++ class that, if compiled with the /clr:safe compiler option, can be loaded and executed from a Silverlight class. However, the /clr:safe option precludes the use of unverifiable code. See the MSDN library articles on pure and verifiable code. There seems to be a loophole in the enforcement of the 'verifiability' of the C++ class, but a loophole is not a sound basis for software engineering practices (or maintainable software).

SlimDX and SharpDX

Among others, SlimDX uses the path described above to connect the DirectX11 and .Net worlds. This works in the context of general .Net software. SlimDX is therefore not an easily available alternative to XNA in Silverlight. One might be on the lookout for an alternative, since the XNA subset in Silverlight 5 is restricted, and XNA is based on DirectX9, which is increasingly seen as outdated by now. See e.g. this blog post by Aaron.


The MSDN library states at multiple points that PInvoke is the alternative in case you want to interact with unverifiable code.

A Windows Port of the 3D Scan 2.0 Framework

The 3D Scan 2.0 Framework, by the Chair for Virtual Reality and Multimedia of the Computer Science Institute, TU Freiberg, is a software framework that uses the Microsoft Kinect to create 3D scans. The software is based on a number of well-known open source frameworks:

OpenKinect, OpenSceneGraph, the ARToolkit, and VCGLib. Oddly enough, the 3D Scan 2.0 Framework is written in C++ and available for Linux only. Oddly, since the Kinect is a Microsoft product, and all the supporting frameworks above have Windows versions as well. Reason to try and port the framework to Windows.

Downloading and Building the Supporting Frameworks

The Windows ports of the supporting libraries are of high quality and present no problems.

For OpenKinect you just follow the instructions on the Getting Started page. These instructions will let you download and use tools like CMake-GUI, which translates files from the Linux build system into Visual Studio solutions and projects; TortoiseSVN, a Subversion client; 7-Zip; and Notepad++. This latter tool knows how to handle the Linux / Unix vs. Windows LF/CR issues better than the standard Windows tools, which provides for much more comfortable consumption of readme-s.

Part of OpenKinect for Windows is libusb10emu. The 3D Scan Framework also needs it, so the source and header files will have to be added to the SciVi solution, see below.

The instructions let you download and install libusb-win32, pthreads-win32, and Glut. All provide binaries and include files. Other supporting frameworks use these as well. After working my way through the instructions, OpenKinect works like a charm.

OpenSceneGraph provides the source code, but you can download nightly release and debug builds for Windows from AlphaPixel, also for Visual Studio 2010, in both 32- and 64-bit versions. The test programs then usually run fine (there are many, and some just don't run at once).

VCGLib just needs to be downloaded, no building required.

The ARToolkit does require building. In order to build it with Visual Studio 2010, just convert the solution and build it (a few times; the order of the projects is not such that a single build will do). After building, a number of test applications will run nicely.

Building the 3D Scan 2.0 Framework

To build the 3D Scan 2.0 Framework you have to create your own Visual Studio 2010 solution. This solution has three projects: the Poisson lib, the Poisson surface reconstruction algorithm; libusb10emu, which emulates the libusb1.0 library – usb aspects available for Linux but not for Windows; and SciVi, the scanner, the generator and the test & benchmark applications.

Building the solution brought up a few strange errors, like the use of "and" instead of "&&" in the code, but no big issues.

The main difficulty in building the framework revolves around libusb10emu, as the test applications below show.

The Test Applications

The framework contains a number of test applications: testOsg, testKinect, test, testColor, and testPoisson.

The testOsg Application

Worked immediately, without a problem; see the image below.

Shrinking or expanding a cube is done by rolling the scroll wheel after clicking the cube involved.

The testKinect Application

This is a relatively simple application that reads the serial number from the Kinect, and queries and displays its video modes, both color and depth. At this point we ran into some hard problems with the libusb10emu lib. The application presumes an initialized usb device here, one that has a 'context' and a mutex. However, there is no initialized libusb context, and there is not going to be one either. This results in the application using a non-existing mutex, and bingo!

On the other hand, if we skip the code that digs up the serial number and just provide a mock-up (I chose ‘1’), the application works fine.

The Test Application

The central test application. It also digs up the serial number of the Kinect; it requires the serial number in order to load and store calibration data. So, I just made up a mock file name in the code, "test", and skipped querying the serial number. Then the application works fine. It even reads calibration data, and does some calibrating, after renaming a designated file to "test.yml".

If you look at the picture of me (yes, that's me) made using the Test application, and compare it to the images in the calibration section of the documentation, you will conclude that the calibration is sufficient, compared to the image from the uncalibrated Kinect.

The code also contains undocumented "I" and "k" control options that raise and lower an AR threshold; I don't know what that does. There is also an "s" control button. It provides statistics overlays: push it three times for increasingly more information, and a fourth time to remove all overlays again.

A quirk of this application is that it randomly portrays its subject mirrored or not. One time you'll see the image presented correctly; another run the scene is presented mirrored.

The Poisson Benchmark and testPoisson Applications

After adjustment of the hard-coded file paths, the program gets to work as seems to be its design. It loads the vertex file (phone), counts the vertices (80870), and starts to remove duplicates. Then, after a short while, the application throws an exception.

The same holds for the testPoisson application. The exception is generated in code for a not very accessible algorithm that removes duplicate vertices. I don't know why the application throws an exception, but the code both decrements an iterator (and also changes its value in other ways) and erases elements from the vector. Most probably the algorithm needs some specific parameters to operate correctly.
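For what it's worth, a way of removing duplicates that avoids exactly this kind of iterator trouble – erasing from a std::vector invalidates iterators at and after the erased element – is the sort / unique / erase idiom. This is only a sketch of the idiom, not the framework's actual algorithm:

```cpp
#include <algorithm>
#include <vector>

// Remove duplicate vertices without juggling iterators mid-loop:
// sort brings duplicates together, std::unique compacts the range,
// and a single erase trims the leftover tail.
template <typename Vertex>
void RemoveDuplicates(std::vector<Vertex>& vertices) {
    std::sort(vertices.begin(), vertices.end());
    vertices.erase(std::unique(vertices.begin(), vertices.end()),
                   vertices.end());
}
```

It does require an ordering on the vertex type, and it reorders the data, which may or may not be acceptable for the mesh pipeline.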

The testColor Application

This one couldn't be run, because the required data isn't available for download at the project's web site.

The Scan and Generate Applications

The Scan application generates a point cloud with data from the Kinect; the Generate application creates a mesh from the point cloud.


The Scan application required an adaptation in the OSG lib. The Kinect delivers RGB color data, not RGBA color data, so the assumptions in the code had to be adapted accordingly.
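The adaptation boils down to widening the Kinect's 3-bytes-per-pixel RGB data to the 4-bytes-per-pixel RGBA the code expected, by appending an opaque alpha byte. A sketch (names made up, not the OSG code itself):

```cpp
#include <cstdint>
#include <vector>

// Widen packed RGB (3 bytes/pixel, as delivered by the Kinect) to RGBA
// (4 bytes/pixel) by appending a fully opaque alpha byte per pixel.
std::vector<std::uint8_t> RgbToRgba(const std::vector<std::uint8_t>& rgb) {
    std::vector<std::uint8_t> rgba;
    rgba.reserve(rgb.size() / 3 * 4);
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3) {
        rgba.push_back(rgb[i]);      // R
        rgba.push_back(rgb[i + 1]);  // G
        rgba.push_back(rgb[i + 2]);  // B
        rgba.push_back(255);         // A: fully opaque
    }
    return rgba;
}
```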

I made a scan of our teapot. With some imagination you can discern the teapot in the point cloud, but it is not very convincing. Obviously, some extra parameter tuning needs to be done to obtain results that match the examples from the project site in quality, although I don't know how.


The Generate application loaded the point cloud file successfully, and then ran into the above-mentioned error when removing duplicates.


I think the idea of using the Kinect for 3D scanning is fabulous. However, it turns out that it is not straightforward to use the software. A number of questions and problems arise, mainly concerning removing duplicate vertices and tuning the scan mechanism (calibration?). Perhaps the members of the 3D Scan 2.0 Framework team can help out. If so, more results will follow on this blog.

Silverlight 5 – Moon boundary transition effects

Playing around with the Hollow Moon Video Show, it came to me that the transition from outer space into the moon, and vice versa, needs transitional effects.

I also noted that it is hard to stop at exactly the position one desires around the video screen. This calls for a throttle!

Transitional effect 1: Entering the moon

It seems nice that, in approaching the moon, the hull appears to open up and show the interior world – as an invitation to come and explore. The next question is how to implement such a feature. The choice of a shader effect seems preferable over a vertex-based effect here. A vertex-based effect means that an animation will move parts of the sphere (the moon) around to open up. But the same vertices are used to project the inner world, which would then be invisible where the moon surface has been removed. So, a shader effect it will be.

In order not to complicate matters too much, the expansion shader used earlier was reused. The effect, however, presents itself entirely differently in this new setting; expansion is now an inverse function of the distance to the surface, and it works on a sphere, not a flat surface. This provides for a quite nice effect.

You can't immediately see the video screen now; I've increased the size of the moon significantly in order to improve the illusion that one is travelling through an enormous and desolate landscape. See the section on throttling for more.

Transitional effect 2: Leaving the moon

The shader effect above was relatively simple to implement. Now, however, the challenge is to create a shader effect within an environment mapping effect, where shader sampling is in 3D space. Behold: it is not the math, it is the shader engine (level 2) that responds to any creative idea with the message that the shader is too complex to process. A disappointment. So, for now I've settled for a simple black spot (a hole in the moon, if you like) that shows the stars in the background (to be implemented).

The idea is simple: a small black spot is projected onto the moon's interior surface where we will leave the moon. We approach, the spot grows bigger, and then we pass through. We create the hole by just clipping the shader texels at the desired location. But how does one find the right position for the black hole?

The answer to that question was not easy, and it is not quite satisfactory in its final form. The Silverlight XNA library contains an Intersects method for both the BoundingSphere and the Ray classes. The idea is simple: if you have a position and a direction – and a chase camera usually has – you can define a ray, and the Intersects method provides you with the distance to impact, if any. However, this method does not work for rays that originate within the BoundingSphere. OK, no problem, I'll just project myself out of the BoundingSphere by a known distance, turn around, and then obtain the distance to impact. This gives you a value, BUT the method provides you with a distance to the center of the BoundingSphere, not to its hull, and not in the provided direction.

So, as an alternative, I found a handy article at StackOverflow containing a decent treatment of how to mathematically solve the intersections of a sphere and a line (ray). A static method implementing the theory takes 0.01 seconds to compute the distance to either zero, one or two intersection points.
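The math is the classical one: substitute the ray P = O + tD into the sphere equation |P - C|^2 = r^2 and solve the resulting quadratic in t; zero, one or two real roots correspond to zero, one or two intersection points. A sketch of such a method (my own formulation, not the StackOverflow code or the XNA API):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static double Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Distances t along the ray origin + t*dir at which it intersects the
// sphere |p - center| = radius. Substituting the ray into the sphere
// equation gives, with m = origin - center:
//   (d.d) t^2 + 2 (d.m) t + (m.m - r^2) = 0
std::vector<double> RaySphere(const Vec3& origin, const Vec3& dir,
                              const Vec3& center, double radius) {
    Vec3 m{origin.x - center.x, origin.y - center.y, origin.z - center.z};
    double a = Dot(dir, dir);
    double b = 2.0 * Dot(dir, m);
    double c = Dot(m, m) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return {};                    // ray misses the sphere
    double s = std::sqrt(disc);
    if (s == 0.0) return {-b / (2.0 * a)};        // tangent: one point
    return {(-b - s) / (2.0 * a),                 // near intersection
            (-b + s) / (2.0 * a)};                // far intersection
}
```

Note that this formulation handles rays starting inside the sphere naturally: one root is then negative (behind the camera) and the other is the distance to the hull ahead, which is exactly the case the XNA method could not answer.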

This solution works fine most of the time, but if you navigate through e.g. the top of the moon, the black spot gets distorted into an asymmetrical form that resembles an ellipse, and it gets displaced. It seems to me that the distortion indicates the limits of the rendering system, rather than an error in the algorithm that positions the black spot. The image below depicts the black hole in action.

OK, I do feel an obligation to improve on this, but it will have to do for now. As soon as I have thought out a dramatic effect that the shader engine can render, I will implement it. I promise!

Throttling and the parallax-ish illusion

The idea of environment mapping is that the background is always unboundedly far from the camera, so that a parallax effect is created: the background doesn't move with the camera (or observer). One technique to implement this is to move the background with the camera (same translation), or alternatively to stitch the background to the far plane.

Although this is theoretically convincing, and sometimes right in practice, I find it unsatisfactory. The background seems dead, not just far away. I've been in open ground a lot, and when you travel around, you will notice that your view of the landscape changes, especially if you watch the same scenery for a prolonged time. It changes slowly, though.

In the current situation, that of the Hollow Moon, there is also little else for visual orientation than movement of the landscape. So we need to move the landscape a little in order to generate the illusion of travelling through it. If not, encountering the video screen, or the boundaries of the moon itself, comes as a complete surprise.

Now, if we moved slowly relative to the moon, and also gave the impression that the moon is huge, it would take forever (well, weeks) to get from the starting point to the center of the moon. So, we need a throttle. The throttle can be used to move towards and into the moon quickly, and to travel inside the moon increasingly slowly. A throttle has been implemented; it is controlled by the S (slower) and F (faster) keys. I've also doubled the maximum speed and increased the size of the moon relative to the video screen; it is now about 1250 times the size of the screen. How to operate the throttle is also indicated in the start screen message.
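The throttle itself can be as simple as a clamped multiplicative speed factor stepped by the S and F keys. The step factor and bounds below are made-up values; only the mechanism is the point:

```cpp
#include <algorithm>

// A simple multiplicative throttle: F doubles the speed, S halves it,
// clamped to [minSpeed, maxSpeed] so the camera never stalls or runs away.
class Throttle {
    double speed_;
    double minSpeed_, maxSpeed_;
public:
    Throttle(double initial, double minSpeed, double maxSpeed)
        : speed_(initial), minSpeed_(minSpeed), maxSpeed_(maxSpeed) {}
    void Faster() { speed_ = std::min(speed_ * 2.0, maxSpeed_); }  // F key
    void Slower() { speed_ = std::max(speed_ * 0.5, minSpeed_); }  // S key
    double Speed() const { return speed_; }
};
```

Multiplicative steps suit this scenario better than additive ones: near the video screen you want fine-grained slow motion, out in space you want to cross large distances with a few key presses.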

Demo Application

A demo application of phase 5 of this project can be found here, scroll down to “Moon boundary transition effects” (no, the # tags still don’t work).

Wish list

It turns out that the further the development of this moon thing progresses, the more wishes arise that are not (yet) implemented. That in itself is curious, but well. This is to be a recurring section of those wishes, if only as a reminder of imperfection, and a source for philosophical contemplation. So far we have the following wishes.

Dramatic exit effect when leaving the moon

As discussed above, but feasible for the level 2 shader compiler, in the context of environment maps.

Elaborate sound effects that include panning

As noted, it is not easy to implement 3D effects on the sound sources, split by stereo channel. We cannot apply panning to the individual sound channels of a video. If we could, the user would experience the aural effect of the sound pillars rotating around. A Doppler effect would also be nice.

Textures that indicate speakers at the sound pillars

The sound pillars that flank the video screen are evenly colored (gold). They could do with some grid-like textures that indicate the presence of speakers at those positions. We wouldn't go as far as to implement moving speakers (woofers), would we?

Higher resolution graphics and MipMapping

If the camera approaches the moon surface or the interior surface, the low resolution of the images used begins to show. So, we would like to be able to use higher resolutions. We also should implement mipmap levels (when you get closer to the surface, a higher resolution image is presented), I know. Mipmapping has the drawback that it greatly increases the amount of data that has to be sent over the internet. A balance has to be struck here.