Archive for November, 2014

Harder to C++: Monads for Mortals [4], References and Move Semantics

In part 3 we saw a simple monad with delayed evaluation. Until we find a use for delayed evaluation that adds value to mainstream imperative practice, we will focus on the eager, or strict, evaluating monad. At least until the shootouts, for sure.

In part 2 we saw a simple implementation of the eager monad. A drawback of that implementation is that it copies its value around, which leads to poor performance for large data structures.

In this part we will examine an improved implementation that has the same time performance for any data size (that fits in RAM). We will improve the design of part 2 by using references and move semantics. A good read on move semantics is, in my view, this article on StackOverflow.

Of course, we want to see the move semantics in action, so we can be sure the presented code indeed uses move semantics and references, and hence is a performant solution. To trace how an object representing the monad value is moved around, we construct a class that will serve as the value, containing all the necessary constructors, the destructor and the assignment operators, each fitted with an output statement reporting its execution.

We will add the same trace instruments to the monad template. This requires that the unit function be integrated into the monad template in the form of various constructors (and hence destructors) and assignment operators. Integrating the unit function into the monad template this way is no different from the practice Eric Lippert exercises in his tutorial on monads. There will be no use of a so-called swap function in the applicable constructors and assignment operators; that would again merge what we are trying to separate here.

Implementation of a Monad with References and Move Semantics

The Value Class

We will use the following class as the value in the monad to evaluate the use of refs and move semantics.

The constructors, destructor and assignment operators are all textbook examples. The class has a payload, which we will use in performance measurements: a char array of size sz. The ValueObject class has a method (at the bottom) that returns the size sz.

class ValueObject
{
    unsigned int sz;
    char* payload;
public:
    // default
    ValueObject();
    // Ref argument constructor
    ValueObject(const int& i);
    // Copy constructor
    ValueObject(const ValueObject& v);
    // Move constructor
    ValueObject(ValueObject&& v);
    // Destructor
    ~ValueObject();
    // copy assignment
    ValueObject& operator=(const ValueObject& v);
    // move assignment
    ValueObject& operator=(ValueObject&& v);

    unsigned int size() const;
};

Each constructor, destructor and assignment operator writes a line to the console to mark that it has been called.
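The post shows only the declarations. A possible set of definitions, each reporting its own execution as described, could look like the following sketch of mine (it assumes a heap-allocated payload and <algorithm> for std::copy; the original messages and details may differ).

ValueObject::ValueObject() : sz(0), payload(nullptr)
{
	cout << "ValueObject: default constructor" << endl;
}

ValueObject::ValueObject(const int& i) : sz(i), payload(new char[i])
{
	cout << "ValueObject: int ref constructor" << endl;
}

ValueObject::ValueObject(const ValueObject& v) : sz(v.sz), payload(new char[v.sz])
{
	cout << "ValueObject: copy constructor" << endl;
	if (sz) std::copy(v.payload, v.payload + sz, payload);
}

ValueObject::ValueObject(ValueObject&& v) : sz(v.sz), payload(v.payload)
{
	cout << "ValueObject: move constructor" << endl;
	// leave the source in a valid, destructible state
	v.sz = 0;
	v.payload = nullptr;
}

ValueObject::~ValueObject()
{
	cout << "ValueObject: destructor" << endl;
	delete[] payload;
}

ValueObject& ValueObject::operator=(const ValueObject& v)
{
	cout << "ValueObject: copy assignment" << endl;
	if (this != &v)
	{
		delete[] payload;
		sz = v.sz;
		payload = new char[sz];
		if (sz) std::copy(v.payload, v.payload + sz, payload);
	}
	return *this;
}

ValueObject& ValueObject::operator=(ValueObject&& v)
{
	cout << "ValueObject: move assignment" << endl;
	if (this != &v)
	{
		delete[] payload;
		sz = v.sz;
		payload = v.payload;
		v.sz = 0;
		v.payload = nullptr;
	}
	return *this;
}

unsigned int ValueObject::size() const { return sz; }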

To comfortably output the trace we use an output operator overload:

ostream& operator<<(ostream& os, const ValueObject& v)
{
	os << v.size();
	return os;
}

We test the ValueObject class with code that exercises all methods above.

int _tmain(int argc, _TCHAR* argv[])
{
	SetConsoleTitle(L"Harder to C++: Monads [4]");

	{
		ValueObject v1;
		ValueObject v2(10);
		ValueObject v3(v2);
		ValueObject v4(std::move(v3));
		v4 = v1;
		v2 = std::move(v1);
	}
	// The destructors get called at the end of the scope

	cin.get();
	return 0;
}

Which results in the following output.

The Monad Template

The monad type constructor does double duty as a unit function, with a broad spectrum of constructors. Below you will find the declarations in the template definition.

template<typename V> struct monad
{
	// Wrapped value
	V value;

	// std::move is here to coerce the use of v's move constructor, if any
	// The parameter V& is *not* const. Const coerces the use of V's copy
	// constructor
	monad(V& v); // : value(std::move(v))
	// std::move is here to coerce the use of v's move constructor, if any
	monad(V&& v); // : value(std::move(v))
	// copy constructor
	monad(const monad& m);
	// move constructor
	monad(monad&& m); // : value(std::move(m.value))
	//destructor
	~monad();
	// copy assignment
	monad& operator=(const monad& m);
	// move assignment
	monad& operator=(monad&& m);
};

Most implementations are standard. Note that there is no parameterless default constructor. The signature of the first constructor is special in that one would expect a const V& parameter, like so

    monad(const V& v) : value(std::move(v))

But doing that would coerce the use of the ValueObject’s copy constructor instead of the desired move constructor: std::move applied to a const object yields a const rvalue, which cannot bind to the move constructor’s non-const V&& parameter, so overload resolution falls back to the copy constructor, whose const V& parameter does accept it.
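A minimal illustration of this point (my own sketch, not from the post):

void move_vs_copy_demo()
{
	const ValueObject cv(10);
	ValueObject a(std::move(cv));	// const rvalue: cannot bind to ValueObject&&, so the copy constructor runs

	ValueObject v(10);
	ValueObject b(std::move(v));	// non-const rvalue: the move constructor runs
}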

As with the ValueObject class, each method writes a line to the console to mark that it has been executed.
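One way the definitions could look (a sketch of mine; the initializer lists follow the comments in the declarations above, the trace messages are illustrative):

template<typename V> struct monad
{
	// Wrapped value
	V value;

	monad(V& v) : value(std::move(v))
	{ cout << "monad: V& constructor (moves from v)" << endl; }

	monad(V&& v) : value(std::move(v))
	{ cout << "monad: V&& constructor (moves from v)" << endl; }

	monad(const monad& m) : value(m.value)
	{ cout << "monad: copy constructor" << endl; }

	monad(monad&& m) : value(std::move(m.value))
	{ cout << "monad: move constructor" << endl; }

	~monad()
	{ cout << "monad: destructor" << endl; }

	monad& operator=(const monad& m)
	{ cout << "monad: copy assignment" << endl; value = m.value; return *this; }

	monad& operator=(monad&& m)
	{ cout << "monad: move assignment" << endl; value = std::move(m.value); return *this; }
};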

We test the monad template with code that exercises all methods above.

int _tmain(int argc, _TCHAR* argv[])
{
	SetConsoleTitle(L"Harder to C++: Monads [4]");

	{
		int i = 11;
		monad<int> m1(i);
		monad<int> m2(10);
		monad<int> m3(m2);
		monad<int> m4 = std::move(m2);
		m4 = m3;
		m4 = std::move(m1);
	}
	// The destructors get called at the end of the scope

	cin.get();
	return 0;
}

Which results in the following output.

Monad Bind Function

The bind function has not really changed compared to previous versions, but it now reports on its use (via a line counter, line, that numbers the trace output).

// Bitwise OR operator overload: bind function
template<typename A, typename R>
monad<R> operator| (const monad<A>& mnd, monad<R>(*func)(const A&))
{
	cout << line++ << "\t" << "---Function bind" << endl;

	return func(mnd.value);
}

For use with the ValueObject, we will add two example functions. In analogy to previous installments, the first function divides the size of the payload by 2, the second function tells us if the size is even.

// divides arg by 2 and returns result as monadic template
monad<ValueObject> divideby2(const ValueObject& v)
{
	cout << line++ << "\t" << "---Function divideby2" << endl;

	return monad<ValueObject>(ValueObject(v.size() / 2));
}

// checks if arg is even and returns result as monadic template
monad<bool> even(const ValueObject& v)
{
	cout << line++ << "\t" << "Function even" << endl;

	return monad<bool>(v.size() % 2 == 0);
}

Tracing Moves

We are now in a position to trace the application of functions by the monad, and verify that indeed only references and move semantics are applied. We do this with the following code.

int _tmain(int argc, _TCHAR* argv[])
{
	SetConsoleTitle(L"Harder to C++: Monads [4]");

	{
		cout << line++ << "\t" << "---Object creation" << endl;

		ValueObject v1(22);
		monad<ValueObject> m1(v1);

		cout << line++ << "\t" << "---Creation of m2" << endl;

		auto m2 = m1 | divideby2 | even;

		cout << line++ << "\t" << boolalpha << m2.value << endl;

		cout << line++ << "\t" << "---Leaving scope" << endl;
	}
	// The destructors get called at the end of the scope

	cin.get();
	return 0;
}

Which results in the following output.

The good news is: the code indeed only uses references and move semantics.

But OMG, What A Shocking Lot Of Function Calls! What a Hydrocephalic Overhead!!!

Look what happens between entering the function divideby2 and the subsequent call to the bind function: no fewer than 8 calls to constructors and destructors. What a shocking waste of time and effort. To create 2 objects, apply 2 functions, and then clean up the mess, 24 function calls are required. What circumstances could ever justify such an overhead?

But of course, we cannot blame just the monad for this; it is also the move semantics. And that is what really shocks me. How could they do that? Now I understand why the committee is ready to embrace the monad: who cares about the overhead of the monad if you already have the overhead of move semantics?

Minimal Verification that Time Performance is Independent of Size

Ok, let’s put the drama behind us and see if this solution is indeed size independent. We will compare a payload of size 1 to a payload of 10,000,000,000.

To make the test size independent we will use an identity function. This function does not modify the data; it just moves the data on (yes, it is still immutable data). The identity function will not muddle the time measurements with memory (de-)allocations for the actual data, so we measure only the overhead of the monad and the move semantics.

Time measurements will be made with the high resolution performance timer by Simon Wybranski for a release win32 build.

The identity function looks like this.

monad<ValueObject> identity(const ValueObject& v)
{
	// const_cast selects the monad(V&) constructor, which moves the value on rather than copying it
	return monad<ValueObject>(const_cast<ValueObject&>(v));
}

The test code is as follows.

int _tmain(int argc, _TCHAR* argv[])
{
	SetConsoleTitle(L"Harder to C++: Monads [4]");

	double total = 0;
	for (unsigned int k = 0; k < 10; ++k)
	{
		//ValueObject v1(1);
		ValueObject v1(10000000000);
		monad<ValueObject> m1(v1);

		double elapsed = 0;
		for (unsigned int i = 0; i < 10000; ++i)
		{
			t.Start();

			auto m2 = m1
			| identity | identity | identity | identity | identity
			| identity | identity | identity | identity | identity
			| identity | identity | identity | identity | identity
			| identity | identity | identity | identity | identity
			| identity | identity | identity | identity | identity
			| identity | identity | identity | identity | identity;

			t.Stop();
			elapsed += t.Elapsed();
		}
		// The destructors get called at the end of the scope

		cout << "Time 30 x 10000 applications: " << elapsed << " ms" << endl;
		total += elapsed;
	}

	cout << "Average time over 30 x 10000 iterations: " << total / 10 << " ms"
		<< endl;

	cin.get();
	return 0;
}

A typical output is

Comparable values (about 2 ms) are indeed obtained for both the large and the small payload.

Pros, Cons, and Questions

So, a strict monad that is based on references and move semantics takes about 2 ms to do nothing 30,000 times, independently of the size of the data structure 🙂.

The amount of time per function application is small, so we might conclude that references and move semantics indeed provide an advantage over code that always copies its data structures, as we had in parts 1 and 2.

On the other hand, I find the number of constructor and destructor calls distressing: returning a monad from a function takes two constructor calls and two destructor calls. Can we do better than that? And is it worth the trouble to try?

I think it surely is. There is always the potential comparison between two programs that are functionally equivalent, where one needs only half the instructions the other needs. The advantages of the smaller program are of great importance: it can run equally fast on much less expensive hardware, and it can save 50% battery power compared to the other program. On equal hardware it leaves room for the execution of other instructions that can contribute to a significantly better UX. So, finding the best performance brings a competitive edge that can make a world of difference in the long run, or at large scale.

In the next episode we will investigate whether a pointer-based monad can do any better than the constructs presented here.


Harder to C++: Monads for Mortals [3], Lazy implementation

This blog post is part 3 in a series. This part presents a simple implementation of a lazy evaluating monad. It is a minimal variation of the implementation in part 2.

Below we will see how the implementation is constructed; then an example is given that provides initial verification that the construction is indeed a monad.

Implementation of a Lazy Monad

Implementation starts again with some includes and a using statement. The first important construct is the monad type constructor, a C++ template which in this particular case takes the form of a std::function template:


// Monad type constructor: a std::function template
template<typename T>
using monad = function < T() >;

The using statement defines ‘monad’ as a type alias.

Of course, a suitably formed function pointer or function object would suffice as well. The std::function template has the advantage that it looks neat, and we can instantiate it in an elegant way using a lambda expression, thus upholding some of the functional character of the monad’s roots.
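As a trivial illustration (mine, not from the post), any callable with the right signature is a monad<T>:

monad<int> m = [](){ return 42; };	// wraps the computation, not the value
int i = m();				// evaluation happens only here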

The unit function, which creates instances of the ‘monadic’ function object, is as follows:

// Monad unit function, wraps a value in a lambda
template<typename V>
monad<V> unit(V val)
{
       return [=](){ return val; };
}

The unit function returns a lambda expression that returns the value when it is executed (at some later time, if at all).

The bind function follows the description of bind operators to the letter. It returns a function that, when called, evaluates the monad argument and applies the function argument to the result of that evaluation:


// Monad bind operator
template<typename A, typename R>
monad<R> operator|(monad<A>& mnd, monad<R>(*func)(A))
{
       // return a callable to evaluate later
       return [=]()
       {
             A unwrapped = mnd();
             monad<R> result = func(unwrapped);
             return result(); // call the result: this lambda must return an R, not a monad<R>
       };
}

The returned function is not evaluated until the function call operator is applied to it. Hence, using this bind operator we can construct long and complex function compositions, which get evaluated only when we explicitly request it.
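A quick way to make the deferral visible (my sketch, with a hypothetical traced variant of divideby2):

// Hypothetical helper that reports when it is actually evaluated
monad<int> traced_divideby2(int i)
{
       cout << "divideby2 evaluated" << endl;
       return unit<int>(i / 2);
}

// ... in main:
auto m1 = unit<int>(22);
auto m2 = m1 | traced_divideby2;	// composition built, nothing printed yet
cout << "calling m2()" << endl;
int r = m2();				// "divideby2 evaluated" appears only now; r == 11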

Together, the type constructor, the unit function and the bind operator form the complete implementation of the monad. Yes, I know, there should be two additional operations defined as well: the sequencing operator and the fail operator (see e.g. Mike Vanier, part 3). However, many texts discussing monads indicate that these operations are uninteresting, so we will skip them in this series.

Let’s add the same functions as in part 2, and run a simple composition:


// divides arg by 2 and returns result as monadic template
monad<int> divideby2(int i)
{
       return unit<int>(i / 2);
}

// checks if arg is even and returns result as monadic template
monad<bool> even(int i)
{
       return unit<bool>(i % 2 == 0);
}

int _tmain(int argc, _TCHAR* argv[])
{
       SetConsoleTitle(L"Harder to C++: Monads [3]");

       // Wrap 22 in monadic template
       auto m1 = unit<int>(22);

       // Apply bind 4 times in function composition
       auto m2 = m1 | divideby2 | divideby2 | divideby2 | even;

       cout << boolalpha << m2() << endl;

       cin.get();
       return 0;
}

Note that the functions ‘divideby2’ and ‘even’ have not changed. The only change in the _tmain function is that in the output statement m2.value has been replaced by m2(). Note the parentheses, which trigger evaluation of the lazily constructed composition.

Minimal Verification that Monadic Laws Hold

The minimal verification is the lazy analog to verification in the eager case. It looks like this:


int _tmain(int argc, _TCHAR* argv[])
{
       SetConsoleTitle(L"Harder to C++: Monads [3]");

       // Wrap 22 in monadic template
       auto m1 = unit<int>(22);

       // Left identity
       // First explicitly convert to function pointer (because of template)
       auto pf1 = &unit < int >;
       // then apply bind operator
       auto m2 = m1 | pf1;

       cout << "Left Identity result: " << m2() << endl;

       // Right identity
       auto m3 = m1 | divideby2;
       auto m4 = divideby2(22);

       cout << boolalpha << "Right Identity result : " << (m3() == m4()) << endl;

       // Associativity
       // Case 1: equivalent to bind(m1, divideby2);bind(m5, even)
       auto m5 = m1 | divideby2;
       auto m6 = m5 | even;

       // Case 2: equivalent to bind(m1, f), where f(i) = bind(divideby2(i), even)
       // Create function composition
       auto f = [](int i){ return divideby2(i) | even; };

       // Explicitly convert f to function pointer
       auto pf2 = static_cast<monad<bool>(*)(int)>(f);
       auto m7 = m1 | pf2;

       cout << boolalpha << "Associativity result : " << (m6() == m7()) << endl;

       cin.get();
       return 0;
}

which generates output:

as expected (and desired).

Pros, Cons

Some texts declare that lazy implementation provides advantages over eager or strict evaluation. The advantages presented are:

  1. Calculations are only made when required, and only insofar as required.
  2. ‘Lazy’ allows the evaluation of lists that are unbounded (endless, infinite).
  3. Delayed evaluation of function compositions allows for an optimization before evaluation of the composition takes place.

What to think of this?

The first advantage strikes me as odd; I usually do not write programs that make calculations whose results are not required.

The second advantage is odd as well. My computer is as powerful as a Turing machine, hence it can process input lists of unbounded length, without lazy evaluation.

The third advantage could be a real advantage, but I wouldn’t know how to do this in C++; the advantage presupposes that it is possible to create code at runtime and then compile and execute it, which isn’t true for C++.

I do see a disadvantage, though, and we will get to it in detail when doing some shootouts between the various alternative monad implementations. The disadvantage is that the software does extra work. It first creates an expression, then evaluates it. So in C++, for each lazy evaluating program, there is an equivalent eager evaluating program that is faster and smaller, hence more competitive.

In the next part of this series we will work on an (eager) implementation that has the same time performance characteristics for any size value. There will be many references and move semantics. Hmm … mjummy 😉

Harder to C++: Monads for Mortals [2], First implementation

This blog post is part 2 in a series. Part 1 is a general introduction. This part discusses a first implementation of the monad in C++.

There are two fundamentally different approaches to implementing monads in C++. Let’s first discuss the approach we will not pursue.

A Way We Will Not Implement a Monad in C++

The C++ template mechanism is a Turing-complete functional language (this discovery, by Erwin Unruh, came as a bit of a surprise). As a consequence, C++ templates are sometimes used for template metaprogramming: programs can be written using (only) C++ templates and are, as such, evaluated / executed at compile time. There are not many practical uses for template metaprogramming. One practical use was to unroll (small) loops in order to gain performance. This gain, however, can nowadays also be obtained from modern, optimizing C++ compilers.
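The classic illustration of compile-time evaluation with templates (not from the post) is a recursive template like this:

// Evaluated entirely by the compiler; no run-time code is generated
template<unsigned N> struct factorial { static const unsigned value = N * factorial<N - 1>::value; };
template<> struct factorial<0> { static const unsigned value = 1; };

static_assert(factorial<5>::value == 120, "computed at compile time");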

The discovery that the C++ template mechanism constitutes a functional language has also led some people to implement monads in this template-metaprogramming-based functional language. Examples are the Cee Plus Plus Monad Example and Implementing Monads for C++ Template Metaprograms. However, these implementations live in a part of C++ that is little used. Moreover, the required syntax is complex, which makes them unattractive for general use.

The above approach and the goal pursued here are, I think, trying to answer different questions. The above examples seem to try to mimic functional programming with C++, and implement monads in that context. Rather than bringing C++ to functional programming, I want to know whether bringing the functional programming construct ‘monad’ to C++ will provide me with an advantage, and at what cost.

Below we will start with a simple implementation of a monad in mainstream C++ and show that it is indeed a monad. Then we will see what we like or dislike. Note that we will not be doing any functional programming, which, in fact, is quite in the spirit of Eugenio Moggi’s intentions.

A Simple, Mainstream C++ Implementation of the Monad

Implementation starts with some includes and a using statement. The first important construct is the monad type constructor, or rather: the C++ template.

#include <iostream>

using namespace std;

// monad type constructor
template<typename V> struct monad
{
	// Wrapped value
	V value;
};

We saw in part 1 that the result of the unit function has the semantics of a computation. As you can see, this C++ class / struct template does not perform any computation; it is a trivial computation. We could, for one thing, add a function application operator (operator()) to it that performs some non-trivial computation, and we might indeed do so at some point.

The next item to define is the unit function, the factory:

// factory: unit function
template<typename V> monad<V> unit(V val)
{
	return monad < V > { val };
}

It is simplicity itself. An alternative could have been to implement the unit function as the monad struct constructor. That would have been more efficient, so we will do that in a number of following implementations.

I have given the bind function the form of an overload of the bitwise OR operator “|”.

// Bitwise OR operator overload: bind function
template<typename A, typename R>
monad<R> operator| (monad<A>& mnd, monad<R>(*func)(A))
{
	return func(mnd.value);
}

I got the idea to use an operator instead of a function called ‘bind’ from the CeePlusPlus example referred to above. However, that code overloads the “>>=” operator, in order to make the code resemble Haskell somewhat. I prefer “|” because it is the same symbol as the Unix / Linux shell pipe operator. And indeed, we will see shortly that in function composition, bind can well be considered to pipe results from function to function.

The above bind function first ‘unwraps the value’ by referencing the value member of the monad object, then applies the function ‘func’ to it, and returns the result, wrapped in a new monad template object.

At this point we have a first implementation of a monad! Note how very small and simple it is. Let’s add some functions and give it a spin.

// divides arg by 2 and returns result as monadic template
monad<int> divideby2(int i)
{
	return unit<int>(i/2);
}

// checks if arg is even and returns result as monadic template
monad<bool> even(int i)
{
	return unit<bool>(i % 2 == 0);
}

int _tmain(int argc, _TCHAR* argv[])
{
	// Wrap 22 in monadic template
	auto m1 = unit<int>(22);

	// Apply bind 4 times in function composition
	auto m2 = m1 | divideby2 | divideby2 | divideby2 | even;

	cout << boolalpha << m2.value << endl;

	cin.get();
	return 0;
}

Note the elegant function composition offered by the bind operator.

Minimal Verification that Monadic Laws Hold

By means of a single example we will verify that the monadic laws (see part 1) hold:

int _tmain(int argc, _TCHAR* argv[])
{
	// Wrap 22 in monadic template
	auto m1 = unit<int>(22);

	// Left identity
	// First explicitly convert to function pointer (because of template)
	auto pf1 = &unit < int >;
	// then apply bind operator
	auto m2 = m1 | pf1;

	cout << "Left Identity result: " << m2.value << endl;

	// Right identity
	auto m3 = m1 | divideby2;
	auto m4 = divideby2(22);

	cout << boolalpha << "Right Identity result : " << (m3.value == m4.value)
	     << endl;

	// Associativity
	// Case 1: equivalent to bind(m1, divideby2);bind(m5, even)
	auto m5 = m1 | divideby2;
	auto m6 = m5 | even;

	// Case 2: equivalent to bind(m1, f), where f(i) = bind(divideby2(i), even)
	// Create function composition
	auto f = [](int i){ return divideby2(i) | even; };
	// Verbosely convert f to function pointer
	auto pf2 = static_cast<monad<bool>(*)(int)>(f);
	auto m7 = m1 | pf2;

	cout << boolalpha << "Associativity result : " << (m6.value == m7.value)
	     << endl;

	cin.get();
	return 0;
}

which generates output:

The funny thing about the associativity law is that the intuitive use of the bind operator, as in:

auto m2 = m1 | divideby2 | even;

is equivalent to:

auto m2 = bind(bind(m1, divideby2), even);

which is very obviously equivalent to Case 1 above. So indeed, parentheses do not matter, as long as we pipe the result of applying divideby2 to m1 into even.

Pros, Cons, and Questions

So, now that we have made a start, what to think of it?

The ‘pros’ are:

  • The small size
  • The very elegant function composition.
  • From a functional programming perspective one might add that this implementation works with immutable data.

The most visible ‘cons’ are:

  • The code copies function arguments and function results, so it is inefficient, and performance for large objects will be bad.
  • The const qualifier is completely absent from the code.
  • It would be more efficient to make the unit function the constructor in the monadic template (as it is in the examples in Eric Lippert’s tutorial on monads in C#).

There is also this question about the bind operator:

  • Why is the bind operator not a member of the monadic template, as it is with Lippert? Note that doing so would turn the monadic template into the complete monad. Well, this is C++, and I like to follow the C++ best practices accumulated by Sutter and Alexandrescu here; see Item 44, “Prefer writing nonmember nonfriend functions”. There are more reasons, having to do with the composition of different types of monads. But that’s for later…

Next time: a simple implementation of the monad for lazy evaluation.