Linker vs Bindings and IsDirectBinding

If you looked at my previous entries related to the linker and bindings (NewRefcount, UI thread checks, Runtime.Arch) or at MonoTouch’s generated bindings source code, then you likely noticed another common pattern, e.g.

	[Export ("sizeToFit")]
	[CompilerGenerated]
	public virtual void SizeToFit ()
	{
		global::MonoTouch.UIKit.UIApplication.EnsureUIThread ();
		if (IsDirectBinding) {
			Messaging.void_objc_msgSend (this.Handle, selSizeToFit);
		} else {
			Messaging.void_objc_msgSendSuper (this.SuperHandle, selSizeToFit);
		}
	}

The use of IsDirectBinding (line #6) is a bit less common in third-party bindings, unless the -e option was used with btouch to allow types to safely inherit from your bindings. The fact that such an option exists is a sign that we’d like to get rid of this condition when it’s not required.

If we knew that a type is never subclassed then we could remove the IsDirectBinding check (line #6), as its value would be a constant, true, and that would also allow the removal of the false branch (line #9).

Along with some previous optimizations that can remove [CompilerGenerated] (line #2) and EnsureUIThread (line #5), we would end up with a smaller, branchless method that simply calls the right selector, like this:

	[Export ("sizeToFit")]
	public virtual void SizeToFit ()
	{
		Messaging.void_objc_msgSend (this.Handle, selSizeToFit);
	}

Now that’s something the linker can figure out, since it must analyze the application’s code. Furthermore it also knows, since runtime code generation is impossible on iOS, that the application cannot be extended unless it’s re-compiled (and re-linked). That’s turning a disadvantage into an advantage! And, starting with MonoTouch 5.3.3, the linker gives you this (small) consolation 😉

This optimization saves 22 KB on a release build of TweetStation but, like most other linker-bindings changes, the real goal was not to save space (it’s still nice) but to reduce the number of conditions and branches in the code to allow faster execution.

Posted in linker, mono, monotouch, xamarin | Leave a comment

Managed Crypto vs CommonCrypto : Deciphering results

After the hash algorithms, the next part of MonoTouch’s switch to CommonCrypto covers the symmetric ciphers, commonly referred to as CommonCryptor. This includes:

  • DESCryptoServiceProvider, TripleDESCryptoServiceProvider, RC2CryptoServiceProvider and parts of RijndaelManaged inside mscorlib.dll;
  • ARC4Managed inside Mono.Security.dll; and
  • AesManaged inside System.Core.dll

Those are all the symmetric algorithms that Mono provides (using managed code). For MonoTouch they will now all be native, except for the non-AES parts of Rijndael, i.e. when its block size is not the default 128 bits.

This time hardware acceleration is easier to detect, as we can compare AES Electronic Codebook (ECB) mode with AES Cipher Block Chaining (CBC) mode – the latter is documented to be accelerated. In software the most basic mode is ECB and the other modes are built on top of it, so ECB is generally a bit faster than CBC. If CBC turns out to be faster then it was either further optimized (small margin) or hardware accelerated (large margin).

The above graphic, made from the iPad 3 results, shows the performance (MB/s) on the vertical scale versus the buffer size (bytes) on the horizontal. Unlike the digest results it’s hard to suspect anything other than hardware acceleration for such a drastic difference.
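In code terms, such an ECB-versus-CBC comparison can be sketched with the standard System.Security.Cryptography API (only a rough sketch, not the harness used for these graphics; the class name and iteration count are mine):

```csharp
using System;
using System.Diagnostics;
using System.Security.Cryptography;

class EcbVsCbc {
	// Measure raw encryption throughput (MB/s) for the current cipher mode.
	static double Throughput (SymmetricAlgorithm aes, byte[] buffer, int iterations)
	{
		var output = new byte [buffer.Length];
		using (var encryptor = aes.CreateEncryptor ()) {
			var watch = Stopwatch.StartNew ();
			for (int i = 0; i < iterations; i++)
				encryptor.TransformBlock (buffer, 0, buffer.Length, output, 0);
			watch.Stop ();
			return buffer.Length * (double) iterations / (1024 * 1024) / watch.Elapsed.TotalSeconds;
		}
	}

	static void Main ()
	{
		var buffer = new byte [512 * 1024]; // large buffers matter, see below
		using (SymmetricAlgorithm aes = new AesManaged ()) {
			aes.Padding = PaddingMode.None; // full blocks only, no padding cost

			aes.Mode = CipherMode.ECB;
			Console.WriteLine ("ECB: {0:F1} MB/s", Throughput (aes, buffer, 50));

			aes.Mode = CipherMode.CBC;
			Console.WriteLine ("CBC: {0:F1} MB/s", Throughput (aes, buffer, 50));
		}
	}
}
```

On MonoTouch 5.3.3 (and later) these calls end up in CommonCrypto; on 5.2.x they use the managed implementation, so the same sketch can measure both sides.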

The next graphic compares the performance of the native/hardware implementations (from CommonCrypto) versus the managed implementations (from Mono). Values are the average, in MB/s, across all my devices. You’ll see that using older algorithms, like DES or TripleDES, is not the way to better performance, managed or native.

Wait! Where’s the 35.4x performance increase?

Uh oh, the AES results look more like 10x than 35x, right? That’s (a bit) because I used the average values of my devices and (mostly) because I did not use the magic tricks. IMO that gives a more honest picture – at least as much as a benchmark can be 😉

But, as promised, here are the exact incantations and sacrifices required to get up to 35x increase. You’ll need three things:

1. An iPad 1st generation. It simply has the best performance – well over the iPad3 at 18.8x and the iPod4 at 25.2x (see graphics below). That’s a large and surprising variation, yet all results are better than the non-magical 10x average;

2. Your device needs to run on AC power. That will give more juice to the crypto processor – roughly doubling its performance. Power management, on battery, does not allow this level of performance. That’s part of the sacrifice required to get the maximum performance;

3. Use the optimal buffer size (again) but this time we’re talking about huge buffers. In case you did not notice them on the ECB/CBC graphic above, you will need 512KB buffers to get the maximum throughput (86.9 MB/s on iPad1). If you drop the buffers to 256KB you lose more than 3 MB/s. Drop them to 128KB and you’re down to 68.1 MB/s. Drop to 16KB (the optimal size for battery operation) and you’ll only get 44.1 MB/s – a long way from the 86.9 MB/s but still better than 29.4 MB/s (same buffer on battery). Keep in mind the I/O required to keep the buffers full – it might be easier (and ultimately faster) to use smaller ones…

Running iOS applications on AC power is uncommon for most people. However it’s not uncommon when developing and testing applications, including benchmarking them. Take care when testing yours! It took me a while, and a bit of juggling between computers and devices, to figure out why some numbers were so different from others.

Here are the new (battery operated) and the original (AC powered, from the first blog post) performance graphics.

Posted in crimson, crypto, mono, monotouch, xamarin | Leave a comment

Managed Crypto vs CommonCrypto : Digesting results

As part of MonoTouch’s switch to CommonCrypto, most of the digest (hash) algorithm implementations were changed to use CommonDigest. This includes:

  • MD5CryptoServiceProvider, SHA1[CryptoServiceProvider|Managed], SHA256Managed, SHA384Managed and SHA512Managed inside mscorlib.dll; and
  • MD2Managed, MD4Managed and SHA224Managed inside Mono.Security.dll

If you know the System.Security.Cryptography namespace well, then you’ll note this is every HashAlgorithm except RIPEMD160Managed, plus all the extra ones Mono provides to support (mostly older) X.509 certificates.

This also has some indirect effects on other types, e.g. all HMAC and MAC (e.g. MACTripleDES) implementations use the default hash algorithms. This means that all of them, except HMACRIPEMD160, are now using CommonCrypto and will be faster too.

Note: Mono never followed the [Managed|CryptoServiceProvider] suffixes: by default everything was always managed, and will now be native (for MonoTouch). So don’t be confused by the names. Source/binary compatibility requires Mono type names to match Microsoft’s, not how they are implemented.


The best performance comes from SHA1: 107 MB/second (mean value for my iPod4, iPad1 and iPad3 devices). This is also the best enhancement ratio, 6.7x faster than managed – even though SHA1 is the most optimized (i.e. unrolled) managed implementation Mono has.

The secret? Hardware acceleration. Apple’s source code shows that hardware acceleration is used (at least on some ARM devices) if the blocks being processed are large enough (to offset the cost of initializing the hardware).

It’s always a bit hard to be 100% certain we’re using the hardware, as there’s no way to ask for (or against) it, or even to query its use/availability. In this case the numbers do not make it very clear; the curves are too similar between all the native implementations to jump to a conclusion. My own gut feeling is that we’re seeing the effects of hardware acceleration, but it lost its shocking effect because it’s compared to Mono’s best, performance-wise, implementation.

Getting the most of the update

The above graphic shows considerable performance gains: from 3.1x up to 6.7x. Now the question is: can we get such gains inside our applications?

Without surprise it comes back to selecting an optimal buffer size (and that advice holds for older MonoTouch versions, Mono itself and other Mono-based products). This is not always as easy as it sounds (e.g. SSL/TLS) but if you can control your buffer size (and measure) then you can get the best performance for your application.

The next graphic shows the performance (managed versus native) of SHA1. There’s no gain in using CommonCrypto before reaching 16-byte buffers but, surprisingly, there’s no loss either. That’s unlike the /dev/crypto results and very good news, since it means this update should not degrade performance for existing applications.

Another similarity: in both cases the optimal buffer size is (close to) 4096 bytes – and that’s true for the other algorithms as well. That should not be a huge surprise for managed code, since it’s the default value Mono has used for years when computing the hash of a stream.
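As a sketch (the helper name is mine), here’s one way to control the buffer size explicitly when hashing a stream:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

static class Hashing {
	// Hash a stream with an explicit buffer size, instead of relying
	// on whatever chunk sizes the caller happens to read.
	public static byte[] Sha1 (Stream stream, int bufferSize)
	{
		using (var sha1 = SHA1.Create ()) {
			var buffer = new byte [bufferSize];
			int read;
			while ((read = stream.Read (buffer, 0, buffer.Length)) > 0)
				sha1.TransformBlock (buffer, 0, read, null, 0);
			sha1.TransformFinalBlock (buffer, 0, 0);
			return sha1.Hash;
		}
	}
}
```

Calling it with a 4096-byte buffer matches Mono’s own default; measuring with your own data and devices remains the safest bet.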

It’s also good news (a second one) that the managed and native (and hardware) optimal buffer sizes are close enough that a single value can be shared. It simply makes coding easier for everyone 🙂

So that 6.7x increase is for real? Not quite. Didn’t you hear me repeating that benchmarks are lies?

Benchmarking cryptographic implementations can be (and generally is) done with minimal I/O. Memory (RAM) is much faster than disk (or any type of storage) and faster than networking. Besides the I/O time, this also removes/reduces the need for memory allocations (and the related GC time).

OTOH your application, if it’s not a benchmark app, will need to do a lot more I/O (loading, saving or transmitting the data) and that will result in memory allocations… all of which will reduce the performance gain. Also, the faster the cryptographic implementation becomes, the less forgiving it will be of I/O times.

So your mileage will vary. Someone already reported gains near 3-4x, which is quite impressive inside a real application. IMO the best news is the lack of scenarios where things get worse – it should be a win for everyone 🙂

Posted in crypto, mono, monotouch, xamarin | 1 Comment

Managed Crypto vs CommonCrypto

A bit of history…

Mono has always provided fully managed implementations of almost every cryptographic algorithm supported by .NET, either directly (in the base class libraries) or indirectly (e.g. additional algorithms required for X.509 or SSL/TLS support).

That approach gets you full cryptographic support everywhere the Mono JIT/runtime itself works, which covers a pretty large spectrum, with no external dependency required.

Now this was not always the best choice for performance, but it did not matter much since the .NET (and Mono) crypto stack is meant to be extendable and replaceable thru CryptoConfig. E.g. you can find some faster alternatives inside Crimson using libmhash or /dev/crypto.
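For example, remapping an algorithm name thru CryptoConfig lets existing code transparently pick up a replacement (a sketch; pointing "SHA1" at SHA1Managed is just illustrative – a real application could register one of Crimson’s faster types instead):

```csharp
using System;
using System.Security.Cryptography;

class CryptoConfigDemo {
	static void Main ()
	{
		// Remap the "SHA1" name to an alternate implementation.
		CryptoConfig.AddAlgorithm (typeof (SHA1Managed), "SHA1");

		// Code that resolves the algorithm by name now gets that type.
		using (var hash = (HashAlgorithm) CryptoConfig.CreateFromName ("SHA1"))
			Console.WriteLine (hash.GetType ().Name); // SHA1Managed
	}
}
```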

Back to MonoTouch…

Things in iOS-land are a bit different. Extensibility does not work as well because of the platform restrictions MonoTouch must abide by. The mono runtime and the class libraries are not shared: each application gets its own, often tailored (by both linkers), copies. Not to mention you cannot share code between applications or even load code dynamically. That effectively kills all the benefits that CryptoConfig can provide.

OTOH the lack of control, as an owner, over the device’s operating system means we, as developers, know what’s in there. In this case we can be sure that CommonCrypto is always present. It’s an external dependency but not an optional or additional one.

Since we can’t offer extensibility we need to choose what’s offered. In MonoTouch 5.2 (and earlier) that was the managed implementations. For the upcoming MonoTouch 5.4 (starting now with 5.3.3 alpha) this will be CommonCrypto-based implementations.

Why? Switching has three benefits:

1. Likely incoming FIPS 140 certification

Thanks to NIST it’s no secret (pdf) that Apple has been seeking FIPS 140 certification for CommonCrypto on iOS. This can be a long process but intermediate results, like AES validations, can already be found online.

It does not mean much today (in progress == no certification) but it could become crucial for some applications in the future, e.g. if you want to sell them to either the US or Canadian federal government. Whenever it happens you’ll be ready, not waiting, for it 🙂

2. Smaller application size

Inside .NET assemblies, p/invoke declarations are methods without any body (no IL, just metadata). As such they tend to be small – much smaller than the cryptographic code they replace (even with the additional glue code needed).
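To give an idea, a CommonDigest-based p/invoke is little more than metadata plus a bit of glue (a sketch; MonoTouch’s actual internal declarations may differ in naming and signatures):

```csharp
using System;
using System.Runtime.InteropServices;

static class CommonDigest {
	const int SHA1DigestLength = 20; // CC_SHA1_DIGEST_LENGTH

	// unsigned char *CC_SHA1 (const void *data, CC_LONG len, unsigned char *md);
	[DllImport ("/usr/lib/libSystem.dylib")]
	static extern IntPtr CC_SHA1 (byte[] data, uint len, byte[] md);

	// Thin managed wrapper: the only IL the assembly needs to carry.
	public static byte[] Sha1 (byte[] data)
	{
		var md = new byte [SHA1DigestLength];
		CC_SHA1 (data, (uint) data.Length, md);
		return md;
	}
}
```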

This means that by using CommonCrypto we can reduce the size of mscorlib.dll (-19.5 KB), System.Core.dll (-12 KB) and Mono.Security.dll (-2 KB). Also the internal calls required for the pseudo random number generator (PRNG) can be eliminated by the native linker.

As a result my benchmarking application, fully linked and compiled using LLVM for ARMv7 (Release), is 99.5 KB smaller than before. That does not include the effects of the other features added in MonoTouch 5.3 – with all the other enhancements the size, when compared to 5.2.11, is 219 KB smaller.

3. Better performance

Even if it’s not hard to find counterexamples, native code generally outperforms managed code. That’s often the case for cryptographic code, which is optimized or, in some cases, hardware accelerated (recent iOS devices offer this for SHA1 and for AES in CBC mode).

For MonoTouch (or Mono in general) this is true when the buffer sizes are large enough that the managed/native transition cost can be hidden by the faster code. With an optimal buffer size I get the following results when comparing benchmarks of the existing managed code (5.2.11) versus the new CommonCrypto-based code (5.3.3).

The vertical scale represents the improvement factor, i.e. Managed is 1x. Now forget about the high-end (AES) numbers that change the scale of the graphic. I’ll share the exact incantations and sacrifices required in a future post, but you’re not likely to get them (or even half of them) in real-life conditions. Part of it is because benchmarks are lies, and a funny fact makes this one an even bigger liar than usual.

Still, those are impressive gains that require nothing but recompiling your application with MonoTouch 5.3.3 (or later). Technical details about what changed, and more graphics, to follow…

Posted in crimson, crypto, mono, monotouch, xamarin | 2 Comments

Linker versus Customization

MonoTouch 5.3.3 (alpha) has been released with lots of new goodies (and I can predict a few blog posts about them). Let’s start slowly since it’s already the end of the week…

There are a lot of ways to customize how the linker works on your applications. They cover the most common scenarios but it can sometimes feel like it’s missing a final else { } condition. Let’s review the existing conditions:

First the linker’s scope. You can use it on everything (--linkall), on the SDK assemblies shipped with MonoTouch (default) or not at all (--nolink). You can further suppress the linker from executing on one or several assemblies using --linkskip=MyAssembly.

You can also control how the linker works on your own assemblies. Using the [Preserve] attribute you can ensure that specific parts of your code won’t be linked away (e.g. for reflection/serialization usage). With few (or even no) adjustments most applications can use --linkall and reduce their final size.
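For example (the type and member names are made up), code used only thru reflection can be kept like this:

```csharp
using MonoTouch.Foundation;

// Keep this type, and every member, even if the linker's static
// analysis finds no caller.
[Preserve (AllMembers = true)]
public class AccountDto {
	public string Name { get; set; }
	public int Balance { get; set; }
}

public class Helpers {
	// Keep this single member, but only if its containing type is kept.
	[Preserve (Conditional = true)]
	public static void CalledOnlyByReflection ()
	{
	}
}
```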

Controlling the code linked inside the SDK assemblies is less common and a bit harder. The most common way is to write code that uses the types, methods or fields that you do not want removed. That works well unless you need to preserve some private/internal code. In such cases you’re a bit out of luck, as reflection usage won’t be seen by the linker. Using --linkskip is often your best solution – but you lose the linker benefits over the entire assembly.

Sad? No more. Starting with 5.3.3 you’ll be able to add extra preservation declarations using the new --xml option. Writing XML is never a first choice (at least for me) but it gives you complete control over your own code (e.g. when you do not want, or cannot, use [Preserve]) and over the SDK assemblies shipped with MonoTouch.
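As a sketch (assembly and type names are illustrative, and the exact command-line syntax may differ), the descriptor follows the original monolinker XML format:

```xml
<!-- e.g. passed to mtouch thru the new --xml option -->
<linker>
	<!-- preserve internal code from an SDK assembly -->
	<assembly fullname="mscorlib">
		<type fullname="System.Security.Cryptography.CryptoConfig" preserve="all" />
	</assembly>
	<!-- preserve a reflection-only method from your own code -->
	<assembly fullname="MyApp">
		<type fullname="MyApp.MyType">
			<method name="OnlyCalledByReflection" />
		</type>
	</assembly>
</linker>
```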

Not surprised? You should not be 😉 as this is not really something new. The original monolinker has always supported this. It is even used internally, by mtouch, for the XML definitions of the SDK assemblies (you can find a few similar examples here). However this option was not exposed to MonoTouch developers before 5.3.3.

Posted in linker, mono, monotouch, xamarin | Leave a comment

Touch.Unit Automation Revisited

The basics for test automation have been in place for a while. However that was more potential for automation than real automation. Too many pieces, outside the Touch.Unit application, were still missing.

Since that time Rolf has filled in those missing pieces, mainly in the Touch.Server tool, to allow Touch.Unit projects to be built, executed on the simulator and/or on devices, and to have all the test results written to a local file for further processing and reporting.

So how do I do that? All the tools are already available, i.e. it’s all in the stable MonoTouch 5.2.x releases. All it takes is some minimal scripting to adapt it to your application.

Here’s an example (Makefile) of how this can be done for Touch.Unit itself (i.e. the sample tests it includes).


all: build-simulator build-device

run run-test: run-simulator run-device

Touch.Server:
	cd Touch.Server && xbuild

build-simulator:
	$(MDTOOL) -v build -t:Build "-c:Debug|iPhoneSimulator" Touch.Unit.sln

run-simulator: build-simulator Touch.Server
	rm -f sim-results.log
	mono --debug $(TOUCH_SERVER) --launchsim bin/iPhoneSimulator/Debug/ -autoexit -logfile=sim-results.log
	cat sim-results.log

build-device:
	$(MDTOOL) -v build -t:Build "-c:Release|iPhone" Touch.Unit.sln

run-device: build-device
	$(MTOUCH) --installdev bin/iPhone/Release/
	# kill an existing instance (based on the bundle id)
	$(MTOUCH) --killdev com.xamarin.touch-unit
	rm -f dev-results.log
	mono --debug $(TOUCH_SERVER) --launchdev com.xamarin.touch-unit -autoexit -logfile=dev-results.log
	cat dev-results.log

This small Makefile shows you how to use:

  • mdtool to build an existing MonoDevelop solution, Debug or Release;
  • mtouch to install and kill (running) applications on devices; and
  • Touch.Server to start applications, including specifying arguments.

All of them can be useful in other circumstances, but for our purpose you can simply issue make run-test from a terminal window to execute the test cases on both the simulator and a connected device.

p.s. you should all encourage Rolf to blog more often as he’s doing incredible stuff all around MonoTouch.

Posted in mono, monotouch, touch.unit, xamarin | 2 Comments

Events vs (Objective-C) Delegates

Objective-C delegates (not to be confused with C# delegates) are a powerful way to customize a type thru delegation. However as a .NET developer it often feels a lot more natural to use events to achieve the same goal.

MonoTouch exposes the *Delegate types (found in iOS API) and it also provides, in many cases, event-based alternatives. It does so by including its own internal _*Delegate inner types – all of them generated code from the bindings.

Here’s an example from ATMHud‘s bindings available in monotouch-bindings github repository. It shows how the delegate type, AtmHudDelegate, can be turned into events, using the Events= property on the [BaseType] attribute.

[BaseType (typeof (UIViewController), Name="ATMHud", Delegates=new string [] { "WeakDelegate" }, Events=new Type [] { typeof (AtmHudDelegate)})]
interface AtmHud {
	[Export ("delegate"), NullAllowed]
	NSObject WeakDelegate { get; set; }
	[Wrap ("WeakDelegate")]
	AtmHudDelegate Delegate { get; set; }
	// ...
}

This definition, once processed by btouch, generates the source code for the events, the internal delegate type and the glue between them. E.g.

public event EventHandler UserDidTapHud {
	add { EnsureAtmHudDelegate ().userDidTapHud += value; }
	remove { EnsureAtmHudDelegate ().userDidTapHud -= value; }
}

_AtmHudDelegate EnsureAtmHudDelegate ()
{
	var del = WeakDelegate;
	if (del == null || (!(del is _AtmHudDelegate))) {
		del = new _AtmHudDelegate ();
		WeakDelegate = del;
	}
	return (_AtmHudDelegate) del;
}

class _AtmHudDelegate : MonoTouch.AtmHud.AtmHudDelegate {
	internal EventHandler userDidTapHud;

	[Preserve (Conditional = true)]
	public override void UserDidTapHud (MonoTouch.AtmHud.AtmHud hud)
	{
		if (userDidTapHud != null)
			userDidTapHud (hud, EventArgs.Empty);
	}
	// ...
}

In two words: Clever stuff. Still there are important things to keep in mind when using events instead of delegates.

Rule #1: Don’t mix MonoTouch events and [Weak]Delegate on the same types

When you set a [Weak]Delegate property you’re overwriting any existing one. This includes the internal one used to implement events. E.g.

X x = new X ();
// adding the event will create the internal _XDelegate
x.Event += () => { DoSomething (); };
// assigning MyDelegate will replace _XDelegate
// and prevent Event from doing anything
x.Delegate = new MyDelegate (x);

The inverse is also true. Setting events after the Delegate property will replace your delegate type with the generated one (see the EnsureAtmHudDelegate method above). So the following code won’t work as expected:

X x = new X ();
x.Delegate = new MyDelegate (x);
// there's already a user-supplied delegate but setting the Event
// will create (and assign) a new _XDelegate one
x.Event += () => { DoSomething (); };

Rule #2: Set the Delegate or all events before setting properties or using the instance

Using events is not fully identical to using a delegate type. The key word being fully. At some point it will be identical, but setting a Delegate property is more atomic than setting several events. E.g. the following code is easy to type when using multiple events and properties:

X x = new X ();
// set properties and events in alphabetical order
// helped by the IDE code completion feature
x.First += () => { DoSomething (); };
x.Name = "name";
x.Second += () => { DoSomethingElse (); };

When you set the first event on X (e.g. First) an internal _XDelegate instance is created and the event is assigned. Other, yet unassigned, events (e.g. Second) still have a default (in general empty) implementation.

Now if you set some properties (e.g. Name in the above code) then some of the _XDelegate methods may be called (from native code) even if only First is set. That might be a problem if something important must occur in Second (e.g. if it’s only called once). Such behaviour (when delegate methods are called) is not well documented in Apple’s documentation.

Similar problems can occur using the [Weak]Delegate (and in Objective-C too), e.g. if you do not set your delegate soon enough:

X x = new X ();
x.Name = "name";
// if setting Name (tried to) call the Delegate
// then nothing will happen
x.Delegate = new MyDelegate (x);

However it’s fairly common (at least when reading Objective-C code) to see the Delegate set early and, once set, everything (i.e. all the methods) becomes available at once.

X x = new X ();
// this _might_ not behave identically to the previous listing
x.Delegate = new MyDelegate (x);
x.Name = "name";

Is this an unlikely scenario? It’s not common, but we found out that different iOS versions call delegate methods at different times. IOW the fact that some code works today does not mean it will work identically in the future.

An example of this is UISplitViewController, where setting the ViewControllers property before the events causes ShouldHideViewController to be called immediately (on iOS 5.1) instead of later (as on iOS 5.0 and earlier). Since the boolean result could differ (between your code and the default value) you could end up looking at a very different UI.

Note: While this was written for MonoTouch it also applies to MonoMac which (mostly) shares the same binding tools and Objective-C coexistence.

Posted in mono, monomac, monotouch, xamarin | 5 Comments