All posts by Ron

Installing GHUnit on Xcode 4.6

There has been some confusion for a while about how to get GHUnit running on the latest Xcode version(s). I had thought that these issues were resolved a while back, but I wasn't sure. I recently upgraded to Xcode 4.6, and I also needed to add GHUnit to one of my projects, so I thought I'd blog about my experience, see if there are still any issues, and provide some instructions if so. The bottom line: there are no remaining issues; GHUnit works just fine under Xcode 4.6. Read on if you're curious about the problems I ran into, which were basically just configuration things, nothing broken in GHUnit or Xcode 4.6.

Installing GHUnit

I'm starting with an existing Xcode project and adding GHUnit to it. Here are the steps:

  1. Add a new target to your project: use the Empty Application template, no unit tests or Core Data. Name it “GHUnit Tests” or whatever you’d like.
  2. Download a copy of GHUnit from GitHub. Expand the file if you downloaded it as a zip.
  3. In your project, right-click Frameworks and add the Examples/MyTestable-iOS/GHUnitIOS.framework directory from the files you just downloaded. Check the "Copy items into…" checkbox, and select your GHUnit Tests target (only). Verify that GHUnitIOS.framework has been added to the Frameworks group.
  4. Add -ObjC and -all_load to the Other Linker Flags build setting for the GHUnit Tests target.
  5. Delete the files (AppDelegate.h & .m) from the new GHUnit Tests group (leave Supporting Files subfolder).
  6. In the GHUnit Tests/Supporting Files group, edit main.m (see the sketch below this list):
    1. Replace the last argument to UIApplicationMain, NSStringFromClass([AppDelegate class]), with @"GHUnitIOSAppDelegate".
    2. Delete #import "AppDelegate.h".
  7. Select the GHUnit Tests scheme, and run it on the simulator.
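
For reference, here is roughly what main.m in the GHUnit Tests target ends up looking like after step 6. This is a sketch based on the default Empty Application template, so your boilerplate may differ slightly:

#import <UIKit/UIKit.h>

int main(int argc, char *argv[])
{
    @autoreleasepool {
        // GHUnitIOSAppDelegate is provided by GHUnitIOS.framework
        return UIApplicationMain(argc, argv, nil, @"GHUnitIOSAppDelegate");
    }
}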

So, exactly per the instructions, no problems.

First Problem (Deployment Target setting)

Trying this now on an actual device, I encountered a problem, but not related to GHUnit. I’ve been testing my project on an iPhone 3GS, which is the device I mount on my motorcycle handlebars. By default, Xcode 4.6 sets the Deployment Target to 6.1. Since my good old 3GS is running 5.0, it doesn’t appear in the device list. Instead it just says “iOS Device”. Changing the GHUnit target’s Deployment Target setting to 5.0 fixed this problem.

Second Problem (SenTestingKit)

Now the only thing left to do was to add the unit tests to the GHUnit Tests target. I added one of the OCUnit test case files and reran. Not surprisingly, I got an error because I hadn't added SenTestingKit to the GHUnit Tests target. After adding it and rerunning, I got a similar error regarding OCMock.

Third Problem (OCMock)

The test file I added uses OCMock, so I needed to add the OCMock static library, and then add the path to the OCMock headers to the Header Search Paths build setting.
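
The exact value to add to Header Search Paths depends on where the OCMock headers live in your project; as an illustration only (this path is hypothetical), the entry might look like:

$(SRCROOT)/OCMock/Headers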

Fourth Problem (Linker path to SenTestingKit)

And finally, once I had everything compiling ok, the linker had trouble finding the SenTestingKit framework. This was fixed by updating Framework Search Paths:

$(SDKROOT)/Developer/Library/Frameworks
$(DEVELOPER_LIBRARY_DIR)/Frameworks
$(SRCROOT)

Once this was done, the tests appeared and ran successfully.

Now, if only GHUnit supported Kiwi tests…

Floating Point Rounding Errors in Kiwi

I ran into a problem this morning caused by unexpected floating point rounding differences. Comparing floating point numbers can be tricky. For example, a floating point number may represent the value one as 0.99999…, so testing the value of a floating point number may require specifying a precision (e.g. 1.0 +/- 0.001), because 0.99999… is not exactly equal to 1.0.

Kiwi provides some support for this. I can test a floating point value using this type construct:

   [[theValue(myFloat) should] equal:1.0 withDelta:0.001];

However, I ran into a problem when specifying expected floating point arguments for a stub. There doesn’t appear to be a way to specify precision for expected arguments.

For example, the following code tests to ensure that the code is persisting a specified floating point value to NSUserDefaults:

// Code under test
- (void)setMyValue:(float)myValue {    
   [[NSUserDefaults standardUserDefaults]setFloat:myValue 
                                           forKey:@"my_value"];
}
// Kiwi test
   ...
   it(@"persists myValue to NSUserDefaults", ^ {
      id mockNSDefaults = [NSUserDefaults mock];
      [[NSUserDefaults stubAndReturn:mockNSDefaults] standardUserDefaults];
      [[mockNSDefaults should] receive:@selector(setFloat:forKey:)
                         withArguments:theValue(123.4),@"my_value"];
      sut.myValue = 123.4;
   });

This code looks pretty straightforward. It creates a mock object for an NSUserDefaults instance, and returns it when the class method [NSUserDefaults standardUserDefaults] is called (hurray class method stubs!). It then verifies that the value 123.4 is passed to the setFloat:forKey: method. Evidently 123.4 isn't exactly 123.4: it appears that one of the values is being handled with higher precision (double) than the other (float), so when they are compared, they are not exactly equal.

This problem can be fixed a couple ways:

  1. Specify the precision of the values (123.4f instead of 123.4).
    This should work in most cases, but might not depending on what’s going on under the covers.
  2. Use constants that convert from decimal to binary exactly.
    For example, the value 128.0 passes in the above example, but 123.4 doesn’t.

In the above example, I simply changed 123.4 to 123.4f in the first occurrence of 123.4, and the tests then passed as expected.
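
For reference, here's the expectation from the test above with just that one change applied:

   [[mockNSDefaults should] receive:@selector(setFloat:forKey:)
                      withArguments:theValue(123.4f),@"my_value"];
   sut.myValue = 123.4;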

Although this problem occurred in Kiwi, you may run into similar situations in other frameworks when dealing with floating point numbers. Caveat Emptor.

Class Mocks in Kiwi

Having been working with OCUnit and OCMock for a while now, one of the common problems I've run into is the pervasive use of code that breaks the "Tell, don't ask" rule. For example, the following code is typical for retrieving a persisted user default:

NSString *myValue = [[NSUserDefaults standardUserDefaults] 
                     stringForKey:@"my_key"];

This code is calling a class method on NSUserDefaults. So how does one mock the returned value, and/or verify that this call is being made correctly, using the correct key, etc.?

To date, my approach has been to extract this one-line code to a separate method that returns the value. For example:

   ...
[self getMyDefault];
...
- (NSString *)getMyDefault {
  return [[NSUserDefaults standardUserDefaults] 
          stringForKey:@"my_key"];
}

This is ok, and it allows testing the calling method using either a partial mock or subclass-and-override. But how do we then verify that the new getMyDefault method is working? Up until recently, I've been ignoring testing the new method, because there didn't appear to be a reasonable way to do so. But in Kiwi, this is really easy:

it(@"gets the persisted user default", ^{
    id mockNSDefaults = [NSUserDefaults mock];
    [[NSUserDefaults stubAndReturn:mockNSDefaults] 
                     standardUserDefaults];
    [[mockNSDefaults should] receive:@selector(stringForKey:)
                           andReturn:@"my expected value"
                       withArguments:@"my_key"];
    [[[myObject getMyDefault] should] equal:@"my expected value"];
});

And voila! The above method creates a mock NSUserDefaults object, then uses Class method stubbing to return it when [NSUserDefaults standardUserDefaults] is called.


Kiwi has great support for mocking Class methods, as shown above. Refer to the Kiwi website for more details.

Looking at Other Unit Test Tools: OCHamcrest, OCMockito, and Kiwi

I’ve recently started looking at other unit testing frameworks. I’ve been using OCUnit, GHUnit, and OCMock. But I’m finding that as my tests become more extensive and numerous, the tests themselves are becoming more difficult to read. I have been able to test everything that I’ve tried using these tools, but the resulting test code is sometimes hard to read. So my objective in looking at other tools is not for additional functionality, but for better readability.

The first tools that I looked at were OCHamcrest and OCMockito by Jon Reid. I like the syntax provided by these. But I ran into a problem when converting some of my tests, because they currently use partial mocks. It appears that the Java version of Mockito provides something like partial mocks, using what is called a "spy", but this capability hasn't been ported to OCMockito yet.

So while contemplating whether to try pushing forward using categories, swizzling, or subclassing to replace partial mocks, another iOS engineer recommended that I give Kiwi a look. So I did, and it looks very promising. I guess I haven’t given Kiwi a good look before because I heard that it was only for BDD, not unit testing. This turns out not to be the case.

I am going to give Kiwi a workout by converting the tests in one of the existing unit test files in What’s My Speed: the WeatherService. This file contains an assortment of OCUnit tests using both mock and partial mock OCMock objects.

Adding Kiwi to the project and file

The first step is to add Kiwi to the project. I'm going to just add the static library and headers, but there are instructions for adding it as a subproject. I built the code from source following the instructions on the Kiwi website. I then added the static lib and headers to the project, added the Kiwi headers directory to the test target's Header Search Paths, and then added Kiwi.h to the WeatherServiceTests.m file:

#import "Kiwi.h"

I'm going to try leaving OCMock in the file until all of the tests have been converted. Then I rebuilt to verify everything was ok.

Note: I had to clean and build twice before it would build without errors. This is a long-standing Xcode bug. Xcode sometimes appears to cache things; after making changes to header paths and such, it can take a couple of builds before strange errors go away. In this case, it was reporting that it couldn't find WeatherService.h. After cleaning and rebuilding twice, the reported error went away.

I also encountered an error with missing header files, including NSObject+KiwiSpyAdditions.h. It appears that building Kiwi inside the Xcode IDE results in only part of the header files being copied to the build directory. I fixed this by manually copying the headers from the Kiwi/Kiwi source directory to my project’s Kiwi headers directory.

Converting the first few simple tests

Next I'll convert the first few tests. These are simple tests that verify the basic singleton operation of the object. So here are the existing tests before converting. I've removed some lines that aren't related to these tests, and I'll add them back as we go.

#import <SenTestingKit/SenTestingKit.h>
#import <OCMock/OCMock.h>
#import "WeatherService.h"
#import "WeatherService-Private.h"

@interface WeatherServiceTests : SenTestCase

@property (nonatomic, strong) WeatherServiceForTesting *weatherService;

@end

@implementation WeatherServiceTests

- (void)setUp {    
    self.weatherService = [[WeatherServiceForTesting alloc]init];
}

- (void)tearDown {
    self.weatherService = nil;
}

- (void)testInstantiation {
    STAssertNotNil(self.weatherService, @"Test instance is nil");
}

- (void)testSharedInstanceNotNil {
    WeatherService *ws = [WeatherService sharedInstance];
    STAssertNotNil(ws, @"sharedInstance is nil");
}

- (void)testSharedInstanceReturnsSameSingletonObject {
    WeatherService *ws1 = [WeatherService sharedInstance];
    WeatherService *ws2 = [WeatherService sharedInstance];
    STAssertEquals(ws1, ws2, @"sharedInstance didn't return same object twice");
}

Ok, pretty straightforward tests, no mocks needed. Let's convert these to Kiwi:

#import <SenTestingKit/SenTestingKit.h>
#import <OCMock/OCMock.h>
#import "WeatherService.h"
#import "WeatherService-Private.h"
#import "Kiwi.h"

SPEC_BEGIN(WeatherServiceKiwiTests)

describe(@"Singleton (by choice)", ^{

    it(@"should instantiate using init", ^ {

        [[[WeatherService alloc]init] shouldNotBeNil];
    });

    it(@"should instantiate using sharedInstance", ^{
        [[WeatherService sharedInstance] shouldNotBeNil];
    });

    it(@"should return the same instance twice using sharedInstance", ^{
        WeatherService *a = [WeatherService sharedInstance];
        WeatherService *b = [WeatherService sharedInstance];
        [[a should] beIdenticalTo:b];
    });

    it(@"should not return the same instance twice using init", ^{
        WeatherService *a = [[WeatherService alloc] init];
        WeatherService *b = [[WeatherService alloc] init];
        [[a shouldNot] beIdenticalTo:b];
    });

});
SPEC_END

Now let's make sure the tests are actually working. Cmd+U to execute the tests, and everything appears ok. But are the tests actually doing anything? To verify this, I reverse the test logic by replacing "should" with "shouldNot" and "shouldNotBeNil" with "shouldBeNil"; rerunning the tests, I see the expected failures. So I have some confidence that the tests are doing what I expect them to be doing.
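
For instance, reversed versions of two of the assertions above look like this (reverted immediately after confirming the failures appear):

    [[WeatherService sharedInstance] shouldBeNil];   // was shouldNotBeNil
    [[a shouldNot] beIdenticalTo:b];                 // was should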

Our next test methods further verify that init is doing what we expect. It calls two other methods, each of which does just one thing.

- (WeatherService *)init {
    self = [super init];
    if (self) {
        [self startTimer];
        [self updateWeatherInfoForZipcode:kDEFAULT_ZIPCODE];
    }
    return self;
}

Ok, so with OCMock we used a partial mock in two tests:

- (void)testInitCallsStartTimer {
    id mock = [OCMockObject partialMockForObject:self.weatherService];
    [[mock expect]startTimer];
    id __unused initedMock = [mock init];
    [mock verify];
}
- (void)testInitCallsUpdateWeatherInfoForZipcode {
    id mock = [OCMockObject partialMockForObject:self.weatherService];
    [[mock expect]updateWeatherInfoForZipcode:kDEFAULT_ZIPCODE];
    id __unused initedMock = [mock init];
    [mock verify];
}

Since Kiwi appears to have great support for mocks, this should be pretty straightforward. Note that Kiwi's mock support allows defining stubs and expectations on both mocks and regular objects. This eliminates the need for partial mocks altogether!

describe(@"init", ^{

    it(@"starts the timer", ^ {
        id weatherService = [[WeatherService alloc]init];
        [[weatherService should] receive:@selector(startTimer)];
        id __unused initedMock = [weatherService init];
    });

    it(@"updates the weather info", ^{
        id weatherService = [[WeatherService alloc]init];
        [[weatherService should] receive:@selector(updateWeatherInfoForZipcode:) withArguments:kDEFAULT_ZIPCODE];
        id __unused initedMock = [weatherService init];
    });

});

Ok, this new code looks pretty similar. It’s shorter by one line because [mock verify] isn’t needed. And for this small set of fairly simple tests, the difference in readability isn’t much, but I’m seeing the potential for greatly improved readability. The structure of the tests feels much more organized. I need to learn how to really take advantage of that. I’m going to stop this blog here, and continue converting the rest of the tests to Kiwi. I’ll probably have more to say about this in future posts as I learn more about using Kiwi.

Configuring CoverStory

CoverStory is a great little application that will allow you to see how much of your code is being tested. This is referred to as code coverage, hence the clever name.

To use it, you have to configure your unit test target to generate coverage records (*.gcda and *.gcno files). How to do this will depend on the version of Xcode you are running.

For older Xcode version (using gcc):

  1. Add -fprofile-arcs and -ftest-coverage to Other C Flags
  2. Link /Developer/usr/lib/libprofile_rt.dylib into your app

For Xcode version 4.5 and newer:

  1. Set the “Generate Test Coverage Files” build setting to Yes.
  2. Set the “Instrument Program Flow” build setting to Yes.
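
If you prefer to set these from an xcconfig file instead of the build settings editor, those display names correspond (to the best of my knowledge) to the following underlying settings:

GCC_GENERATE_TEST_COVERAGE_FILES = YES
GCC_INSTRUMENT_PROGRAM_FLOW_ARCS = YES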

Once you’ve configured your build settings:

  1. Rebuild your app and run the unit tests.
  2. Locate where the generated *.gcda and *.gcno files were put
    (Organizer -> Projects -> Derived Data -> look in directories below)
  3. Start CoverStory and open the directory containing the *.gcda and *.gcno files.

CoverStory will list all of the files built by your project except those identified as unit test files. By default these are files matching *Test.[hHmM]. In my case, though, I tend to name my unit test files using the plural *Tests.[hHmM], so the default report lists coverage for these also. This is just clutter.

To remedy this, I just had to add another entry to CoverStory->Preferences->Test Files to include the string “*Tests.[hHmM]”.

In addition, I’m using AFNetworking, so these files are reported also. To eliminate 3rd party code like this, use CoverStory->Preferences->SDK Files to exclude their directory/path. In this case, I added the line “*/AFNetworking/*”.

Another option that I'm exploring is moving untestable code into a separate, clearly identified file (eg. ViewController+Untested). These files can then be filtered from reporting by adding "*Untested.[hHmM]" as well.

After rebuilding your code each time, remember to click CoverStory's refresh button to see the updated coverage report. Also be careful that you do not accidentally click one of the "Show…" buttons when moving the window around. I had done this, which caused those hidden test files to be displayed, leading me to think there was a bug in CoverStory's filtering. Turns out the bug was me 🙂

Using Categories in Unit Tests

Categories are a wonderful feature of the Objective-C runtime. They allow adding or overriding (* see note below) methods on any class. This has profound implications for unit testing. For example, this morning I was enhancing the unit tests for an object that I created to access weather information from the Weather Underground. This service is free for developers, limited to 10 calls per minute, 500 calls per day. So of course I’m monitoring the number of calls to the API that my code is making, and I realize that I’m making about 10 calls to the service when running my unit tests. Yikes, if I add any more tests that call the service, I’m going to exceed my limit every time I run tests. As it is, I’ll have to be careful to not run the tests more than once per minute.

So I started thinking about how to prevent actually calling the weather API during testing, without adding flags or #defines or other messiness to the product code. The tests I'm running here are unit tests. They aren't API or integration tests, so I'm not even waiting for, or examining, any return data from the weather API. So the obvious thing to do is to simply disable the call to the weather API. But how do we do this without cluttering up the product code?

Categories to the rescue. By simply extracting the call to the AFJSONRequestOperation start method into a separate method, I can then create a category in my test file that overrides that method. Here's a simplified example of the code after refactoring it to work this way:

@implementation WeatherService
...
- (void)sendWeatherServiceRequest {
   NSString *requestString = @"<path and key for weather API>";
   NSURL *url = [NSURL URLWithString:requestString];
   NSURLRequest *request = [NSURLRequest requestWithURL:url];
   AFJSONRequestOperation *operation = [AFJSONRequestOperation ...];

   [self startWeatherRequestOperation:operation];
}

- (void)startWeatherRequestOperation:(AFJSONRequestOperation *)operation {
   [operation start];
}
...

Now in the unit test case file, which is only included in the unit test target, I create a category with a method to override the start operation:

@interface WeatherService (ForTesting)
- (void)startWeatherRequestOperation:(AFJSONRequestOperation *)operation;
@end
BOOL weatherServiceWasCalled; // Global for explanation purposes only
@implementation WeatherService (ForTesting)
- (void)startWeatherRequestOperation:(AFJSONRequestOperation *)operation {
   //Note that start is not actually called during unit tests.
   NSLog(@"startWeatherRequestOperation prevented for testing");
   //Tell the tests that this happened
   weatherServiceWasCalled = YES;
}
@end

So now when running unit tests, instead of actually kicking off a request to the weather service, it displays an NSLog message.

Alternatively, I could have added a disable flag to the code that would be set somehow during testing, but this violates my rule of not adding code to the product target solely for testing, and exposes a risk of forgetting to turn this flag off when releasing the code.

As a side note, this small refactoring also makes the code easier to unit test. For example, we can set a flag in the category’s startWeatherRequestOperation that will signal when it is called. This can be done using a global as shown here, or better using a class method in the test case class. Again, the category won’t be included in product code, only in the test target.

- (void)testStartWeatherRequestOperation {
    weatherServiceWasCalled = NO;
    [self.weatherService startWeatherRequestOperation:nil];
    STAssertTrue(weatherServiceWasCalled,@"startWeatherService was not called");
}

Using a category that is located in the unit test file provides a very simple way of modifying product code behavior when subclassing won’t work. Read on.

Follow-up 1/4/13

After playing with this for a few more days, and doing some more research, I've changed my mind about using categories to override methods. From what I have been reading, categories are not designed to override methods. It may work, but it is probably fragile, and it does generate compiler and linker errors (as of Xcode 4.5.2, anyway). The correct way to override methods is to subclass. In the example code above, there is no reason not to subclass.

I can imagine scenarios, perhaps involving legacy code, where a subclass won't work. For example, if the code under test explicitly creates instances of the specific class being tested. But even then, the correct test solution is probably to refactor that smelly code.

So redoing the above code to use subclassing instead:

@interface WeatherServiceForTesting : WeatherService
@property BOOL weatherServiceWasCalled;
- (void)startWeatherRequestOperation:(AFJSONRequestOperation *)operation;
@end

@implementation WeatherServiceForTesting
- (void)startWeatherRequestOperation:(AFJSONRequestOperation *)operation {
   //Note that start is not actually called during unit tests.
   NSLog(@"startWeatherRequestOperation prevented for testing");
   //Tell the tests that this happened
   self.weatherServiceWasCalled = YES;
}
@end
- (void)testStartWeatherRequestOperation {
    [self.weatherService startWeatherRequestOperation:nil];
    STAssertTrue(self.weatherService.weatherServiceWasCalled,@"startWeatherService was not called");
}

How to Test Calls to Super

It is quite common in iOS object methods to include a call to the parent object using a construct like [super …]. This call is important, albeit simple. So how do we verify in our unit tests that this call is made?

For example, let's look at testing an override of UIViewController's didReceiveMemoryWarning:

@interface ViewController : UIViewController
...
@end
@implementation ViewController
...
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Release any cached data, images, etc that aren't in use.
    ...
}

When I first started looking at this problem, there appeared to be 3 obvious ways to test this:

  1. Using a Category to Intercept Calls to Super
  2. Using a Category on a Subclass of the Parent to Intercept Calls to Super
  3. Swizzling

1: Using a Category to Intercept Calls to Super

The first approach is to use a category on the parent class to intercept the call to super’s method. It could then set a global flag to be inspected by the unit test.

#import "ViewController.h"

BOOL superWasCalled = NO;

@interface UIViewController (OverrideSuperForTesting)
- (void)didReceiveMemoryWarning;
@end
@implementation UIViewController (OverrideSuperForTesting)
- (void)didReceiveMemoryWarning {    
    superWasCalled = YES;    
}
@end

@interface ViewControllerTests : SenTestCase
@property (strong, nonatomic) ViewController *vc;
@end

@implementation ViewControllerTests
- (void)setUp {
    self.vc = [[ViewController alloc]init];
}
- (void)tearDown {
    self.vc = nil;
}

- (void)testDidReceiveMemoryWarningCallsSuper {
    [self.vc didReceiveMemoryWarning];
    STAssertTrue(superWasCalled, @"super not called");
}
@end

There are two issues with doing it this way:

  1. This code does not actually call super’s implementation of the method, which is probably ok for unit testing. The category’s method replaces the original implementation, which is simply lost.
  2. It does not work if the parent method being overridden is itself implemented in a category. When two categories implement the same method, only one will be used, and which one is unpredictable.

2: Using a Category on a Subclass of the Parent to Intercept Calls to Super

This second approach violates one of my testing rules: don’t modify product code just for testing. But this approach works, and is fairly clean. The only difference between this approach and the first one is that an empty subclass is created between the class and its parent. This allows creating a category on the empty class, which can then still call its super.

// Create an empty subclass
@interface ViewControllerSubclass : UIViewController
@end

@implementation ViewControllerSubclass
@end

// Change ViewController to use the subclass instead of UIViewController
@interface ViewController : ViewControllerSubclass
...
@end
#import "ViewController.h"

BOOL superWasCalled = NO;

@interface ViewControllerSubclass (OverrideSuperForTesting)
- (void)didReceiveMemoryWarning;
@end
@implementation ViewControllerSubclass (OverrideSuperForTesting)
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    superWasCalled = YES;    
}
@end

@interface ViewControllerTests : SenTestCase
@property (strong, nonatomic) ViewControllerSubclass *vc;
@end

@implementation ViewControllerTests
- (void)setUp {
    self.vc = [[ViewControllerSubclass alloc]init];
}
- (void)tearDown {
    self.vc = nil;
}

- (void)testDidReceiveMemoryWarningCallsSuper {
    [self.vc didReceiveMemoryWarning];
    STAssertTrue(superWasCalled, @"super not called");
}
@end

I am assuming that the only change to the product code is the inclusion of an empty subclass between ViewController and UIViewController. During testing, a category is used to change the functionality of the subclass.

It might be tempting to put the code to set the test flag directly into the subclassed method instead of using a category, but then this code would be executed during normal product runtime. I recommend against littering up product code with things that are there solely for testing. Separating test code into test files keeps the product code more readable.

3: Using Swizzling to Intercept Calls to Super

The third approach is to swizzle the method on the parent class. Swizzling is a fairly common practice, but is generally considered to be a hack. Using swizzling, we can redirect the call to the super’s method to any other method we want. In this case, a method that will set a flag to signal that it was called.

Perhaps the easiest way to do this is to use the open source JRSwizzle library. It simplifies the swizzle operations and makes the code more readable. The downside of using JRSwizzle is that, as of this writing, it isn't unit tested, so code coverage metrics for your project will suffer a bit. But who knows, maybe somebody will add unit tests to it in the near future.

As in approach 2 above, we'll subclass the parent and intercept/swizzle calls to super there. We'll use a category so that the code doesn't get included in product builds. This is optional, though; the code could be put into the parent subclass, and it just wouldn't ever be called in product builds.

#import "JRSwizzle.h"

BOOL didReceiveMemoryWarningWasCalled = NO;

@interface ViewControllerTestable (ForTesting)
- (void)didReceiveMemoryWarningOverride;
@end

@implementation ViewControllerTestable (ForTesting)
- (void)didReceiveMemoryWarningOverride {
    // Call original. Swizzling will redirect this.
    [self didReceiveMemoryWarningOverride];
    didReceiveMemoryWarningWasCalled = YES;
}
@end

@implementation ViewControllerTests
...
- (void)testDidReceiveMemoryWarningCallsSuper {

    // Swizzle super's methods
    NSError *error;
    [ViewControllerTestable 
     jr_swizzleMethod:@selector(didReceiveMemoryWarning)
           withMethod:@selector(didReceiveMemoryWarningOverride)
                error:&error];
    [self.vc didReceiveMemoryWarning];
    STAssertTrue(didReceiveMemoryWarningWasCalled, 
                 @"didReceiveMemoryWarning did not call super");

    // Swizzle back
    [ViewControllerTestable 
      jr_swizzleMethod:@selector(didReceiveMemoryWarning)
            withMethod:@selector(didReceiveMemoryWarningOverride)
                 error:&error];
}
...

In this code, when our method under test calls [super didReceiveMemoryWarning], the call will actually be made to didReceiveMemoryWarningOverride. This in turn will set a flag, and call the original didReceiveMemoryWarning (using the swizzled didReceiveMemoryWarningOverride which now points to didReceiveMemoryWarning).

So what’s the best approach? That will depend on your specific test situation. The second approach seems the cleanest and safest. It does, however, complicate the product code, but only slightly.

How to unit test completion blocks

Blocks have become quite pervasive in iOS. iOS engineers around here are getting pretty comfortable working with them, and are using them more and more. However, they present some problems for unit testing:

Blocks are typically used asynchronously 
For example, a completion block runs when a network API request completes. This sort of delayed execution is definitely bad for unit tests. Unit tests should run fast, otherwise we won't run them frequently.

Blocks are lexically scoped
This means that variables are captured at the time the block is defined, not when it is executed (see the small example below). This is one of the wonderful things about blocks, but it further complicates unit testing them.

Blocks allow near proximity definition
Blocks are typically defined inline where needed. This is great for readability, placing the code in close proximity to where it is used. However, the common technique of extracting the code to a separate method or function for testing undoes this desirable attribute.
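
As a small illustration of the capture behavior described above (this snippet is mine, not from the original code), a captured variable keeps the value it had when the block was created:

int counter = 1;
void (^logCounter)(void) = ^{
    // 'counter' was captured by value when the block was created
    NSLog(@"counter = %d", counter);
};
counter = 42;
logCounter();   // still logs "counter = 1"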

So how do we test blocks? Literally every other blog post I’ve read recommends using some sort of delay mechanism to wait for the block to actually run. This might be acceptable for integration tests, but would likely cause unit tests to take too long to run. There are also test frameworks like OHHTTPStubs that can help in specific situations, but I’m looking for a general solution to this problem.
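
For context, the "delay mechanism" those posts describe usually looks something like the following run-loop spin (shown only for contrast; this is the pattern I'm trying to avoid in unit tests):

__block BOOL blockWasRun = NO;
// ... start the asynchronous operation; its completion block sets blockWasRun = YES ...
NSDate *timeout = [NSDate dateWithTimeIntervalSinceNow:2.0];
while (!blockWasRun && [timeout timeIntervalSinceNow] > 0) {
    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                             beforeDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
}
STAssertTrue(blockWasRun, @"Timed out waiting for the completion block");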

I'd love to have support built into whichever testing framework is being used, but so far I have not found any that do this (although some come very close). After quite a bit of thought and experimentation, I think the best general solution for testing completion blocks is to use Michael Feathers' "Subclass and Override" pattern. This approach works regardless of the unit testing framework being used. I will demonstrate it with an example using AFNetworking. The following is the (simplified) product code that needs to be unit tested:

@implementation MyClass { //in product code
...
- (void)sendAsyncRequest:(NSURLRequest *)urlRequest {
    AFJSONRequestOperation *operation = 
       [AFJSONRequestOperation JSONRequestOperationWithRequest:urlRequest
       success:^(NSURLRequest *request,NSHTTPURLResponse *response,id JSON) 
    {
        NSLog(@"Success: returned object = %@",JSON);
        //do whatever else needs to be done...
    }failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) 
    {
        NSLog(@"Failed async request");
        //do whatever else needs to be done to handle the failure...
    }];
    [operation start];
}
...
@end

The above code appears to be doing two things:

  1. It creates an AFJSONRequestOperation object.
  2. It calls the start method on that object

But to completely unit test this method, we’ll need to verify 7 things:

  1. An AFJSONRequestOperation block is created successfully.
  2. The urlRequest argument is used in creating the AFJSONRequestOperation object.
  3. The success block is used in creating the AFJSONRequestOperation object.
  4. The success block performs as expected.
  5. The failure block is used in creating the AFJSONRequestOperation object.
  6. The failure block performs as expected.
  7. The start method is called on the AFJSONRequestOperation object.

This seems a bit daunting, or at the least like a lot of work for so little a method. But I think that we can establish a pattern for doing this, and maybe some code snippets or reusable code to simplify testing this type of code going forward. So the work here should be mostly a learning effort, and then testing code with completion blocks will be easier going forward.

So let's focus on #4: how to test that the success block executes as expected. The first thing we need to do is extract the call to the class method into a separate method. This gives us access to the success and failure blocks being defined:

- (void)sendAsyncRequest:(NSURLRequest *)urlRequest {
    AFJSONRequestOperation *operation = 
       [self createOurAFJSONRequestOperation:urlRequest
       success:^(NSURLRequest *request,NSHTTPURLResponse *response,id JSON) 
    {
        NSLog(@"Success: returned object = %@",JSON);
        //do whatever else needs to be done...
    }failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) 
    {
        NSLog(@"Failed async request");
        //do whatever else needs to be done to handle the failure...
    }];
    [operation start];
}

- (AFJSONRequestOperation *)createOurAFJSONRequestOperation:(NSURLRequest *)request
                    success:(AFRequestSuccessBlock)success
                    failure:(AFRequestFailureBlock)failure {
    return [AFJSONRequestOperation JSONRequestOperationWithRequest:request 
            success:success 
            failure:failure];
}

Next we subclass our object under test (MyClass) and override the newly extracted method:

@interface MyClassForTesting : MyClass

@property BOOL createOurAFJSONRequestOperationWasCalled;
@property BOOL fireSuccessBlock;
@property BOOL fireFailureBlock;
@property (nonatomic, strong) id JSON;   // canned JSON handed to the fired block

- (AFJSONRequestOperation *)createOurAFJSONRequestOperation:(NSURLRequest *)request
                            success:(void(^)(NSURLRequest *urlRequest,NSHTTPURLResponse *urlResponse,id JSON))success
                            failure:(void(^)(NSURLRequest *urlRequest,NSHTTPURLResponse *urlResponse,NSError *error,id JSON))failure;
@end

@implementation MyClassForTesting

- (AFJSONRequestOperation *)createOurAFJSONRequestOperation:(NSURLRequest *)request
                            success:(void(^)(NSURLRequest *urlRequest,NSHTTPURLResponse *urlResponse,id JSON))success
                            failure:(void(^)(NSURLRequest *urlRequest,NSHTTPURLResponse *urlResponse,NSError *error,id JSON))failure {

    NSURLRequest *urlRequest = nil;         //dummy request
    NSHTTPURLResponse *urlResponse = nil;   //dummy response
    NSError *error = nil;
    
    //Caveat: if we fire immediately, environment will be different than that
    //        of the product, since product has delay before blocks fired.
    if(self.fireSuccessBlock) {
        success(urlRequest,urlResponse,self.JSON);
    }
    if(self.fireFailureBlock) {
        failure(urlRequest,urlResponse,error,self.JSON);
    }
    return nil;
}
@end

So now in this code, we are overriding the extracted method, and using a flag to determine whether or not to fire the passed success and/or failure completion blocks.

One nice thing about this approach is that the only change to the product code is the change to refactor and extract the createOurAFJSONRequestOperation:success:failure: method. All the other code to subclass and override the method resides in our test files, and is not included in the product code.

Here's an example of a Kiwi test using the MyClassForTesting subclass. In this case, I'm using literals to define the JSON data of interest (weather data), and firing a completion block that parses this data and assigns it to a temperature property:

    __block MyClassForTesting * sut = nil;
    ...
    it(@"passes a success block that sets temperature", ^{
        sut = [[MyClassForTesting alloc] init];
        sut.JSON = @{ @"current_observation" : @{ @"temp_f" : @76.5 } };
        sut.fireSuccessBlock = YES;
        [sut sendWeatherServiceRequest:kTEST_ZIPCODE];
        float temp = sut.weather.temperature;
        [[theValue(temp) should] equal:76.5 withDelta:0.01];
    });
    ...

So that’s a general approach that we can use to intercept method calls on the object under test. I admit that it appears messy as shown. It should be fairly easy to wrap all this up into a generic reusable class to encapsulate and hide all the gory details. This would make the test code easier to create, and more importantly, easier to read and understand.

[[MyObject alloc] initForTesting]?

Normally I don't believe in adding additional code just to support unit testing. By this I mean adding #defines and ifs and switches and such to product code, to change program flow during testing. The reason I don't like doing that is that you can create a situation where the code path in the shipping product is different from the code path that is tested.

However, I think I can make a good argument for splitting code up in ways to make unit testing easier. One situation that I keep coming across has to do with a class init method. On one hand, I’d like init to prepare the class for use, including setting reasonable default values, setting up observers, and so forth. But this could cause problems for unit testing.

I have discussed previously having a method call other helper methods to do the actual work, allowing those helper methods to be tested more directly. I think that is a great pattern, and it applies equally well to init. But this poses a bit of a problem for unit testing. For example:

// In the object code being tested
- (id)initAndStart {
   self = [super init];
   if (self) {
      [self setPostalCode:@"Unknown"];
      [self startLocationUpdates];
   }
   return self;
}

In order to test an object, we need to create and save a reference to a test copy. Typically this is done in our test case file’s setUp method. In this case we’re saving to a location property (not shown).

- (void)setUp {
    [self setLocation:[[Location alloc]initAndStart]];
}

One way to unit test this would be to verify that the test object initialized everything correctly. For example:

- (void)testThatInitSetsPostalCodeToUnknown {
    STAssertTrue([self.location.postalCode isEqualToString:@"Unknown"], @"Default postal code should be Unknown but is %@", self.location.postalCode);
}

This might work. But what if subsequent lines of code in the init method cause the postalCode property to change? In this example, starting location updates is expected to periodically update the postalCode property. If that happens before our test completes, then the test will fail.

So in this case, it might be better to take another approach, and split the initialization of this object in half, as shown here:

// In the object code being tested
- (id)initForTesting {
   return [super init];
}
- (id)initAndStart {
   self = [self initForTesting];
   if(self) {
      [self setPostalCode:@"Unknown"];
      [self startLocationUpdates];
   }
   return self;
}

Now we can change our test case file’s setUp method to initialize the object using initForTesting, and perform specific tests on the initAndStart.
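
With the names used above, the updated setUp is just a one-line change:

- (void)setUp {
    // initForTesting does only [super init], so startLocationUpdates never runs here
    [self setLocation:[[Location alloc] initForTesting]];
}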

Hopefully, you are asking yourself at this point, "What is the difference between a standard init and initForTesting?" And the answer is "None, in this case." I've only named it this way to point out its significance to unit testing.

So how do we test initAndStart? We use a partial mock to intercept the call to initForTesting. If we don’t do this, then our mock object will be replaced with a new instance in the call to initForTesting.

- (void)testThatInitAndStartCallsStartLocationUpdates {
    id mock = [OCMockObject partialMockForObject:self.location];
    __unused id tempmock = [[[mock stub]andReturn:mock]initForTesting];
    [[mock expect]startLocationUpdates];
    __unused id initedMock = [mock initAndStart];
    [mock verify];
}

Note: in the above code, __unused is only there to eliminate compiler warnings.

Unit Testing Prototype Code

I’ve often been told that unit testing things like views, that are very visual, doesn’t make sense. My typical response is “Ok, show me an example of code that shouldn’t be unit tested”. I’m honestly open to the idea that there are situations where unit testing isn’t a good thing, but I haven’t come across very many. So here I’m going to suggest a situation where this may be the case.

I’m currently extending the What’s My Speed code that I developed for the lynda.com unit testing course. The original code had just a MapView with a couple large text views to display time and speed.

I decided to spiff things up a bit, and put either some LED bar graph gauges, or perhaps round, automotive style meters to display temperature, fuel level, etc. Looking around for some open source custom views for these, I came across F3BarGauge. This seems to do a good job of displaying the LED bars. So now I need to add labels for a title (eg. “Fuel”), and level labels (eg. “Full”, “1/2”, etc). We’ll want to wrap all this into a custom view.

Now, at this point, I don’t really know what I want this thing to look like. If I was working with a designer, I’d have the designer figure out what it should look like, create me an Illustrator or Photoshop asset, and go to work implementing the code to display it. But I’m not working with a designer, so I’m going to use code to do all this.

What I’m going to do is:

  1. Using the storyboard, drag a bunch of labels and such onto it.
  2. Hook things up just enough to actually see the control in operation.
  3. Mess with the fonts, sizes, and colors until I get something I like.

Once this is done, I fully expect to have a real mess code-wise. But I’ll know then what I want it to look like. I refer to this as prototyping.

Aha! This may be an example where TDD doesn’t make sense. I don’t know what I want the code to do yet. Now, strictly speaking, I could write unit tests before I write each piece of code. But since I know ahead of time that I’m going to be making lots of changes and adjustments, I think it best to wait until the prototype is done, then start all over using TDD.

So that’s what I’m doing. I prototyped using mostly IB, then once I had the design where I liked it, I started rewriting everything using TDD. But the prototype itself contained no unit tests (blush).