TDD Styles

There are two schools of thought in the TDD space: top-down and bottom-up. The former is becoming increasingly popular, while the latter is the classical approach used in teaching.

Assuming a developer is working in an object-oriented paradigm, choosing the bottom-up approach means starting by writing unit tests for the objects needed to complete the requested feature. The catch is figuring out which objects those are, which may take some refactoring and experimentation until the right set is defined.

Top down approach

A response to the problems above is to start writing tests at a higher level of abstraction. Instead of defining an object’s interface, we start by designing the component interface or even the complete application API. This gives context and helps define how the objects are going to communicate and which messages they should understand.
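To make that concrete, a top-down first test might look something like the sketch below (the names and the NUnit-style test framework are only assumptions for illustration). The API under test does not exist yet; the collaborating objects are discovered while making it pass.

using NUnit.Framework;

[TestFixture]
public class OrderingApiSpecs
{
    [Test]
    public void ShouldApplyTheVolumeDiscountToLargeOrders()
    {
        // OrderingApi is the component's entry point; nothing behind it exists yet.
        var api = new OrderingApi();

        decimal total = api.PlaceOrder("SKU-1", quantity: 100);

        // Assumed business rule for the sketch: 10% off a list price of 1000.
        Assert.AreEqual(900m, total);
    }
}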

BDD

BDD is a style that helps the developer understand that he is designing, not testing, and it provides a concise language that both the business and the development team can use. The language is called Gherkin and has several artifacts. The following example shows the format.

Feature: Some terse yet descriptive text of what is desired
  Textual description of the business value of this feature
  Business rules that govern the scope of the feature
  Any additional information that will make the feature easier to understand

  Scenario: Some determinable business situation
    Given some precondition
      And some other precondition
    When some action by the actor
      And some other action
      And yet another action
    Then some testable outcome is achieved
      And something else we can check happens too

  Scenario: A different situation
      ...

A concrete example:

Feature: Serve coffee
  Coffee should not be served until paid for
  Coffee should not be served until the button has been pressed
  If there is no coffee left, then money should be refunded

  Scenario: Buy last coffee
    Given there are 1 coffees left in the machine
    And I have deposited 1$
    When I press the coffee button
    Then I should be served a coffee

This Gherkin file is parsed and code templates are generated from it. These code templates are the placeholders for the testing framework and the starting point for the developer to begin coding.
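As a sketch of what those templates typically look like, here is roughly what a tool like SpecFlow (one C# flavor of this; Cucumber and friends play the same role elsewhere) would generate for the coffee scenario, with pending bodies for the developer to fill in:

using TechTalk.SpecFlow;

[Binding]
public class ServeCoffeeSteps
{
    [Given(@"there are (.*) coffees left in the machine")]
    public void GivenThereAreCoffeesLeftInTheMachine(int coffeesLeft)
    {
        ScenarioContext.Current.Pending();
    }

    [Given(@"I have deposited (.*)\$")]
    public void GivenIHaveDeposited(decimal amount)
    {
        ScenarioContext.Current.Pending();
    }

    [When(@"I press the coffee button")]
    public void WhenIPressTheCoffeeButton()
    {
        ScenarioContext.Current.Pending();
    }

    [Then(@"I should be served a coffee")]
    public void ThenIShouldBeServedACoffee()
    {
        ScenarioContext.Current.Pending();
    }
}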

Just enough design

Another approach to work around the limitations of the bottom-up style is to do just enough up-front design, typically using UML sketches. This allows the development team to brainstorm different designs in front of a whiteboard. The purpose of these designs is to identify the early players of a use case. The implementation is then left to the TDD practice. Common tools are UML’s use case, activity and interaction diagrams. I have described this approach before.

 

A few words on the practice of TDD

Whether you choose a top-down or bottom-up approach, the important thing to keep in mind is that you are not writing tests; you are specifying the way you want the code to be used, even before that code exists. This is more of an experiment than a testing exercise. The idea is to find the abstractions and messages needed to solve the problem at hand without getting distracted by implementation details such as the database, UI or web services.

TDD and the “Add Value” premise

Discovering what brings value

There are only two kinds of code: code that brings value to the customer, and code that doesn’t. I’ll call the former domain code and the latter plumbing code. But what does this mean?

Domain code

In simple terms, if it’s not in the business jargon, it’s not domain code. The business concepts, the rules that dictate how they relate to each other, the services provided by the business to its clients, and the actions taken in specific situations (procedures) are all part of the business domain. Automating these, completely or partially, is what helps the business increase revenue, reduce costs, speed up its procedures and make better decisions. Anything else doesn’t add value to the business.

Plumbing code

This is the kind of code that doesn’t directly add value to the business. It is mostly comprised of technical concerns such as the database, software architecture, reporting technology, technology stack, frameworks and so on. It is necessary for an information system to run, but it is not the reason why the system was created in the first place.

The “chicken or egg” paradox

Common sense dictates that the things most important to the business should come first: a development team should make sure that the business policies, rules and logic are correctly implemented in the code before anything else. Common practice, however, is a different story. That is because the developer usually needs a minimum amount of plumbing code to test whether a business rule works as expected. Consider the case where a developer has to test a simple rule: you can’t withdraw more money from a bank account than is available. To test this, the developer may start by creating a BankAccount table, then writing the code for the rule, then creating a test program to exercise that code. Then he would have to add code for transient fault handling in case the database connection fails. So writing the code that adds value (domain code) is just a tiny fraction of the whole operation; most of the effort goes into setting up infrastructure. Even worse, a lot of developers take this all the way up to creating an API or UI. That only makes the rule harder to test, since now there are several points where something may go wrong. So in order to know whether the rule is correctly implemented, the developer has to put together an end-to-end application that may have to be modified whenever the domain code needs to be rewritten. So which comes first: domain or plumbing code?

Defining what’s to be done

TDD changes the focus from the plumbing code back to the domain code. It does this by forcing the developer to create functional specifications in the form of tests and then write code to fulfill the specification’s expectations.

On a typical development project, a lot of the initial analysis goes into the plumbing code: database, frameworks, operating systems, hardware and so on. On the other hand, the business’s idea of what the system is supposed to do is bound to evolve as the developers and the business discover the requirements and needs. Unfortunately, a lot of the time developers don’t dig into it until later on, when the whole technology stack has already been decided. This creates a situation where the domain code is now restricted by the technology stack’s limitations.

TDD reverses this situation. By having the developers create the specifications first, they find themselves needing to better understand the expected outcome of a piece of software. Going back to our previous example, what is supposed to happen when the bank account does not have enough money to fulfill a withdrawal? An exception? A returned message? Should that be a string or a structure of sorts? Or a function (closure)? Should these be logged? Should the account owner be directed to some sort of procedure (like a loan)? To answer these questions, the developer has to understand what the business expects to happen. This will lead him back to the business to ask questions until he understands enough to continue. This process happens on any development project, especially one following an agile methodology, but the use of TDD greatly accelerates it. It allows the development team not only to write the software in the right way, but also to help the business decide whether it’s the right thing.
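Here is a minimal sketch of what such a specification could look like (the BankAccount API is hypothetical, and it assumes the business chose a result object over an exception). Note that no table, connection string or UI is needed to express the decision:

using NUnit.Framework;

[TestFixture]
public class BankAccountSpecs
{
    [Test]
    public void ShouldRejectAWithdrawalThatExceedsTheAvailableFunds()
    {
        var account = new BankAccount(initialFunds: 50m);

        var result = account.Withdraw(100m);

        // The test documents the business decision: no exception, no partial
        // withdrawal, just a refusal and an untouched balance.
        Assert.IsFalse(result.Succeeded);
        Assert.AreEqual(50m, account.Funds);
    }
}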

So are you ready to jump in?

Code vs Database: hosting the business logic

Back in 2008 I began working for a startup that had a product for government administration. The thing with the government space is that they invest a lot in IT infrastructure: licenses, hardware, custom software and so on. This makes it almost impossible to pitch a new system if it can’t leverage what the institution already has. Over time we came to have several customers using different databases. How did we manage to deploy the system to customers using Oracle, MS SQL and pretty much any database as a data store? As I have been on different projects since then, whenever I have found a restriction imposed by a technology (like a DB system) I have found myself asking this question once again.

Achieving database system independence

Government can be a complex beast, with all its regulations and rules. It was challenging to make the product flexible enough so it could be easily adapted to new customer requirements. Heck, it even had a scripting engine! But looking back I believe this was possible thanks to two things: using an ORM and putting no logic in the DB.

By using an ORM we achieved data representation independence. That is, all the data needed by the system is represented by the objects in the system regardless of how they are stored. This gave us the liberty to switch from one DB technology to another without having to change the code. All we had to deal with was changing the DB provider, which had the knowledge to serialize the objects to each DB. We could have written our own provider to save to files, had we wanted to.
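A rough sketch of the idea (the names are illustrative, not the actual product code): the domain only sees an abstraction, and each provider knows how to talk to its own store.

// The domain code depends only on this abstraction.
public interface IDataProvider
{
    void Save(object businessObject);
}

// One provider per storage technology; swapping databases means swapping
// the provider, not rewriting the domain code.
public class SqlServerProvider : IDataProvider
{
    public void Save(object businessObject) { /* map the object to MS SQL tables */ }
}

public class OracleProvider : IDataProvider
{
    public void Save(object businessObject) { /* map the object to Oracle tables */ }
}

public class FileProvider : IDataProvider
{
    public void Save(object businessObject) { /* serialize the object to a file */ }
}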

From the beginning it was clear to us that hosting logic in the DB was not a good idea. Had we gone that route, we would have had a hard time porting the system from one DB to another. It’s also harder to test. The funny thing is that a lot of developers still follow this practice. Even more, they embrace it!

Encapsulation as a system attribute

I have talked about encapsulation as a code attribute before. However, the same principle can be applied to a system as a whole. By having a data representation that is independent of any external technology, we can reduce the impact of external forces (like changing the data storage or presentation technology). That’s the purpose of so many of the architectural guides out there: to make the logic resilient to changes from the outside. In my experience this comes naturally when following the same principle at the object level.

However, having the business logic in more than one layer (presentation, data or anything else) always leads to code that is harder to maintain and test. Define a layer that holds your business logic (the domain layer) and put all your business rules and logic in there. In other words, encapsulate the business logic in a single layer.

A word about SQL

An interesting fact that makes me curious is that the standard SQL specification has no control flow structures. That is why it is so hard to move logic from one DB system to another: each one implements control flow in its own way. But why deal with this when you can use a general-purpose language that implements all of this from the get-go? If ANSI SQL does not implement it, why force it?

Abstraction levels

“Abstraction” is the act of simplifying something down to its most important traits.

I like to explain it as a sort of Google Maps, but instead of zooming in and out of a world map you are exploring a model. Just as with Google Maps, we have several views with different levels of detail. We call these abstraction levels. In the context of designing a system this is a useful tool. So, using Google Maps as a metaphor, I want to share a personal way of seeing the abstraction levels of a system.

Abstraction level – Purpose
10,000 ft view – Overview of all the actions that can be done on the system
8,000 ft view – Overview of the way a user executes the actions on the system (UI)
6,000 ft view – Overview of the steps needed to carry out the action being requested
4,000 ft view – Overview of the objects that carry out the action being requested
Ground level – Implementation details of the object

For the sake of this discussion I’ll leave the 8,000 ft view out.

10,000 ft. view

This level can be represented in several ways. My favorite is using a use case diagram.

The other way I find absolutely useful is using BDD’s “feature” artifact.

 Feature: Create Loan
 In order to pay for a necessary item
 As a customer with no cash at hand
 I want to get a loan

The nice thing about this is that it also expresses the user’s objective.

The 6000 ft. view

In this view we peek inside the actions of the system (use case, feature). Typically, an action has more than one execution path: the default (expected) path and one or more alternatives. This view can be explored using an Activity Diagram.
This view can also be explored using BDD’s “scenario” artifact.

 Scenario: Apply for a loan while having an open loan with a different provider
 Given I already have an account on the site
 And I have an open loan with a different provider
 When I try to create a new loan
 Then I'll see a message saying "Sorry you can't open a new loan if you already have one open"

Scenario: Apply for a loan with an amount exceeding the maximum allowed by the state
 Given I already have an account on the site
 When I try to create a new loan
 And the amount requested exceeds the maximum allowed by the state I live in
 Then I'll see a message saying "Sorry the amount you applied for exceeds the amount allowed by the state"

Scenario: Get a loan
 Given I already have an account on the site
 And I have no open loan
 When I try to create a new loan
 Then the loan will be created

The 4000 ft. view

If the 6000 ft. view allows us to peek into the action, showing us the several execution paths, then the 4000 ft. view is all about peeking into the execution paths and the way they are carried out by the business objects. I usually use interaction diagrams at this level.

As you can see, this diagram focuses solely on the business objects, their responsibilities and the interactions among them in order to fulfill the action’s objectives. In this particular example I’m including two paths, as you can see from the lines that return to the Customer actor. I could have one for each scenario.
The point here is that these methods are just loosely defined, still waiting to be fleshed out. This is where TDD comes in. You can create a test declaring what you expect the behavior to be and then code out this particular method, isolating any external dependency.
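For instance, a sketch for the loan example could look like the following (every name here is invented, since the diagram itself isn’t reproduced): the test pins down the expected behavior of one loosely defined method, and the not-yet-built dependency is replaced with a hand-rolled stub.

using NUnit.Framework;

public interface IOpenLoanChecker
{
    bool HasOpenLoan(string customerId);
}

[TestFixture]
public class LoanApplicationSpecs
{
    [Test]
    public void ShouldRejectTheApplicationWhenTheCustomerAlreadyHasAnOpenLoan()
    {
        var application = new LoanApplication(new AlwaysOpenLoanChecker());

        var result = application.Apply("customer-1", amount: 500m);

        Assert.AreEqual(
            "Sorry you can't open a new loan if you already have one open",
            result.Message);
    }

    // Stand-in for a dependency that may not even exist yet.
    private class AlwaysOpenLoanChecker : IOpenLoanChecker
    {
        public bool HasOpenLoan(string customerId) { return true; }
    }
}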

TDD vs BDD

I originally depicted this while trying to explain that TDD and BDD are basically the same thing, just at different abstraction levels.
So if you create tests for anything at the 4000 ft. view before any code is in place, it’s called TDD, whereas if it’s for anything above that abstraction level, it’s called BDD.

Let me know your thoughts.

 

Ask for help, not for data

Some time ago I received an email asking for confirmation of attendance at an upcoming event. Included in the email was a little snippet meant to make it fun and attractive to developers. As many of us read through it, it sparked several opinions on the quality of the code. Here it is:

if (employee.WantsToAttend())
{
    if (employee.IsWorkingFromOffice1())
    {
        employee.reply(manager1, "I wanna be there");
    }
    else if (employee.IsWorkingFromOffice2())
    {
        employee.reply(manager2, "Dude! I wanna be there!");
    }
}

I realize that this code is meant to express the idea of the invitation. But the sad thing is that you can find this kind of code in production software. What’s wrong with this code, you ask? Well, let’s talk it out.

Tenets of OOP

We’ve all heard about the OOP principles: encapsulation, polymorphism, inheritance and abstraction. Let’s evaluate the code in the context of these.

Encapsulation

Encapsulation states that you must hide an object’s internals in such a way that if you change them, no dependent object is affected.

Now consider these method calls:

employee.WantsToAttend()
employee.IsWorkingFromOffice1()
employee.reply(manager1, "I wanna be there");

These calls are all implementation details of the “confirm assistance” scenario. The truth is we only care that the employee object confirms its assistance. We could easily move the first evaluation into the object:

class Employee
{
   ...
   public void ConfirmAssistance(string manager, string msg)
   {
      if (wantsToAttend())
         reply(manager, msg);
   }
}

Now the client code looks a little cleaner:

if (employee.IsWorkingFromOffice1())
    employee.ConfirmAssistance(manager1, "I wanna be there");
else if (employee.IsWorkingFromOffice2())
    employee.ConfirmAssistance(manager2, "Dude! I wanna be there!");

So far so good. We’re now hiding the data by making the decision inside the object. However, there is a subtle but important implication: we shift the responsibility of validating whether the employee wants to attend the event from the client code to the object. From now on you don’t need to figure out if the employee wants to attend every time you want to confirm his assistance. The object does it itself.

Allen Holub calls this asking an object for help instead of data. This is a direct consequence of encapsulation and probably the most influential piece of advice in my transition from a data driven mindset to an OOP one.

Can we stop exposing the employee’s office? We can try:

class Employee
{
   ...
   public void ConfirmAssistance(Func<int, string> msgFactory)
   {
      if (wantsToAttend())
         reply(Office.Manager, msgFactory(Office.Id));
   }
}

And so we have a one liner now:

employee.ConfirmAssistance(officeId => officeId == 1?"I wanna be there": "Dude! I wanna be there!");

If you’re confused by the weird syntax it’s just an inline/anonymous function created using a syntax called lambda expressions (the examples are all C#).

Compared to the previous version which one looks more reusable to you?

A few comments here:

1) Now we have a message for office 1 and another for the rest (not only office 2).

2) We really don’t care how the manager and office id are stored; we could easily change them to private fields and it would not make a difference to the calling code.

Cleaning up the responsibilities

The reply method implies a third-party service. Storing a reference to that service is overkill: you have to instantiate the service, along with the rest of the object graph, every time you initialize an employee object. Let’s break this down into two parts: the creation of the reply message and the actual sending of that message.

class MessageGateway
{
    public void Send(Message msg) { ... }
}

class Message 
{
   public Message(string recipient, string body)
   {
      Recipient = recipient;
      Body = body;      
   }

   public string Recipient {get;set;}
   public string Body {get;set;}
}

class Employee
{
   ...
   public Message ConfirmAssistance(Func<int, string> msgFactory)
   {
      if (wantsToAttend())
         return new Message(Office.Manager, msgFactory(Office.Id));
      else
         return null;
   }
}

The client code:

Message reply = employee.ConfirmAssistance(officeId => officeId == 1?"I wanna be there": "Dude! I wanna be there!");    

if(reply != null) new MessageGateway().Send(reply);

We now delegate the creation of the reply message to the employee object and its sending to the message gateway object. Splitting responsibilities like this allows for better reuse and complies with the Single Responsibility Principle.

But… we’re breaking encapsulation on the Message class.

Let’s fix that.

class MessageGateway
{
    public void Send(string recipient, string body) { ... }
}

class Message 
{
    string Recipient;
    string Body;

    public Message(string recipient, string body)
    {
       Recipient = recipient;
       Body = body;
    }

    public void SendThrough(MessageGateway gateway)
    {
       gateway.Send(Recipient, Body);
    }
}

class Employee
{
   ...
   public Message ConfirmAssistance(Func<int, string> msgFactory)
   {
      if (wantsToAttend())
         return new Message(Office.Manager, msgFactory(Office.Id));
      else
         return null;
   }
}

and the client code looks like:

Message reply = employee.ConfirmAssistance(officeId => officeId == 1?"I wanna be there": "Dude! I wanna be there!");    

if(reply != null) reply.SendThrough(new MessageGateway());

“What?? All the fuss for this? Just inverting the way we send the data?” Well yeah, but that’s not all. Do you see that null check there? We can now get rid of it.

Polymorphism

There’s a concept stating that the more execution paths a program has, the harder it is to maintain. This is measured by cyclomatic complexity, a common indicator of code quality. The bottom line: the fewer “if” and “switch” statements, the better.

Our initial refactoring removed all of the branching statements from the program, but later we introduced a new one with the null check. Let’s remove it. A common OOP technique for this is the Null Object pattern, which relies on polymorphism. Let’s see how it goes.

1) extract a common interface

interface IMessage
{
   void SendThrough(MessageGateway gateway);
}

2) create an object that does nothing (as you would if you received a null)

class Message : IMessage
{
    string Recipient;
    string Body;

    public Message(string recipient, string body)
    {
       Recipient = recipient;
       Body = body;
    }

    public void SendThrough(MessageGateway gateway)
    {
       gateway.Send(Recipient, Body);
    }

    //usually the null object is used in a singleton fashion
    class NullMessage : IMessage
    {
       public void SendThrough(MessageGateway gateway)
       {
          //Do nothing 🙂
       }
    }

    public static IMessage Null { get; private set; }

    static Message()
    {
       Null = new NullMessage();
    }
}

3) return the null object instead of null

class Employee
{
   ...
   public IMessage ConfirmAssistance(Func<int, string> msgFactory)
   {
      if (wantsToAttend())
         return new Message(Office.Manager, msgFactory(Office.Id));
      else
         return Message.Null;
   }
}

Presto! Now let’s update the client code:

IMessage reply = employee.ConfirmAssistance(officeId => officeId == 1 ? "I wanna be there" : "Dude! I wanna be there!");

reply.SendThrough(new MessageGateway());

Look ma! In one line (again)!

employee
	.ConfirmAssistance(officeId => 
	   officeId == 1? "I wanna be there": "Dude! I wanna be there!")   
	.SendThrough(new MessageGateway());

Polymorphism allows us to change a system’s behavior without changing the client code. This is done by creating variations of the same method and swapping them in as needed.

If not OOP, then what is it?

Let’s review:

object – data (state) = module (remember VB6?)

object – methods (behavior) = struct (yes, this was already available in C)

You can easily write a program using modules and structs and that’s fine for a lot of situations (forms over data ;))
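For illustration, here is a tiny C# sketch of that style using the running example (the names are made up): the struct is the “object – methods” half and carries only data, while the static class is the “object – data” half and carries only procedures.

// object – methods (behavior) = struct: plain data, no behavior
public struct EmployeeRecord
{
    public string Name;
    public int OfficeId;
    public bool WantsToAttend;
}

// object – data (state) = module: procedures only, no state of its own
public static class AttendanceModule
{
    public static string BuildReply(EmployeeRecord employee)
    {
        return employee.OfficeId == 1
            ? "I wanna be there"
            : "Dude! I wanna be there!";
    }
}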

In conclusion

1) Encapsulation enables Polymorphism

2) Polymorphism enables the use of design patterns and other OOP goodies

OOP shines at making flexible code, but it has a price: indirection. If your project is relatively simple (like the example used here), you may want to consider whether there’s a simpler way, like structured programming (modules + structs). But if you do decide to go this route, just remember that objects do things. Ask for help, not for data!

Extra: a functional twist

Closures can simplify this code a lot. Since they were already present in Smalltalk, I consider them part of the OOP toolset. Here’s the whole enchilada:

class MessageGateway
{
    public void Send(string recipient, string body) { ... }
}

class Employee
{
   ...
   public void ConfirmAssistance(Action<string, int> confirm)
   {
      if (wantsToAttend())
         confirm(Office.Manager, Office.Id);
   }
}

//client code

employee.ConfirmAssistance((manager,officeId)=> {
   var response = officeId == 1? "I wanna be there": "Dude! I wanna be there!";
   new MessageGateway().Send(manager, response);
});

Learn several languages or specialize in one?

I remember a university class where a teacher told us that people used to generalize, learning a lot of different programming languages, while nowadays the tendency is to specialize. While I agree that becoming very good in at least one language is a must in this day and age, I’m sure that not learning more languages is not only a disadvantage but rather a dangerous thing. Here’s why:

The original OOP

I just wanted to share this with you:

https://blog.udemy.com/object-oriented-programming-a-critical-approach/

As mentioned in the post, I also believe that a lot of the beauty of OOP as defined by Smalltalk has been lost.

So, if you have not learned Smalltalk, you should. It’ll change the way you think about OOP.

Here’s something to help you get started:
http://rmod-pharo-mooc.lille.inria.fr/MOOC/WebPortal/co/content.html

Enjoy!

Object oriented vs procedural thinking

I still remember the first time I came in contact with object-oriented concepts. I was browsing the MSDN Library (VS6) in a section called Books and stumbled upon a book called Visual Basic 6 Business Objects. There were only a few chapters included, but I found them amazing. I had been learning and writing VB6 applications for a while back then, but the vocabulary was foreign to me: “Business Object”, “Encapsulation”, “Polymorphism” and so on. It immediately hooked me. The more I learned, the more I wanted to start coding in this new and awesome way. But when it came to writing code I found it so hard to start! The thing is that object orientation requires a new mindset, and that change takes time.

Procedural first

I believe the problem arises because almost every developer is first exposed to procedural programming, and it usually takes a long time before they are introduced to object-oriented programming. Also, the way we are taught object-oriented programming is often very poor. These two facts, combined with the fact that a lot of the tutorials out there for learning object-oriented languages are really procedural exercises, further reinforce the procedural style in the minds of knowledge seekers.

So what does procedural code look like? It comes in so many ways and forms that instead of an example I’ll share some heuristics here.

  1. Your objects contain either just data or just methods
  2. Your objects expose data with the sole purpose of being used by someone else
  3. Almost all of your logic is on static methods

From procedural to object oriented

Procedural programming is a programming paradigm, derived from structured programming, based upon the concept of the procedure call. Procedures, also known as routines, subroutines, or functions (not to be confused with mathematical functions, but similar to those used in functional programming), simply contain a series of computational steps to be carried out. Any given procedure might be called at any point during a program’s execution, including by other procedures or itself. Procedural programming languages include C, Go, Fortran, Pascal, Ada, and BASIC.[1]

So procedural thinking is all about procedures and the data you pass to them. You start by thinking about what the variables are, what they look like (data structures) and what to do with them, whereas the object-oriented way gets you thinking about who does what (responsibilities) and who works with whom to complete a task (collaboration). The how (implementation) is relegated to a later stage. Instead of thinking about data and procedures, you now have objects that bundle data and procedures in order to do things.

Now comes the tricky part: most of the time you should expose only methods, not data (properties). If you’re familiar with an object-oriented language (e.g. Java, C#), go and try writing a program without using properties. Do not expose any data just for someone else to use. Ask for help, not for data (encapsulation). This naturally leads to objects that have data and the methods to manipulate that data. This is good. So instead of writing mailer.send(mail), you’ll now write mail.sendThrough(mailer). And mailer may have something that looks like mailer.send(recipient, sender, subject, body). This subtle change has a big impact on the code. Try it and stop writing Pascal in Java 😉
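Here is a minimal C# sketch of that shift (the classes are only illustrative):

public class Mailer
{
    public void Send(string recipient, string sender, string subject, string body)
    {
        /* talk to the mail server */
    }
}

public class Mail
{
    private readonly string recipient;
    private readonly string sender;
    private readonly string subject;
    private readonly string body;

    public Mail(string recipient, string sender, string subject, string body)
    {
        this.recipient = recipient;
        this.sender = sender;
        this.subject = subject;
        this.body = body;
    }

    // The mail keeps its data to itself and asks the mailer for help.
    public void SendThrough(Mailer mailer)
    {
        mailer.Send(recipient, sender, subject, body);
    }
}

The client writes mail.SendThrough(mailer) and never has to touch the recipient, subject or body.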

 

TDD: Context is King

So you want to start doing TDD, huh? You have read all the blogs about it, seen all the videos on the web, and now you’re ready to start. So you open your IDE/code editor of choice and start a new project and a new test project. And you just stare at it. You know what you have to do: write a test, make it fail, write the simplest code to make it pass, refactor the code to make it pretty. Rinse and repeat. The point is, you don’t really know what to test. Or how to know what to test. We’ve all been there, blank-faced with just one thought: where do I start?

The main problem here is lack of experience; I really can’t think of anything else. I guess that’s not a problem when you’re doing TDD with someone who’s already experienced at it, but if that’s not your case, don’t worry: you just have to do one thing. Start.

I guess it’s different for everyone, but I want to share some of my own personal practices on the matter. I hope this helps you get started.

  1. Define the actions of the system.
  2. Define the business objects on a per use case basis.
  3. Write the test for only one business object first.
  4. Create interfaces for any dependency not defined yet.

Define the actions of the system

One of the most common pitfalls I have seen is the lack of scope and context. I know what I’m going to say sounds dumb, but you really, really, really need to sit down and figure out what the system is going to do before writing any code. I have seen so many people fail at this. My favorite technique is using UML use case diagrams. For starters, I don’t dig too much into the details; I just want a general idea of the system’s scope and the users involved. A use case diagram lets you see how the actors are going to interact with the system and the specific actions a role/persona can perform with it. When working in an agile environment I have found that these use cases map nicely to user stories, and even to epics if you’re using Scrum. Just keep in mind that this is a high-level view of the system. Don’t get tangled up in the details. Even better, don’t think about the details at all. Not at this time.

Define the business objects on a per use case basis

Once you have defined the use cases for the system, go ahead and select one. Now you can go into the details. From here I usually use a UML interaction diagram to define the way the objects will cooperate to accomplish the use case objectives. Resist the temptation to start sketching implementation details: right now the only things you care about are what you request each object to do and what you expect to receive from it; in other words, the objects’ I/O. Remember we are talking about business objects here: unless it’s required by the main flow, don’t put in anything infrastructure related, e.g. gateways to the DB, web services and other things that don’t hold business logic but are meant to be used by other objects.

Write the test for only one business object first

Now you can start coding! First select the use case you want to focus on. Then start by creating a file to contain the tests related to one of its business objects. I adopted some conventions from the BDD movement and usually append “Specs” to the file name. So let’s say I have a business object called BankAccount; the test file would be named BankAccountSpecs. And let’s say that this particular object has a method called Withdraw; then I might create some tests called ShouldDecrementTheFundsOnWithdraw, ShouldFailWithdrawIfNotSufficientFunds and so on. Notice that I start all of my tests with the word “Should”. That’s another BDD-adopted convention of mine. I like it because it makes it easier to express what the expected behavior is. In the end you have to remember that you are not testing, you are designing code, or at least the implementation of the methods. Also, don’t worry if while coding you decide to do things differently than in the interaction diagram; this is part of the show too, and it happens very often that an originally good idea turns out to be overly complex at the moment of coding it. Do not be afraid to try several approaches.
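As a sketch, the start of that file might read like this (NUnit-style attributes assumed; the BankAccount API is whatever you end up designing while making these pass):

using NUnit.Framework;

[TestFixture]
public class BankAccountSpecs
{
    [Test]
    public void ShouldDecrementTheFundsOnWithdraw()
    {
        var account = new BankAccount(initialFunds: 100m);

        account.Withdraw(40m);

        Assert.AreEqual(60m, account.Funds);
    }

    [Test]
    public void ShouldFailWithdrawIfNotSufficientFunds()
    {
        var account = new BankAccount(initialFunds: 100m);

        var result = account.Withdraw(200m);

        Assert.IsFalse(result.Succeeded);
        Assert.AreEqual(100m, account.Funds);
    }
}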

Create interfaces for any dependency not defined yet

If while coding you discover a dependency on an object not yet created, define an interface. Do not struggle trying to create an object to provide the service you need; that can probably be accomplished by another object later on. Just define the interface with the methods you need and use that. Don’t feel bad if you have interfaces with just one or two methods; this is fine. It’s also a side effect of TDD, and a good one 🙂 (google the Interface Segregation Principle). Anyway, now you need an object that implements the interface to be used in your code. You can hand-code one. Or you can use a mocking framework. I strongly urge you to consider the latter. You should not waste time creating objects for the sake of designing/testing your code. It really does not add value. Go ahead and take a look at any of the many mocking frameworks available for your platform of choice. Experiment until you find one you’re comfortable with and start using it.
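A small sketch of the idea (the interface and objects are hypothetical, and Moq stands in for whichever mocking framework you end up choosing):

using Moq;
using NUnit.Framework;

// The dependency discovered while writing the test, captured as a small interface.
public interface IExchangeRateProvider
{
    decimal GetRate(string currency);
}

[TestFixture]
public class PriceCalculatorSpecs
{
    [Test]
    public void ShouldConvertThePriceUsingTheCurrentRate()
    {
        // Instead of hand-coding a fake, let the mocking framework build one.
        var rates = new Mock<IExchangeRateProvider>();
        rates.Setup(r => r.GetRate("EUR")).Returns(0.5m);

        var calculator = new PriceCalculator(rates.Object);

        Assert.AreEqual(50m, calculator.PriceIn(100m, "EUR"));
    }
}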

Some last words (and warnings)

The purpose of everything I have covered here is to help you get started. As you get the hang of it you may skip some of the steps outlined here. Maybe you will start creating tests that represent the whole use case instead of using a sequence diagram (I know I have). But if you are at the starting line I strongly recommend you follow the procedure described above. It will give you something valuable for practicing TDD: context. What to test? What not to test? Where to start? That all depends on the context. Once you have the context set up, the answers to these questions will flow naturally. The rest of the procedure you have already learned from the books, podcasts and posts such as this one: Red, Green, Refactor. I also have a word of caution: do not try to write all of the tests beforehand. Don’t think of moving from use case to use case, gathering all of them to write off later. This is a bad idea. The reason is simple: you are designing code, not writing tests. Tests are only a design tool, as much as paper and pen are. When designing code, several decisions have to be made as something unexpected arises from the code, and some tests will lead to the creation of other tests.

So there you have it. Stay tuned, more on this coming soon.

The basics part 3: inheritance

One of the pillars of object-oriented programming is inheritance, the ability to create hierarchies of objects in which you refine or modify certain behavior. For the sake of an example, imagine a company that sells washing machines, stoves and blenders. To motivate its vendors, it has a bonus system that works in the following way:

All vendors have a sales quota. If 25% of their sales come from washing machines, the vendor gets a bonus of 15% of their salary. If at least 50% of their sales come from blenders, they receive a bonus of 30% of their salary. Finally, if they achieve 25% of their quota selling stoves, they get a bonus of 20% of their salary.

Now, suppose that these vendors are distributed across the North, East and South areas. Vendors in the South get an additional 10% bonus if they exceed the sales quota.

We could represent it in the following way:

(Inheritance diagram: BonusCalculator and its SouthernBonusCalculator subclass)

Then we have a class BonusCalculator. Objects of this class contain all the rules to calculate the bonus of any vendor. One way of implementing this might be something like:

public interface Vendor
{
   bool SalesPercentAchieved(double percent, string productLine);
}

public class BonusCalculator
{
  public virtual decimal CalcBonus(Vendor vendor)
  {
      decimal bonus = washMachineSellingBonus(vendor);
      bonus += blenderSellingBonus(vendor);
      bonus += stoveSellingBonus(vendor);

      return bonus;
  }
  ...
}

However, for vendors in the South, we have an additional rule on top of all the others.

public class SouthernBonusCalculator : BonusCalculator
{
   public override decimal CalcBonus(Vendor vendor)
   {
      decimal bonus = base.CalcBonus(vendor);
      bonus += overSellingBonus(vendor);
      return bonus;
   }
   ...
}

That’s it. Just use the correct object to calculate the bonus.

static void Main()
{
   List<Vendor> vendors = getVendors("North");
   vendors.AddRange(getVendors("East"));

   var calculator = new BonusCalculator();
   foreach (var vendor in vendors)
   {
      decimal bonus = calculator.CalcBonus(vendor);
      Console.WriteLine(bonus);
   }

   calculator = new SouthernBonusCalculator();

   List<Vendor> southernVendors = getVendors("South");
   foreach (var vendor in southernVendors)
   {
      decimal bonus = calculator.CalcBonus(vendor);
      Console.WriteLine(bonus);
   }
}

Simple. This code leaves much to be desired, but we will adjust it later.

Now I would like to point out that there are no properties in the diagram or in the code.
I did this deliberately to emphasize a fundamental principle: inheritance is all about behavior, not data. In other words, we use inheritance when we have an object with a method that we want to adjust a bit to our needs, not when we want to reuse data. That is a common mistake. It comes from the relational database mindset, which is about reducing duplication of data. In object-oriented programming, it is about reducing duplication of methods. Don’t be afraid to repeat data in different objects.

Well, that is all for today. Next time we will see the meaning of the phrase “favor composition over inheritance” and how it can help us improve the code in this example. Until then.