The UML class diagram: you are probably using it wrong

As I have mentioned before, I learned VB6 with the help of a book. The book introduced me to the basic control flow structures and then to databases as a form of storage. It touched a little on relational theory and on how the structure of the database would affect the structure of the data as represented in the software, and therefore on the need to model the database before anything else. So, for years I followed that pattern. The data model became my de facto starting point. Even when I learned C++ and Java, 2 OOP languages, my analysis approach remained the same. For a long time…

The confusion

A common approach to database design is to use the entity relationship diagram. It shows tables and the way these are related to each other. It’s pretty simple actually, and easy to reason about. I believe that’s the reason for its popularity. So when I was getting started with OOP (as opposed to only learning OOP languages) and I was introduced to the Class diagram, I immediately mapped it to the ER diagram, and hence to the thinking behind it. So for a long time I kept doing data modeling, but instead of using an ER diagram I was now using a Class diagram.

Classifying the UML diagrams

Suppose you are tasked with creating a robotic arm to crack eggs open. Where would you start?

Probably you would start by trying to figure out the movements used to crack an egg open. You would do this by watching an expert and maybe trying to learn the trick yourself. Then you would start thinking about how to replicate that movement. This may lead you to identify some key components and the interactions between them. You may make some sketches. And from here you would probably start experimenting with different configurations, maybe tweaking the pieces a little, until your experiment is successful. And then you would create some blueprints so someone else can build your piece of art anytime.

Software development is no different.

Static diagrams

Just like the blueprints for creating a robotic arm, the code we write is the blueprint the computer uses to create the software artifacts of our application. This static aspect of software defines the structure of the components that are going to be created, and the UML static diagrams, such as the class diagram, are used to model it.

Dynamic diagrams

Like the interaction between the different parts of a robotic arm, the UML dynamic diagrams describe the interaction between parts of the software. This is dictated by the messages sent back and forth between objects.

Relational vs OOP

So while the ER diagram is used to model the database structure, the Class diagram is used to model the application structure. One is for data storage and the other for the application code. More often than not, the structure used to store data in a relational database is not the best one for an OOP application. The opposite is also true. This is due to the approach taken by each paradigm: the relational school emphasizes avoiding data repetition using a technique called normalization, while the OOP side stands for avoiding code repetition using inheritance, composition and other techniques. This difference is known as the object-relational impedance mismatch.

Spotting the differences

While at a glance the ER and Class diagrams look similar, the process by which each is created is very different.

The ER construction process

The process of creating an ER diagram is like creating the blueprints for a construction project. Since it’s not so easy to change a relational database once you start putting data into it, it’s better to get it right from the start. To this end we identify the entities in our domain space (the business industry) and then try to identify the relevant information for each one from the data flow of the application. And a lot of the time we guess at some of the information that may be used in the future (at least I have).

The Class diagram construction process

On the other side, the Class diagram construction process is more like the construction of a robotic arm: it takes a lot of iterations, and you start by modeling (sketching) the mechanisms (dynamic aspects) and then do a lot of experimentation until you get it right. I often use an interaction diagram to model how the objects (not classes) are going to interact. Once you have a set of objects and a set of messages sent between those objects, the Class diagram is almost a natural step: you already know how those pieces interact, now you just have to figure out how they are to be assembled together.

Using the class diagram: a better way

As Allen Holub points out, we do object oriented programming, not class oriented programming. Classes are just that, classes of objects, or in other words, groups of objects that share some traits. You can only start to classify objects once you already have a bunch of them.

  1. Start with the dynamic side of things: find your key objects and the messages they send to each other.
  2. Experiment using a testing framework so you can check feasibility in the shortest possible time. Adjust as necessary.
  3. Create a Class diagram and start sketching everything you’ve got so far, trying to identify and remove any duplicated code, and to identify and decouple the parts that may change more often than the rest. Don’t let worries about how you are going to store your data affect your Class diagram. Try to delay dealing with persistence as much as possible.
  4. Only after you have a set of stable objects, use the class diagram to aid you in the creation of your ER diagram. This is not a 1:1 mapping; don’t be afraid to optimize your ER diagram for your RDBMS.
  5. Use your class diagram to explain the structure of your system to other developers, and the interaction diagram to explain, well, the interactions between your objects. They can also serve to demonstrate the implementation of certain patterns in your application. After all, diagrams are all about communicating ideas.

 

Polymorphism or how to decouple structure from behavior

The 2 sides of the coin

There are 2 aspects to any piece of software: a static one and a dynamic one, the former being the code and the latter being the execution of that code. The static part deals with the structure of the software, while the dynamic part deals with its behavior. This duality is often ignored by a lot of developers, yet it is there and its effects are tangible.

The structure and behavior relationship

There is a strong relationship between structure and behavior. The behavior is conditioned by the structure represented in code. You can have a rigid structure, that is, a structure that doesn’t allow a change of behavior without changes to itself. The opposite is a flexible structure, one that allows changes to behavior without changing the structure itself.

Structure vs Code

It’s important to understand that code is not the same as structure. You can change the code without changing the structure of the software. Renaming a variable, even changing a variable, are changes that do not affect the structure of the software. The structure is more of a conceptual model, coupled with some conceptual mechanisms, that is implemented by a programming language in the form of code. The way the conceptual model is defined will be affected by the programming paradigm used by the developer. You’ll arrive at different models/structures using different paradigms. The conceptual mechanisms will also differ from one paradigm to another, and in some cases from one language to another.

The case against switch

Put simply, the purpose of polymorphism is to create a flexible structure that allows the software to change its behavior without changing the structure. There may be changes to the code, but not to the structure.

Consider the following code:

public enum EmployeeType
{
    Manager, Worker
}

public struct Employee
{
   public string ID {get; set;}
   public string FirstName {get; set;}
   public EmployeeType Type {get; set;}
}

public void MakePayment(Employee employee)
{
    // The type tag drives the branching: every new employee type
    // means another case below.
    decimal wage = 0;
    switch(employee.Type)
    {
      case EmployeeType.Manager:
        wage = 30m;
        break;
      case EmployeeType.Worker:
        wage = 10m;
        break;
    }
    decimal payment = wage * getWorkedHours(employee.ID);
...
}

Suppose we want to add a new employee type, Janitor. To make the software pay the Janitor (a change in behavior) we would need to add a Janitor value to the EmployeeType enum and modify the switch to accommodate the new value. This is a rigid structure: you need to tweak it to make it learn new tricks. This is a typical procedural style (written in C#).
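
As a sketch, the tweak would look like this (the janitor’s wage is an invented figure):

public enum EmployeeType
{
    Manager, Worker, Janitor // change #1: a new enum value
}

public void MakePayment(Employee employee)
{
    decimal wage = 0;
    switch(employee.Type)
    {
      case EmployeeType.Manager:
        wage = 30m;
        break;
      case EmployeeType.Worker:
        wage = 10m;
        break;
      case EmployeeType.Janitor:
        wage = 8m; // change #2: yet another case (hypothetical wage)
        break;
    }
    decimal payment = wage * getWorkedHours(employee.ID);
...
}

Every new employee type touches both the enum and every switch that branches on it.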

The worst part of the code above is that by changing the Employee type from struct to class and putting the MakePayment procedure inside a class (most of the time as a static method), a lot of developers believe that they are now doing OOP.

Let’s see how this would look in an OOP paradigm.

public interface Role
{
   decimal GetWage();
}

public class Manager: Role
{
   public decimal GetWage() { return 30m;}
}

public class Worker: Role
{
   public decimal GetWage() { return 10m;}
}

public class Employee
{
   Role _role;

   public Employee(Role role){
      _role = role;
   }

   public string ID {get;}
   public decimal Wage{ get{ return _role.GetWage(); }}
}

public class Timesheet
{
   public bool Pay(Employee employee)
   {
      decimal payment = employee.Wage * getWorkedHours(employee.ID);
      ...
   }
}

So in this code, if you want to add a Janitor employee, all you have to do is create a new Role class that represents the Janitor role. And that’s it. That’s a code change, not a structural one. The price for this flexibility is indirection. Now there are a lot of classes, each one representing a case of the switch. That’s how we deal with branching in OOP. And that’s why OO software tends to be way more flexible than procedural software.
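
A minimal sketch of that change (the janitor’s wage is, again, an invented figure):

public class Janitor: Role
{
   public decimal GetWage() { return 8m; } // hypothetical wage
}

// Nothing else changes: Employee and Timesheet stay untouched.
var janitor = new Employee(new Janitor());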

Final words

I hope this helps make the point clear. I tried to keep it simple, so the example may look silly. I would like to note that polymorphism is not a characteristic of OOP alone: C lets you define some sort of interface for a function (via function pointers), and languages in the functional programming paradigm make heavy use of it too. Even when the form varies, the idea is almost always the same: decouple dependencies and allow the creation of a flexible software structure, making it resilient in the face of change.
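
To ground that last point, here’s a small C# sketch of the same wage dispatch done with delegates (the closest C# analogue to the C function pointers mentioned above), with no classes involved:

using System;

// Each "role" is just a function that knows its wage.
Func<decimal> managerWage = () => 30m;
Func<decimal> workerWage = () => 10m;

// The caller never branches on a type tag; it invokes whatever it's given.
decimal Pay(Func<decimal> getWage, decimal hours) => getWage() * hours;

Console.WriteLine(Pay(managerWage, 40m)); // 1200
Console.WriteLine(Pay(workerWage, 40m));  // 400

The form varies, the idea is the same.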

A tale of 2 paradigms: how curly braces can confuse young programmers

I was about 15 when I learned programming. VB6 was my first experiment. Later on I went on to learn C++ at university, and C# at my first job. Then one day I came across C (a flavor known as Dynamic C used to program Rabbit microcontrollers) at one of my jobs. Once I grasped the concepts in VB6 I never really had a hard time jumping from one language to another. I could easily carry the concepts from language to language. And then I learned Smalltalk. Somehow it felt (and sometimes still feels) like hitting a wall. I decided to learn it because I read somewhere that if you want to truly master OOP you need to give it a shot. Turns out that is true.

It’s all in the mind.

There are several programming paradigms out there, and while OOP is probably one of the most abused words in the technology arena, I’ve found that it is also one of the most misunderstood paradigms. At least in my experience. I considered myself a decent OO developer and even had some cool projects listed on my resume. Why then did I have problems when starting with Smalltalk?

The paradigm bridge

As I continue to find code that’s supposedly OO yet uses procedural techniques, I often wonder how it is that we don’t realize this. Even worse, we still believe that we are writing OO code and try to apply OO design patterns, which often leads to convoluted code. To explain my theory behind this I would like to take a walk through history.

Imperative programming

In the beginning there were these monolithic creatures that walked over the hardware. Their view of the world was a simple one: you start on line 1 and then continue executing line after line until you find a GOTO instruction, then you jump to wherever the GOTO points you. On each line you have to explicitly command the computer on how to do whatever you want. This paradigm is called imperative programming. Even in this day and age you can find people who still write code like this for line of business applications (I have).

Procedural programming

At some point in time, Edsger Dijkstra wrote a letter explaining how this instruction (GOTO) was making code harder to understand and maintain. I’m not sure if this was what led to the notion of structured programming, but the concept certainly gained popularity around that time. When applied to imperative programming we got what’s called procedural programming. This is nothing more than the decomposition of an imperative program into a series of smaller programs, procedures, which are called from the main program. With this came a lot of new control flow structures, like switch, while and for, to replace GOTO statements. One of the most representative languages of this paradigm is C.

Object oriented programming

In the 1970s, Alan Kay and the people at Xerox Palo Alto Research Center came out with Smalltalk. It was a very concise language (all of the Smalltalk reserved words fit on one card) which ran on a virtual machine and introduced a new paradigm: object oriented programming. It basically stated that any program could be represented as a series of objects, little programs which communicate with each other by sending messages. This was so easy to reason about and write that, using Smalltalk, many kids created impressive stuff, even by today’s standards.

Mix and match

As OOP started to gain popularity, some people tried to implement the OOP paradigm on languages that were already familiar to a lot of people. So C, being wildly popular, became the default choice for this experimentation. The results are C++ and Objective-C. I believe that the idea was to reuse the familiar syntax in the new paradigm. And it looks like the same reasoning was behind Java and C#.

Drinking all in the same glass.

The problem with this, IMO, is the way these languages are taught. You can do the experiment yourself: look for a course on C++, Java or C# and look at the table of contents. Most of the time they start with all of the structured programming keywords (the ones inherited from C) before even touching the notion of an object. Most of the courses out there are effectively teaching structured programming and then try to introduce the student to OOP without explicitly telling him. These are 2 different paradigms that require different mindsets. The ‘switch’ keyword concept is not even present in the OO paradigm. It is not needed. Yet the student just learned it as part of the same language, so he assumes that it’s safe to use. Even worse, he assumes that’s the way to go. He is having 2 different drinks in the same glass. How can we expect him to distinguish between one paradigm and the other?

Learning and moving forward

Looking back, I now understand that before Smalltalk I was using a structured programming approach with some OO features. This limits the benefits of using OO. Learning Smalltalk forced me to finally switch to a completely OO mindset, which is awesome. I’m still learning, and it feels like there’s still a long way to go, but hey, at least now I’m aware of it.

Alternatives, anyone?

I’ve been thinking about alternative teaching approaches to overcome this problem, but I still don’t have anything solid. How would you solve this matter?

 

TDD Styles

There are 2 schools of thought in the TDD space: top down and bottom up, the former becoming increasingly popular, while the latter is the classical approach used in teaching.

Given a developer working in an object oriented paradigm, if he chooses the bottom up approach he’ll start by writing unit tests for the objects he needs to complete the requested feature. The catch here is figuring out which objects those are. This may lead to some refactoring and experimentation until the right set is defined.

Top down approach

A response to the problems above is to start writing tests at a higher level of abstraction. So now, instead of defining an object interface, we start by designing the component interface or even the complete application API. This gives context and helps define how the objects are going to communicate and which messages they should understand.
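
As a sketch of what that can look like (NUnit assumed; LoanApplicationService and the shape of its result are hypothetical names, since at this point none of the code exists yet, which is the whole point):

using NUnit.Framework;

[TestFixture]
public class LoanApiSpecs
{
    [Test]
    public void A_customer_with_an_open_loan_cannot_open_another()
    {
        // Designing the API from the outside in.
        var api = new LoanApplicationService();
        api.CreateLoan("customer-1", 100m);

        var result = api.CreateLoan("customer-1", 50m);

        Assert.IsFalse(result.Approved);
    }
}

The test drives the discovery of the objects behind the API instead of guessing at them up front.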

BDD

BDD is a style that helps the developer understand that he is designing, not testing, and it provides a concise language that both the business and the development team can use. The language is called Gherkin and it has several artifacts. The following template shows the format.

Feature: Some terse yet descriptive text of what is desired
  Textual description of the business value of this feature
  Business rules that govern the scope of the feature
  Any additional information that will make the feature easier to understand

  Scenario: Some determinable business situation
    Given some precondition
      And some other precondition
    When some action by the actor
      And some other action
      And yet another action
    Then some testable outcome is achieved
      And something else we can check happens too

  Scenario: A different situation
    ...

A concrete example:

Feature: Serve coffee
  Coffee should not be served until paid for
  Coffee should not be served until the button has been pressed
  If there is no coffee left, then money should be refunded

  Scenario: Buy last coffee
    Given there are 1 coffees left in the machine
    And I have deposited 1$
    When I press the coffee button
    Then I should be served a coffee

This Gherkin file is parsed and code templates are generated from it. These templates are the placeholders for the testing framework and the starting point for the developer to begin coding.
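
For instance, SpecFlow (a .NET Gherkin runner) generates step bindings roughly like this sketch; the class and method names are whatever you choose:

using TechTalk.SpecFlow;

[Binding]
public class ServeCoffeeSteps
{
    [Given(@"there are (\d+) coffees left in the machine")]
    public void GivenThereAreCoffeesLeftInTheMachine(int coffeesLeft)
    {
        ScenarioContext.Current.Pending(); // placeholder to be fleshed out
    }

    [When(@"I press the coffee button")]
    public void WhenIPressTheCoffeeButton()
    {
        ScenarioContext.Current.Pending();
    }

    [Then(@"I should be served a coffee")]
    public void ThenIShouldBeServedACoffee()
    {
        ScenarioContext.Current.Pending();
    }
}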

Just enough design

Another approach to work around the limitations of the bottom up approach is to do just enough up-front design, typically using UML sketches. This allows the development team to brainstorm different designs in front of a whiteboard. The purpose of these designs is to identify the early players of a use case. The implementation is then left to the TDD practice. Common tools are UML’s use case, activity and interaction diagrams. I have described this approach before.

 

A few words on the practice of TDD

Whether you choose a top down or bottom up approach, the important thing to keep in mind is that you are not writing tests, you are specifying the way you want the code to be used, even before that code exists. This is more of an experiment than a testing exercise. The idea is to find the abstractions and messages needed to solve the problem at hand without getting distracted by implementation details such as the DB, UI or web services.

TDD and the “Add Value” premise

Discovering what brings value

There are only 2 kinds of code: the one that brings value to the customer, and the one that doesn’t. I’ll call the former domain code and the latter plumbing code. But what does this mean?

Domain code

In simple terms, if it’s not in the business jargon, it’s not domain code. That is, all of the business concepts and the rules that dictate how they relate to each other, the services provided by the business to its clients, and the actions taken in specific situations (procedures) are all part of the business domain, and automating these (completely or partially) is what helps the business increase revenue, reduce costs, accelerate procedure execution and make better decisions. Anything else doesn’t add value to the business.

Plumbing code

This is the kind of code that doesn’t directly add value to the business. It is mostly comprised of technical aspects such as the database, software architecture, reporting technology, technology stack, frameworks and so on. It is necessary for an information system to run, but it is not the reason why the system was created in the first place.

The “egg or chicken first” paradox

Common sense dictates that the things that are most important to the business should be put first. That is, a development team should make sure that the business policies, rules and logic are correctly implemented in the code before anything else. The common practice, however, is a different story. That is because the developer usually needs a minimum amount of plumbing code to test whether a business rule is working as expected. Consider the case where a developer has to test a simple rule: you can’t withdraw more money from a bank account than is available. To test this, the developer may start by creating a BankAccount table, then writing the code for the rule, then creating a test program to exercise that code. And then he would have to add code for transient fault handling in case the database connection fails. So writing the code that adds value (domain code) is just a tiny fraction of the whole operation. Most of the actions are about setting up the infrastructure. Even more, a lot of developers take this all the way up to creating an API or UI. This just makes it harder to test the code related to the rule, since now there are several points where something may go wrong. So now, in order to know if the rule is correctly implemented, the developer has to stand up an end-to-end application that may have to be modified in the event that the domain code needs to be rewritten. So which comes first: domain or plumbing code?

Defining what’s to be done

TDD changes the focus from the plumbing code back to the domain code. It does this by forcing the developer to create functional specifications in the form of tests and then write code to fulfill the specification’s expectations.

On a typical development project, a lot of the initial analysis goes into the plumbing code: database, frameworks, operating systems, hardware and so on. On the other hand, the business idea of what the system is supposed to do is subject to evolution as the developers and the business discover the requirements and needs. Unfortunately, a lot of the time the developers don’t dig into it until later on, when the whole technology stack has been decided. This creates a situation where the domain code is restricted by the limitations of the technology stack.

TDD reverses this situation. By having the developers create the specifications first, they find themselves needing to better understand the expected outcome of a piece of software. Going back to our previous example, what’s supposed to happen when the bank account doesn’t have enough money to fulfill a withdrawal operation? An exception? A returned message? Should that be a string or a structure of sorts? Or a function (closure)? Should these be logged? Should the account owner be directed through some sort of procedure (like a loan)? To answer these questions, the developer has to understand what the business expects to happen. This will lead him to go back to the business and ask questions until he understands enough to continue. This process happens on any development project, especially one following an agile methodology, but the use of TDD greatly accelerates it. It allows the development team not only to write the software in the right way, but to help the business decide if it’s the right thing.
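
As a minimal sketch (NUnit assumed; BankAccount and InsufficientFundsException are hypothetical names for code that doesn’t exist yet), such a specification could start like this, with zero plumbing involved:

using NUnit.Framework;

[TestFixture]
public class BankAccountSpecs
{
    [Test]
    public void Cannot_withdraw_more_than_the_available_balance()
    {
        var account = new BankAccount(balance: 50m);

        // The assertion encodes a business decision: here we assume the
        // business chose an exception over a returned message.
        Assert.Throws<InsufficientFundsException>(() => account.Withdraw(100m));
    }
}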

So are you ready to jump in?

Code vs Database: hosting the business logic

Back in 2008 I began working for a startup that had a product for government administration. The thing with the government space is that they invest a lot in IT infrastructure: licenses, hardware, custom software and so on. This makes it almost impossible to pitch a new system if it can’t leverage what the institution already has. Over time we came to have several customers using different databases. How did we manage to deploy the system to customers using Oracle, MS SQL and pretty much any other database as a data store? As I have been on different projects since then, whenever I find a restriction imposed by a technology (like a DB system) I find myself asking this question once again.

Achieving database system independence

Government can be a complex beast, with all its regulations and rules. It was challenging to make the product flexible enough that it could be easily adapted to new customer requirements. Heck, it even had a scripting engine! Looking back, I believe this was possible thanks to 2 things: using an ORM and putting no logic in the DB.

By using an ORM we achieved data type representation independence. That is, all the data needed by the system is represented by the objects in the system regardless of how it is stored. This gave us the liberty to switch from one DB technology to another without having to change the code. All we had to deal with was changing the DB provider, which knew how to serialize the objects to each DB. We could have written our own provider to save to files, had we wanted to.
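
The shape of the idea, as a sketch (the names are illustrative, not the product’s actual code): the domain talks to an abstraction and each storage technology gets its own provider.

using System.Collections.Generic;

// A stand-in domain object for the sketch.
public class Employee
{
    public string ID { get; set; }
}

// The domain code depends only on this contract.
public interface IEmployeeRepository
{
    Employee FindById(string id);
    void Save(Employee employee);
}

// One implementation per storage technology; switching databases means
// switching the provider, not the domain code. An in-memory version is
// handy for tests; an Oracle or MS SQL one implements the same contract.
public class InMemoryEmployeeRepository : IEmployeeRepository
{
    private readonly Dictionary<string, Employee> _store =
        new Dictionary<string, Employee>();

    public Employee FindById(string id) { return _store[id]; }
    public void Save(Employee employee) { _store[employee.ID] = employee; }
}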

From the beginning it was clear to us that having logic hosted in the DB was not a good idea. Had we gone that route, we would have had a hard time porting the system from one DB to another. It’s also harder to test. The funny thing is that a lot of developers still follow this practice. Even more, they embrace it!

Encapsulation as a system attribute

I have talked about encapsulation as a code attribute before. However, the same principle can be applied to a system as a whole. By having a data representation that is independent of any external technology, we can reduce the impact of external forces (like changing the data storage or presentation technology). That’s the purpose of the many architectural guides out there: to make the logic resilient to changes from the outside. In my experience this comes naturally when following the same principle at the object level.

However, having the business logic in more than one layer (presentation, data or anything else) always leads to code that is harder to maintain and test. Define a layer that holds your business logic (a domain layer) and put all your business rules and logic in there. In other words, encapsulate the business logic in a single layer.
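
A sketch of what that single layer holds (hypothetical names): the withdrawal rule from earlier is plain code, testable without any database.

using System;

public class InsufficientFundsException : Exception { }

public class BankAccount
{
    private decimal _balance;

    public BankAccount(decimal balance) { _balance = balance; }

    public void Withdraw(decimal amount)
    {
        // The business rule lives here, not in a stored procedure or the UI.
        if (amount > _balance)
            throw new InsufficientFundsException();
        _balance -= amount;
    }
}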

A word about SQL

An interesting fact that makes me curious: the standard SQL specification has no control flow structures. That is why it is so hard to move logic from one DB system to another: each one implements its own way. But why deal with this when you can use a general purpose language that implements all of this from the get go? If ANSI SQL does not implement it, why force it?

Abstraction levels

“Abstraction” is the act of simplifying something down to its most important traits.

I like to explain it as a sort of Google Maps, but instead of zooming in and out of a world map you are exploring a model. Just as with Google Maps, we have several views with different levels of detail. We call these abstraction levels. In the context of designing a system this is a useful tool. So, using Google Maps as a metaphor, I want to share a personal way to see the abstraction levels of a system.

Abstraction Level   Purpose
10,000 ft view      Overview of all the actions that can be done on the system
8,000 ft view       Overview of the way a user executes the actions on the system (UI)
6,000 ft view       Overview of the steps needed to carry out the action being requested
4,000 ft view       Overview of the objects that carry out the action being requested
Ground level        Implementation details of the objects

For the sake of this discussion I’ll leave the 8,000 ft view out.

10,000 ft. view

This level can be represented in several ways. My favorite one is a use case diagram.

The other way I find absolutely useful is using BDD’s “feature” artifact.

 Feature: Create Loan
 In order to pay for a necessary item
 As a customer with no cash at hand
 I want to get a loan

The nice thing about this is that it also expresses the user’s objective.

The 6,000 ft. view

In this view we peek inside the actions of the system (use case, feature). Typically an action has more than one execution path: the default (expected) path and one or more alternatives. This view can be explored using an Activity Diagram.
It can also be explored using BDD’s “scenario” artifact.

 Scenario: Apply for a loan while having an open loan with a different provider
 Given I already have an account on the site
 And I have an open loan with a different provider
 When I try to create a new loan
 Then I'll see a message saying "Sorry you can't open a new loan if you already have one open"

 Scenario: Apply for a loan with an amount exceeding the maximum allowed by the state
 Given I already have an account on the site
 When I try to create a new loan
 And the amount requested exceeds the maximum allowed by the state I live in
 Then I'll see a message saying "Sorry the amount you applied for exceeds the amount allowed by the state"

 Scenario: Get a loan
 Given I already have an account on the site
 And I have no open loan
 When I try to create a new loan
 Then the loan will be created

The 4,000 ft. view

If the 6,000 ft. view allows us to peek into the action, showing us the several execution paths, then the 4,000 ft. view is all about peeking into the execution paths and the way they are carried out by the business objects. I usually use interaction diagrams at this level.

As you can see, this diagram focuses solely on the business objects, their responsibilities and the interactions among them needed to fulfill the action’s objectives. In this particular example I’m including 2 paths, as you can see from the lines that return to the Customer actor. I could have one for each scenario.
The point here is that these methods are just loosely defined, still waiting to be fleshed out. This is where TDD comes in. You can create a test declaring what you expect the behavior to be and then code out this particular method, isolating any external dependency.
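
A sketch of that step (NUnit assumed; all names are hypothetical): the external dependency is hidden behind an interface and replaced with a stub in the test, and the method is then fleshed out just enough to satisfy the specification.

using NUnit.Framework;

// The "does the customer already have an open loan" dependency is pushed
// behind an interface so it can be stubbed out in the test.
public interface ILoanRegistry
{
    bool HasOpenLoan(string customerId);
}

public class AlwaysOpenLoanStub : ILoanRegistry
{
    public bool HasOpenLoan(string customerId) { return true; }
}

// A first implementation, just enough to satisfy the specification.
public class LoanService
{
    private readonly ILoanRegistry _registry;

    public LoanService(ILoanRegistry registry) { _registry = registry; }

    public bool CanApply(string customerId)
    {
        return !_registry.HasOpenLoan(customerId);
    }
}

[TestFixture]
public class LoanServiceSpecs
{
    [Test]
    public void A_customer_with_an_open_loan_cannot_apply()
    {
        var service = new LoanService(new AlwaysOpenLoanStub());

        Assert.IsFalse(service.CanApply("customer-1"));
    }
}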

TDD vs BDD

I originally depicted this while trying to explain that TDD and BDD are basically the same thing, just at different abstraction levels.
So if you create tests for anything at the 4,000 ft. view before any code is in place, it’s called TDD, whereas if it’s for anything above that abstraction level, it’s called BDD.

Let me know your thoughts.

 

Ask for help, not for data

Some time ago I received an email asking for confirmation about an event about to take place. Included in the email was a little snippet meant to make it fun and attractive to developers. As many of us read through it, it sparked several opinions on the quality of the code. Here’s the thing:

if (employee.WantsToAttend()) {
    if (employee.IsWorkingFromOffice1()) {
        employee.reply(manager1, "I wanna be there");
    }
    else if (employee.IsWorkingFromOffice2()) {
        employee.reply(manager2, "Dude! I wanna be there!");
    }
}

I realize that this code is meant to express the idea of the invitation. But the sad thing is that you can find this kind of code in production software. What’s wrong with this code, you say? Well, let’s talk it out.

Tenets of OOP

We’ve all heard about the OOP principles: encapsulation, polymorphism, inheritance and abstraction. Let’s evaluate the code in the context of these.

Encapsulation

Encapsulation states that you must hide the object’s internals in such a way that if you change them, no dependent object is affected.

Now consider these method calls:

employee.WantsToAttend()
employee.IsWorkingFromOffice1()
employee.reply(manager1, "I wanna be there");

These calls are all implementation details of the “confirm assistance” scenario. Truth is, we only care that the employee object confirms its assistance. We could easily move the first evaluation into the object:

class Employee
{
   ...
   public void ConfirmAssistance(string manager, string msg)
   {
      if (wantsToAttend())
         reply(manager, msg);
   }
}

Now the client code looks a little cleaner:

if (employee.IsWorkingFromOffice1())
    employee.ConfirmAssistance(manager1, "I wanna be there");
else if (employee.IsWorkingFromOffice2())
    employee.ConfirmAssistance(manager2, "Dude! I wanna be there!");

So far so good. We’re now hiding the data by making the decision inside the object. However, there is a subtle but important implication: we shift the responsibility of validating whether the employee wants to attend the event from the client code to the object. From now on you don’t need to figure out if the employee wants to attend every time you want to confirm his assistance. The object will do it itself.

Allen Holub calls this asking an object for help instead of data. This is a direct consequence of encapsulation and probably the most influential piece of advice in my transition from a data driven mindset to an OOP one.

Can we stop exposing the employee’s office? We can try:

class Employee
{
   ...
   // Office.Id is assumed to be an int, matching the comparison in the client code
   public void ConfirmAssistance(Func<int,string> msgFactory)
   {
      if (wantsToAttend())
         reply(Office.Manager, msgFactory(Office.Id));
   }
}

And so we have a one liner now:

employee.ConfirmAssistance(officeId => officeId == 1 ? "I wanna be there" : "Dude! I wanna be there!");

If you’re confused by the weird syntax, it’s just an inline/anonymous function created using a syntax called lambda expressions (the examples are all C#).

Compared to the previous version which one looks more reusable to you?

A few comments here:

1) now we have a message for office 1 and another for the rest (not only office 2)

2) we really don’t care how the manager and office id are stored; we can easily change them to private fields and it would not make a difference to the calling code.

Cleaning up the responsibilities

The reply method implies a third party service. Storing a reference to that service is overkill: you would have to instantiate the service with the rest of the object graph every time you initialize an employee object. Let’s break this down into 2 parts: the reply message creation and the actual sending of that message.

class MessageGateway
{
    public void Send(Message msg) {...}
}

class Message 
{
   public Message(string recipient, string body)
   {
      Recipient = recipient;
      Body = body;      
   }

   public string Recipient {get;set;}
   public string Body {get;set;}
}

class Employee
{
   ...
   public Message ConfirmAssistance(Func<int,string> msgFactory)
   {
      if (wantsToAttend())
         return new Message(Office.Manager, msgFactory(Office.Id));
      else
         return null;
   }
}

The client code:

Message reply = employee.ConfirmAssistance(officeId => officeId == 1 ? "I wanna be there" : "Dude! I wanna be there!");

if (reply != null) new MessageGateway().Send(reply);

We now delegate the reply message creation to the employee object and the sending of it to the message gateway object. Splitting responsibilities like this allows for better reuse and complies with the Single Responsibility Principle.

But… we’re breaking encapsulation on the Message class.

Let’s fix that.

class MessageGateway
{
    public void Send(string recipient, string body) {...}
}

class Message
{
   public Message(string recipient, string body)
   {
      Recipient = recipient;
      Body = body;
   }

   string Recipient;
   string Body;

   public void SendThrough(MessageGateway gateway)
   {
      gateway.Send(Recipient, Body);
   }
}

class Employee
{
   ...
   public Message ConfirmAssistance(Func<int,string> msgFactory)
   {
      if (wantsToAttend())
         return new Message(Office.Manager, msgFactory(Office.Id));
      else
         return null;
   }
}

and the client code looks like:

Message reply = employee.ConfirmAssistance(officeId => officeId == 1 ? "I wanna be there" : "Dude! I wanna be there!");

if (reply != null) reply.SendThrough(new MessageGateway());

“What?? All this fuss just to invert the way we send the data?” Well yeah, but that’s not all. Do you see that null check there? We can now get rid of it.

Polymorphism

There’s a concept stating that the more execution paths a program has, the harder it is to maintain. This is measured as cyclomatic complexity, a common indicator of code quality. Bottom line: the fewer “if” and “switch” statements, the better.

Our initial approach removed all of the branching statements from the program. But later we introduced a new one with the null check. Let’s remove it. A common OOP technique for this is the null object pattern. It relies on the polymorphism attribute of OOP. Let’s see how it goes.

1) extract a common interface

interface IMessage
{
   void SendThrough(MessageGateway gateway);
}

2) create an object that does nothing (as you would if you received a null)

class Message: IMessage
{
   public Message(string recipient, string body)
   {
      Recipient = recipient;
      Body = body;
   }

   string Recipient;
   string Body;

   public void SendThrough(MessageGateway gateway)
   {
      gateway.Send(Recipient, Body);
   }

   // usually the null object is used in a singleton fashion
   class NullMessage: IMessage
   {
      public void SendThrough(MessageGateway gateway)
      {
         // Do nothing :)
      }
   }

   public static IMessage Null {get; private set;}

   static Message() // static constructors take no access modifier
   {
      Null = new NullMessage();
   }
}

3) return the null object instead of null

class Employee
{
   ...
   // the return type is now the interface, so Message.Null fits too
   public IMessage ConfirmAssistance(Func<int,string> msgFactory)
   {
      if (wantsToAttend())
         return new Message(Office.Manager, msgFactory(Office.Id));
      else
         return Message.Null;
   }
}

Presto! Now let’s update the client code:

IMessage reply = employee.ConfirmAssistance(officeId => officeId == 1 ? "I wanna be there" : "Dude! I wanna be there!");

reply.SendThrough(new MessageGateway());

Look ma! In one line (again)!

employee
    .ConfirmAssistance(officeId =>
        officeId == 1 ? "I wanna be there" : "Dude! I wanna be there!")
    .SendThrough(new MessageGateway());

Polymorphism allows us to change the system’s behavior without changing the client code. This is done by creating variations of the same method and swapping them as needed.

If not OOP, then what is it?

Let’s review:

object – data (state) = module (remember VB6?)

object – methods (behavior) = struct (yes, this was already available in C)

You can easily write a program using modules and structs and that’s fine for a lot of situations (forms over data ;))

In conclusion

1) Encapsulation enables Polymorphism

2) Polymorphism enables the use of design patterns and other OOP goodies

OOP shines at making flexible code, but it has a price: indirection. If your project is relatively simple (like the example used here) you may want to ponder whether there’s a simpler way, like structured programming (modules + structs). But if you decide on the OOP route, just remember that objects do things. Ask for help, not data!

Extra: a functional twist

Closures can simplify this code a lot. Since they were already present in Smalltalk, I consider them part of the OOP toolset. Here’s the whole enchilada:

class MessageGateway
{
    public void Send(string recipient, string body) {...}
}

class Employee
{
   ...
   // nothing is returned anymore; the closure does the sending
   public void ConfirmAssistance(Action<string,int> confirm)
   {
      if (wantsToAttend())
         confirm(Office.Manager, Office.Id);
   }
}

//client code

employee.ConfirmAssistance((manager, officeId) => {
   var response = officeId == 1 ? "I wanna be there" : "Dude! I wanna be there!";
   new MessageGateway().Send(manager, response);
});

Learn several languages or specialize in one?

I remember a university class where a teacher told us that people used to generalize (learn a lot of different programming languages), whereas nowadays the tendency is to specialize. While I agree that becoming very good in at least one language is a must in this day and age, I’m sure that not learning more languages is not only a disadvantage but a rather dangerous thing. Here’s why:

The original OOP

I just wanted to share this with you:

https://blog.udemy.com/object-oriented-programming-a-critical-approach/

As mentioned in the post, I also believe that a lot of the beauty of OOP as defined by Smalltalk has been lost.

So, if you have not learned Smalltalk, you should. It’ll change the way you think about OOP.

Here’s something to help you get started:
http://rmod-pharo-mooc.lille.inria.fr/MOOC/WebPortal/co/content.html

enjoy!