How to make your C# code more OOP with delegates pt 2

Implement the strategy pattern with delegates

Changing the default behavior of a method under test (or in any other specific circumstance)

Given the following code:

class EmailSender{
    
    public void Send (string recipient, string subject, string body) { /* invoke 3rd party */ }
    
}

class Email{
    
    public EmailSender _sender = new EmailSender();

    public string recipient, subject, body;

    public void Send(){ _sender.Send(recipient, subject, body); }
    
}

Imagine that you cannot change the Email class. How would you unit test it without making a call to a 3rd party service?

Answer: inject a delegate with the desired behavior.

class EmailSender{
    
    Action<string,string,string> _sendAction; //holds the current behavior
    
    public EmailSender(){
        _sendAction = SendToThirdParty; //default action
    }
    
    public void Send (string recipient, string subject, string body) {
        _sendAction.Invoke(recipient, subject, body);
    }
    
    void SendToThirdParty (string recipient, string subject, string body) { /* invoke 3rd party */ }

    internal void ActivateTestMode(Action<string,string,string> testAction){
        _sendAction = testAction;
    }

}
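To test it, inject a delegate that records the call instead of hitting the service. A minimal sketch (the recipient values and class layout are condensed from the example above):

```csharp
using System;
using System.Collections.Generic;

// condensed version of the EmailSender above
class EmailSender
{
    Action<string, string, string> _sendAction;

    public EmailSender() { _sendAction = SendToThirdParty; }

    public void Send(string recipient, string subject, string body)
        => _sendAction(recipient, subject, body);

    void SendToThirdParty(string recipient, string subject, string body)
        => throw new InvalidOperationException("would call the 3rd party service");

    internal void ActivateTestMode(Action<string, string, string> testAction)
        => _sendAction = testAction;
}

class EmailSenderTest
{
    static void Main()
    {
        var sent = new List<string>();
        var sender = new EmailSender();

        // swap the default behavior for a recording delegate
        sender.ActivateTestMode((to, subject, body) => sent.Add(to + "|" + subject));

        sender.Send("a@b.com", "hi", "body"); // no 3rd party call happens
        Console.WriteLine(sent[0]); // a@b.com|hi
    }
}
```

The test can now assert against the recorded calls without any network traffic.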

 

Specializing the rules of a domain object without inheritance

Given the following code:

public class BonusCalculator
{
  List<Bonus> bonuses = new List<Bonus>();

  public BonusCalculator(ICollection<Bonus> bonus)
  {
    bonuses.AddRange(bonus);
  }

  public decimal CalcBonus(Vendor vendor)
  {
   decimal amount = 0;
   bonuses.ForEach(bonus => amount += bonus.Apply(vendor, amount));
   return amount;
  }

}

public class BonusCalculatorFactory()
{

   public BonusCalculator GetSouthernBonusCalculator()
   {
    var bonuses = new List<Bonus>();
    bonuses.Add(new WashMachineSellingBonus()); 
    bonuses.Add(new BlenderSellingBonus ()); 
    bonuses.Add(new StoveSellingBonus ());

    return new BonusCalculator(bonuses);    
   }

}

If we want to add a new bonus that increases the total by 15%, we would have to create a new class just to do that multiplication… So let’s try something different.

public class BonusCalculator
{
  List<Func<Vendor, decimal, decimal>> bonuses = new List<Func<Vendor, decimal, decimal>>();

  public BonusCalculator(ICollection<Func<Vendor, decimal, decimal>> bonus)
  {
    bonuses.AddRange(bonus);
  }

  public decimal CalcBonus(Vendor vendor)
  {
   decimal amount = 0;
   bonuses.ForEach(bonus => amount += bonus.Invoke(vendor, amount));
   return amount;
  }

}

Now we have to modify the factory:

public class BonusCalculatorFactory
{

   public BonusCalculator GetSouthernBonusCalculator()
   {
    var bonuses = new List<Func<Vendor, decimal, decimal>>();
    bonuses.Add(new WashMachineSellingBonus().Apply); 
    bonuses.Add(new BlenderSellingBonus().Apply); 
    bonuses.Add(new StoveSellingBonus().Apply);
    bonuses.Add((vendor, amount) => amount * 0.15m); //adds 15% of the accumulated amount
    return new BonusCalculator(bonuses);    
   }
}

 

Easy peasy. Now depending on how it is implemented, we could start thinking about turning some of the rules into singletons.
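Putting the delegate-based calculator together, a runnable sketch might look like this (the Vendor class and the bonus rates are made up for illustration):

```csharp
using System;
using System.Collections.Generic;

// hypothetical Vendor with just enough data for the demo
public class Vendor
{
    public string Name;
    public int WashMachinesSold;
}

public class BonusCalculator
{
    readonly List<Func<Vendor, decimal, decimal>> bonuses;

    public BonusCalculator(ICollection<Func<Vendor, decimal, decimal>> bonus)
    {
        bonuses = new List<Func<Vendor, decimal, decimal>>(bonus);
    }

    public decimal CalcBonus(Vendor vendor)
    {
        decimal amount = 0;
        bonuses.ForEach(bonus => amount += bonus.Invoke(vendor, amount));
        return amount;
    }
}

class Demo
{
    static void Main()
    {
        var bonuses = new List<Func<Vendor, decimal, decimal>>
        {
            (vendor, amount) => vendor.WashMachinesSold * 10m, // flat bonus per sale (made-up rate)
            (vendor, amount) => amount * 0.15m                 // 15% on top of the accumulated amount
        };
        var calculator = new BonusCalculator(bonuses);

        var vendor = new Vendor { Name = "Ana", WashMachinesSold = 3 };
        Console.WriteLine(calculator.CalcBonus(vendor)); // 30 + 4.5 = 34.5
    }
}
```

Note that the 15% rule is just a lambda: no extra class needed.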

Moving the control flow into objects

How many times have you run an operation where you wanted to know 1) whether it was successful and 2) its return value? A lot of times this leads to code like:

class OperationResult{
    public bool IsSuccess{get;set;}
    public object ResultValue {get;set;}
}

interface IDataGateway{
    OperationResult UpdateName(string name);
}

class NameUpdaterCommand{
    string _name;
    IDataGateway _data;
    Log _log;

    public NameUpdaterCommand(string name, IDataGateway data, Log log){
       _data = data;
       _name = name;
       _log = log;
    }
    
    public void Execute(){
        var result = _data.UpdateName(_name);

        if(result.IsSuccess)
            _log.Write("Name updated to: " + result.ResultValue);
        else
            _log.Write("Something went wrong: " + result.ResultValue);
    }
}

Come on, don’t be shy about it. I’ve done it myself too…

So what’s wrong with it?

Let’s see: the intention behind this code is to decide on a course of action based on the result of an operation. To carry out these actions, we need some additional info for each situation. A problem with this code is that it can’t handle more than two scenarios. To support another one, instead of a boolean IsSuccess you would have to create an enumeration of sorts, like:

enum ResultEnum{
    FullNameUpdated,
    FirstNameUpdated,
    UpdateFailed
}

class OperationResult{
    public ResultEnum Result {get;set;}
    public object ResultValue {get;set;}
}

interface IDataGateway{
    OperationResult UpdateName(string name);
}

class NameUpdaterCommand{
    string _name;
    IDataGateway _data;
    Log _log;

    public NameUpdaterCommand(string name, IDataGateway data, Log log){
       _data = data;
       _name = name;
       _log = log;
    }
    
    public void Execute(){
        var result = _data.UpdateName(_name);

        switch(result.Result){
             case ResultEnum.FullNameUpdated:
               _log.Write("Full name updated to: " + result.ResultValue);
               break;
             case ResultEnum.FirstNameUpdated:
               _log.Write("First name updated to: " + result.ResultValue);
               break;
             case ResultEnum.UpdateFailed:
               _log.Write("Something went wrong: " + result.ResultValue);
               break;
        }  
    }
}

So now every time you want to add a new scenario, you have to add a new enum value and a new case to the switch. This is more flexible than before but more laborious than it should be. Let’s try to replace this enum-based code with objects that represent each case:

interface IDataGateway{
    void UpdateName(string name, Action<string> firstNameUpdated, Action<string> fullNameUpdated, Action<string> updateFailed);
}

class NameUpdaterCommand{
    string _name;
    IDataGateway _data;
    Log _log;

    public NameUpdaterCommand(string name, IDataGateway data, Log log){
       _data = data;
       _name = name;
       _log = log;
    }
    
    public void Execute(){
       _data.UpdateName(_name,
                     fullNameUpdated: name  => _log.Write("Full name updated to: " + name),
                    firstNameUpdated: name  => _log.Write("First name updated to: " + name),
                        updateFailed: error => _log.Write("Something went wrong: " + error )
        );
    }
}

So now the code is shorter, and we have moved the responsibility for controlling the flow to the object implementing IDataGateway. How it does so is just an implementation detail: we don’t care whether it uses an enumeration or any other mechanism, as long as it works.
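To make the inversion concrete, here is one possible implementation of that gateway. The dispatch rules (an empty name fails, a space means a full name) are invented for illustration:

```csharp
using System;

interface IDataGateway
{
    void UpdateName(string name,
                    Action<string> firstNameUpdated,
                    Action<string> fullNameUpdated,
                    Action<string> updateFailed);
}

// one possible implementation; the dispatch rules are made up
class InMemoryDataGateway : IDataGateway
{
    public void UpdateName(string name,
                           Action<string> firstNameUpdated,
                           Action<string> fullNameUpdated,
                           Action<string> updateFailed)
    {
        if (string.IsNullOrWhiteSpace(name))
            updateFailed("name cannot be empty"); // caller decides what "failed" means
        else if (name.Contains(" "))
            fullNameUpdated(name);                // "John Doe" counts as a full name
        else
            firstNameUpdated(name);
    }
}
```

The NameUpdaterCommand above would work unchanged against this gateway, since it only cares about the three callbacks.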

Phew! I think that’s enough for now. Now go improve your code!

 

How to make your C# code more OOP with delegates pt 1

Since I became a codementor, a recurring theme has been how to handle delegates. I’ll try to clarify this once and for all. This ended up as a long post, so I have decided to break it into two parts: in this one, we get a feeling for what delegates are. The next one will deal with how and when to use them.

Extending the C# type system

The C# type system is often divided into reference and value types. I won’t go into the difference between these two since there’s a lot of information about this topic out there. Typically a developer starts an application by extending this basic type system to better model a solution for the problem he is solving (in effect, he’s creating a DSL). There are several ways to do this: to extend the value type system you usually use structs, whereas for the reference type system classes and interfaces are the default way.

However, there’s a 3rd way to declare a new type: delegates.

Understanding delegates

Given that the delegate type syntax is different from the other ways to declare a new type, a lot of developers never realize that they are indeed declaring a new type. I mean, consider the following:

class StockItem {
    public int SKU {get; set;}
    public string Description {get; set;}
    public decimal Price {get; set;}
}

struct Point{
   public int X {get; set;}
   public int Y {get; set;}
}

interface IValidate{
    bool IsValid(object value);
}

Somehow they feel alike, right? Now check this out:

delegate string CallWebService(string url);

It feels odd, right? It looks nothing like the other type definitions we have seen so far. It doesn’t have attributes or methods. Just what is this?!
Calm down. First of all, a delegate is an object that holds code declared somewhere else, in contrast with classes, which define the behavior of their instances inside themselves. With that in mind, what the delegate definition is saying is what kind of code it will contain: which values it can accept and return. The “method name” becomes the delegate type’s name. Now that we have a type, we can create instances of it!

delegate string CallWebService(string url);

class WebServiceUtils {
   public string MakeCall(string url){...}
}

public class Test{ 
    public static void Main(){ 
        string anUrl = "..."; 
        var caller = new CallWebService(new WebServiceUtils().MakeCall);
        caller.Invoke(anUrl); //or it could be just caller(anUrl);
    }
}

So far so good. Actually, the C# team saw the potential of delegates, and in C# version 2 they decided to bring in one of the best things that could happen to the language: anonymous methods. Anonymous methods are an incarnation of closures, a very powerful concept. Unluckily for us, they decided to reuse the delegate keyword for this.

delegate string CallWebService(string url);

class WebServiceUtils {
   public string MakeCall(string url){ ... }
}

public class Test{ 
    public static void Main(){ 
        string anUrl = "..."; 
        CallWebService caller = delegate(string url) { return new WebServiceUtils().MakeCall(url); };
        caller.Invoke(anUrl); //it could be just caller(anUrl);
    }
}

I can only imagine that the C# team was thinking that since anonymous methods were only going to be used with a delegate, it made sense to use the delegate keyword to declare not only a delegate type but a delegate instance as well. Unfortunately, this leads to further confusion since now when someone talks about a delegate, he could either be talking about a delegate type or an anonymous method.

Even worse! From MSDN:

There is one case in which an anonymous method provides functionality not found in lambda expressions. Anonymous methods enable you to omit the parameter list. This means that an anonymous method can be converted to delegates with a variety of signatures. This is not possible with lambda expressions.

Basically, it means that you can write code like:

delegate string CallWebService(string url);

class WebServiceUtils {
   public string MakeCall(string url){ ... }
}

public class Test{
    public static void Main(){
        string anUrl = "...";
        CallWebService caller = delegate { //<-- no parameters at all!!
            //you have access to the variables in the same 
            //scope as where the anonymous method was declared
            return new WebServiceUtils().MakeCall(anUrl);
        };
        caller.Invoke(anUrl); 
    }
}

So now you have anonymous methods that don’t conform to the delegate definition but are still regarded as valid.

As if this wasn’t enough the 3rd version of C# brought another way to declare anonymous methods: lambda expressions.

delegate string CallWebService(string url);

class WebServiceUtils {
   public string MakeCall(string url){ ... }
}

public class Test{
    public static void Main(){
        string anUrl = "...";
        //this is an anonymous method too
        CallWebService caller = url => new WebServiceUtils().MakeCall(url); //or MakeCall(anUrl)
        caller.Invoke(anUrl); 
    }
}
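Since C# 3 also shipped the built-in generic Func and Action delegate types, the same instance can be written without declaring CallWebService at all. A runnable sketch (the web call is stubbed with a plain string so nothing goes over the wire):

```csharp
using System;

class WebServiceUtils
{
    // stubbed so the sketch runs without a network call
    public string MakeCall(string url) { return "response from " + url; }
}

class Test
{
    static void Main()
    {
        string anUrl = "http://example.com";
        // Func<string, string>: takes a string, returns a string
        Func<string, string> caller = url => new WebServiceUtils().MakeCall(url);
        Console.WriteLine(caller(anUrl)); // response from http://example.com
    }
}
```

In practice most modern C# code uses Func/Action rather than custom delegate types, unless the delegate name adds domain meaning.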

Uff! This was a lot for a single post. The next post will see delegates in action. Stay tuned!

The OOP wars

Some time ago I had an interesting discussion with Tony Marston. Suddenly I found myself in the middle of what seems to be an ongoing war over what OOP is. It seems to (still) be a heated debate in some circles, so I want to share some thoughts on the topic.

The origins

So around 1962, two guys from Norway (Ole-Johan Dahl and Kristen Nygaard) extended the Algol programming language to easily create simulations. They called the new language Simula. The idea was that “a discrete event system is viewed as a collection of processes whose actions and interactions completely describe the operation of the system”. Little did they know that their work would create a revolution in the programming community.

The calm before the storm

Some time after the invention of Simula, in 1966, a recent graduate from the University of Utah came into contact with it. As he tried to understand the concepts behind this newborn language, something clicked in his mind. His name was Alan Kay, and he was the one who coined the term object-oriented programming. His vision was profound yet simple: a net of interconnected software computing units called objects, sending messages to each other. His idea was the software equivalent of the internet. He also had the idea of a network of computers interconnected by wired or wireless means, by the way.

Around 1979 a Danish man called Bjarne Stroustrup was working at AT&T Bell Labs, where he had the problem of analyzing the Unix kernel with respect to distributed computing. The problem was that C was way too low level for such a large system. It was then that memories from his Ph.D. thesis, which had been written using Simula, came back. Using Simula as a base, Bjarne extended C to support classes, calling the result “C with Classes” and later “C++”.

The smalltalk faction

Smalltalk is the brainchild of Alan Kay, the reification of his vision. The language itself is pretty compact.

Smalltalk sports a dynamic typing system; that is, types are not enforced at compile time.

An object is a computing unit that sends and receives messages. The user defines which actions must take place when a given message is received by a specific object. If there’s no action defined for a particular message, the object notifies the system with a ‘message not understood’ message.

Alan Kay was heavily influenced by LISP. In LISP everything is a list: code, data, everything. This allows powerful metaprogramming techniques. Kay built upon that metaphor: everything in Smalltalk is an object. Everything. A number is just an object which knows how to respond to messages like “+ 3”. A string is an object that knows how to respond to messages like “reverse”. Even inline functions/closures are objects (known as blocks) that respond to a “value” message. That’s all there is to it. This is the reason why static typing is unnecessary: you just care whether the object can respond to a message or not.

The C++ camp

C++ was designed with systems creation in mind. As such it deals with things like performance and memory footprint. If you are familiar with C, C++ is a natural evolution. It can be tricky, however, to get the most out of the object extension. This is due to C++ being a multiparadigm language, meaning that you may still resort to solutions from a different paradigm that could be implemented more cleanly using OOP. Stroustrup talked about this in his 1995 OOPSLA paper (see the concrete types section).

It uses a static type system, so the compiler validates every type and related operation.

An object is a structure of data along with methods to manipulate that data. You directly invoke the methods on the object.

Classes are a type extension mechanism, allowing the developer to create a DSL on top of C while still having access to all the lower level features. In order to circumvent some of the problems that arise from a static type system, it introduces templating, which allows a higher reusability.

The eternal bashing warfare

So, the eternal discussion about OOP stems from these two schools of thought. To some, OOP is nothing more than procedural programming plus encapsulation, inheritance and polymorphism. To others (myself included) it involves a completely different mindset. The reality is that C++ is indeed an object-oriented extension on top of a procedural language, whereas Smalltalk is a completely new language that draws heavily from the functional realm. Therefore, the claims of each group are valid depending on the point of view. As someone who learned OOP using C++, I have found it very beneficial to learn Smalltalk later. Really, having nothing but objects to work with helped me understand the boundaries between OOP and procedural programming, shaping my approach to OOP design and decomposition.

Peace to the world

So, whether you belong to the Smalltalk or the C++ party, remember to be tolerant of other people’s points of view. It’s an absolute benefit to learn to see from another perspective. So next time you find yourself in another OOP battle, remember that the ultimate value comes from learning to work together despite differences, not from demonstrating that you’re right and everybody else is not.

Happy Holidays!

Software developer profiles

In my last post I talked about how a developer can improve his skillset by breaking it down into 3 areas: Principles, Technology and Industry knowledge. Depending on how the time is invested, chances are he will fall into one of the following stereotypes (T = Technology, I = Industry, P = Principles; order indicates depth of expertise):

T+I+P

This is by far the most common type of software developer I have found in my interviewing experience. These are students who graduated from school using Visual Basic (or some other RAD tool) and then went on to create forms-over-data kinds of software with no really complex rules. Even when they move to Java, they’re still coding with a VB mindset. They can create something out of thin air quickly, but often it’s a BBOM (Big Ball of Mud) and very hard to maintain. Depending on the time and the kinds of projects, he/she can start to evolve towards a more principles-focused practice, or just continue doing the same thing for the next 10 years. I usually try to figure out where the candidate is on the spectrum between these two poles.

I+T+P

I have seen more and more developers of this kind lately. They are usually people like the accountant who learned SQL on his own. As the final user of the software, he can create and tweak it to fit his needs. Since they lack any formal engineering education, the resulting code is often no better than a student’s. I have worked with this kind of developer but have never interviewed one.

P+I+T

These typically are software developers who spent a lot of time in an enterprise, creating enterprise-level software. This forced them to look for better ways to create software that’s stable, maintainable and robust, ultimately leading to a better understanding of the principles, patterns and practices. However, the rate of adoption of new technologies in the enterprise is rather slow (some are still running on AS/400), so they are behind the technological wave. Nevertheless, their understanding of the more general principles allows them to pick up new technologies and languages quickly. Whenever I come across this kind of candidate I usually recommend him/her on the spot.

P+T+I

This is the typical software developer who graduates from school and goes to work in a software shop, creating software for other clients. He understands the importance of creating good software and tries to improve his skills as time goes by. However, unless he/she is assigned to a customer for a very long time, his understanding of the industry is limited to the scope of the projects assigned to him. Whenever I come across this kind of candidate I usually recommend him/her on the spot.

Where are you now and where are you heading?

Final thoughts

In my experience the seniority of a software developer is dictated by the depth of his understanding of the principles, patterns and practices. The reason is that the quality of the overall software is deeply affected by this. You can always correct a DOM manipulation done with jQuery to use the Angular mechanisms, but correcting a faulty architecture or a leaky abstraction is a far more complex matter. That is why it is important to take these decisions with a solid understanding of their consequences.
So you can have a developer with a good understanding of principles and zero experience using Angular and expect him to write better software than a developer with 5 years of Angular and a poor understanding of the principles. The latter may be quicker, but the former will create something of higher quality. Uncle Bob has reiterated this more than once and asked us as software developers to raise the bar. If you follow his work (talks and books) you’ll see that his emphasis is on the principles, not the technology.

As always, let me know what you think.

Knowledge management for software developers

There are 3 different kinds of knowledge that a software developer has to manage over his professional career. I call them principles, technology and industry knowledge. There is other relevant stuff, such as soft skills, but today I’m focusing on knowledge, not skill sets.

Principles

Before continuing I want to clarify what I mean by principles: borrowing the title from Uncle Bob’s famous book, I’m referring to principles, patterns and practices (with a little twist on the book’s meaning).

Principles are technology agnostic. They can be applied generally to a wide set of circumstances. An example would be the DRY principle, which is universally recognized as a good practice in software engineering (no matter whether you work in an OOP or a functional paradigm).

Patterns are often limited to a specific mindset, a paradigm.

A good example here is the null object pattern. It makes sense in an OOP context, but it falls flat when used in procedural programming.
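As a quick illustration (using a hypothetical logger): instead of passing null and forcing every caller to check for it, you pass an object that implements the interface and simply does nothing.

```csharp
using System;

public interface ILogger
{
    void Write(string message);
}

public class ConsoleLogger : ILogger
{
    public void Write(string message) => Console.WriteLine(message);
}

// the null object: same interface, does nothing,
// so callers never have to check for null
public class NullLogger : ILogger
{
    public void Write(string message) { }
}
```

The pattern leans entirely on polymorphic dispatch, which is why it loses its appeal in plain procedural code.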

Patterns usually trade simplicity for flexibility, the latter deriving from some of the paradigm’s traits. You could say that they maximize some of the paradigm’s benefits at the cost of simplicity: the code may be hard to understand for someone not familiar with the paradigm, but at the same time it is easier to change once understood. The secret here lies in one’s ability to use the paradigm’s thinking process. As with everything else, practice leads to mastery.

You can find a compilation of these patterns in almost every development paradigm, with names that make it easy to refer to them when talking with other developers.

Practices refer to the way we develop code. They include things like refactoring, testing, incremental delivery and so on. They’re usually outlined in software development methodologies, and some are expressed as conventions. While they can be widely applied, we usually use and learn them in the context of a team’s or project’s specific configuration.

All of us have a certain degree of familiarity with each of these concepts. However, not all of us are conscious that they are interrelated, i.e. comprehension of some principles can help us decide when to apply certain patterns. This kind of knowledge ultimately leads to better code and designs.

Technology

This is probably the kind of knowledge that most developers spend the most time learning. This makes sense: with so many new technologies every other week, we must try to keep up or risk becoming obsolete. In a sense technology is like a fashion trend: we have something new this summer, but as soon as autumn arrives a new framework that promises to help us code faster takes the lead. Unless someone deliberately chooses to ignore the latest trends, there is just not enough time to become really proficient with a single technology. I usually think of technology as software platforms, libraries and frameworks.

Software platforms are the environments in which the code is executed (.NET, Node.js, Java). I like to think about software rather than hardware platforms because software platforms are often able to run on different hardware platforms, i.e. Java can run on a mobile, desktop or server platform.

Software libraries provide a very specific functionality that can be used in multiple projects, e.g. jQuery’s purpose is DOM manipulation. They are methodology agnostic, which means they’re really flexible when it comes to workflow types. This property makes them easy to reuse and port between teams, jobs and industries.

Frameworks often provide a set of libraries to accomplish something more complex. We even have application frameworks such as Spring, which handles everything from retrieving data to displaying it, or Angular, which provides us with the tools to create a presentation layer and communicate with the backend. One difference between a framework and a library is that a library just provides you with the tools to do things, while a framework also enforces an (often highly opinionated) way of doing them. This makes frameworks harder to integrate into an ongoing project (as opposed to a library), but they are a great choice if you are starting from the ground up.

Most of the time, software libraries and frameworks are tied to a software platform, so you naturally learn the ones that run on your platform of choice (like Java or Node.js). Sometimes ports of these libraries and frameworks are made (like Hibernate to NHibernate), but more often than not they make some adjustments to take advantage of the platform’s particular characteristics (meaning there are changes in the API).

Industry

This is often a byproduct of working on a project. As a software developer you really don’t study accounting unless you are creating accounting software. Or banking. Even worse, sometimes we just limit ourselves to creating what the customer requirements document says, without even trying to understand the purpose of the software or the needs of its users. Eric Evans pointed this out and explained that the reason is that this kind of knowledge is not useful to us unless we intend to keep working in the same industry (like manufacturing). In other words, its reusability scope is very limited compared with the other kinds of knowledge. However, as Evans also explains, a deep understanding of the industry is necessary if we really want to create not only a good thing but the right thing.

Mix and match

The time you spend on each of these kinds of knowledge leads to a different set of abilities. Try it out!

  1. Evaluate yourself on each of these kinds of knowledge
  2. Select the area you’re lacking the most (principles, Frameworks, you pick)
  3. Make a 3 month plan to improve
  4. Start over 🙂

As always, your comments are welcome!

The state pattern

In a previous post I talked about how we could modify a software’s behavior by using object composition (as opposed to class inheritance). A clear example is the state pattern. Let’s take a look.

The problem

So given the following code:

class Car{

    bool isOn;

    double velocity;
    double gas;

    public void TurnOn(){
        if(!isOn) isOn = true;
    }

    public void Accelerate(){
        if(!isOn) return;
        if(gas < 1) return;
        velocity += 5;
        gas -= 2.5;
    }

    public void TurnRadioOn(){
        if(isOn) { /* … */ }
    }

}

So, what’s wrong with this code? Well, the problem is that for every operation we add, we must check whether the car is on or off. If you keep adding states like “no gas”, you will end up with a lot of flags and conditional logic based on them. And, mark my words, it’ll become a bugs’ lair and a complicated piece to maintain.

Whenever you find code like this, congratulations, you have found yourself a state machine.

Refactoring to a state machine

A state machine simplifies reasoning about a program by identifying the possible states the software can be in at any given moment and the transitions between them. In our example a car can be in an off or an on state. If you try to accelerate and the car is off, nothing happens; however, if it’s on, it increases its speed. To refactor the code to a state machine you need to identify the states, extract the behavior associated with each state into an object, and invoke the logic through the state object’s methods.

Identify the application states

The easiest way to identify the application states is to look for the conditional logic in the application, especially the parts based on a Boolean flag. In our case the flags give us three states: the car is off, the car is on, and the car has run out of gas.

Extract the state associated behavior to an object of its own

Now we must create objects that represent the behavior for each state of the application. Since the operations for each state are the same, we can create an interface.

interface CarState {
    CarState Accelerate(ref double velocity, ref double gas);
    void TurnRadioOn();
    CarState TurnOn();
}

class CarOff : CarState{

    public CarState Accelerate(ref double velocity, ref double gas){
        return this;
    }

    public void TurnRadioOn(){
        //do nothing
    }

    public CarState TurnOn(){
        return new CarOn();
    }
}

class CarOn : CarState {

    public CarState Accelerate(ref double velocity, ref double gas){
        velocity += 5;
        gas -= 2.5;
        if(gas < 1)
            return new NoGas();
        else
            return this;
    }

    public void TurnRadioOn(){
        //turn on the radio
    }

    public CarState TurnOn(){
        return this;
    }
}

class NoGas : CarState {

    public CarState Accelerate(ref double velocity, ref double gas){
        return this;
    }

    public void TurnRadioOn(){
        //do nothing
    }

    public CarState TurnOn(){
        return this;
    }

}

Invoke the logic in the state object methods

Now let’s delegate the state behavior to the state objects:

class Car{

    double velocity;
    double gas;

    CarState state = new CarOff();

    public void TurnOn(){
        state = state.TurnOn();
    }

    public void Accelerate(){
        state = state.Accelerate(ref velocity, ref gas);
    }

    public void TurnRadioOn(){
        state.TurnRadioOn();
    }

}

Now the Car object is simple to maintain and understand.
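A quick usage sketch, condensed to the on/off states, with a Velocity accessor added purely for demonstration:

```csharp
using System;

interface CarState
{
    CarState Accelerate(ref double velocity, ref double gas);
    CarState TurnOn();
}

class CarOff : CarState
{
    public CarState Accelerate(ref double velocity, ref double gas) => this; // ignored while off
    public CarState TurnOn() => new CarOn();
}

class CarOn : CarState
{
    public CarState Accelerate(ref double velocity, ref double gas)
    {
        velocity += 5;
        gas -= 2.5;
        return this; // the NoGas transition is omitted in this condensed sketch
    }
    public CarState TurnOn() => this;
}

class Car
{
    double velocity;
    double gas = 10;
    CarState state = new CarOff();

    public double Velocity => velocity; // added for demonstration only

    public void TurnOn() => state = state.TurnOn();
    public void Accelerate() => state = state.Accelerate(ref velocity, ref gas);
}

class Demo
{
    static void Main()
    {
        var car = new Car();
        car.Accelerate();                // ignored: the car is off
        car.TurnOn();
        car.Accelerate();
        Console.WriteLine(car.Velocity); // 5
    }
}
```

Notice there is not a single if statement left in Car: each state object decides what to do and which state comes next.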

When to use the state pattern

  1. Whenever you find yourself looking at a lot of conditions based on booleans, pay attention: you are probably looking at a state machine type of problem. If you have more than 2 states, I strongly suggest that you consider refactoring to the state pattern.
  2.  There are situations when you must evaluate several variables at once, like:
    if(!isOn && gas > 0 && battery > 0) { … }

    Refactor those expressions into Boolean values:

    bool carBroken = !isOn && gas > 0 && battery > 0;

    And model your object behavior as a state machine, just like we outlined before.

Closing thoughts

Keep in mind that this example is for illustration purposes only. In real life, this is likely to be way more complicated.

Remember that there is a price to pay for using any design pattern. In this case the flexibility and simplification required the creation of more objects. Always weigh the pros and cons before coding anything!

As always, let me know your thoughts.

The basics part 4: composition

In a previous post we illustrated how inheritance can help refine the behavior of a particular case. In this post we’ll take a look at a different approach.

Composition over inheritance

In the Gang of Four book, the authors advise favoring composition over inheritance. Composition is a technique that breaks an object’s overall behavior into smaller objects, each tasked with one aspect of it. This allows for better reuse and a more maintainable codebase.

Let’s see how it works.

Refactoring from an inheritance hierarchy to a composition model

Identify the aspects of the behavior

The first thing we’re going to do is identify the steps of the behavior being overridden.

public virtual decimal CalcBonus(Vendor vendor)
{
     decimal bonus = 0;

     bonus = washMachineSellingBonus(vendor);

     bonus += blenderSellingBonus(vendor);

     bonus += stoveSellingBonus(vendor);

     return bonus;

}

In this case these would be washMachineSellingBonus, blenderSellingBonus and stoveSellingBonus. It’s worth mentioning that you’ll find code where the steps are not as clearly visible as in this example. Nevertheless, they’re still there: every algorithm is just a bunch of steps in a certain order.

Create abstractions as needed

In our example, washMachineSellingBonus, blenderSellingBonus and stoveSellingBonus are, as the names describe, bonuses. We can make this implicit abstraction explicit by creating an interface to represent it:

public interface Bonus
{
  decimal Apply(Vendor vendor);
}

public class WashMachineSellingBonus:Bonus {…}

public class BlenderSellingBonus:Bonus {…}

public class StoveSellingBonus:Bonus {…}

By doing this, the calculator object keeps the responsibility of deciding which bonuses apply and of tracking the bonus amount, while the command pattern contains the logic of each bonus calculation.

public class BonusCalculator
{
  List<Bonus> bonuses = new List<Bonus>();

  public BonusCalculator()
  {
    bonuses.Add(new WashMachineSellingBonus());
    bonuses.Add(new BlenderSellingBonus());
    bonuses.Add(new StoveSellingBonus());
  }

  public decimal CalcBonus(Vendor vendor)
  {
   return bonuses.Sum(b => b.Apply(vendor));
  }
}
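To see the calculator summing its parts, here is a self-contained sketch; the `Vendor` class, the two stub bonuses and their fixed amounts are invented for illustration and stand in for the real selling bonuses above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Vendor { public string Name { get; set; } }

public interface Bonus
{
    decimal Apply(Vendor vendor);
}

// Stub bonuses with fixed amounts, standing in for the real selling bonuses.
public class WashMachineSellingBonus : Bonus
{
    public decimal Apply(Vendor vendor) => 100m;
}

public class BlenderSellingBonus : Bonus
{
    public decimal Apply(Vendor vendor) => 50m;
}

public class BonusCalculator
{
    readonly List<Bonus> bonuses = new List<Bonus>();

    public BonusCalculator()
    {
        bonuses.Add(new WashMachineSellingBonus());
        bonuses.Add(new BlenderSellingBonus());
    }

    // The calculator knows nothing about each bonus; it only sums them.
    public decimal CalcBonus(Vendor vendor)
    {
        return bonuses.Sum(b => b.Apply(vendor));
    }
}

public static class Demo
{
    public static void Main()
    {
        var calc = new BonusCalculator();
        // Sums the stub bonuses: 100 + 50
        Console.WriteLine(calc.CalcBonus(new Vendor { Name = "Ann" })); // 150
    }
}
```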

So far, so good. But what’s the real benefit of this?

Inject behavior at runtime

If we apply the Dependency Inversion Principle, something interesting happens.

public class BonusCalculator
{
  List<Bonus> bonuses = new List<Bonus>();

  public BonusCalculator(IEnumerable<Bonus> bonus)
  {
    bonuses.AddRange(bonus);
  }

  public decimal CalcBonus(Vendor vendor)
  {
    return bonuses.Sum(b => b.Apply(vendor));
  }
}

Now our BonusCalculator class becomes a mere container. This means that the behavior must be set up somewhere else. If needed, the definition of the bonus calculator can now be hosted outside the code, for example in a configuration file.

public class BonusCalculatorFactory
{

   public BonusCalculator GetBonusCalculator(string region)
   {
    // Look up a configuration file, database or web service and get
    // the bonuses that apply to this particular region
   }

}
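One possible in-memory sketch of that factory, where a dictionary stands in for the configuration file or database; the region names, the `FlatBonus` helper and the amounts are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Vendor { }

public interface Bonus { decimal Apply(Vendor vendor); }

// Trivial bonus used to populate the per-region configuration.
public class FlatBonus : Bonus
{
    readonly decimal _amount;
    public FlatBonus(decimal amount) { _amount = amount; }
    public decimal Apply(Vendor vendor) => _amount;
}

public class BonusCalculator
{
    readonly List<Bonus> bonuses = new List<Bonus>();
    public BonusCalculator(IEnumerable<Bonus> bonus) { bonuses.AddRange(bonus); }
    public decimal CalcBonus(Vendor vendor) => bonuses.Sum(b => b.Apply(vendor));
}

public class BonusCalculatorFactory
{
    // In real code this mapping would be loaded from a config file,
    // database or web service instead of being hard-coded.
    static readonly Dictionary<string, List<Bonus>> config =
        new Dictionary<string, List<Bonus>>
        {
            ["south"] = new List<Bonus> { new FlatBonus(100m), new FlatBonus(50m) },
            ["north"] = new List<Bonus> { new FlatBonus(25m) }
        };

    public BonusCalculator GetBonusCalculator(string region)
    {
        return new BonusCalculator(config[region]);
    }
}
```

With these made-up amounts, `GetBonusCalculator("south").CalcBonus(vendor)` yields 150, and changing a region’s bonuses is a data change, not a code change.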

The idea here is that you can now modify the behavior of the BonusCalculator without needing an inheritance hierarchy.

Advantages over inheritance

The main advantages of using composition are:

  • Changing behavior can be done at runtime, with no need to recompile the code.
  • You don’t have the fragile base class problem anymore.
  • You can easily add new behaviors (in the example, just implement the Bonus interface).
  • You can compose behaviors to create more complex ones.

Let’s take a quick look at this last point.

Mix and match to create new behavior

Let’s create a composed bonus object. We can reuse the template functionality from the bonus calculator.

public class ExtraBonus : BonusCalculator, Bonus
{
  public ExtraBonus(IEnumerable<Bonus> bonus) : base(bonus) { }

  public decimal Apply(Vendor vendor)
  {
    decimal theBonus = CalcBonus(vendor);
    return theBonus > 2000m ? theBonus * 1.10m : theBonus;
  }
}
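Composition in action: the following self-contained sketch (the `FlatBonus` helper and all amounts are hypothetical stand-ins for the real selling bonuses) nests an ExtraBonus inside another calculator:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Vendor { }

public interface Bonus { decimal Apply(Vendor vendor); }

// Hypothetical stand-in for a real selling bonus.
public class FlatBonus : Bonus
{
    readonly decimal _amount;
    public FlatBonus(decimal amount) { _amount = amount; }
    public decimal Apply(Vendor vendor) => _amount;
}

public class BonusCalculator
{
    readonly List<Bonus> bonuses = new List<Bonus>();
    public BonusCalculator(IEnumerable<Bonus> bonus) { bonuses.AddRange(bonus); }
    public decimal CalcBonus(Vendor vendor) => bonuses.Sum(b => b.Apply(vendor));
}

// ExtraBonus is both a calculator (it sums its inner bonuses)
// and a bonus (it can be placed inside another calculator).
public class ExtraBonus : BonusCalculator, Bonus
{
    public ExtraBonus(IEnumerable<Bonus> bonus) : base(bonus) { }

    public decimal Apply(Vendor vendor)
    {
        decimal theBonus = CalcBonus(vendor);
        return theBonus > 2000m ? theBonus * 1.10m : theBonus;
    }
}

public static class Demo
{
    public static void Main()
    {
        // Inner bonuses: 1500 + 1000 = 2500, above the 2000 threshold, so * 1.10.
        var extra = new ExtraBonus(new Bonus[] { new FlatBonus(1500m), new FlatBonus(1000m) });
        var calc = new BonusCalculator(new Bonus[] { extra, new FlatBonus(10m) });
        Console.WriteLine(calc.CalcBonus(new Vendor())); // 2760.00
    }
}
```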

Final thoughts

One of the things that I like about using composition is that it forces you to decompose a problem to its simplest abstraction, allowing you to use this as a building block to create complex behavior with great flexibility at runtime. Now it is time for a revelation: the actual text in the GoF book reads

  Favor object composition over class inheritance

It’s not my intention to explain the reasons behind this principle in this post, but the hint is in the words object and class. Think about it and let me know your observations.

Leaky abstractions and how to deal with them

Leaky abstraction is a term given to a faulty model, that is, a model that fails to express some domain concepts. I find this to be a natural step in the process of creating a rich domain model. The problem comes when we stop refining the model, ending up with incomplete work.

The school example

John was tasked to create a system to replace a legacy school administration system. His initial approach was to review the old system’s database to extract the underlying entities. If not the code, at least he could reuse the abstractions. So, he ended up with the following abstractions:

Then he proceeded to work on the first use case / user story: student registration. When finished, it looked something like:

“So far so good,” John thought, and he went to work on another use case. However, as he progressed he noticed that the student object was bloated: it contained info not only related to the student’s performance but also financial and historical data. Can you guess why?

Hunting a missing abstraction

It turns out that, since every career has its own set of requirements, the student object had to accommodate all the data needed by every career prospect evaluator object.

The problem with John’s model is that it’s missing an abstraction. Thus, he is reusing another abstraction in its place. Unfortunately, that’s a COMMON mistake. And one with a HUGE impact. The missing part here is the application the student submits. Let’s introduce this into the model.

This frees the student object to represent an actual student and nothing more. Now we can have the AccountingEvaluator evaluate an AccountingApplication.

By doing this we have:

  1. Reduced coupling since the student object is not dependent on the requirements of the program evaluators.
  2. Made the code SRP compliant, hence easier to maintain.
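As a minimal sketch of the separated model (all member names here are hypothetical; the point is only that evaluation data moves out of Student):

```csharp
using System;

// The student now represents an actual student and nothing more.
public class Student
{
    public string Name { get; set; }
}

// The application carries the data each program evaluator needs,
// instead of bloating the Student object with it.
public class AccountingApplication
{
    public Student Applicant { get; set; }
    public decimal MathScore { get; set; }
}

// The evaluator depends on the application, not on the student.
public class AccountingEvaluator
{
    public bool Evaluate(AccountingApplication app) => app.MathScore > 80m;
}
```

Adding a new program now means adding an application type and its evaluator; Student stays untouched.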

Keeping it simple

By this point, some may be thinking that we increased the cyclomatic complexity, since now we have to figure out the right evaluator for each application. Something like this:

 

public void Submit(ProgramApplication app){
    var type = app.GetType().Name;
    switch(type){
    …Code to select the right evaluator…
    }
}

But this can easily be fixed by putting the responsibility into the application objects themselves:

public interface ProgramApplication{
   bool IsApproved();
}

public class EngineeringApplication : ProgramApplication{
  decimal _mathScore;
  public EngineeringApplication(decimal mathScore){
     _mathScore = mathScore;
  }
  public bool IsApproved(){ return _mathScore > 90; }
}

public class AccountingApplication : ProgramApplication{
  decimal _mathScore;
  public AccountingApplication(decimal mathScore){
     _mathScore = mathScore;
  }
  public bool IsApproved(){ return _mathScore > 80; }
}

//call on the client side

public void Submit(ProgramApplication app){
 if(app.IsApproved()) ...
}

This is good design as it encapsulates the application evaluation details.
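Putting the pieces together, here is a self-contained run of the classes above (the `Registrar` wrapper and the accepted/rejected strings are my additions for illustration); note the client never branches on the application type:

```csharp
using System;

public interface ProgramApplication
{
    bool IsApproved();
}

public class EngineeringApplication : ProgramApplication
{
    readonly decimal _mathScore;
    public EngineeringApplication(decimal mathScore) { _mathScore = mathScore; }
    public bool IsApproved() { return _mathScore > 90; }
}

public class AccountingApplication : ProgramApplication
{
    readonly decimal _mathScore;
    public AccountingApplication(decimal mathScore) { _mathScore = mathScore; }
    public bool IsApproved() { return _mathScore > 80; }
}

public static class Registrar
{
    // The client treats every application the same way: no switch, no casts.
    public static string Submit(ProgramApplication app)
    {
        return app.IsApproved() ? "accepted" : "rejected";
    }

    public static void Main()
    {
        Console.WriteLine(Submit(new EngineeringApplication(85m))); // rejected (needs > 90)
        Console.WriteLine(Submit(new AccountingApplication(85m)));  // accepted (needs > 80)
    }
}
```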

Closing thoughts

John’s is a typical scenario of leaky abstractions. The problem here is that he believed the domain model of the legacy system to be complete. This is a common mistake when starting a new project (either greenfield or a migration of an obsolete piece of code). We must remember that the moment when we know the least about the business is the beginning. It’s naïve to expect the domain model to be complete at this stage. I learned at school that aversion to change is a human trait, but don’t hang on to a faulty model. If it’s too complex, we’re doing it wrong.

Keep refining your model and have fun!

How to handle dependent observable calls in an async call in Angular 2/4

Recently I’ve been working with Angular 2/4. One day I came across something like this:

public getCustomersOnArea(zip: string): Observable<Customer> { … }

The problem was that we had to make two calls to get the data. Moreover, one call depended on the data fetched from the other. How to solve this? One way is to encapsulate the two calls into a third one and return that to be subscribed to.

public getCustomersOn(zip: string): Observable<Customer> {

    return new Observable<Customer>(subscriber => {
        this.http.post(zip)
            .subscribe(res => {
                this.http.post(res)
                    .subscribe(r => {
                        subscriber.next(r);
                    },
                    e => subscriber.error(e),
                    () => subscriber.complete());
            },
            e => subscriber.error(e));
    });
}

And that’s it. Do you know another way? Leave it in the comments section.