Using metaphors to make the code easy to understand

I mentioned this before, but to me, high-quality code has 3 attributes: it’s easy to understand, easy to change, and correct. I always start by trying to make any piece of code as easy to understand as possible. If you make it easy to understand, even if it’s not easy to change or correct yet, you are in a much better position than otherwise. Whenever I’m mentoring I always explain it like this: if you can understand the “what”, you can change the “how”.

So, making the “what” explicit, that’s the challenge.

The socio-technical space

There’s this idea in DDD circles called the socio-technical space. The way I like to think of it is as a continuum that has technical issues/solutions on one side and social issues/solutions on the other.

When you start looking at social issues, the concepts and their interactions provide you with a nice framework where you can reason about the problem. Often, your design will take after these concepts as the building blocks for your solution. That means that if you are working on a system for a banking domain, you will likely have objects like accounts, money, and credit.

But what about when you are solving a highly technical problem where the concepts are too vague, abstract, or low level? Well, you can try defining your own concepts to reason about the problem and explain your solution. Or you could use a metaphor.

A technical challenge for you

Exercism.io is a platform for practicing problem solving in a programming language. I recommend it to any developer who takes pride in their craft. So I was solving the Spiral Matrix problem (login/sign up to access the problem). Before you continue reading, I challenge you to solve it. Go on, I’ll wait for you.

So the problem states that given a size you have to create a Matrix[size, size] and fill it with numbers starting from 1 up to the last element. Suppose you have a Matrix[5, 5]; then you would have to fill all the slots with the numbers 1 to 25. The tricky part is that you have to follow an inward spiral pattern. Interested now? Try solving it!

Metaphors to the rescue!

The first time I heard about metaphors in the software development realm was in relation to XP. The idea is simple: use a metaphor to drive the system design. Kent Beck used this at the overall system design level (architecture). But this time I’ll apply it on a smaller scale: the Spiral Matrix solution.

Each XP software project is guided by a single overarching metaphor… The metaphor
just helps everyone on the project understand the basic elements and their relationships.

-Kent Beck, Extreme Programming Explained

Patterns, patterns everywhere!

There are many ways to solve the Spiral Matrix problem. The most obvious solution is to sense the surrounding cells as you move. However, as I was looking at the numbers, I found a pattern in them. It turns out that you can calculate the turning points.

Here I marked all the turning points for a 3×3 matrix. If you lay out the numbers the pattern makes itself visible.

So, reading from right to left, you’ll notice that the distance between the last 2 turning points is 1 (where distance is how many spots you have to traverse before finding the next turning point). From there, every 2 turning points the distance increases by 1, until it reaches size - 1. I’ll leave it to you to come up with an algorithm that takes advantage of this. By the way, the number of turning points is equal to (size * 2) - 2.
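To make the pattern concrete, here is one way the turning-point steps could be generated. This is a sketch of my own (assuming size >= 2), not the code from the final solution, which presumably wraps this kind of calculation inside a Compass object:

```csharp
using System.Collections.Generic;

static class SpiralPattern
{
    // For a 3x3 spiral:  1 2 3
    //                    8 9 4
    //                    7 6 5
    // the turning points fall on steps 3, 5, 7 and 8.
    public static IEnumerable<int> TurningPoints(int size)
    {
        int step = size;                // the first turn comes at the end of the top row
        yield return step;
        for (int gap = size - 1; gap >= 2; gap--)
        {
            yield return step += gap;   // every gap length between turns
            yield return step += gap;   // occurs exactly twice...
        }
        yield return step + 1;          // ...except the final gap of 1
    }
}
```

For size = 5 this yields 5, 9, 13, 16, 19, 21, 23, 24 — that is, (5 * 2) - 2 = 8 turning points.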

Enough talk, show me the code!

So I wanted to make this pattern as obvious as I could, but after the first implementation, it was anything but obvious. After looking closely I noticed there were several things happening at the same time: keeping track of the corresponding number, moving on the grid, and knowing when to turn. So I decided to create objects to handle those responsibilities… but what should I call them?

Sure, you can name your objects however you want, but I wanted to make everything as clear as possible. Easy to understand, remember? One of the responsibilities was to “navigate” the matrix. This led me to the idea of a map. A map helps you navigate, right? And who would use a map? An explorer, right? After some iterations I ended up with something like:

public static int[,] GetMatrix(int size)
{
    var terrain = new int[size, size];

    var compass = new Compass(size);

    new Explorer().ExploreTerrain(terrain, compass);

    return terrain;
}

So imagine you were tasked with starting the numbering at 3 instead of 1. You come and find this code. You’ll probably be puzzled at first, but the objects make sense to you, because you understand the relationship between an explorer and a compass. You understand how a compass is used by an explorer. And knowing that, it makes sense that the explorer would use a compass to explore a terrain. Actually, it would be weird if it didn’t. But all of this happens in the back of your mind in a fraction of a second, without you really noticing it. So you go and check the ExploreTerrain method.

public void ExploreTerrain(int[,] terrain, Compass compass)
{
    while (_stepsTaken <= terrain.Length)
    {
        mapCurrentPosition(terrain);
        adjustDirection(compass);
        advance();
    }
}

Again, this code is taking advantage of your existing knowledge on the matter of exploration. Wait, what is this mapCurrentPosition doing? I think I know, but let’s confirm it.

 void mapCurrentPosition(int[,] terrain) => 
    terrain[_currentPosition.Y, _currentPosition.X] = _stepsTaken;

Oh! So it’s putting a number in there… given what we know, this should be the corresponding number… so that is what _stepsTaken refers to! OK, let’s go back. Wait, how does adjustDirection work?

void adjustDirection(Compass compass)
{
    if (compass.IsTurningPoint(_stepsTaken))
        _currentPosition.TurnRight();
}

So if the compass says that I need to turn at the current step, I turn right (notice how this didn’t puzzle you, because using a compass to figure out when to turn is something you understand, maybe have even experienced before). Maybe we should rename that _stepsTaken variable to _currentStep? Let’s go back and figure out what the advance method does.

void advance()
{
    _currentPosition.Forward();
    _stepsTaken++;
}

Well, yeah, as expected. I wonder, how does _currentPosition move forward? (Notice we are questioning the “how”, not the “what”. We understand what “moving forward” means when exploring.) But hold on! Where is that _stepsTaken initialized?

class Explorer
{
    int _stepsTaken = 1;
    ...
}

Bingo! Let’s initialize this variable to 3 instead of 1 and presto!

class Explorer
{
    int _stepsTaken = 3;
    ...
}

I think you got the idea. If you want to check the details you can find the whole code here.

Closing thoughts

Hopefully at this point the advantages of using a metaphor have become evident (especially in an object oriented system).

Another benefit of using a metaphor is communication. Good metaphors are based on everyday experiences that a lot of people can relate to. This will allow you to convey ideas about the system design/architecture to non-technical people, which becomes increasingly important in agile settings, where the customer is part of the team.

I hope this piques your curiosity about using metaphors in code. We already use them to explain our ideas in other settings, so why not in our code too? I challenge you to do it!

How to improve the signal to noise ratio of your code

As I have previously shared, code quality can be summarized along 3 axes: it is easy to understand, easy to change, and correct.
Today I want to talk about a trait that indicates how easy a codebase is to understand: signal-to-noise ratio.

What is signal to noise ratio?

Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise

https://en.wikipedia.org/wiki/Signal-to-noise_ratio

In software development, this means how much of your code explains your intention/ideas/knowledge vs how much doesn’t.

Why is signal to noise ratio important?

Well, as mentioned before, this is an indicator of how easy your code is to understand. That is, how much time and mental effort are required to understand what the code does and, more importantly, why it does it that way. Understanding these 2 facts is a prerequisite for changing how the code works. There’s no workaround for that.

What is the most influential factor on the signal to noise ratio?

If I were to pick a single attribute of a codebase that changes its signal-to-noise ratio, it would be the abstraction level. You see, in my experience, a poor signal-to-noise ratio comes from either under-abstraction (too much detail) or over-abstraction (too many layers of artifacts, too much indirection).

Under abstraction and its effect on the signal to noise ratio

How many times have you been tasked with making a tiny change in behavior, only to find yourself facing a 200-line function? (… I just had a PTSD episode.) The problem with a 200-line function is that there’s too much detail to easily figure out the what and the why.

This detail overload doesn’t happen just at the level of huge functions, but also at the level of language constructs. Take a look:

decimal orderTotal = 0;
foreach(var line in orderLines)
{
    orderTotal += line.Total;
}

So as you can see, the idea here is that the order total is the sum of the order lines’ totals. So what code here isn’t relevant to that idea? Think about it for a moment. Done?

decimal orderTotal = 0;
foreach(var line in orderLines)
{
    orderTotal += line.Total;
}

Surprise! I bet a lot of you didn’t see that coming! This is because sometimes we get so used to the language that we take those things for granted. I know I did. It took me a lot of effort learning Smalltalk (and banging my head against the wall every time I tried to do something new) to rewire some parts of my brain. But you can’t deny it. Iterating over the lines is just a detail of summing up the lines’ totals. It does not help convey the main idea. It’s noise. How would you fix that? Actually, there are several ways.

decimal sumLinesTotal(){
    decimal linesTotal = 0;
    foreach(var line in orderLines)
    {
        linesTotal += line.Total;
    }
    return linesTotal;
}
...
decimal orderTotal = sumLinesTotal();

How’s that? Not a big deal, right? But now there’s no doubt about the code’s intention. I know, some of you may think this is dumb. The code itself wasn’t that complex to start with; why should we create a new function just for this? Well, what do you think would happen to a 200-line function if you started doing this? Not only for loops, but every place where implementation details (the how) appear. I dare you to try it. Now, if you are using C#, there are other ways to be explicit about this:

decimal orderTotal = orderLines.Sum(orderLine => orderLine.Total);

Over abstraction and its effect on the signal to noise ratio

Over-abstraction happens when we add unnecessary artifacts to a codebase. This is a prime example of accidental complexity. A very common cause of this is speculative generality: preparing the code to handle cases we think we may someday need, even when we don’t need them right now. But there are more common, more subtle cases.

So let’s say we have a report API to which we make requests:

public EmployeeData GetEmployeeData(Guid id);

public class EmployeeData
{
    Guid Id;
    ...
}

public ManagerData GetManagerData(Guid id);

public class ManagerData
{
    Guid Id;
    ...
}

So our relational mindset tells us that we are duplicating data here (id) and that we should remove that duplication.

public class ReportData
{
    Guid Id;
}

public EmployeeData GetEmployeeData(Guid id);

public class EmployeeData : ReportData
{
    ...
}

public ManagerData GetManagerData(Guid id);

public class ManagerData : ReportData
{
    ...
}

Great! Duplication removed! But wait, we can go even further! Isn’t everything we’re returning just report data? Let’s make that explicit!

public class ReportData
{
    Guid Id;
}

public ReportData GetEmployeeData(Guid id);

public class EmployeeData : ReportData
{
    ...
}

public ReportData GetManagerData(Guid id);

public class ManagerData : ReportData
{
    ...
}

But now the client code needs to cast the result to the concrete type. Maybe we can make the ReportData object accommodate different sets of data?

public class ReportData
{
    Guid Id;
    Dictionary<string, object> Data;
}

public ReportData GetEmployeeData(Guid id);

public ReportData GetManagerData(Guid id);

So now let’s say you are given a ReportData object. How can you know whether you are dealing with an employee’s or a manager’s data? You could query the data dictionary for a particular key that represents a property available only on an employee (or manager), or worse, you could introduce a key in the dictionary that says which type of data it contains, moving from strongly typed to stringly typed. This is all noise. The signal has been effectively diluted.
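Here’s a sketch of what that client code tends to look like. The key names and the GetEmployeeData stub are hypothetical, only there to make the dilution visible:

```csharp
using System;
using System.Collections.Generic;

public class ReportData
{
    public Guid Id;
    public Dictionary<string, object> Data = new Dictionary<string, object>();
}

public static class ReportClient
{
    // Hypothetical service stub, only here so the sketch is self-contained.
    public static ReportData GetEmployeeData(Guid id) =>
        new ReportData { Id = id, Data = { ["__type"] = "Employee", ["EmployeeNumber"] = 42 } };

    // The client is reduced to probing for keys "only employees have"...
    // ...or reading a type tag out of the data itself: stringly typed.
    public static bool IsEmployee(ReportData report) =>
        report.Data.ContainsKey("EmployeeNumber")
        || (report.Data.TryGetValue("__type", out var kind) && (string)kind == "Employee");
}
```

The compiler can no longer help you here; every one of those strings is a silent agreement between producer and consumer.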

Some guidelines to improve your signal to noise ratio

By this point I hope it’s clear to you that to improve your signal-to-noise ratio, using the right abstraction level is key. So I’ll share with you some of my observations on the abstraction process.

Step 1: remove noise by encapsulating details away into functions

Encapsulation and abstraction are closely related. I’ll talk about that in another post. Suffice it to say that as you encapsulate details away, you’re also raising the abstraction level. The trick to avoid going overboard is to think about what you want to express: the signal. Is that clear enough? A good rule of thumb is to keep your functions to 5 lines or less.

Step 2: uncover the objects

You will find that some functions act upon the same set of data. Those are objects hidden in the mist. Move both the data and the functions that act upon it into a class. Naming the class will have an impact on the clarity of your signal, but don’t worry about getting it right the first time; you can rename it (and you will) as your understanding increases.
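As a sketch of that move, using the earlier order example: suppose sumLinesTotal and friends all receive the same orderLines list. That data and those functions can move together into a class (the Order/OrderLine names are just illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public decimal Total { get; set; }
}

// The lines and the operations that act upon them now live together.
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public void AddLine(OrderLine line) => _lines.Add(line);

    // What used to be a free-floating sumLinesTotal() is now owned by the data it reads.
    public decimal Total => _lines.Sum(line => line.Total);
}
```

Now no caller needs to know that a total involves iterating over lines at all.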

Step 3: wash, rinse and repeat

Repeat the 2 previous steps over and over. If the idea you want to convey is still not clearly expressed by the code, go to step 4.

Step 4: select a metaphor

To be discussed in the next post. 🙂

A quick comment on comments

As I began writing, I mentioned that you need to understand the what as well as the why of the code. The former can clearly be expressed by the code. If that’s not the case, you haven’t reached the right level of abstraction yet. As for the latter, this is the only situation in which I find comments justifiable. Explain constraints or whatever it is that led you to choose the current solution.

Closing thoughts

Man, that was longer than I expected! I hope this can give you some hints on what to look for the next time you are in a code review (yours or someone else’s). As always, if you have any comments, doubts or whatever, leave them below. Good coding!

Problem space, solution space, and complexity explained with pictures

For the last couple of years, my work can be described as nothing but refactoring. And I like it. It’s like taking away the mist surrounding the forest. As you move forward you start to gain a better sense of the code’s intention and to detect places where complexity has made its nest.

Complexity is a strange beast. According to Ward Cunningham there are 2 kinds of complexity: empowering complexity (“Well that’s an interesting problem. Let me think about that for a while”) and difficulties (blockage from progress). Does this sound familiar? Where do complexity and difficulties come from? To answer this, let’s take a look at the idea of problem space and solution space.

Problem and solution space

The problem space

As depicted in the picture, the problem space is a conceptual space delimited by some rules and constraints. More importantly, it includes the current state of affairs and the desired state. It is inside the boundaries of this space that solutions are born.

The solution space

As you can see the solutions are not all equal. Obviously, solution 2 is better than solution 1. This leads me to Ward’s definition of simplicity: Simplicity is the shortest path to a solution. Or in the context of our drawing, the shortest path to the desired state. By the same token, we could say complexity is any path that’s longer than necessary.

Now, this may be tricky. It’s possible that solution 2 in our example/drawing requires a kind of knowledge that we don’t currently possess. In that case, we can’t even think of that solution. Or we can’t understand it when it’s presented to us. It would take extra effort to acquire that knowledge before we could find solution 2 in our problem space. Hence it is important that we try to have a breadth of knowledge of the (mostly thinking) tools out there. But I digress.

So the shortest path, huh? Well, “shortest” is not the same for everyone.

Essential complexity

In this picture, the solution in problem space 2 is more complex than the one in problem space 1, not because of the solution itself but because of the problem space.

This “distance” between the initial and desired state is known as essential complexity. No matter what you do, the solutions in problem space 2 will be more complex than most of the solutions in problem space 1. It’s just that the problem is more complex.

Accidental complexity

But what about this?

Clearly, problem space 2 is more complex than problem space 1. Still, the solution in problem space 1 is more complex than the one in problem space 2!

This is known as accidental complexity. It’s the complexity that comes from the solution we chose. Accidental complexity is our fault and is ours to solve.

And what about difficulties?

Well, now we have found where the complexity comes from. But what about difficulties? Let’s review the definition:

A difficulty is just a blockage from progress.

Hmmm… from progress? That implies we are already on the path to our destination. Is it the path of solution 1 or solution 2? It doesn’t matter. What matters is that a solution has been selected and we are traversing it. Keep this in mind as Ward enlightens us once again:

The complexity that we despise is the complexity that leads to difficulty.

That is accidental complexity!
The difficulty is born out of accidental complexity!

Final thoughts

So there you have it. I’ve been thinking about this stuff for a while. I still do.

As I continue to refactor code, I find myself understanding more about the solution, and the problem space itself. I believe the main difference between a programmer and a consultant is that consultants start in the problem space; this means they have the autonomy to explore and select solutions, whereas programmers are tasked with working in the solution space from the get-go. That being said, most of the time we don’t know how good a solution is until we code it.

This leads me to the tip of the day: if it’s hard, that is, if the way is full of difficulties, maybe you are taking the long path. Try stepping back and asking yourself “is there another way to accomplish my objective?”

Depending upon abstraction is not about interfaces, it’s about roles

I recently stumbled upon some code where someone took an object and extracted an interface from its methods, something like:

class Parent: IParent{
    public void Teach(){}
    public void Work(){}
}

interface IParent{
    void Teach();
    void Work();
}

I’ve seen many people (including myself, tons of times) do this and think: “There. Now we are depending upon abstractions“. The truth is, we are depending on an interface, but depending on abstraction is way more than that.

An object design guideline

All objects have a raison d’être: to serve. They serve other objects, systems, or users. Although that may seem obvious, I’ve found it’s something often overlooked.

Warning: Rant ahead.

I have mentioned this before, but I believe the main reason object-oriented programming is often criticized is that it is not well understood.

The idea of an object as an abstract concept that can represent either code or data has not reached enough people to change the overall perception.

A lot of the people I have seen complaining about OOP are doing structured programming. They still tend to separate the data from the operations that are done upon it. Basically structs and modules. It’s sad because this yields software that is hard. Hard to change, hard to understand, hard to correct. It’s not soft (as in soft-ware). I blame schools for this. At least in my particular experience, OOP is often delivered as an extension of structured programming, much like C++ is often seen as an extension of C.

We need to reeducate ourselves on the way we think: OOP is not about using object-oriented technology but about thinking in an object-oriented fashion.

This is the reason I started this blog.

End of Rant ūüėõ

So thinking of objects as either data bags or function bags is the result of ignoring a fundamental design question: whom does this object serve?

To answer this question you have to start with the client (object, system, user) needs. This lends itself to a top-down analysis/design approach. But a lot of us are trained to start a system design by thinking about the structure of a relational database, which is a bottom-up approach. Let’s see how they differ from each other.

The Database first approach

When designing a relational database, the thinking tools available are Entities and the Relationships between them, often displayed in an ER diagram. So we start with Entities from the nouns on the domain: Parent, Teacher, Student, Child, Class, Course, and so on. I’m pretty sure you can think of a domain just by looking at these concepts.

Now that you have these Entities, you have to think about the processes that interact with them. How do we create a new student? How do we update some of its data? How do we delete it? If you look closely you will find that most everything is modeled as CRUD operations around the Entities. In this scenario, the entities are your abstractions.

The Objects first approach

In this case, you would start by thinking about the needs of the user. These are often expressed as tasks. We usually discover and document them in the form of user stories or use cases. This initial set of needs will serve as the basis for the features of the system. We can now start creating the objects to fulfill these needs. Often these objects will represent the tasks expressed by the user. This is what is known as the application layer in DDD.

From here on things start to get interesting. Pick one of these task objects. What do you need to accomplish this particular task? These are the needs of the object. Now here comes the trick: define an interface/abstract class that fulfills one specific need and name it as such. By doing this we force ourselves to define a specific concept for a specific need in a specific operation. We call these kinds of concepts Roles.

I love the naming schema that Udi Dahan uses for Roles: IDoSomething/ ICanDoSomething. In this approach roles are your abstractions.
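As a hypothetical sketch of that naming style (these interfaces and PaymentService are mine, not Udi Dahan’s), notice how each interface names one capability tied to one need, and how a single class can take on several roles:

```csharp
// Roles named after capabilities, following the IDoSomething / ICanDoSomething schema:
public interface IChargeCreditCards
{
    bool Charge(decimal amount);
}

public interface ISendReceipts
{
    void Send(string customerEmail);
}

// One object can take on several roles.
public class PaymentService : IChargeCreditCards, ISendReceipts
{
    public bool Charge(decimal amount) => amount > 0;   // stubbed for the sketch
    public void Send(string customerEmail) { /* stubbed */ }
}
```

A client that only needs to charge cards depends on IChargeCreditCards and never learns that the same object also sends receipts.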

Entity vs Role

Let us go back to the original issue: what does it mean to depend on abstractions?
To answer that we need to answer another question first: what is an abstraction?

Let’s consider the difference between the 2 kinds of abstraction we’ve seen so far: Entity and Role.

First, let’s clarify something: Entities as we have discussed them so far don’t belong to the OOP paradigm, they belong to the Relational paradigm. We have discussed before that the needs addressed by a model in the relational paradigm are geared toward disk space optimization, whereas the needs of an object model, particularly an object domain model, are about representing business concepts and interactions in a way that is easy to change and understand.

Side note: There’s actually an Entity concept in DDD.
An Entity is an object with a unique id. Often, DDD Entity objects overlap with their counterparts in the relational world, because both represent business concepts, but restricting the domain entities to the relational ones greatly caps our thinking and designing ability.

And here we come to the big idea: an Entity (or any object for that matter) can take upon many roles.

This is because roles and entities are different kinds of abstraction. Entities represent a thing/idea whereas roles represent a capability.

And often, depending on abstraction means depending on a role.

A (silly) code example

Let us review our previous code:

class Parent: IParent{
    public void Teach(){}
    public void Work(){}
}

interface IParent{
    void Teach();
    void Work();
}

A lot of people are OK with creating this interface before figuring out which services are going to be provided to which client. This is a leaky abstraction. It’s weak and ambiguous in its intention. Can you tell what the purpose of an IParent is at a glance?

Let’s now review the client code. Let’s say a basic math class can be taught by a teacher, but given the COVID-19 situation it can also be taught by a parent at home:

public class BasicMathClass{
    public BasicMathClass(Teacher teacher){
        teacher.Teach();
    }

    public BasicMathClass(Parent parent){
        parent.Teach();
    }
}

public class Teacher{
    public void Teach(){}
}

class Parent: IParent{
    public void Teach(){}
    public void Work(){}
}

interface IParent{
    void Teach();
    void Work();
}

When we look at the client code it’s obvious why the parent teaches. But since we extracted the interface without even checking who was using it before, we are now in a dilemma. One way to solve this could be:

public class BasicMathClass{
    public BasicMathClass(IParent parent){
        parent.Teach();
    }
}

public class Teacher: IParent{
    public void Teach(){}
    public void Work(){}
}

class Parent: IParent{
    public void Teach(){}
    public void Work(){}
}

interface IParent{
    void Teach();
    void Work();
}

Solved. I know, this is silly, but if you think about it, all teachers also work, so it’s not so crazy to have a work method in there.
But not all of them are parents. So what then? Should we revert the interface?

public class BasicMathClass{
    public BasicMathClass(ITeacher teacher){
        teacher.Teach();
    }
}

public class Teacher: ITeacher{
    public void Teach(){}
}

class Parent: ITeacher{
    public void Teach(){}
    public void Work(){}
}

interface ITeacher{
    void Teach();
}

Well, this reads better, right? All parents teach, so they are teachers, right? Well, that’s not necessarily true either. They can teach, but not because they studied to do so, and they cannot teach in a school either.

The problem is in the role conceptualization: we are talking about what something is, instead of what it does.

public class BasicMathClass{
    public BasicMathClass(IEducate educator){
        educator.Teach();
    }
}

public class Teacher: IEducate{
    public void Teach(){}
}

class Parent: IEducate{
    public void Teach(){}
    public void Work(){}
}

interface IEducate{
    void Teach();
}

The change is a subtle one but it is important nonetheless: instead of depending on an entity (some thing/idea) we are now depending on a role (a capability). The mental model implications are not to be taken lightly. Once you start depending on roles, you’ll start to think more in terms of them.

So here’s the tip of the day: If you want to talk about what something is, use a class. If you want to convey what it does, use an interface.

Objects are meant to act, not to be acted upon

One of the most common issues I find when mentoring people on object-oriented design has to do with the mentality that many people bring when moving from other paradigms, particularly those coming from the structured programming paradigm. Let’s clear that up.

Paradigm abstraction levels

To simplify, abstraction level = level of detail. Now imagine a map application, something like Google Maps: if you zoom out you can see more terrain and, at the same time, you lose sight of some information like store and street names. This is the idea behind an abstraction level. As you go up, the detail level goes down and vice versa. Now, how does this relate to programming paradigms?

I often explain paradigms like tinted glasses. You put on some red-tinted glasses and everything looks reddish. If you put on amber-tinted glasses everything looks brighter, but if you put on some dark-tinted glasses everything looks darker. So it is with paradigms: like tinted glasses, they affect the way we look at the world. Programming paradigms specifically provide some constructs to represent the world. So, every time you try to explain a world phenomenon you do it using the constructs provided by the paradigm you’re currently using.

So, we can classify a programming paradigm’s abstraction level by its number of constructs: the more it has, the more details you are dealing with, and hence you’re at a lower abstraction level.

So here’s a brief table showing some paradigms ranked by this criteria:

Paradigm                 Constructs
Functional               Function + Types
OOP                      Object + Message
Structured Programming   Procedures, Data Structures, Blocks, Basic Data Types

This is by no means an exhaustive table, but you get the idea. So you can see that OOP and Functional are paradigms at a high level of abstraction, whereas Structured Programming operates at a lower level of abstraction.

So you see, OOP abstracts both data and code under one concept: an object. Just as important, it also abstracts the control flow under the concept of the message. Those are the tools available to you in this paradigm.

The root of all Evil

Well, maybe not of all evil, but surely it has brought a lot of problems. And that is: believing that you are working in the OOP paradigm because you have an OOP-compliant language while keeping a structured programming mindset. There, I said it. I know this will irk some people, but there’s no way around it. Let me show you.

var range = Utils.GenerateSequence(from:1, to:7);

So, that’s a pretty straightforward OO snippet, right? Except it isn’t. Let’s see how it would look if it truly were OO.

var range = 1.To(7);

So let’s review the differences. This may be a little tricky as the differences I am referring to are not in the code itself but in the mindset that generates it. Let’s start with the code and see if we can identify the mind patterns that generate it.

Differences between the Structured Programming and Object-Oriented mindsets

The main problem I find with people I coach or work with is the idea that object == data structure + procedures. The problem with this is that it becomes a limitation. So, in the statement:

var number = 1;

People tend to think of ‘number’ as data since that’s what we are assigning to it. This distinction between objects and data throws people off in the wrong direction. Remember that there is no such thing as ‘data’ in OOP, just objects and messages. You should think of ‘number’ as an object.

On the other hand, something like:

Func<int, int, IEnumerable<int>> generateSequence = Utils.GenerateSequence;

It’s an object that represents code. But most people use the concept of a pointer as a way to explain C# delegates. Why? Because to them object == data structure + procedures. Anything outside of that definition is no object to them. By the way, this is what an actual pointer looks like in C#:

int* ptr1 = &x;

So the main question is: are you treating a variable as a data structure that needs to be passed around to functions in order to do something with it (it is acted upon)? If so, you are (most likely) working in the structured programming paradigm. The Math class in the .NET Framework is a prime example of this.

On the other hand, do you send messages (‘invoke a method’ in C#/Java lingo… I don’t really like the term) to the variable so it does something that requires little to no external help (it acts itself)? Congratulations, that’s exactly what OOP is about.
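To make the contrast concrete, here’s a small sketch (the `Vector` type is hypothetical, purely for illustration):

```csharp
using System;

class Vector
{
    readonly double _x, _y;

    public Vector(double x, double y) { _x = x; _y = y; }

    // The object acts itself: the vector computes its own magnitude.
    public double Magnitude() => Math.Sqrt(_x * _x + _y * _y);
}

class Demo
{
    static void Main()
    {
        // Structured mindset: data is passed to a procedure that acts upon it.
        double m1 = Math.Sqrt(3.0 * 3.0 + 4.0 * 4.0);

        // OOP mindset: send a message to the object; it needs no external help.
        double m2 = new Vector(3, 4).Magnitude();

        Console.WriteLine($"{m1} {m2}"); // both are 5
    }
}
```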

Conclusion

It’s not my intention to trash any paradigm out there. Every paradigm is useful in the right context. It’s just that there is so much confusion about them that I often find myself explaining this stuff over and over. So I hope this makes it clearer for you. If you ever find yourself struggling with OOP, try taking a step back and see if you are really operating in the OOP paradigm. Who knows, you may be surprised at your discoveries (as some of my mentees have been). See you in the next post!

What’s holding you (or your organization) back from being Agile?

So you got yourself a scrum manager, had a meeting with the team, explained the scrum practices, and wrote a product backlog. Four months later things aren’t going as you expected… this Agile talk is all nonsense – you say as you walk away disappointed – we were supposed to be able to ship faster, to fix bugs faster, to add new features faster… Before throwing the baby out with the bathwater, let’s consider some of the possible causes.

Your codebase is not Agile

This is by far the most common reason I have found in my experience. You have code that breaks every time you introduce a change (fragile), or that makes you change a lot of places every time you add a new feature (rigid). You cannot be agile with a codebase that fights you every step of the way. Focusing on processes while ignoring the codebase is often the reason organizations fail when trying to adopt Agile methodologies.

Your mindset is not Agile

If you think that a scrum master is a manager, you’re not Agile.
If you think that a backlog is like a Gantt chart, you’re not Agile.
If you think that you need a separate team (or phase) for testing, you’re not Agile.
If you think that story points are a unit of time rather than effort, you’re not Agile.
If you think that value is determined by someone other than the end user, you’re not Agile.

Your feedback loop is too loose

To me, Agile means feedback. I remember that one of the things that surprised me the most in a scrum training was an exercise where we got to create something physical, present it, get feedback on it, and turn that feedback into a user story/task. The trainer then explained that the sooner we get the feedback, the sooner we can adjust to get on the right track. He talked about how a sprint should have several opportunities to get feedback, so that by the end we get the right product and not only the product right.

Lack of enough experienced developers

This one is actually kind of logical. If you don’t have enough experienced developers, how do you expect to have a flexible, high-quality codebase? Having enough experienced developers that you can pair with less senior developers helps you improve the overall team level, whereas having just a few of them tends to create a bottleneck for the whole team, since everyone depends on them somehow.

Closing words

I am, by no means, an expert on Agile. These are just my observations on some of the most common errors I’ve seen in my professional career.

Do you think I’m missing one? Leave your comments below.

How to make your C# code more OOP with delegates, pt. 2

Implement the strategy pattern with delegates

Changing the default behavior of a method under testing (or any other specific circumstance)

Given the following code:

class EmailSender{
    
    public void Send (string recipient, string subject, string body) {/* invoke 3rd party */}
    
}

class Email{
    
    string recipient, subject, body;

    public EmailSender _sender = new EmailSender();

    public void Send(){ _sender.Send(recipient, subject, body); }
    
}

Imagine that you cannot change the Email class. How would you unit test it without making a call to a 3rd party service?

Answer: inject a delegate with the desired behavior.

class EmailSender{
    
    Action<string,string,string> _sendAction;
    
    public EmailSender(){
     _sendAction = SendViaThirdParty; //default action
    }
    
    public void Send (string recipient, string subject, string body) {
     _sendAction.Invoke(recipient, subject, body);
    }
    
    void SendViaThirdParty (string recipient, string subject, string body) {/* invoke 3rd party */}

    internal void ActivateTestMode( Action<string,string,string> testAction){
     _sendAction = testAction;
    }

}
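A unit test could then inject whatever behavior it needs. The following is a sketch; the capture list and the assertion style are placeholders, not tied to a specific test framework:

```csharp
// Capture what would have been sent instead of calling the 3rd party.
var sent = new List<(string Recipient, string Subject, string Body)>();

var sender = new EmailSender();
sender.ActivateTestMode((recipient, subject, body) =>
    sent.Add((recipient, subject, body)));

sender.Send("bob@example.com", "Hi", "Hello Bob");

// The message went through our test delegate, not the 3rd party service.
Debug.Assert(sent.Count == 1);
Debug.Assert(sent[0].Recipient == "bob@example.com");
```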


Specializing the rules of a domain object without inheritance

Given the following code:

public class BonusCalculator
{
  List<Bonus> bonuses = new List<Bonus>();

  public BonusCalculator(ICollection<Bonus> bonus)
  {
    bonuses.AddRange(bonus);
  }

  public decimal CalcBonus(Vendor vendor)
  {
   decimal amount = 0;
   bonuses.ForEach(bonus => amount += bonus.Apply(vendor, amount));
   return amount;
  }

}

public class BonusCalculatorFactory
{

   public BonusCalculator GetSouthernBonusCalculator()
   {
    var bonuses = new List<Bonus>();
    bonuses.Add(new WashMachineSellingBonus()); 
    bonuses.Add(new BlenderSellingBonus()); 
    bonuses.Add(new StoveSellingBonus());

    return new BonusCalculator(bonuses);    
   }

}

If we wanted to add a new bonus that adds 15% of the accumulated amount, we would have to create a whole new class just to do that multiplication… So let’s try something different.

public class BonusCalculator
{
  List<Func<Vendor, decimal, decimal>> bonuses = new List<Func<Vendor, decimal, decimal>>();

  public BonusCalculator(ICollection<Func<Vendor, decimal, decimal>> bonus)
  {
    bonuses.AddRange(bonus);
  }

  public decimal CalcBonus(Vendor vendor)
  {
   decimal amount = 0;
   bonuses.ForEach(bonus => amount += bonus.Invoke(vendor, amount));
   return amount;
  }

}

Now we have to modify the factory

public class BonusCalculatorFactory
{

   public BonusCalculator GetSouthernBonusCalculator()
   {
    var bonuses = new List<Func<Vendor, decimal, decimal>>();
    bonuses.Add(new WashMachineSellingBonus().Apply); 
    bonuses.Add(new BlenderSellingBonus().Apply); 
    bonuses.Add(new StoveSellingBonus().Apply);
    bonuses.Add((vendor, amount) => amount * 0.15m); // adds 15% of the accumulated amount
    return new BonusCalculator(bonuses);    
   }
}


Easy peasy. Now, depending on how the rules are implemented, we could start thinking about turning some of them into singletons.

Moving the control flow into objects

How many times have you started an operation where you want to know 1) whether the operation was successful and 2) the return value? A lot of times this leads to code like:

class OperationResult{
    public bool IsSuccess{get;set;}
    public object ResultValue {get;set;}
}

interface IDataGateway{
    OperationResult UpdateName(string name);
}

class NameUpdaterCommand{
    string _name;
    IDataGateway _data;
    Log _log;

    public NameUpdaterCommand(string name, IDataGateway data, Log log){
       _data = data;
       _name = name;
       _log = log;
    }
    
    public void Execute(){
        var result = _data.UpdateName(_name);

        if(result.IsSuccess)
            _log.Write("Name updated to: " + result.ResultValue.ToString());
        else
            _log.Write("Something went wrong: " + result.ResultValue.ToString());
    }
}

Come on, don’t be shy about it. I’ve done it myself too…

So what’s wrong with it?

Let’s see: the intention behind this code is to decide on a course of action based on the result of an operation. In order to carry out these actions, we need some additional info for each situation. A problem with this code is that you can’t handle an additional scenario. For that to happen, instead of a boolean IsSuccess you would have to create an enumeration of sorts. Like:

enum ResultEnum{
    FullNameUpdated,
    FirstNameUpdated,
    UpdateFailed
}

class OperationResult{
    public ResultEnum Result {get;set;}
    public object ResultValue {get;set;}
}

interface IDataGateway{
    OperationResult UpdateName(string name);
}

class NameUpdaterCommand{
    string _name;
    IDataGateway _data;
    Log _log;

    public NameUpdaterCommand(string name, IDataGateway data, Log log){
       _data = data;
       _name = name;
       _log = log;
    }
    
    public void Execute(){
        var result = _data.UpdateName(_name);

        switch(result.Result){
             case ResultEnum.FullNameUpdated:
               _log.Write("Full name updated to: " + result.ResultValue.ToString());
               break;
             case ResultEnum.FirstNameUpdated:
               _log.Write("First name updated to: " + result.ResultValue.ToString());
               break;
             case ResultEnum.UpdateFailed:
               _log.Write("Something went wrong: " + result.ResultValue.ToString());
               break;
        }  
    }
}

So now every time you want to add a new scenario you have to add a new enum value and a new case to the switch. This is more flexible than before, but a little more laborious than it should be. Let’s try to replace this enum-based code with objects that represent each case:

interface IDataGateway{
    void UpdateName(string name, Action<string> firstNameUpdated, Action<string> fullNameUpdated, Action<string> updateFailed);
}

class NameUpdaterCommand{
    string _name;
    IDataGateway _data;
    Log _log;

    public NameUpdaterCommand(string name, IDataGateway data, Log log){
       _data = data;
       _name = name;
       _log = log;
    }
    
    public void Execute(){
       _data.UpdateName(_name,
                     fullNameUpdated: name  => _log.Write("Full name updated to: " + name),
                    firstNameUpdated: name  => _log.Write("First name updated to: " + name),
                        updateFailed: error => _log.Write("Something went wrong: " + error )
        );
    }
}

So now we have shorter code. We have also moved the responsibility for controlling the flow to the object implementing IDataGateway. How it does it is just an implementation detail. We don’t care if it’s using an enumeration or any other mechanism, as long as it works.
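For illustration, here is one hypothetical implementation of IDataGateway; the `SqlDataGateway` name and its internal rules are made up, just to show where the decision now lives:

```csharp
class SqlDataGateway : IDataGateway
{
    public void UpdateName(string name,
                           Action<string> firstNameUpdated,
                           Action<string> fullNameUpdated,
                           Action<string> updateFailed)
    {
        if (string.IsNullOrWhiteSpace(name))
        {
            updateFailed("name cannot be empty");
            return;
        }

        // Made-up rule: a name containing a space counts as a full name.
        if (name.Contains(" "))
            fullNameUpdated(name);  // the gateway decides which path to take...
        else
            firstNameUpdated(name); // ...the caller just supplies the reactions.
    }
}
```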

Phew! I think that’s enough for now. Now go improve your code!


The OOP wars

Some time ago I had an interesting discussion with Tony Marston. Suddenly I found myself in the middle of what seems to be an ongoing war over what OOP is. It seems to (still) be a heated debate in some circles, so I want to share some thoughts on the topic.

The origins

So around 1962, two guys from Norway (Ole Johan Dahl and Kristen Nygaard) extended the Algol programming language to easily create simulations. They called the new language Simula. The idea was that “A discrete event system is viewed as a collection of processes whose actions and interactions completely describe the operation of the system”. Little did they know that their work would create a revolution in the programming community.

The calm before the storm

Sometime after the invention of Simula, in 1966, a recent graduate from the University of Utah came in contact with it. As he tried to understand the concepts behind this newborn language, something clicked in his mind. His name was Alan Kay, and he was the one who coined the term object-oriented programming. His vision was profound yet simple: a net of interconnected software computing units called objects, sending messages to each other. His idea was the software equivalent of the internet. By the way, he also had the idea of a network of computers interconnected by wired or wireless means.

Around 1979 a Danish man called Bjarne Stroustrup was working for AT&T Bell Labs, where he had the problem of analyzing the Unix kernel with respect to distributed computing. The problem was that C was way too low level for such a large system. It was then that memories from his Ph.D. thesis, which had been written using Simula, came back. Using Simula as a base, Bjarne extended C to support classes, which he called “C with Classes” and later “C++”.

The Smalltalk faction

Smalltalk is the brainchild of Alan Kay. It’s the reification of his vision. The language itself is pretty compact.

Smalltalk sports a dynamic typing system; that is, types are not enforced at compile time.

An object is a computing unit that sends and receives messages. The user defines which actions must take place when a given message is received by a specific object. If there’s no action defined for a particular message, then the object notifies the system with a ‘message not understood’ message.

Alan Kay was heavily influenced by LISP. In LISP everything is a list: code, data, everything. This allows powerful metaprogramming techniques. Kay built upon that metaphor: everything in Smalltalk is an object. Everything. A number is just an object which knows how to respond to messages like “+ 3”. A string is an object that knows how to respond to messages like “reverse”. Even inline functions/closures are objects (known as blocks) that respond to a “value” message. That’s all there is to it. This is the reason why static typing is unnecessary: you just care whether the object can respond to a message or not.

The C++ camp

C++ was designed with systems creation in mind. As such, it deals with concerns like performance and memory footprint. If you are familiar with C, C++ is a natural evolution. It can be tricky, however, to get the most out of the object extension. This is due to C++ being a multiparadigm language, meaning that you may still resort to solutions from a different paradigm that could be implemented more cleanly using OOP. Stroustrup talked about this in his 1995 OOPSLA paper (see the concrete types section).

It uses a static type system, so the compiler validates every type and related operation.

An object is a structure of data along with methods to manipulate that data. You directly invoke the methods on the object.

Classes are a type-extension mechanism, allowing the developer to create a DSL on top of C while still having access to all the lower-level features. In order to circumvent some of the problems that arise from a static type system, C++ introduces templates, which allow higher reusability.

The eternal bashing warfare

So, the eternal discussion about OOP stems from these two schools of thought. To some, OOP is nothing more than procedural programming plus encapsulation, inheritance, and polymorphism. To others (myself included) it involves a completely different mindset. The reality is that C++ is indeed an object-oriented extension on top of a procedural language, whereas Smalltalk is a completely new language that heavily draws from the functional realm. Therefore, the claims from each group are valid depending on the point of view. As someone who learned OOP using C++, I found it very beneficial to learn Smalltalk later. Really, having nothing but objects to work with helped me understand the boundaries between OOP and procedural programming, and shaped my approach to OOP design and decomposition.

Peace to the world

So, whether you belong to the Smalltalk or the C++ party, remember to be tolerant of other people’s points of view. It’s an absolute benefit to learn to see from another perspective. So next time you find yourself in another OOP battle, remember that the ultimate value comes from learning to work together, despite differences, rather than from demonstrating that you’re right and everybody else is not.

Happy Holidays!

Software developer profiles

In my last post I talked about how a developer could improve his skillset by breaking it down into 3 areas: Principles, Technology, and Industry knowledge. Depending on how the time is invested, chances are that he will fall into one of the following stereotypes (T = Technology, I = Industry, P = Principles. Order indicates depth of expertise):

T+I+P

This is by far the most common type of software developer that I have found in my interviewing experience. These are students who graduated from school using Visual Basic (or some other RAD tool) and then went on to create forms-over-data kinds of software without really complex rules. Even when they move to Java, they’re still coding with a VB mindset. They can create something out of thin air quickly, but often it’s a big ball of mud and very hard to maintain. Depending on the time and the kind of projects, he/she can start to evolve towards a more principles-focused practice. Or just continue doing the same thing for the next 10 years. I usually try to figure out where on the spectrum between these two poles the candidate is.

I+T+P

I have seen more and more developers of this kind lately. They are usually people like the accountant who learned SQL on his own. As the final user of the software, he can create and tweak the software to adjust it to his necessities. Since they lack any formal engineering education, the resulting code is often no better than that of a student. I have worked with this kind of developer but have never interviewed one.

P+I+T

These typically are software developers who spent a lot of time in an enterprise, creating enterprise-level software. This forced them to look for better ways to create software that’s stable, maintainable, and robust, ultimately leading to a better understanding of the principles, patterns, and practices. However, the rate of adoption of new technologies in the enterprise is rather slow (some are still running on AS/400), so they are behind the technological wave. Nevertheless, their understanding of the more general principles allows them to pick up new technologies and languages quickly. Whenever I come across this kind of candidate, I usually recommend him/her on the spot.

P+T+I

This is the typical software developer who graduates from school and goes to work in a software shop, creating software for other clients. He understands the importance of creating good software and tries to improve his skills as time goes by. However, unless he/she is assigned to a customer for a very long time, his understanding of the industry is limited to the scope of the projects assigned to him. Whenever I come across this kind of candidate, I usually recommend him/her on the spot.

Where are you now, and where are you heading?

Final thoughts

In my experience, the seniority of a software developer is dictated by the depth of his understanding of the principles, patterns, and practices. The reason is that the quality of the overall software is deeply affected by this. You can always correct a DOM manipulation done with jQuery to use the Angular mechanisms, but correcting a faulty architecture or a leaking abstraction is a far more complex matter. That is why it is important to make these decisions with a solid understanding of their consequences.
So you can have a developer with a good understanding of principles and zero experience using Angular, and expect him to write better software than a developer with 5 years of Angular and a poor understanding of the principles. The latter may be quicker, but the former will create something of higher quality. Uncle Bob has reiterated this more than once and asked us as software developers to raise the bar. If you follow his work (talks and books) you’ll see that his emphasis is on the principles, not the technology.

As always, let me know what you think.

Knowledge management for software developers

There are 3 different kinds of knowledge that a software developer has to manage over his professional career. I call them principles, technology, and industry knowledge. There is other relevant stuff, such as soft skills, but today I’m focusing on knowledge, not skill sets.

Principles

Before continuing, I want to clarify what I mean by principles: borrowing the title from Uncle Bob’s famous book, I’m referring to principles, patterns, and practices (with a little twist on the book’s meaning).

Principles are technology agnostic. They can be applied generally across a wide set of circumstances. An example would be the DRY principle, which is universally recognized as a good practice in software engineering (no matter whether you work in an OOP or a functional paradigm).

Patterns are often limited to a specific mindset, a paradigm.

A good example here is the null object pattern. It makes sense in an OOP context, but it falls short when used in procedural programming.
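As a quick sketch of the null object pattern (the `ILog`/`NullLog` names are mine, not from a specific library):

```csharp
using System;

interface ILog
{
    void Write(string message);
}

class ConsoleLog : ILog
{
    public void Write(string message) => Console.WriteLine(message);
}

// The null object: a do-nothing implementation that can stand in for a real
// log, so callers never need a null check before calling Write.
class NullLog : ILog
{
    public void Write(string message) { /* intentionally empty */ }
}
```

In procedural code, by contrast, you’d typically end up with `if (log != null)` checks scattered around, which is exactly what this pattern removes.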

Patterns usually trade simplicity for flexibility, the latter deriving from some of the paradigm’s traits. You could say that they maximize some of the paradigm’s benefits at the cost of simplicity: the code may be complex to understand for someone not familiar with the paradigm, but at the same time it is easier to change once understood. The secret here lies in one’s ability to use the paradigm’s thinking process. As with everything else, practice leads to mastery.

You can find a compilation of these patterns for almost every development paradigm, with names that make it easy to refer to them when talking with other developers.

Practices refer to the way we develop code. This includes stuff like refactoring, testing, incremental delivery, and so on. They’re usually outlined in software development methodologies, and some are expressed as conventions. While they can be widely applied, we usually use and learn them in the context of a team’s or project’s specific configuration.

All of us have a certain degree of familiarity with each of these concepts. However, not all of us are conscious that they are interrelated, e.g. comprehension of some principles can help us decide when to apply certain patterns. This kind of knowledge ultimately leads to better code and designs.

Technology

This is probably the kind of knowledge that developers spend the most time learning. This makes sense: with so many new technologies appearing every other week, we must try to keep up or risk becoming obsolete. In a sense, technology is like a fashion trend: we have something new this summer, but as soon as autumn arrives, a new framework that promises to help us code faster takes the lead. Unless someone deliberately chooses to ignore the latest trends, there is just not enough time to become really proficient with a single technology. I usually think of technology as software platforms, libraries, and frameworks.

Software platforms are the environments in which the code is executed (.NET, Node.js, Java). I like to think about software rather than hardware platforms because software platforms are often able to run on different hardware platforms, e.g. Java can run on a mobile, desktop, or server platform.

Software libraries provide a very specific functionality that can be used in multiple projects, e.g. jQuery’s purpose is manipulation of the DOM. They are methodology agnostic, which means they’re really flexible when it comes to workflow types. This property makes them easy to reuse and port between teams, jobs, and industries.

Frameworks often provide a set of libraries to accomplish something more complex. We even have application frameworks such as Spring, which handles everything from retrieving data to displaying it, or Angular, which provides us with the tools to create a presentation layer and communicate with the backend. One difference between a framework and a library is that a library just provides you with the tools to do things, while a framework also enforces an (often highly opinionated) way to do things. This makes frameworks harder to integrate into an ongoing project (as opposed to a library), but they are a great choice if you are starting from the ground up.

Most of the time, software libraries and frameworks are tied to a software platform, so you naturally learn the ones that run on your platform of choice (like Java or Node.js). Sometimes ports of these libraries and frameworks are made (like Hibernate to NHibernate), but more often than not they make some adjustments to take advantage of the platform’s particular characteristics (meaning there are changes to the API).

Industry

This is often a byproduct of working on a project. As a software developer you really don’t study accounting unless you are creating accounting software. Or banking. Even worse, sometimes we just limit ourselves to creating what the customer requirements document says, without even trying to understand the purpose of the software or the needs of its users. Eric Evans pointed this out and explains that the reason is that this kind of knowledge is not useful to us unless we intend to stay in the same industry (like manufacturing). In other words, its reusability scope is very limited when compared with the other kinds of knowledge. However, as Evans also explains, a deep understanding of the industry is necessary if we really want to create not only a good thing but the right thing.

Mix and match

The time you spend on each of these kinds of knowledge leads to a different set of abilities. Try it out!

  1. Evaluate yourself on each of these kinds of knowledge
  2. Select the area you’re lacking the most (principles, Frameworks, you pick)
  3. Make a 3 month plan to improve
  4. Start over 🙂

As always, your comments are welcome!