Monday, June 30, 2014

Practical Object Oriented Design in London

Between 25 and 27 June 2014 I had the pleasure of attending a three-day Practical Object-Oriented Design course facilitated by Sandi Metz (@sandimetz) and Matt Wynne (@mattwynne).

Here's what I learned:

DAY 1

In the morning, we were given red tests and asked to implement a solution without worrying too much about the design ("shameless green"):
- surprisingly, only 2-3 pairs had finished the task
- the reason being that we were trying to spot patterns and write "smart", DRY (Don't Repeat Yourself) code as we went, instead of focusing on getting the tests green
- trade-off between abstraction and clarity: the "smart" code was actually hard to read and change, while the dirty but simple code with a conditional for every possible input was easy to read, though full of duplication
- rushing to refactor might lead people to guess the abstraction too soon; a bad abstraction/pattern is more expensive to work with than duplication

In the next exercises we started with a dirty, simple and green solution (the state we called "shameless green") and worked on refactoring to remove the duplication:
- the guiding rule was: "find the things that are most alike, and make changes to make them more alike"
- it is critical to stay green throughout the refactoring (if red, one and only one CTRL-Z takes you back to the latest green)
- to make this possible we practised refactoring a la Katrina Owen (@kytrinyx): compile, execute, use results, clean up unused code (see the sketch after this list). "Compile" means write the new code but don't call it yet (catches syntax errors). "Execute" means call it somewhere in your method under test but ignore the results (catches undeclared methods, constants, etc.). "Use the results" means replace the old code with the new (catches mistakenly changed business logic).
- if your tests are red and you get hit by a truck, it will cost other people money to pick up your work
- the more it hurts to stay green, the more important it is to do it
- "shameless green" + refactoring tends to be faster overall than trying to implement "smart" code straight away

Horizontal vs Vertical Refactoring (http://www.threeriversinstitute.org/blog/?p=594)
- I should pay attention to whether I want to finish horizontally first or how deep I want to go vertically
- resist making the change until you've got all the information needed to do the vertical refactoring

Names:
- when in doubt about how to name a variable/method, give it a long & descriptive name (e.g. "initial_number_of_bottles")
- such names are cheap to read (unlike x or foo)
- short or bad names are expensive (they might cause you to think wrongly about the problem)

- defaults are useful when adding a new argument to a method while not all of its clients have been migrated to the new signature yet
- always use messages instead of directly accessing instance variables (@variable). Messages create seams - we can easily put a different object behind a message, whereas there are no seams in direct data access (see the sketch below)
- refactoring temporarily raises complexity before it's finished. After that, though, the complexity is lower
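For instance (a toy example of mine, not the course's code), a private reader creates a seam that a subclass can exploit:

  class Verse
    def initialize(number_of_bottles)
      @number_of_bottles = number_of_bottles
    end

    def text
      "#{number_of_bottles} bottles of beer" # sends a message...
    end

    private

    # ...rather than touching @number_of_bottles directly, so different
    # behaviour can be substituted behind this method without changing #text
    def number_of_bottles
      @number_of_bottles
    end
  end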

DAY 2

- "squint test" (looking at the shape & colors of all the code on one page) suggests where to start refactoring
- shape reveals conditionals and how hard the code is to reason about; colours reveal mixed levels of abstraction

- extracting the biggest thing in common leads to different results for different people. "Find the things most alike, make them more alike" refactoring leads to identical code for different people
- it's very easy to combine small things back into a big thing (you'll never cause yourself problems by making small objects)
- on the other hand, after extracting big methods/objects it can be difficult to take them apart

When to extract classes:
- if I have a number of methods which use only their own arguments, and not the class fields, it's a smell - we probably need to create new classes

e.g.

  def next_number_of_bottles(number_of_bottles)

---- refactor into ---->

  number_of_bottles_object = NumberOfBottles.new(number_of_bottles)
  number_of_bottles_object.next

- if I have a bunch of variables with the same prefix/suffix (e.g. initial_number_of_bottles, final_number_of_bottles), it probably means I should have a NumberOfBottles object and send it the messages: initial, final
- top-down/bottom-up: objects that are reusable feature context independence (e.g. names more generic than their current use)

How to extract classes from the small methods created by "Find the things most alike, make them more alike":
- all these methods never use the state of the class, so they should be moved into a new class, where the current method arguments become the fields of the new class
- create the new class and copy all the code in there
- the decision on what to copy should be based on the methods' arguments, private/public visibility, shape, return types, etc.
- in the new class, add an attr_reader for the methods' arguments and initialize them in the constructor
- new up the new class in the old class everywhere (duplication, for now)
- to get rid of the unnecessary method arguments in the new class, default them to the attr_reader methods, then remove the arguments from the clients' invocations, then remove the (now defaulted) arguments themselves
- a local variable reference now resolves to the attr_reader message on the object, not to the method argument
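Put together, the sequence looks roughly like this (a sketch under names of my own choosing; the course worked through the bottles-of-beer kata):

  class BottleNumber
    attr_reader :number

    def initialize(number)
      @number = number
    end

    # the argument now defaults to the attr_reader, so clients can drop
    # it one call site at a time while the tests stay green throughout
    def quantity(number = self.number)
      number.zero? ? "no more" : number.to_s
    end
  end

  # meanwhile, in the old class, the small method just delegates:
  def quantity(number)
    BottleNumber.new(number).quantity
  end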

Now that all the method arguments are gone, we can replace conditionals with polymorphism:
- create a special case class (inheriting from the existing base) and copy in the methods that contain a conditional
- keep only the branch for your special case class
- instantiate it and use it in the clients
- simplify the base class
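A hedged sketch of that sequence (again with bottle-flavoured names of mine):

  class BottleNumber
    attr_reader :number

    def initialize(number)
      @number = number
    end

    def quantity
      number.zero? ? "no more" : number.to_s
    end
  end

  # 1. create a special case class and keep only its branch of the conditional
  class BottleNumber0 < BottleNumber
    def quantity
      "no more"
    end
  end

  # 2. instantiate it in the clients, e.g. via a small factory...
  def bottle_number_for(number)
    number.zero? ? BottleNumber0.new(number) : BottleNumber.new(number)
  end

  # 3. ...and simplify the base class: BottleNumber#quantity
  #    can now be plain `number.to_s`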

Inheritance:
- is never a problem if you follow the "Find the things most alike, make them more alike" pattern (which creates small methods switching only on their parameters)

Inheritance doesn't cause problems when:
- it's shallow
- it's narrow
- subclasses are leaf nodes of the object graph (they live at the edges of the system, not in the centre)
- subclasses use all the behaviour of the base class

DAY 3

SOLID principles:
- Open Closed Principle: wait for a requirement to come in and then prepare the code for it (make the code easy to change, then make the easy change)
- Liskov Substitution Principle: nil is an LSP violation because it doesn't respond to the same protocol (the client has to make an explicit nil check)
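A tiny Ruby illustration of the nil problem (my own example, not from the course):

  # with nil, every client needs an explicit special case:
  def greeting(user)
    user.nil? ? "Hello, guest" : "Hello, #{user.name}"
  end

  # a null object keeps the protocol intact instead:
  class MissingUser
    def name
      "guest"
    end
  end

  def greeting(user) # user is a real User or a MissingUser
    "Hello, #{user.name}"
  end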

- the refactoring step of the TDD cycle is there to maximise clarity
- refactoring that improves the design is used to make the code Open-Closed to new requirements
- don't guess the future. Wait for a requirement, then make the code Open-Closed to it, then add the new feature

- the closer to the middle of your domain you apply inheritance, the more likely it is to hurt. The middle should usually be made of composition
- it's safe to use inheritance but you have to be ready to abandon it. It's safe to use it at the edges for small things

Roles in Ruby:
- is_a (inheritance)
- behaves_like (duck types)
- has_a (composition)
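Roughly, in code (a throwaway sketch of mine, reusing the bottle names from above):

  class BottleNumber; end

  class BottleNumber0 < BottleNumber # is_a: a BottleNumber0 is a BottleNumber
  end

  class Verse
    def initialize(bottle_number)    # has_a: a Verse has a bottle number
      @bottle_number = bottle_number
    end
  end

  # behaves_like: any object that responds to the bottle-number protocol
  # (e.g. #quantity) can play the role inside Verse, whatever its class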

Related links

http://www.confreaks.com/videos/3358-railsconf-all-the-little-things
https://www.youtube.com/watch?v=npOGOmkxuio
http://www.confreaks.com/videos/1115-gogaruco2012-go-ahead-make-a-mess
http://www.confreaks.com/videos/240-goruco2009-solid-object-oriented-design
http://www.confreaks.com/videos/1071-cascadiaruby2012-therapeutic-refactoring

SUMMARY

If you get a chance to attend this course, do not hesitate!

PS: please know that all the points made in this post mirror my understanding of what we practised in the course and may not fully represent Sandi's and Matt's thoughts on the subject

Tuesday, December 31, 2013

2013 summary

This post is meant to be a point of reference for the upcoming year, that is, a way to track my own progress (or lack thereof) in the future.

Most notably:
- new job - moved to London to work for Sky Network Services (part of Sky)

Paired (as in Pair Programming):
- remotely with 2 people (thanks to Avdi Grimm's #pairwithme)
- locally with ~15 people

Coded:
- professionally in: Java, JavaScript
- for fun in: Ruby, Clojure
...and very briefly in: Bash, Python, C#

New frameworks and tools used in projects:
- Gradle (multi-language build tool)
- AngularJS (single-page apps with JavaScript)
- Jasmine and PhantomJS (unit testing setup for JavaScript)
- SpringJDBC (Database interface for Java)
- Jersey (RESTful web services for Java)
- Yatspec (supporting Java tool for Behaviour Driven Development)
- Spark (Sinatra-inspired micro framework for web applications in Java)
- writing one's own MVC framework
- ...and IntelliJ (Java IDE), the Mac and OS X, TeamCity (Continuous Integration supporting tool)

Attended meetups:
- Global Day of Code Retreat 2013 at Valtech, London (by LSCC)
- Software Craftsmanship Round Tables (by LSCC)
- Evening Code and Coffee / Craft Beer (by LSCC)
- eXtreme Tuesdays Club
- Architectural Kata with Alexandru Bolboaca (by LSCC)
- BDD workshop with Steve Tooke (by LSCC)
- BDD workshop with Meza Meszaros (by eXtreme Tuesdays Club)

Seen live and had a short chat with:
- Kent Beck ("My first feature and beyond: Why I work at Facebook")
- Uncle Bob ("Automated Acceptance Testing", "Professionalism", "Design Patterns")

Seen live (people I had heard of before moving to London):
- Michael Feathers, Steve Freeman, Nat Pryce, Dan North, Tim Mackinnon, Liz Keogh, Keith Braithwaite, Sandro Mancuso, Giovanni Asproni, Enrique Comba Riepenhausen

Read books:
- Practical Object-Oriented Design in Ruby by Sandi Metz
- Smalltalk Best Practice Patterns by Kent Beck
- Responsible Design in Android by J.B. Rainsberger
- Confident Ruby by Avdi Grimm
- Are Your Lights On? by D. Gause and G. Weinberg
- ...and, partially, a bunch of other ones :)

Watched podcasts/screencasts:
- Destroy All Software by Gary Bernhardt
- Clean Coders by Uncle Bob
- Test-Driven Development by Kent Beck
- Ruby Rogues

And finally, I read/watched way too many blog posts, articles, interviews, conference talks, etc. :)

Saturday, November 2, 2013

Silent Pair Programming

On Thursday 31 October I participated in a Silent Pair Programming event organised by the London Software Craftsmanship Community. During the sessions pairs are not allowed to talk about the problem; they may only discuss secondary issues such as the IDE, keyboard shortcuts, etc. The goal of the exercise is to communicate through code and to maximise its readability. Here are a couple of things I learned from the session.

If your partner writes a test which requires too much implementation code at once, one way to handle it is to make the test pass by hardcoding the response and then writing a smaller test yourself. Once that is implemented, the following test should prove the hardcoded response insufficient and force its removal by generalising the production code.
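For example (a made-up JUnit sketch of mine, not code from the session):

  import static org.junit.Assert.assertEquals;
  import org.junit.Test;

  public class AdderTest {
      @Test
      public void addsTwoAndTwo() {
          assertEquals(4, new Adder().add(2, 2)); // the partner's test, first made green with `return 4;`
      }

      @Test
      public void addsTwoAndThree() {
          assertEquals(5, new Adder().add(2, 3)); // the follow-up test proves the hardcoding insufficient
      }
  }

  class Adder {
      int add(int a, int b) {
          return a + b; // the generalisation forced by the second test
      }
  }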

When working on the production code, it might be worth following Kent Beck's Composed Method pattern (from "Smalltalk Best Practice Patterns"): let your partner follow your thoughts by implementing the method (almost) entirely in terms of well-named private methods and variables. Language-specific features and APIs should be hidden inside the private methods, as they are often too generic to convey the intent. Once the test is green, the pair might want to minimise the code by inlining some of the private methods, where the underlying generic code does not obscure readability.
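For instance (a sketch with invented names):

  public class Registration {
      public void register(User user) {
          // the method reads as intent; each step is a well-named private method
          validateAge(user);
          persist(user);
          sendWelcomeEmail(user);
      }

      // language-specific APIs live down here, behind intention-revealing names
      private void validateAge(User user) {
          if (user.age() < 18) throw new IllegalArgumentException("too young");
      }

      private void persist(User user) { /* e.g. a JDBC or repository call */ }

      private void sendWelcomeEmail(User user) { /* e.g. JavaMail */ }
  }

  class User { // stub so the sketch compiles
      int age() { return 21; }
  }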

If you don't understand what a piece of code written by your partner does, you might want to select it in the editor and hand over the keyboard to them, so that they can refactor it towards more clarity.

If you can think of any other tips for Silent Pair Programming sessions, please feel free to post them in the comments :)

Wednesday, September 18, 2013

Manual Dependency Injection with Jersey and embedded Jetty

I wrote a little application demonstrating how to manually (through constructors) inject dependencies into Jersey resources in an embedded container like Jetty. 

The benefits are:
- it is lightweight (no need for Spring, Guice, etc.)
- more control over your application (less magic behind the scenes)
- control over dependencies in tests (via Dependency Injection)
- running a web app with a simple Java main method (via embedded Jetty)

The example application is called Time Expert. It exposes the current time via a RESTful web service, so when I run it in production (via Main.java) I can use it as follows:

Here's how I test it (TimeAcceptanceTest.java):

However, I cannot rely on the real time here, because it changes every time I run the tests. The problem can be easily solved with the Clock pattern. My tests require a FixedClock, which allows me to set the current time to any value:
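The pair can be as small as this (a sketch of mine; the actual TimeMaster code may differ):

  import java.util.Date;

  interface Clock {
      Date now();
  }

  class FixedClock implements Clock {
      private Date fixedTime;

      public void setNow(Date fixedTime) {
          this.fixedTime = fixedTime;
      }

      @Override
      public Date now() {
          return fixedTime; // always the time the test chose
      }
  }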



So I start my server with a fixed clock:


... and then set it to, say, 20:15:


Since my Jersey resource class is only aware of the clock interface, it will display 20:15:
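Pieced together, the test flow looks roughly like this (TimeServer is the stand-in wiring class sketched below, and todayAt is a hypothetical test helper):

  FixedClock clock = new FixedClock();
  TimeServer.start(8080, clock);   // start the server with the fixed clock
  clock.setNow(todayAt(20, 15));   // set the "current" time to 20:15
  // an HTTP GET against the time resource now returns "20:15",
  // because the resource only ever talks to the Clock interface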


In order to create your Jersey resources with manually injected dependencies, you have to register org.glassfish.jersey.servlet.ServletContainer with a customized org.glassfish.jersey.server.ResourceConfig. In that config, Jersey resources can be newed up with their dependencies:
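For example (a sketch of mine with assumed names such as TimeServer and TimeResource; see the repo for the real thing):

  import org.eclipse.jetty.server.Server;
  import org.eclipse.jetty.servlet.ServletContextHandler;
  import org.eclipse.jetty.servlet.ServletHolder;
  import org.glassfish.jersey.server.ResourceConfig;
  import org.glassfish.jersey.servlet.ServletContainer;

  public class TimeServer {
      public static void start(int port, Clock clock) throws Exception {
          ResourceConfig config = new ResourceConfig();
          config.registerInstances(new TimeResource(clock)); // manual constructor injection

          Server server = new Server(port);
          ServletContextHandler context = new ServletContextHandler();
          context.addServlet(new ServletHolder(new ServletContainer(config)), "/*");
          server.setHandler(context);
          server.start();
      }
  }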

Of course, production provides a real clock (which returns new Date()):
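Something along these lines (sketch):

  class SystemClock implements Clock {
      @Override
      public java.util.Date now() {
          return new java.util.Date();
      }
  }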


Code is available under: https://github.com/unclejamal/TimeMaster

Enjoy!

Tested using:
- Java 1.7,
- Gradle 1.7 (easily convertible to Maven :)),
- Jersey 2.2,
- Jetty 9.0.5.

Wednesday, October 31, 2012

Literals and variables in unit testing

Today at the Path11 Book Club we talked about chapters 21 and 22 of the book "Growing Object-Oriented Software, Guided by Tests" by Steve Freeman and Nat Pryce.

In the section "Literals and Variables" of chapter 21, the authors advise using variables/constants in place of meaningless literals:
...test code tends to be more concrete than production code, which means it has more literal values. Literal values without explanation can be difficult to understand because the programmer has to interpret whether a particular value is significant (e.g. just outside the allowed range) or just an arbitrary placeholder to trace behavior (e.g. should be doubled and passed on to a peer).
...
One solution is to allocate literal values to variables and constants with names that describe their function.
While this rule is rather self-explanatory, I have noticed an interesting pattern. When I do TDD, pretty often in the refactoring phase I go through the test code and replace literals with constants (unless that obscures readability). What happens is that two kinds of constants emerge:
  • Example Constants (one value out of many possible)
public static final String USER_NAME = "Joe"; //in fact it could be also "John" or "Sue"
public static final int INVALID_ID = 666; //in fact it could be also 667, 668, 669 and so on...
  • Significant Constants (concrete value having special meaning)
public static final String ATTR_EVENT_ID = "eventId"; // significant value
public static final int AGE_OF_CONSENT = 18; // significant value
Significant Constants will very often be needed in both the test and the implementation, so they can be moved to the implementation class and then referred to from the test class.

On the other hand, Example Constants will usually stay only in the test code.

Here is an example using Spring MVC:
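(The snippet below is a simplified sketch of mine; the class and constant names are invented.)

  import static org.junit.Assert.assertEquals;

  import org.junit.Test;
  import org.springframework.web.servlet.ModelAndView;

  class EventController {
      // Significant Constant: lives in the implementation...
      static final String ATTR_EVENT_ID = "eventId";

      ModelAndView show(String userName) {
          return new ModelAndView("event", ATTR_EVENT_ID, 42L);
      }
  }

  public class EventControllerTest {
      // Example Constant: stays in the test; any other name would do
      private static final String USER_NAME = "Joe";

      @Test
      public void putsTheEventIdIntoTheModel() {
          ModelAndView mav = new EventController().show(USER_NAME);
          // ...and is referred to from the test
          assertEquals(42L, mav.getModel().get(EventController.ATTR_EVENT_ID));
      }
  }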

Update: as pointed out in the comments, sharing Significant Constants between tests and implementation carries the risk of uncaught errors when the constant's value is edited. The safest way is indeed to have the test class be entirely a specification, defining its own constants/variables rather than referring to the implementation class.

PS: don't forget to visit awesome Path11 Book Club :)

Wednesday, November 30, 2011

BDD with Robot Framework and Java

Inspired by Matt Wynne's podcast "BDD as it's meant to be done", I reproduced his sample application using Robot Framework and Java (in place of Cucumber and Ruby). This post is intended to give an overview of a possible setup for Java-based BDD (source code).

Idea

According to Gojko Adzic, 90% of teams that fail with ATDD don't structure their tests properly, which results in so-called scripts (dozens of test-code lines that mix up specification, workflow and user interface) that are extremely difficult to maintain. The tests should answer the question of 'what' to test, as opposed to 'how' to do it.

Matt suggests structuring acceptance tests as a layered stack (I rephrased the layer names):


  • Examples - table-based set of input values introduced to the system and expected output values upon performing some action
  • Scenario - Given-When-Then style description of the action from the Examples, step by step
  • Steps - detailed definition of the Scenario steps, kept in a technology-independent manner
  • Glue - glue code connecting tests with the app itself (uses domain or interface classes in order to manipulate the Application)
  • Application - the application itself (should work according to the Examples)
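To make the stack concrete, here is a rough sketch of mine (keyword, class and method names are invented, not taken from the CashDispenser repo, and the Application is assumed to expose CashDispenser(int) and dispense(int)): the Scenario layer as a Robot Framework test, and the Glue layer as a plain Java keyword library (Robot maps the keyword "Fill Machine With Dollars" onto the method fillMachineWithDollars):

  *** Test Cases ***
  Dispense Twenty Dollars
      Fill Machine With Dollars     100
      Request Dollars               20
      Dispensed Amount Should Be    20

  // Glue: Robot passes keyword arguments as strings, hence the parsing
  public class CashDispenserKeywords {
      private CashDispenser dispenser; // the Application class under test
      private int dispensed;

      public void fillMachineWithDollars(String amount) {
          dispenser = new CashDispenser(Integer.parseInt(amount));
      }

      public void requestDollars(String amount) {
          dispensed = dispenser.dispense(Integer.parseInt(amount));
      }

      public void dispensedAmountShouldBe(String expected) {
          if (dispensed != Integer.parseInt(expected)) {
              throw new AssertionError("dispensed " + dispensed + ", expected " + expected);
          }
      }
  }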

Benefits:
  • Examples & Scenario layers clearly communicate the specification to Anybody (Customers could even modify it themselves)
  • Steps layer is actually a DSL (Domain-Specific Language) of the application. Exploring it creates a common domain-based language for Team Members and Customers. Further, it allows to execute the same set of tests via different interfaces implemented in the Glue layer
  • Glue layer drives us to create testeable classes and methods in the Application
  • Turns test (a.k.a. specification) writing into a creative and useful activity


Sample application driven by Robot Framework

Source code is available under: https://github.com/unclejamal/CashDispenser
Prerequisites: Java 1.6, Maven 3.x, Robot Framework 2.6+ (this is the setup I tested it with)

Building:
mvn clean install
Testing
cd atdd/cashdispenser-robot/target/
test.bat

Test logs will be created in:
atdd/cashdispenser-robot/target/output


Nice-to-haves and what-nots

In Java EE it would be nice to create two Glue layer implementations: one that manipulates Java classes directly and uses test doubles to stay separated from external services (this could run after every single build), and another that uses Jython to execute public EJB methods directly in the container (much slower, but permits more end-to-end testing).

Finally, to see a hands-on example of BDD with Cucumber and Ruby, I strongly recommend watching Matt Wynne's podcast "BDD as it's meant to be done".