Thursday, December 28, 2006


Test Driven Development

  • The importance of writing tests
    • They force the developer to think of the class design
    • They provide a safety harness while refactoring
    • They ensure that the state of code is always stable
    • New developers can make changes, comfortable in the knowledge that if they break something that was working, the tests will inform them
    • Test-after development is not the same as test-first development, and does not reap all of its benefits
  • Writing the tests
    • Think about the class, its responsibilities, and its API
    • Write the tests to test every method and various conditions in the methods
    • Whenever you feel the urge to write a print statement or a log message, that might be a scenario worth covering with a test
    • Write enough production code to ensure that the tests compile and fail
    • Write production code to pass all tests one by one, while also ensuring that previous tests do not fail
  • What not to test
    • Database entities need not be tested
    • Do not go overboard with tests. First write tests that are most likely to fail. Think of the cost benefit ratio while writing tests
  • The test class
    • Test classes end with the word Test. The test class for Account will be AccountTest
    • Test methods begin with the word test. The test method for creditAccount() will be testCreditAccount()
    • Tests can either exist in a different source tree in the same package as the class they are testing, or in the same source tree, but in a different package.
  • Managing dependencies
    • Unit tests should ideally not have any dependencies
    • Dependencies can be eliminated with mock objects (so that units can be tested in isolation)
  • AllTests
    • There should be one class, AllTests, that runs the entire test suite
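A minimal sketch of these conventions follows. The Account class and its methods are hypothetical, and a hand-rolled main() stands in for a JUnit 3.8 runner so the example is self-contained:

```java
// Hypothetical Account class under test.
class Account {
    private long balanceInCents;

    void credit(long cents) {
        if (cents < 0) throw new IllegalArgumentException("negative amount");
        balanceInCents += cents;
    }

    long getBalance() { return balanceInCents; }
}

// Test class ends with "Test"; test methods begin with "test".
public class AccountTest {
    public void testCreditAccount() {
        Account a = new Account();
        a.credit(500);
        if (a.getBalance() != 500) throw new AssertionError("credit failed");
    }

    public void testCreditNegativeAmountRejected() {
        Account a = new Account();
        try {
            a.credit(-1);
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) { }
    }

    // With JUnit 3.8 a TestRunner would discover the test* methods instead.
    public static void main(String[] args) {
        AccountTest t = new AccountTest();
        t.testCreditAccount();
        t.testCreditNegativeAmountRejected();
        System.out.println("OK");
    }
}
```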

    Requirements (Java Unit Tests):

    • JUnit 3.8 for JDK 1.4 and before
    • JUnit 4.x for JDK 1.5 and after
    • StrutsTestCase for testing Struts Action classes
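The mock-object idea under “Managing dependencies” can be sketched as below. AccountDao, AccountService, and the balance values are invented for illustration; the point is that the unit test never touches a real database:

```java
// Hypothetical DAO interface that a database-backed class would implement.
interface AccountDao {
    long loadBalance(String accountId);
}

// The class under test depends only on the interface, not on the database.
class AccountService {
    private final AccountDao dao;
    AccountService(AccountDao dao) { this.dao = dao; }

    boolean canWithdraw(String accountId, long amount) {
        return dao.loadBalance(accountId) >= amount;
    }
}

// The mock replaces the database-backed DAO in unit tests.
class MockAccountDao implements AccountDao {
    public long loadBalance(String accountId) { return 100; }
}

public class MockDemo {
    public static void main(String[] args) {
        AccountService service = new AccountService(new MockAccountDao());
        System.out.println(service.canWithdraw("A-1", 50));   // prints "true"
        System.out.println(service.canWithdraw("A-1", 500));  // prints "false"
    }
}
```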


Notes: This text was originally posted on my earlier blog at

Wednesday, December 27, 2006

Agile Timeboxing

A few days back, I was discussing Agile Timeboxing and estimation with some developers. What follows are some suggestions I gave them and thoughts that emerged from the suggestions.

In the examples I have used to explain certain points, I have assumed a J2EE application which has a database, an entity layer, an application layer, Struts Action classes, and JSPs. The concepts can be extrapolated to any other type of application as well.

Given a requirement we should be able to determine the amount of time it will take us to fulfill it. This is much easier said than done. Some reasons why we are not able to come up with accurate estimates are:
  • Lack of familiarity with the code base

  • Overestimating our capabilities

  • Underestimating the amount of work that needs to be done and its potential ripple effects

  • Working with gut feel without a proper process to identify the work that needs to be done

In this post I will focus on a process that can be used to identify the amount of work that needs to be done to fulfill a requirement. Once we know the amount of work, or its complexity, the team will have to correlate it to a time frame based on their capabilities.

Let's start with the requirement given by a client. We first need to ensure that it is not very complex and large. If it is, break it down into manageable sub-requirements. Then break each of these down into tasks. These tasks should ideally be vertical and not horizontal (in your system architecture). For example, if you are required to modify the design of a few tables and add some business logic, the tasks should NOT be “modify database schema”, “update all classes in the application layer”, “update all Action classes”, and so on. The problem with this approach is that when you modify the database schema, the application breaks, and remains unstable until the last task has been completed. It is not a good idea to keep the application unstable for such a long time; ideally we want it back to stability as soon as possible.

Hence, we create tasks along vertical lines, such as “update the USER table and support the new field in the view”. This entails updating the table and all corresponding layers that are affected by that entity. By creating vertical tasks, we ensure that the software is unstable only while we are working on that task, and back in a stable state as soon as we complete it. A simple rule of thumb: a task should not take more than 16 hours to complete. Break tasks that take longer into smaller tasks.

Once we have identified tasks, we have to estimate the effort. Most of the time developers work on gut feel, but that can lead to extremely inaccurate results. Every task consists of either modifying existing classes or adding new ones. Work your way up the layers of the software and identify all the classes that will have to be modified or added to fulfill the task. For example, if we are required to add a new field 'domain' to the login form, we know the USER table has to be modified, and the corresponding entity has to be updated. We then identify all the classes in the application layer that will be affected by the USER entity, followed by all the Action classes and the JSPs. If any new classes need to be created, add them to the list as well. Your IDE can be very helpful in identifying dependencies.

After we have outlined all the classes that will be affected or need to be added, we determine the complexity of the work to be done on each class. A simple way is to assign a complexity level of simple, medium, or complex to each class. There are several ways to determine the complexity; one of them is to use the number of unit tests generated. To gauge the complexity, we begin writing unit tests for each class. Since we are still in the estimation phase, we do not have to write the test bodies; it will suffice to write the test methods with a single 'fail()' statement. Be sure that each test tests one and only one thing, and that all possibilities of failure and success are covered. The number of tests generated gives a fair indication of the complexity involved in updating or creating that class. Since the definition of complexity as well as the team's capabilities differ, the amount of time a team takes to complete tasks at a given complexity level will vary from team to team.
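An estimation-phase test skeleton of the kind described above might look like this. The class and method names are invented for the hypothetical 'domain' field task, and a local fail() helper stands in for JUnit's so the sketch is self-contained:

```java
import java.lang.reflect.Method;

// Stubbed tests for the hypothetical 'domain' field task. Each method
// tests exactly one thing and fails by design until it is implemented.
public class LoginActionTest {

    private static void fail() { throw new AssertionError("not implemented"); }

    public void testDomainFieldIsPersisted()   { fail(); }
    public void testMissingDomainShowsError()  { fail(); }
    public void testOverlongDomainIsRejected() { fail(); }

    public static void main(String[] args) {
        // The count of test stubs is the complexity signal for this class.
        int stubs = 0;
        for (Method m : LoginActionTest.class.getDeclaredMethods())
            if (m.getName().startsWith("test")) stubs++;
        System.out.println(stubs + " test stubs");  // prints "3 test stubs"
    }
}
```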

A possible result of this phase will be a table similar to the one below.

                            Simple    Medium    Complex
    Classes                    6         2         1
    Time (Hrs)                 2         4         8
    Optimistic Estimate       12         8         8
    Multiplication factor    x1.5      x1.5      x1.5
    Buffer                   +20%      +20%      +20%
    Final Estimate           21.6      14.4      14.4

Estimate: (21.6 + 14.4 + 14.4) = 50.4 hrs = approx 50 hrs

The table above shows the effort required to complete a task. We have identified simple changes in 6 classes, medium changes in 1 class, 1 new class of medium complexity, and complex changes in 1 class. Note that we do not differentiate between classes that need to be modified and classes that need to be added; we simply write 2 classes of medium complexity (even though one is to be updated and one is new). As mentioned earlier, every team will have its own correlation of complexity to time. Let us assume that complexity estimates for our team are 2 hours for a simple task, 4 hours for a medium task, and 8 hours for a complex task. Next we multiply the number of classes by the time. Our first estimate is usually optimistic and must be multiplied by some factor to account for unknowns like code exploration, technology roadblocks (like having to work your way around some potential limitation of Struts), classes that were missed in the impact analysis, or any other unknown factor. A multiplication factor brings our estimate closer to reality, but in my experience we still have to account for extra time taken due to integration issues, minor requests from the clients, etc. It is usually a good idea to add a buffer of 20% to account for these factors. After applying the multiplication factor and the buffer, we add up the values in all 3 columns to get the final estimate.
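The arithmetic can be sketched as follows. The 1.5 multiplication factor is an assumed value chosen here to reproduce the totals above; the 20% buffer and per-class hours are from the text:

```java
public class EstimateDemo {
    public static void main(String[] args) {
        int[] classes = {6, 2, 1};   // simple, medium, complex changes
        int[] hours   = {2, 4, 8};   // this team's per-class times
        double factor = 1.5;         // multiplication factor for unknowns (assumed)
        double buffer = 1.2;         // 20% buffer for integration issues etc.

        String[] level = {"simple", "medium", "complex"};
        double total = 0;
        for (int i = 0; i < classes.length; i++) {
            double estimate = classes[i] * hours[i] * factor * buffer;
            System.out.printf("%s: %.1f hrs%n", level[i], estimate);
            total += estimate;
        }
        System.out.printf("total: %.1f hrs%n", total);  // prints "total: 50.4 hrs"
    }
}
```

Every team should calibrate factor and buffer against its own history rather than reuse these numbers.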

Once we have estimates for all the tasks, developers are ready to pick them up. In the iteration planning meeting (IPM), each developer usually gives his or her available time in the next iteration, and picks up tasks such that they do not add up to more than the available time. We usually assume that all the available time will be spent coding. We do not consider time spent in client conference calls, email and IM communication, planning for the next iteration, reading necessary documents, etc. The amount of time we will actually have for development is our availability minus the time taken for these ancillary tasks. Always keep this in mind before picking up tasks.

This is a very practical process for timeboxing tasks that I have often found useful. Several things in this process, like the multiplication factor, buffer, and ancillary tasks, are team and project dependent; use values that better reflect your project and your team's capabilities. The presence of a multiplication factor should not be used as an excuse for lax estimation. It initially accounts for lack of familiarity with the domain and code base, but should be adjusted downward as the team gets better at estimating.

Notes: This text was originally posted on my earlier blog at
Here are the comments from the original post

DATE: 12/30/2006 05:28:03 AM
I have used a similar technique to estimate. I would simply multiply my gut feel by 3. This almost always produced reasonably accurate estimates.

Friday, December 08, 2006

A very informative website

A few days back I came across a very good and informative website. Before you say to yourself... oh no, not another website, and skip this post, let me tell you that this one is really good. Like all good things it is community driven. The community brings up the latest news items related to software development, but the most useful content, in my opinion, is their interview series. They have interviewed several software luminaries like Joshua Bloch (on API design), Martin Fowler (on DSLs), Ron Jeffries (on Agile), Brian Goetz (on concurrency in Java), and many more. These interviews are available for viewing on their website. The overall content quality is superlative.

I'm sure I have at least got you interested enough to check it out.

Monday, December 04, 2006

MAX Memory For Your JVM

There is an interesting discussion going on the Java Posse Google Group on the maximum memory that can be allocated to a JVM.

Now if you are running a 32-bit system, then 4 GB is the maximum addressable memory. Of this, the OS (Windows) takes about 2 GB (though it can be tweaked to take less, according to a post on the Java forum), leaving 2 GB for your apps. You could theoretically assign this 2 GB to your JVM, but thread stacks are allocated memory outside of the JVM heap. So if your application uses a lot of threads, you will have to leave some space for the thread stacks.
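You can check how much heap your JVM was actually granted with Runtime.maxMemory(), which reflects the -Xmx setting:

```java
public class MaxHeap {
    public static void main(String[] args) {
        // maxMemory() returns the heap limit (-Xmx, or the JVM default) in bytes.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println(maxBytes / (1024 * 1024) + " MB max heap");
    }
}
```

Run it with, say, java -Xmx512m MaxHeap to see the flag take effect; the reported value may come out slightly below the flag depending on the JVM.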

Now if you use a 64-bit system, then you can allocate a LOT more memory to your JVM, but to do this your entire stack (CPU, OS, JVM, any other dependencies) needs to be 64-bit compliant. 

If you want an unlimited heap size for your JVM, you might want to check out Azul. John Reynolds wrote an interesting blog on scaling JVMs with Azul.

By the way, if you enjoy listening to podcasts and are into Java, then you absolutely must listen to the JavaPosse podcast. It is very informative and entertaining.

Notes: This text was originally posted on my earlier blog at
Here are the comments from the original post

DATE: 01/17/2007 10:00:39 PM
hey.. sourceforge has serverfarm which they share with developers to work on it. You might be able to try various hardware available with them to check performance of systems and jvms...

DATE: 01/18/2007 05:30:14 PM
Hey Amish,
Good to hear from you. That is a good idea to check for performance.

I will check with them for access. Meanwhile, do you know if we need to apply, or can we just create an account somewhere and start using their servers?