Reservoir Dogs: DueDates 2.0 Edition

Posted on Monday, December 08, 2008

Reservoir Dogs is a film by Quentin Tarantino from 1992 in which a would-be heist by a gang of ice thieves is set spiraling out of control when a rat (actually, a mole…er, undercover cop) sets the thieves up. After the opening scene just moments before the heist, the movie picks up right as everything has gone haywire with the gang scattered, either lying dead in the jewelry shop or running for their lives while dozens of police are searching for them.

There is a scene about an hour in where two of the gang members, Mr. Orange and Mr. Blonde (the members all went by color-name aliases) are holed-up in the after-heist staging area. Mr. Orange with a gut-wound is lying on the floor bleeding out, passed out and near death, while Mr. Blonde begins to torture a cop he happened to take hostage while fleeing the scene of the failed heist.

In Reservoir Dogs: DueDates 2.0 Edition, Mr. Blonde would be Wicket, Mr. Orange would be the DueDates application, and the cop would be my team of developers: Vincent Leung, Creighton Okada, and John Zhou. In this rendition, Mr. Blonde (read Wicket) definitely tortured our project and the team—“Look [programmers], I’m not gonna bull!@#* you. I don’t really give a good !@#* what you know or don’t know. But, I’m gonna torture you anyway. Regardless. Not to get any information. It’s amusing to me…to torture…[programmers].” And what’s a movie without the most appropriate soundtrack? As the torturing begins, the radio is playing Stealers Wheel’s Stuck in the Middle With You. Here are the first two verses to set the mood as the rest of the story unravels:

Well I don't know why I came here tonight, I got the feeling that something ain't right,
I'm so scared in case I fall off my chair, And I'm wondering how I'll get down the stairs,
Clowns to the left of me, Jokers to the right, here I am, Stuck in the middle with you.

Yes I'm stuck in the middle with you, And I'm wondering what it is I should do,
It's so hard to keep this smile [on] my face, Losing control, yeah, I'm all over the place,
Clowns to the left of me, Jokers to the right, Here I am, stuck in the middle with you.

Clowns, Jokers, and Stuck in the Middle
Well, let’s clear up just who the “clowns,” “jokers,” and those “stuck in the middle” are…these would all be me and the other members of the team, whose strings Wicket and DueDates pulled, making us dance like little marionettes. Our team consisted of me, Vincent Leung, Creighton Okada, and John Zhou.

I went into this project with an open mind about my teammates. Although I had heard from others what to expect from each of them, and had read their blogs and looked over their prior DueDates projects, I hoped that we would be successful in our endeavor to migrate DueDates 1.x to 2.0…that is, from a console application to a web application.

Honestly, I’m not sure what I would have done differently within the group. Each team member performed the tasks that were asked of them. We divided the tasks up so that we could each work on something while moving toward the goal of merging our efforts. Creighton and Vince focused on getting the JAXB, XML, and configuration manager operating, while John worked on setting up the sorting method, making the libraries extensible, and cleaning up code. Vince also took on the task of setting up the email function and timer.

Being the sole team member with extensive web development experience (albeit in ASP.NET), I took on the task of the web UI, with the hopes of just “wiring” the rest of the team members’ efforts. In the end, this resulted in being the Achilles’ heel of our project. More on this later…

All team members were responsible for testing and writing documentation (including wikis).

As in the DueDates-Silver implementations, the team met several times (almost daily) throughout the project cycle; however, most of this occurred during the second week. The main issue with the first week was the occurrence of Thanksgiving. While my teammates celebrated the holiday on the traditional Thursday, my family waited until Saturday to celebrate.

This, coupled with getting to know each other and coordinating our schedules, really impacted our success during the first week. I don’t really blame anyone for this, as it’s just a matter of Life. We met initially the first day the assignment was given and divvied up tasks with the expectation that our initial tasks would be completed when we met again following the holiday. This all went as expected, with everyone completing their assigned tasks, leaving the second week for wiring up the web UI and back-end components.

As mentioned earlier, following this we met at least a couple of hours every day for the remainder of our allotted project time. During this time, everyone worked on their tasks. About the only thing I think we did not do sufficiently was communicate, and that includes me, who had kind of taken on the role of project manager. More on this later…

I don’t know if the Hackystat SICU accurately reflects the entirety of the group’s performance. While I believe it accurately records the time, effort, and productivity of the individual programmers, it has no way of capturing what its sensors aren't designed to record. In this case, I am alluding to the fact that my teammates were doing their parts, but it was really hard for them to do more because they needed to wait for me to complete the wiring of the web UI interface to their back-end components. More on this later…

The Failed-Heist
It’s really difficult to judge, in my opinion, how much of the project we completed. We resolved the back-end implementations for support of an XML configuration file (R1), and provided sorting (R4), filtering (R6), and email alerts (R7). The web application had some of the basic functionality (R2) with all the necessary CSS formatting (R3). (Note: There was no R5 in the requirements.) Unfortunately, R2 was the hinge-pin of the system, and I was never able to complete the wiring up of the back-end components, which frankly drove me to the point of such frustration…for many reasons…but mostly because I develop web applications for a living (just not via Wicket). The major issue was that I never quite wrapped my mind around property models, at least for listview and dataview “controls”. This, in effect, killed our project, largely because I spent way too much time trying to get it to work by myself before asking for help…from my teammates and classmates. In many respects, I felt like I was wasting my teammates’ time, with little for them to do, while I was tortured by Wicket and they were held hostage.
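For the curious, the pattern I never internalized in time looks roughly like this. This is a hedged sketch only: the page class, component ids, and the Holding bean are all hypothetical stand-ins, not our actual DueDates code (raw types are used for compatibility with the Wicket 1.3 era).

```java
import java.util.List;
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.markup.html.list.ListItem;
import org.apache.wicket.markup.html.list.ListView;
import org.apache.wicket.model.PropertyModel;

// Hypothetical sketch of the ListView + PropertyModel pattern that stumped me.
// The matching markup would contain something like:
//   <tr wicket:id="rows"><td wicket:id="title"></td><td wicket:id="due"></td></tr>
public class ResultsPage extends WebPage {
    public ResultsPage(List holdings) {
        add(new ListView("rows", holdings) {
            protected void populateItem(ListItem item) {
                // A PropertyModel resolves the named bean property at render
                // time, so each row re-reads its backing object on render
                // instead of holding a stale copied value.
                item.add(new Label("title", new PropertyModel(item.getModel(), "title")));
                item.add(new Label("due", new PropertyModel(item.getModel(), "dueDate")));
            }
        });
    }
}
```

The part that eluded me was exactly this indirection: the component holds a model that knows how to fetch the value, rather than the value itself.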

So while DueDates 2.0 isn’t a viable production application, all the parts are there…just disconnected…like the body parts of some poor dead soul after an autopsy.

By the time I decided I needed help, there was still sufficient time, but even after speaking with Arthur Shum and reviewing DueDates-Akala’s code, and with my DueDates-Silver partner Robin Raqueno and reviewing DueDates-Ahinahina’s code, the pieces of the puzzle still remained elusive to me. Much appreciation to both Arthur and Robin for their time, support, and guidance.

Also, there was the issue of working with three additional programmers on six requirements in a short time span interrupted by a holiday. The biggest issue here was that there were too many moving parts, which proved difficult for me to manage, especially under the pressure of a failing web UI implementation while still needing the capacity to help my teammates when they ran into issues. But being a good, helpful, and responsible team member is important to me, so I gladly stepped up each time, even returning the favor of assisting Robin by spending an hour showing him the ins and outs of CSS.

Another prominent issue with DueDates-‘Oma’oma’o was that it began with an initial loading of DueDates-Silver’s code, which eventually got hodgepodged with code from other projects. I think this caused a lot of overhead and left the team members not really understanding how the project worked or would work. Atop this, most of the group had not worked with XML and JAXB, so the project suffered from that additional overhead initially as well.

Track Two
If Reservoir Dogs: DueDates 2.0 Edition had another song on its soundtrack it would be Queen’s Under Pressure, which begins:

Pressure! Pushing down on me
Pressing down on you no man ask for
Under pressure


The second aspect of what I learned by working on this project was the amount of pressure (and responsibility) I can currently handle. As you may already know, I work a full-time salaried position as an Information Management Officer at First Hawaiian Bank. At this job I manage about nine applications, as well as several databases and workflow processes (both automated and not), and I am a valued contributor to several other projects falling within my department’s primary responsibility.

The amount of unbridled pressure I have been under the last month has basically zapped, fried, and overloaded my brain. My most prominent application at work is undergoing a conversion to a new platform, a conversion that has suffered a belaboring string of setbacks. In addition, my department’s holiday party, for which I had the luck of being on the planning committee this year, was about to take place. These, along with DueDates 2.0, all had deadlines just days to a week apart.

This resulted in an extreme amount of performance pressure and anxiety to complete all three to the best of my capacity which I was ill-prepared to cope with.

This had a detrimental effect on me and, in the end, on my ability to keep a certain level of “numbness” out of my thinking while trying to grasp the Wicket concepts. The entire experience felt like a log-jam, an overload of information, and my brain just seized up. And if you take on the role of project manager, whether through assertion or organically, a situation like this can bring a team down. In many respects, I believe this is what kept DueDates-‘Oma’oma’o from being successful.

The Aftermath
I learned over the past few weeks that continuous integration is difficult, especially when working with a group of developers who have probably been siloed for the majority of their projects. Secondly, I learned that continuous integration only works if it’s used. I think our group had a few too many cases of code conflicts, because on occasion code was committed before the committer performed an update on their system.

In Reservoir Dogs, Mr. Orange shoots Mr. Blonde moments before he’s about to set the cop, who at this point has been drenched in gasoline, afire. It would appear Mr. Orange has saved the cop’s life, but eventually, the rest of the heist gang arrive and promptly kill the cop…and then, Mr. Orange. The movie ends with everyone lying dead on the floor, either having shot each other or by the cops that have arrived to the scene.

Reservoir Dogs: DueDates 2.0 Edition ends much the same way for DueDates-'Oma'oma'o.

Download your (dysfunctional) copy here.

Reflections
I’m not sure what’s more important…completing a task and learning from that experience, or trying (desperately so) to succeed, still failing, and learning from that experience. “It wasn’t for a lack of trying,” the saying goes. Software engineering is a process, an inexact one at that. Not every project comes in on time, on budget, or fully meeting specs. It’s part of the business…or science…or art…that is software engineering. There’s another saying that can be applied with equal resolve to software engineering…”Hate the [program], not the [programmer].”

MyStack.com (sic): A Wicket WebApp

Posted on Saturday, November 22, 2008

Wicket is a framework for building Java-enabled web applications. If you have done web design in ASP.NET and AJAX, then using Wicket is somewhat similar to that process and very different in other ways. However, if you have ever worked with spaghetti code, such as classic-ASP, or Macromedia’s ColdFusion MX, which I had the displeasure of working with in my ICS 415 course, then Wicket simplifies web application development by separating the business logic from the presentation logic.

I really enjoy working in ASP.NET, so I am usually very critical when using and evaluating another web application framework. What I like about Wicket is that the developer is able to use a code-behind model just as ASP.NET development facilitates. What I don’t like is being required to manually “rebuild” the web page in the Java code-behind by adding page elements, which have to reflect the DOM. The difference is, at least when using Visual Studio, the elements are added to the web page by the code generator, and the developer is then able to directly access each element’s methods and write custom event handlers for its behaviors. For example, in ASP.NET I can write custom code for a button’s click behavior in its Click event handler once it has been added to the web page. In Wicket, by contrast, I first need to manually add the button element to the web page, then manually add it in the Java code-behind page, ensuring that it is added to the same container element it resides in on the web page. Otherwise, using Wicket was an enjoyable experience and roughly on par with using ASP.NET, but worlds apart from using something like Macromedia’s ColdFusion MX.
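To make that double bookkeeping concrete, here is a minimal sketch of the pattern. The page name, component ids, and markup are hypothetical examples, not code from my actual assignment (raw types are used to match the Wicket 1.3 era).

```java
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.form.Button;
import org.apache.wicket.markup.html.form.Form;

// Hypothetical sketch: the Java component tree must mirror the markup.
// HomePage.html would contain:
//   <form wicket:id="saveForm">
//     <input type="submit" wicket:id="save" value="Save"/>
//   </form>
public class HomePage extends WebPage {
    public HomePage() {
        Form form = new Form("saveForm");
        add(form);
        // The Button must be added to the same container (the Form) it
        // lives in on the web page, or Wicket raises a markup exception.
        form.add(new Button("save") {
            public void onSubmit() {
                // Custom click behavior, analogous to an ASP.NET Click handler.
                info("Saved.");
            }
        });
    }
}
```

The `onSubmit` override is Wicket's rough equivalent of wiring a Click event handler, but the hierarchy bookkeeping is entirely manual.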

Hello, Wicket
Getting started was a bit of a task. However, my ICS professor Philip Johnson has provided a superb set of Wicket examples. Using these examples first requires installing the Wicket, Jetty, and SLF4J libraries. Instructions for setting these libraries up can be found on the Wicket examples web site. To build my implementation of an online stack application, I focused mostly on Example 04 for its session state capabilities, and then Example 02 for its form and table demos. Completing the assignment was quite simple due to my familiarity with building web pages and handling session state.
Hello, MyStack.com (sic)
This is a screen shot of my implementation, which uses the name “MyStack.com” and is in no way associated with any official company or web site that uses the same name. (Although, I was not able to reach any URL of the form http://www.mystack.com at the time of this posting.)



My implementation can be downloaded here.

DueDates-Silver v1.2: The Evolution of a Simple Application

Posted on Monday, November 17, 2008

This week my team was tasked with adding two new features to our DueDates project. First, the project was to support the capability of sending its results to an email account the user specified. The second enhancement was to allow scheduling of requesting and outputting the results. Actually, the first criterion specified that support was to be for output to either the console or email—throwing an error if neither was explicitly supplied. However, we decided against requiring explicit invocation to output to the console. We reasoned that if the user is calling the application from the command line, the default output should be to the console, and that explicit specification of email output by the user was sufficient to override the default (console) behavior.
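The default-versus-override rule we settled on can be sketched like this. This is a hypothetical simplification: the `-email` flag name and the helper class are illustrative, not DueDates' actual argument parser.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the output rule described above: console is the
// default, and an explicit email flag overrides it, so no error is needed
// when the user supplies no output target at all.
public class OutputTarget {
    public static String resolve(String[] args) {
        List<String> argList = Arrays.asList(args);
        int i = argList.indexOf("-email");
        if (i >= 0 && i + 1 < argList.size()) {
            // Explicit email specification overrides the console default.
            return "email:" + argList.get(i + 1);
        }
        // No explicit target supplied: default to the console.
        return "console";
    }
}
```

With this rule, `duedates -email user@example.com` mails the results, while a bare `duedates` invocation simply prints them.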
Gremlins, Baci, and Instant Karma
Personally, I seem to be suffering through a patch of difficult times with regards to my computer. In the past month, I have had to re-install my operating system, which meant reconfiguring it once more to support the setup required for my ICS 413 course work. Then, just last week, my laptop’s adapter cord frayed! That left me with no way to keep its battery charged until I forked over $100 for a new adapter. That cost me one day of development during the enhancement to v1.2. Atop this, my commits were not being sent to the SICU correctly, which by all accounts I can only attribute to gremlins. Luckily, my professor was able to resolve getting the commits into the SICU, but the problem of why this is occurring continues to plague me.

Finally, my latest bout of rotten luck came in the form of another wasted (or lost) day of development. While working on setting up the email feature, I kept encountering numerous security issues and NoClassDefFound exceptions for the JavaMail Message and MessagingException classes. After trying to resolve this for about half a day, I committed my files and my project partner updated his build. To our astonishment, the files not only built, but ran correctly! There was an actual email sent to his account. We decided to break for the day, as he had been working on the scheduling issue and had hours of his own issues with that. Considering my recent bout of poor-quality karma, it was likely affecting him as well. The next morning, I called him up, and we resolved the issue over the phone. So, a day lost, but issue fixed. Later that night, several classmates were having similar issues, and I was able to share how Robin and I had resolved it. This solution seemed to work for at least three other classmates: the JavaMail mail.jar file needs to be copied to both the Java JDK lib/ext and the Java JRE lib/ext directories.
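For reference, the fix amounted to something like the following. The paths are illustrative placeholders, not the actual locations on our machines; substitute your own JDK and JRE install directories.

```shell
# Copy JavaMail's mail.jar into both extension directories so the JVM can
# find the Message and MessagingException classes at runtime.
# JDK_HOME and JRE_HOME are placeholders for your actual install paths.
cp mail.jar "$JDK_HOME/jre/lib/ext/"
cp mail.jar "$JRE_HOME/lib/ext/"
```

The ext directories are on the extension classpath by default, which is why no further classpath changes were needed after the copy.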

One Minor Version Older And Deeper In Debt
This iteration had me feeling like Hank Thompson’s poor man in 16 Tons. It seemed the more we did, the more that needed to get done. There were just so many details and minor issues that seemed to plague me that it really impacted our success this time. While we did finish and were able to get everything implemented, the process seemed belabored and fraught with frustration at things that we felt more-or-less to be non-issues. (Non-issue: something that should be trivial, but consumes more time and resources than would seem required to accomplish.) But, all-in-all our project remained stable. While some stats climbed and dipped here and there, we were able to manage them and keep them near or at healthy levels.

Also, we took the opportunity to do many things, including cleaning up code (causing a major code churn event one day that was mostly due to sorting methods and arranging class behaviors more appropriately), and implementing three new classes (Email, Messenger, DueDatesTimerTask) with associated interfaces.

What’s on the Horizon?
It appears DueDates v2.0 will be in development soon and a projected migration to a web based application. From the first day we met, Robin and I intended this to be the targeted platform for our DueDates project. All the “measure twice, cut once” practices we have been using coupled with using TortoiseSVN, Hudson, the HackyStat SICU, JUnit testing, and the many other tools have put our project into a state that should facilitate this migration with moderate ease and resource management.

It’s Called A Head-to-Toe Exam For A Reason

Posted on Friday, November 07, 2008

I was quite excited this week when presented with the idea of a Software ICU (SICU) for gauging the health of a software project based on a number of “vital signs”. Just like an Intensive Care Unit of a hospital, the idea of a SICU is to provide round-the-clock status monitoring of a software project, in this case, the patient. The patient’s vitals consist of measurable statistics that can be compared to a set of agreed-upon quality development norms, as well as additional statistics that offer insight, although somewhat ambiguous and less deterministic, into a project’s health.

Measurable Vitals
• Coverage represents the amount of code a project’s test cases cover.
• Complexity is determined through analysis of branching and looping in the project.
• Coupling reflects the binding of the project’s classes to other classes (external and internal) and the reliance of other classes (external and internal) upon them.
• Churn represents the amount of code change over a period of time.
• Code Issues is determined by the number of compilation issues a project incurs.


Less Deterministic Vitals
• Size (LOC) reflects the size of the project in lines of code.
• Development Time reflects the aggregated development time of project members.
• Commits reflect the number of SVN commits members have made during the period.
• Builds represent the number of builds members have made during the period.
• Tests represent the number of test runs members have made during the period.

I’m a programmer, not a doctor, Jim!
A SICU allows a programmer to put on the hat of a software doctor, or perhaps more precisely, become a software surgeon. This is because a SICU provides project team members the opportunity to focus attention where the patient most needs it. Just as a real ICU monitors the appropriate vital signs for its patients, when one or more of these vitals needs attention, the team of experts is able to focus quickly and exactingly on where the problem is occurring. A SICU provides developers a direct measurement of the health of their project, and where vitals deviate from the accepted norms, they can focus their attention and bring the patient back in line with those norms.
Exam Room 2 Is Open.
If you have been following my blogs over the past few weeks, you should be well aware that my partner and I have been working on the project DueDates-Silver, which queries libraries to return the items checked out by the user to a console window. Our objective this week was to connect our project to the electrodes and other measuring equipment provided by the Hackystat system. Just like an ICU, Hackystat provides all these measurements from a centralized location.

There were many steps to connect all the Hackystat electrodes to the DueDates-Silver project to get monitoring of its vitals on a daily basis. However, Hackystat has an excellent set of tutorials which make the process very clear and easy to follow, implement, and achieve with little hassle or issue.

We’ve got a pulse, doctor!
Being quite anxious to see what the vitals of DueDates-Silver would be, I took the time to get our project set up as quickly as possible. I was very pleased with our initial set of vitals, which continue to reaffirm the “measure twice, cut once” philosophy our project has used from its inception. Our vitals were:

[Screenshot: the initial vital signs for DueDates-Silver in the Hackystat SICU]

It should be noted that the Size (LOC) column is not calculating correctly at this time; it should reflect roughly 2,400 lines of code.

When May I Schedule Your Follow-up Exam?
The information a SICU provides a team of developers is empowering. It allows them to focus on what is important and needs to improve, but more importantly it provides a holistic approach to managing the evolution of the project. While DueDates-Silver fares moderately well by its initial readings, it has room for improvement. I would like to decrease the project’s complexity to the range of 1.0 to 1.5. The coupling is manageable for now, but I would not like to see it increase much. And testing will continue to be a challenge to maintain at 90%, as each new code line, block, method, and class requires new tests to ensure the coverage remains at appropriate levels.

Ladies and Gentlemen, it’s my pleasure to now introduce to you DueDates-Silver v1.1. DueDates v1.1 has come a long way since just two weeks ago when v1.0 was first released. In this release, my project partner, Robin Raqueno, and I strove to make it satisfy even more of the Prime Directives of Open Source Software (OSS) Engineering. (You can read about these directives in my blog JFreeChart: Open Source Charting Solutions.) And, the “measure twice, cut once” methodology first embraced during the development of DueDates v1.0 continued to pay off while enhancing the project to meet the specifications of DueDates v1.1.
OSS Priorities
While we had new specifications to build into the latest release of DueDates, my top priority was having our project add a linchpin in the form of an XML file that held the library data. This XML file would bind the project together in a data-centric way that both Robin and I had envisioned for the project since development of version 1.0. It would also provide excellent OSS support of the prime directives by freeing users and contributing developers to easily expand the project to include their own set of libraries without much effort. By simply following the repository template we provide, they could add their own library information and even remove the libraries that we provide. Peers who reviewed our project were amazed at this, believing it to be a great feature and one that all DueDates projects should incorporate.
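To give a sense of the idea, a repository file along these lines is what users would edit. This is a hypothetical illustration only: the element names and example URLs here are made up, not the actual DueDates-Silver schema.

```xml
<!-- Hypothetical illustration of a library repository file; element names
     and URLs are examples, not the actual DueDates-Silver schema. -->
<libraries>
  <library>
    <name>Hawaii State Public Library</name>
    <catalogUrl>http://example.org/hspl/catalog</catalogUrl>
  </library>
  <library>
    <name>UH Manoa Library</name>
    <catalogUrl>http://example.org/uhm/catalog</catalogUrl>
  </library>
</libraries>
```

A user adds or removes `<library>` entries to control which catalogs DueDates queries, with no code changes required.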
New Specifications
This version of DueDates supports sorting of results by library or by the date items are coming due. A second specification was to support a query of items coming due within a specified number of days. Also, prior to the v1.1 release, we had a peer review of our project from which we were to incorporate any additional suggestions as appropriate and possible.
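The two sort orders and the "due within N days" query might be sketched as follows. This is a minimal illustration; the Item class and its field names are hypothetical, not DueDates-Silver's actual classes.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the v1.1 sorting and filtering specifications;
// class and field names are illustrative only.
public class ResultSorter {
    static class Item {
        final String library;
        final int daysUntilDue;
        Item(String library, int daysUntilDue) {
            this.library = library;
            this.daysUntilDue = daysUntilDue;
        }
    }

    // Sort results alphabetically by library name.
    static void sortByLibrary(List<Item> items) {
        Collections.sort(items, new Comparator<Item>() {
            public int compare(Item a, Item b) {
                return a.library.compareTo(b.library);
            }
        });
    }

    // Sort results by soonest due date first.
    static void sortByDueDate(List<Item> items) {
        Collections.sort(items, new Comparator<Item>() {
            public int compare(Item a, Item b) {
                return a.daysUntilDue - b.daysUntilDue;
            }
        });
    }

    // Keep only the items coming due within the given number of days.
    static List<Item> dueWithin(List<Item> items, int days) {
        List<Item> result = new ArrayList<Item>();
        for (Item item : items) {
            if (item.daysUntilDue <= days) {
                result.add(item);
            }
        }
        return result;
    }
}
```

Keeping each ordering in its own Comparator means adding a third sort key later is a purely additive change.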
Measuring Again, Cutting Again
Robin and I really didn’t have many issues completing this week’s tasks and moving DueDates v1.0 to DueDates v1.1. Prior to adding the new specification, we looked into two aspects of changes to our project. First, we wanted to move to an XML based repository for our library data. We searched for a day on the topic and eventually found this article from Sun Developer Network. Following these instructions, we were able to add this feature to our project without much effort.

Our next goal was to find an appropriate command line parsing software to assist us in that task. We found an absolutely wonderful OSS command line parser from John E. Lloyd called ArgParser. This did everything we needed, along with providing bells and whistles.

After resolving those issues, we set about the task of adding our new specifications. We were basically able to do this over the course of the first few days of development, leaving the remaining time to do our peer review projects and provide additional testing of our system. The biggest change in implementing version 1.1 was to alter the UserLibraryInfo (ULI) class from being a single object class to a collection object class. Version 1.0 allowed ULI to take in only one user-library pair and query results for that single value pair. The latest version allows multiple user-library pairs to be added to the ULI object and all queried at once.
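The shift from a single pair to a collection might look roughly like this. This is a sketch under stated assumptions: the real class performs actual catalog queries, which are stubbed out here as labels.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the v1.1 change: UserLibraryInfo now holds many
// user-library pairs and queries them all at once. The query itself is
// stubbed; the real class talks to each library's catalog.
public class UserLibraryInfo {
    private final List<String[]> pairs = new ArrayList<String[]>();

    // v1.0 took a single user-library pair; v1.1 accepts any number.
    public void addPair(String user, String library) {
        pairs.add(new String[] { user, library });
    }

    // Query every registered pair in one pass (stubbed as labels here).
    public List<String> queryAll() {
        List<String> results = new ArrayList<String>();
        for (String[] pair : pairs) {
            results.add("Checking " + pair[1] + " for user " + pair[0]);
        }
        return results;
    }
}
```

The calling code stays the same whether one pair or ten are registered, which is what made the abstraction worthwhile.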

Having abstracted the problem out even further this week allowed us to make an additional change to the project that allowed the user to specify the library repository (XML data file) that they want to use from the command line. The remaining time we spent testing and updating our User and Developer Guides per the comments from our peer reviewers.

Conclusion
We used continuous integration during the development of DueDates v1.1. We failed a few times, but these were mostly due to changes in our project structure that did not get updated properly when committing the changes to Google Code’s server. We altered our project structure, moved files around, and deleted files. Occasionally, the old file had not been removed from the local repository or the SVN repository which would cause builds to fail. But in general, I think striving to keep a sun on the Hudson server was a motivator to not break the build.

Robin and I continue to work well together, and we even received high praise from our peers this week on the quality and direction our project has taken. We have several ideas for our project once it moves into a GUI world and look forward to that opportunity.

DueDates v1.1: A Pre-Release Review

Posted on Saturday, November 01, 2008

This week I was provided the opportunity to review the DueDates-Yellow project designed by John Ly and John Zhou. Likewise, with a DueDates-Silver v1.1 being released soon, Scheller Sanchez and Tyler Wolff, the project leads for DueDates-Gold, were asked to review our project.
Google Code Reviews
This review process was easier using Google’s project review system, via issue tracking and branching of the project, than creating a wiki page and then needing to locate all the review comments scattered throughout the various code files. While there were a few issues with locating comments under the updated methodology, there were only a few places where one would expect to find those entries.

Reviewing DueDates-Yellow
Overall, the code style and implementation for DueDates-Yellow is fine. However, its biggest issue, in my opinion, is that the problem space has not been abstracted at the level necessary to manage maintenance and ease the implementation of future development objectives. There are several instances of similar branching structures and class definitions which provide the opportunity to abstract the problem space further. I tried to provide a constructive, objective review of DueDates-Yellow without being overly critical of the project’s implementation.
DueDates-Silver v1.1 Review
Scheller and Tyler provided very good comments. Most of their comments dealt with details, but very important ones. Several good catches included the lack of information stating that Java 6 or higher is needed, and that the QA instructions have been missing since v1.0.

For the most part, it seems that Tyler was very impressed with our implementation and project structure. In particular, he thought that all DueDates projects should be required to include an XML repository to hold the library data for the application. I would have to say he paid our project one of the highest compliments one developer can give to another:

”Your groups project is the real deal. I think if the class ever merges to one project we should merge to this version.”

Future Reviews
There’s not much I would do differently in future reviews. The improved review-issue tracking mechanism on Google Code has streamlined the process considerably. The only change would be to have a better understanding of where the review comments go upon completion.

I’ll See Your Two Heads And Will Raise You Seven

Posted on Friday, October 24, 2008

One of the issues with the development of great code is the limited perspective of its developers. Not every programmer knows the ins and outs of a language, is able to harness its every nuance, or realizes the ultimate potential of (or the weaknesses residing within) the code they have written. This is where peer reviews, hopefully, strengthen a project.
An old axiom says “two heads are better than one,” and it definitely held true for my partner and me on the DueDates-Silver project, allowing us to accomplish a very nice implementation. In my blog last week, I discussed how we used a carpenter’s approach of ‘measure twice, cut once’ in our design process.
If two heads are great, then nine heads would surely trump that old axiom. That’s what happened this week. Two weeks ago, my software engineering peers and I were split into nine teams, each working on its own implementation of the DueDates project. This week, each team member was asked to review three peer projects and provide constructive feedback to that team of developers. This gave each project group the opportunity to see six implementations of DueDates in addition to their own design. Most projects were reviewed by six peers, while DueDates-Silver was lucky enough to have seven peers reviewing it.

Meting Assignments
Which projects would be reviewed, and who would review them, was predetermined by our professor; in what capacity each reviewer would be asked to perform within the project review was determined by the project owners.

My review assignments were:

Peer Project    | Team Members
DueDates-Purple | John Ancheta, Daniel Arakaki, Mari-Lee Flestado
DueDates-Red    | Phillip Lau, Creighton Okada
DueDates-Violet | Anthony Du, Vincent Leung


Those assigned to review DueDates-Silver were:

Review Completed   Peer Reviewer      DueDates Team
Yes                John Ancheta       DueDates-Purple
Yes                Anthony Du         DueDates-Violet
Yes                Jung Jeho          DueDates-Green
Yes                Phillip Lau        DueDates-Red
No                 Scheller Sanchez   DueDates-Gold
Yes                Aric West          DueDates-Orange
Yes                John Zhou          DueDates-Yellow


The documentation, classes, and design reviews assigned to each reviewer for the DueDates-Silver project were:

• Documentation – Anthony Du, Jung Jeho
• DueDates.java, UserLibraryInfo.java – Phillip Lau, Aric West
• Library.java, User.java, Book.java – John Zhou
• Testing Suite – John Ancheta
• General Conceptualization/Implementation Structure – Scheller Sanchez

Meeting Assignments
I really enjoyed reviewing the DueDates implementations created by my peers. Many of them had great ideas. For example, Anthony Du had created an ErrorLogger class that parsed the command line arguments that I thought would be very useful in future iterations of our DueDates-Silver project. Other projects did various forms of testing that were interesting, and seeing uses of Java API elements that I was not familiar with was enlightening as well.

Additionally, the comments gained from our reviewers will definitely strengthen the design of DueDates-Silver in version 2.0. For the most part, the recommendations are in regards to minor improvements to the design, such as altering toString() method returns. Others were more beneficial, such as suggestions for improving our testing suite and using String.format() for output to the console window. And then there’s always the comments for improvements to the design that you already know need to be improved, perhaps, even scheduled for version 2.0, but still, someone just has to point it out!
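The String.format() suggestion is worth illustrating. As a minimal sketch (the column names and item data here are hypothetical, not taken from DueDates-Silver), fixed-width format specifiers line up console output neatly:

```java
public class FormatDemo {
  public static void main(String[] args) {
    // Hypothetical checked-out items; library names, titles, and dates are made up.
    String[][] items = {
      {"UH Manoa", "Code Complete 2", "2008-11-01"},
      {"HSL",      "Refactoring",     "2008-11-15"},
    };
    // %-12s left-justifies each field in a fixed-width column.
    System.out.println(String.format("%-12s %-20s %-10s", "Library", "Title", "Due Date"));
    for (String[] item : items) {
      System.out.println(String.format("%-12s %-20s %-10s", item[0], item[1], item[2]));
    }
  }
}
```

Each row comes out the same width, so columns stay aligned no matter how long the individual titles are.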

DueDates v1.0: The Personal Library Checkout Management Tool

Posted on Monday, October 20, 2008
0 commentsAdd comments

In Code Complete 2 author Steve McConnell continually advocates a ‘measure twice, cut once’ philosophy to software engineering. He believes in this philosophy so much that he devotes an entire 37-page chapter explaining the concept and its practice. At its crux ‘measure twice, cut once’ stresses planning and preparing your implementation prior to the actual implementation of it. I really tried embracing this practice in this project.

DueDates 1.0 is a console application allowing a user to access both the University of Hawaii Library (UH) and Hawaii State Library (HSL) systems. By accepting the necessary user credentials from the command line (user id and password), DueDates accesses the user’s account retrieving the checked out items and presents them in the console window.

I was once again joined in this project by my previous partner for the CodeRuler project, Robin Raqueno. (Check out his DueDates blog entry.)

Knowing that this is not the final iteration of DueDates, I wanted the implementation we decided upon to be thought out with the future of DueDates in mind. In fact, this was a formal requirement of our implementation. In ‘measuring’ our implementation, while it had to satisfy our current objective of retrieving checked-out item data from UH, it needed to be robust enough to support migration to a GUI web-based application with a strong potential for added features.

Peer Programming Feedback Loop
Peer programming is supposed to make development on a project faster, higher quality, and more fun. But peer programming also has a feedback loop: it causes projects to become more robust, which requires additional development time, and it becomes more stressful if you care about being responsible to your peer programmer.

Being paired with Robin, who I had worked with previously, really alleviated the stress and time involved in getting to know a new partner and how to work with that partner on a project. It would be nice to think that a pair of programmers could just get together and hack out a program without needing to become acquainted with a peer’s personality, work habits, realm of experience, etc. Robin and I had already gone through this process during the CodeRuler project, so we were able to jump right into our DueDates project.

Another great benefit to being paired with Robin is that we both work extensively with databases, so this allowed us to take a very data-centric approach to our design process. In some ways, I think this added to the robustness and complexity of the implementation of DueDates we decided upon, but we strongly believe this effort of ‘measuring twice’ will pay off in future versions of DueDates. We identified five classes that DueDates would need to support our implementation:

  • DueDates - The main application responsible for process flow and execution of the program.

  • Library - Represents a library with associated attributes to acquire checked out information from its public web site.

  • User - Represents a user with associated attributes to log in to a library’s web site.

  • UserLibraryInfo - Acquires the checked out data for a specific user from the associated library.

  • Book - Represents a book with associated attributes.
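The five classes above can be sketched as a Java skeleton. This is only my rough illustration of how such a design might hang together; the field names, constructors, and the stubbed-out getCheckedOutBooks() are my own inventions, not the DueDates-Silver source:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative skeleton of the five-class design; all members are guesses.
class Library {
  private final String name;
  private final String loginUrl; // where the scraper would log in
  Library(String name, String loginUrl) { this.name = name; this.loginUrl = loginUrl; }
  String getName() { return this.name; }
}

class User {
  private final String id;
  private final String password;
  User(String id, String password) { this.id = id; this.password = password; }
  String getId() { return this.id; }
}

class Book {
  private final String title;
  private final String dueDate;
  Book(String title, String dueDate) { this.title = title; this.dueDate = dueDate; }
  @Override public String toString() { return this.title + " due " + this.dueDate; }
}

class UserLibraryInfo {
  private final User user;
  private final Library library;
  UserLibraryInfo(User user, Library library) { this.user = user; this.library = library; }
  // The real application would retrieve this list from the library's web site;
  // here it is stubbed out to keep the sketch self-contained.
  List<Book> getCheckedOutBooks() { return new ArrayList<Book>(); }
}

public class DueDates {
  public static void main(String[] args) {
    // Process flow: build the objects, then report the checked-out items.
    User user = new User("jdoe", "secret");
    Library uh = new Library("UH", "https://library.example.edu/login");
    UserLibraryInfo info = new UserLibraryInfo(user, uh);
    for (Book book : info.getCheckedOutBooks()) {
      System.out.println(book);
    }
  }
}
```

The point of the separation is that a future GUI or web front end could reuse Library, User, and UserLibraryInfo unchanged, swapping out only the console-oriented DueDates driver.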

Robin and I met daily, 5/7 days face-to-face and 2/7 days over Skype, for at least an hour, typically longer. We spent 3-4 days discussing our implementation and what we expected it to do and be capable of in future versions. We spent 2 days implementing, followed by 2 days testing, creating user and developer documentation, and reviewing our code. Putting the effort up front to ‘measure twice’ allowed us to manage the tasks during the implementation and testing with decreased stress and few ‘tweaks’ to our original class designs.

We split our tasks evenly and equally, both working on all classes and committing updates regularly. I think Robin and I work well together, and I appreciate where he complements my programming weaknesses. Often when I would get stuck or had other responsibilities to deal with (i.e., work), I would let Robin know where I was at, then the next time we’d speak he would have resolved any issue allowing the work flow to continue uninterrupted.

Logistic and Implementation Issues
We suffered several logistical issues during the first several days of working together. These were issues related to working with TortoiseSVN, being in the proper Eclipse workspace, having the proper Eclipse setting in place, and being able to update and commit files. Many of the issues were minor, but just time consuming and frustrating to deal with multiple times over those first days. Once we sorted through these issues, which thankfully Robin was able to resolve, we had no logistical issues during the remainder of the development process.

The same cannot be said for implementation issues, of which we encountered a few. Many were trivial, such as those flagged by CheckStyle. Others were more difficult to resolve, such as needing to make DueDates into a singleton class and several suggestions by PMD that, in my opinion, degraded the code quality.
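For readers unfamiliar with the singleton pattern, here is a generic sketch (not the actual DueDates code): a private constructor, a single static instance, and a static accessor that always hands back that one instance.

```java
// Generic singleton sketch: the private constructor prevents outside
// instantiation, and getInstance() lazily creates the sole instance.
public class SingletonDemo {
  private static SingletonDemo instance;

  private SingletonDemo() {
    // No outside code can call new SingletonDemo().
  }

  public static synchronized SingletonDemo getInstance() {
    if (instance == null) {
      instance = new SingletonDemo();
    }
    return instance;
  }

  public static void main(String[] args) {
    // Both calls return the same object.
    System.out.println(SingletonDemo.getInstance() == SingletonDemo.getInstance());
  }
}
```

A side effect worth noting is that singletons are awkward to unit test, since state persists across tests, which may be part of why this refactoring was painful.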

Probably, the biggest area that could be improved would be testing of our implementation. We were unable to achieve 100% coverage using Emma due to private and void methods, which are not accessible and have no testable returns to test against using JUnit.

DueDates v2.0
So with all this up-front planning we did, will there be a DueDates v2.0? If so, what will it look like?
Robin and I really envision DueDates as a web application providing a robust GUI interface that allows a user to enter library and user information from an interface, add new libraries to retrieve data from, and provide downloadable library repositories allowing users to connect to libraries from additional locations. DueDates v2.0 may include options for notification via email, Twitter, and cellular at a frequency determined by the user. The implementation we decided upon really supports a migration to this format.

Now that you’ve heard about DueDates v1.0, check it out at DueDates-Silver. You will find a User’s Guide, a Developer’s Guide, and JavaDocs for it there. Users may download the executable file from the site, while developers will find the source code available. If you extend DueDates, please let us know; we’d like to know how the process went.

The Pseudo-Realistic Subversion Experience

Posted on Wednesday, October 08, 2008
0 commentsAdd comments

This week provided me the opportunity to get the feel of a true configuration management experience, and the introduction to a new peer programmer, Anthony Du.

The objective here was to experience working on someone else’s project, and also, having someone else work on your project, but with the understanding that we are both team members on each project responsible for the improvement and advancement of the project. I call this a pseudo-realistic experience because the projects both Anthony and I collaborated on are our trivial stack implementations.

Getting Started
Anthony and I had a rough start when we paired up. I had not worked much with Google Code or TortoiseSVN at that point during the week. I had been focusing my attention on the assigned configuration management readings for the week, and had forgone any testing or experimentation with the actual process of subversion control prior to class. Anthony, on the other hand, had been experimenting with trying to host his project remotely. We spent the majority of our lab getting to know each other and experimenting with hosting my project, since we were limited to using my laptop.

I think this actually paid off for us in hindsight, as we met later on Wednesday evening and both seemed much more confident having progressed further with our understanding of the hosting of our projects.

Working Our Projects
Once we both got our projects hosted, I did not encounter any additional obstacles. I made minor changes to Anthony’s build, which failed to pass the verify check due to some tabs in the code which broke the CheckStyle quality assurance test. Anthony added the bin directory to my project, and I was able to check it out and sync my local copy with the changes he had checked in to the remote repository.

Growing as a Developer
I’m confident in my skills as a developer. I’m not saying I’m a great developer, and definitely not a perfect developer. But, I don’t get intimidated by code or put off when approaching a new project. I think I try to approach code objectively and with a sense that whatever I want to attempt should be possible, even if at first not elegant. For many years my personal mantra has been “If there’s a will, there’s a way!” So, why then does my heart beat faster, my adrenaline start pumping, or perhaps I even give a slight cringe at the thought of others working on or reading my code?

I think this is because as a developer putting my code out there for others to see is tantamount to flaying my chest open to expose my beating heart. I would rather have a paper I have written critiqued than have my code criticized. But, I need to become accustomed to this to grow and improve as a developer.

I think this is the most valuable lesson I will learn, having just started the journey, in leaving the single-developer lifestyle behind and joining the world of team project development.

Working with TortoiseSVN and Google Code Projects

Posted on Wednesday, October 08, 2008
2 commentsAdd comments

This week I began working with TortoiseSVN, “The coolest Interface to (Sub)Version Control” according to its web site. TortoiseSVN provides remote access to code files for the management of the collaborative efforts by a developer group working on a project.

The project used to familiarize myself with TortoiseSVN usage was stack-johnson. This project is remotely hosted at Google Code. Stack-Johnson is a basic stack implementation developed by my software engineering professor Philip Johnson at the University of Hawaii at Manoa.

Learning the Ropes
TortoiseSVN simplifies the tasks of subversion control to a great extent versus using command line arguments, so I appreciated not having to work from the MS-DOS shell. After setting up a C:\svn-google directory from which to work with the code repository files for the stack-johnson project, it was a simple task of checking out the project files. Since TortoiseSVN is a GUI application accessed directly from the Microsoft Explorer short-cut menu, it was just a matter of selecting SVN Checkout... from the menu. Completing the checkout requires the user to enter his username and password, which Google Code provides on the user’s Profile > Settings tab.

The first step any contributing developer wants to take after checking out project files is a check of the build status. In this case, I ran Ant against the verify.build.xml file to ensure the build was still viable after the updates other developers had been contributing to the project.

To understand the edit and check-in aspect of subversion control, I made a minor change to the project by restating the getIterator() method declaration in class ClearStack. It was a trivial change, but it allowed me to see some of the visual cues TortoiseSVN provides to developers. Upon checkout of project files, TortoiseSVN flags the files with a green checkmark icon in Microsoft Explorer. Once a file has been edited by a developer, that icon changes to a red exclamation point. This visually identifies the files or directories the developer needs to check in to the repository.

Checking files back in is a simple task as well in TortoiseSVN. Again from the short-cut menu provided by Microsoft Explorer, I selected SVN Commit... to upload the changed ClearStack.java file to the stack-johnson remote repository. This process also prompts the user for his username and password; however, the user has the option of having his password retained by TortoiseSVN, eliminating the task of needing to retrieve the unique and random password assigned by Google Code.

On My Own
With a comfortable feel from the successful use of TortoiseSVN’s basic operations and familiarity with Google Code’s remote project hosting site, I decided to embark on my own. If you’ve been keeping up with my blogs, you should know that I have been working on a similar project to stack-johnson. That project is my stack-reeves project which I have been working on over the past few weeks as I have journeyed through the exploration of coverage testing and manual and automated quality assurance methodologies.

Setting up a remotely hosted project on Google Code is not in itself a difficult task; however, there are some pitfalls to watch out for. I had initially gone through the steps of setting up my stack-reeves project. This had gone smoothly until I decided to reset my project from the Source tab, which caused all the subdirectories to be deleted from the project. I had not yet uploaded my project files, but the branches, tags, and trunk subdirectories were gone. There was no direct way to recreate the subdirectories or undo the operation, so I was left with only an svn directory.

This wasn’t at all what I had wanted, and not knowing for certain whether recreating the accidentally deleted subdirectories in the local repository I would be uploading to Google Code would be adequate, I decided to delete the project and start anew. Well, this only made matters worse. First, projects are only flagged for deletion, and that deletion is scheduled for some undetermined future date and time. This meant that even if I had wanted to recreate the stack-reeves project, I would need to wait for the personnel at Google Code to drop the project first.

I did some quick searches in the help function provided by Google Code, only to discover that even once a project is deleted (by Google Code personnel), the project name remains unavailable. I found it completely frustrating that not only was I unable to delete the project as the project owner, but I also wouldn’t be able to reuse the project name once it actually gets dropped from the hosting site.

I went through the process of creating another new project, stack-ronnreeves, and ensured I made none of the mistakes that led to my befuddlement in the first attempt. The stack-ronnreeves project can be accessed here.

Conclusion
Overall, there were no major challenges to using TortoiseSVN or Google Code. The GUI interface and integration of TortoiseSVN with Microsoft Explorer makes it a trivial learning task for experienced users of the Microsoft Windows OS. Google Code is adequate for a free, open source project hosting site, but it appears to me that one of the main challenges it faces is granting project owners true administrative rights.

The Downside of Coverage Testing

Posted on Wednesday, October 01, 2008
0 commentsAdd comments

This is part two of a two part discussion on Coverage Testing. The complement to this discussion is The Upside of Coverage Testing. These discussions are follow-ups from my blog Stable Builds: Using Ant and QA Tools.

Coverage tools are good, but they aren’t perfect. Just as no one tool can be used for every job, the same can be said of coverage testing tools. Coverage testing ensures that every class, method, block, and line of code has been reached, but that does not guarantee that each has been tested appropriately or adequately.

Sometimes You Have to Break Code to Fix Code
To prove the point, I decided to introduce a bug into the fully coverage-tested Stack implementation. The bug I introduce isn’t one that would typically go unnoticed in a normal development cycle, but nonetheless it demonstrates that coverage testing has some serious weaknesses.

The bug I introduced was the addition of a popAll() method to clear an existing stack of all objects that had been placed on to it. Since this implementation of a stack uses a List as its underlying implementation, a properly functioning popAll() method would look like:
  public void popAll() {
    this.elements.clear();
  }

The one I added to the Stack implementation looked like:
  public void popAll() {
    this.elements.remove(0);
  }

Not only does the new implementation make the mistake of not clearing all the objects from the stack, it compounds the problem by removing the first object placed on the stack. The latter violation is the worse of the two, because a stack by definition is a last in/first out (LIFO) abstract data type. Whimsically removing the first item placed onto the stack changes the definition of what a stack is.

Resetting Splints
Okay, so at this point you should be asking yourself: won’t the coverage tool just indicate that there is an error with this erroneous implementation? Well, that answer depends on how well your test cases are written.

For example, I can maintain 100% coverage testing of this code by adding just three new lines to my test suite method testNormalOperation():
  stack.push(one);
  stack.popAll();
  assertEquals("Testing stack popAll is clear", 0, stack.toArray().length);

Yet, the addition of a new test testPopAll() to the test suite clearly demonstrates that the JUnit build process will fail:
  public void testPopAll() throws EmptyStackException {
    Stack stack = new Stack();
    stack.push(one);
    stack.push(two);
    stack.push(three);
    stack.popAll();
    assertEquals("Testing stack popAll is clear", 0, stack.toArray().length);
  }

Aftermath
Probably the worst aspect of encountering an unexpected or difficult-to-resolve bug in your code is believing you have tested for just such an event, only to later find out that such a bug exists in your project. As I stated in my earlier blog, any testing is better than none. But testing, especially coverage testing, has its weaknesses. Coverage testing assists a developer in reaching every line of code in an attempt to assure him that all of his code has been tested. However, there are many problems with believing that just reaching a line is an adequate test of the line. In my example, it obviously fails, but worse, it redefines what a stack is, which is most certainly not what a developer would want to do. Other places where coverage testing fails are lines that contain multiple statements (e.g., x++; y = x;), or branching statements where not all conditions are tested (e.g., if ((x == 1) || (y != x))). A coverage tool is part of a developer’s first aid kit in code triage, but the developer has to remember that it’s not the entire kit.

My Stack implementation with full coverage testing, plus introduced bug: Stack-JUnit-Lab-Reeves-6.0.630.zip

The Upside of Coverage Testing

Posted on Wednesday, October 01, 2008
0 commentsAdd comments

This is part one of a two part discussion on Coverage Testing. The complement to this discussion is The Downside of Coverage Testing. These discussions are follow-ups from my blog Stable Builds: Using Ant and QA Tools.

Coverage testing tools, such as Emma, provide a developer the means to test all lines, conditional branching, and routine entry of a unit of code. Unit here denotes a chunk of code that is exercised as a unit—a method, a class, a component, or a system. The goal is to obtain 100% coverage of the targeted unit’s code. The benefit of coverage testing is the assurance that every line of the unit has been tested and has passed that testing. As the complexity of a project increases, especially in the development of an entire system, coverage plays a vital role in managing the development lifecycle by ensuring that the integration of new units is thoroughly tested and does not break the existing build.

To get my feet wet, so to speak, I decided to do coverage testing of the Stack implementation from one of my earlier blogs (noted above). The coverage tool being used is Emma, which according to its User Guide “is essential for detecting dead code and verifying which parts of your application are actually exercised by your test suite and interactive use.” While the Stack implementation is a trivial unit for testing, it should suffice in demonstrating the upside of coverage testing.

Baseline
One must first assess where he stands, get a baseline, when testing, performing maintenance, or instituting a repair. A run of Emma against the existing Stack implementation indicated that it did not have complete coverage. While all of the classes in the implementation had been tested, not every method (73%), code block (69%), or line (75%) had been.

First Aid
A coverage tool, and all Quality Assurance tools for that matter, is part of a developer’s first aid kit in code triage. So, I set about trying to bandage the Stack implementation to reach full coverage of its methods, code blocks, and lines. Emma provides a summary-level HTML page that displays the code, highlighting the untested portions. For the Stack implementation, only the EmptyStackException class had complete coverage, while the Stack and ClearStack classes each had deficient coverage of two of their methods. The Stack required further testing of its top() and toString() methods, while ClearStack had no testing of its isEmpty() and getTop() methods.

Setting Splints
To achieve 100% coverage required the addition of some test cases within the Stack implementation. Since Emma had indicated where to focus my attention, it was an easy task to go about setting splints to mend the broken coverage. Tackling the ClearStack methods, I added the new tests testTopObject(), testEmptyStack(), and testNonEmptyStack. These tests covered reaching the getTop() method and its lines of code, while the other two tests reached the isEmpty() method and tested it for both true and false returns.

Covering the untested Stack methods was just as easy. I added testIllegalTop() to test for the throw of an EmptyStackException when calling top() on an empty stack. The remaining coverage was provided by the creation of a testToString() test.
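The idea behind testIllegalTop() can be sketched in plain Java. Since neither JUnit nor the lab’s actual Stack class is reproduced here, this sketch uses a minimal stand-in stack and exception of my own; only the exercised-the-exceptional-path pattern carries over:

```java
import java.util.ArrayList;
import java.util.List;

public class IllegalTopDemo {
  // Minimal stand-in for the lab's EmptyStackException.
  static class EmptyStackException extends Exception { }

  // Minimal stand-in stack: top() on an empty stack must throw.
  static class MiniStack {
    private final List<Object> elements = new ArrayList<Object>();
    void push(Object o) { this.elements.add(o); }
    Object top() throws EmptyStackException {
      if (this.elements.isEmpty()) {
        throw new EmptyStackException();
      }
      return this.elements.get(this.elements.size() - 1);
    }
  }

  // Mirrors what testIllegalTop() checks: true only if top() on an
  // empty stack throws the expected exception.
  static boolean illegalTopThrows() {
    try {
      new MiniStack().top();
      return false; // No exception thrown: the test should fail.
    } catch (EmptyStackException e) {
      return true;  // Expected path: the test passes.
    }
  }

  public static void main(String[] args) {
    System.out.println(illegalTopThrows());
  }
}
```

Forcing the exceptional path to run is exactly what lifts coverage of a throw statement that normal (non-empty) usage never reaches.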

Coding-After or Bug Control Pills
However you swallow it, coverage testing is helpful in obtaining stable code. Testing of any type is better protection against unplanned bugs than no testing at all. Coverage testing ensures that code doesn’t lie fallow within an implementation—all code is shallow—and that it has at a minimum been looked at at least once beyond the act of hacking it out. Coverage tools are simple to use once incorporated into a developer’s routine, and become insurance policies allaying concerns about integrating new code with old code. While the Stack implementation may have been trivial, there was an extensive amount of it that wasn’t being tested. Had this been a critical (life-or-death) system, the untested code could very well have been a lawsuit waiting to happen. Testing may not be able to guarantee that code is 100% error free; it is, however, due diligence on the part of the developer in aspiring to that end.

My Stack implementation with full coverage testing: Stack-Reeves-6.0.630.zip

For vs. For-Each Construct

Posted on Tuesday, September 30, 2008
1 commentAdd comments

In his blog Automated QA vs. Manual QA: CodeRuler Revisited, Arthur Shum states that I 'identified a false error' and that use of the for construct was necessary to provide access to the indices for the implementation to function as desired. I stand by my observation that the for statements I cited in Peer Review: Adhering to Java Programming Standards can be replaced with for-each constructs and still maintain the desired results:

Case 1 – Iterating over myKnights (Line 113):

for construct:
  for (int i = 0; i < numOfKnights; i++) {
    if (i < numOfKnights * .2) {
      team1.add(myKnights[i]);
    }
    else {
      team2.add(myKnights[i]);
    }
  }

for-each construct:
  for (IKnight k : myKnights) {
    // Assign one-fifth of knights to team 1
    if (team1.size() < numOfKnights * .2) {
      team1.add(k);
    }
    // Assign remaining knights to team 2
    else {
      team2.add(k);
    }
  }

Case 2 – Iterating over myPeasants (Line 189):
This example still requires knowledge of the peasantIndex variable, but it has been reduced to a counter variable used in only two lines of code outside of its declaration and increment statements. I have commented out the code I replaced in order to use the for-each construct:

  int peasantIndex = 0;
  // for (int peasantIndex = 0; peasantIndex < myPeasants.length; peasantIndex++) {
  for (IPeasant p : myPeasants) {
    directions.clear();
    int curX = p.getX(); // myPeasants[peasantIndex].getX();
    int curY = p.getY(); // myPeasants[peasantIndex].getY();

    for (int i = 1; i <= 8; i++) {
      possibleNextPosition = World.getPositionAfterMove(curX, curY, i);
      if ((possibleNextPosition != null)
          && (World.getLandOwner(possibleNextPosition.x, possibleNextPosition.y) != this)
          && (World.getObjectAt(possibleNextPosition.x, possibleNextPosition.y) == null)) {
        directions.add(i);
      }
    }
    if (directions.size() > 0) {
      move(p, directions.get(rand.nextInt(directions.size())));
      // move(myPeasants[peasantIndex], directions.get(rand.nextInt(directions.size())));
    }
    else if (enemyPeasants.length > 0) {
      // Splits up the ruler's peasants to move toward different enemy peasants in search of land.
      // Uses modulo so that it does not matter if there are fewer or more enemy peasants.
      int x = enemyPeasants[peasantIndex % enemyPeasants.length].getX();
      int y = enemyPeasants[peasantIndex % enemyPeasants.length].getY();
      move(p, p.getDirectionTo(x, y));
      // move(myPeasants[peasantIndex], myPeasants[peasantIndex].getDirectionTo(x, y));
    }
    else if (enemyCastles.length > 0) {
      int r = rand.nextInt(enemyCastles.length);
      int x = enemyCastles[r].getX();
      int y = enemyCastles[r].getY();
      move(p, p.getDirectionTo(x, y));
      // move(myPeasants[peasantIndex], myPeasants[peasantIndex].getDirectionTo(x, y));
    }
    else {
      move(p, rand.nextInt(8) + 1);
      // move(myPeasants[peasantIndex], rand.nextInt(8) + 1);
    }
    peasantIndex++;
  }

When I first started thinking about this topic, I perceived the relationship between manual and automated quality assurance methodologies as an either/or relationship. I believed one methodology had to be better suited than the other, and of course the tendency would be to favor automation tools over a manual process. However, this is a fundamentally flawed belief.
“Manual and automated quality assurance methodologies are integral components
with each complementing the other in the creation of high quality, stable solutions.”


The longer I thought on the topic, the more I came to the realization that each has its strengths complementing the weaknesses the other methodology exhibits. This becomes evident in analyzing the overlaps and disparity between peer reviews and the results of running quality assurance tools (QATs) against the CodeRuler implementation from my blog entry “Lessons Learned From CodeRuler”. The peer reviews associated with this analysis are provided by
• John Ly, CodeRuler Review
• Creighton Okada, CodeRuler Review

Analysis
John and Creighton succeeded in discovering six uniquely occurring error types, while the QATs located nine. Of these, only three errors were identified by both the peers and the QATs—lines exceeding 100 characters, usage of a wildcard import statement, and use of tabs versus spaces. These errors are violations of the programming styling specifications.

The remaining peer-identified errors relate to poor choices in variable naming and opportunities to clarify the documentation of the implementation. The QATs identified four additional deviations from the prescribed programming styling specifications (Checkstyle) and two from good programming practices (PMD). The stylistic issues ranged from missing spacing around braces and operands to missing documentation blocks. The latter problems were due to loose coupling (usage of ArrayList as a declared type) [4 occurrences] and confusing ternaries (using not-equal as the leading condition) [9 occurrences]. There were no issues in the CodeRuler implementation recognized through “bug pattern” analysis (FindBugs).
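The confusing-ternary complaint is easy to show with a generic sketch of my own (not the CodeRuler source): a negated leading condition forces the reader to flip the branches mentally, while the positive form reads straight through.

```java
public class TernaryDemo {
  // PMD flags a negated test in a ternary as harder to read...
  static String confusing(int x) {
    return (x != 0) ? "nonzero" : "zero";
  }

  // ...and prefers the positive condition with the branches swapped.
  static String clearer(int x) {
    return (x == 0) ? "zero" : "nonzero";
  }

  public static void main(String[] args) {
    // Both forms behave identically; only readability differs.
    System.out.println(confusing(0) + " " + clearer(0));
  }
}
```

Since both forms compile to the same behavior, this is purely a readability rule, which is exactly the kind of check a tool can apply consistently across a whole codebase.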

Conclusion
In this analysis, QATs have the advantage over peers in identifying the types of coding errors they specialize in, as well as in identifying all occurrences of those errors. QATs are built to expressly and comprehensively locate all deviations from standard programming practices (stylistic and pattern analysis), and they can do so quickly, repetitively, and reliably. Leaving reviews of code to humans, by contrast, is likely to be slow and unreliable, especially in subsequent reviews of already-reviewed code, where the human reader may read right over errors.

However, manual reviews by humans can accomplish goals that QATs cannot. These include writing concise, clear, and appropriately detailed documentation. A manual process can also ensure that the conceptual model is reflected appropriately and accurately within the implementation through proper naming of variables and member fields, that the implementation and the documentation are in sync with one another, and that the code strives to be readable and self-documenting: all tasks that QATs are not capable of performing. This defines the complementary relationship between manual and automated quality assurance methodologies.

Stable Builds: Using Ant and QA Tools

Posted on Friday, September 19, 2008
0 commentsAdd comments

“Even the ant has his bite.” — Turkish Proverb

I doubt the Turks had Apache’s Ant in mind when they popularized this proverb. But I definitely felt Ant’s bite when working through the tasks outlined in a recent software engineering assignment. The assignment consisted of three tasks, each comprising several steps—you know, in that way college assignments seem to be constructed.
1, 2... What Do I Do?
Okay, okay, so maybe it wasn't as bad as being bitten by a bullet ant, but getting through the first task definitely stung a little. Its first step was to install Ant along with several Quality Assurance (QA) tools: JUnit, PMD, FindBugs, and Checkstyle. So, first things first, install Ant. I decompressed the zip file, moved the files to the appropriate directory location, and then tried to build Ant. Several failed attempts led me to peruse the manual (%ANT_HOME%\docs\manual\install.html). I followed the instructions under Setup and under Windows and OS/2, but Ant still failed to build successfully. I needed a better clue. I needed to know what to do.
3, 4... Chores Galore
That initial sting could have been avoided, at least in part, had I read the Ant build error messages more closely: the build needed JUnit first. Okay, not a problem, just install the QA tools first. Checkstyle, FindBugs, and PMD all installed easily enough. (Note: some of the decompressed archives added an extra root folder that I had to remove in order for the QA tool to build correctly.) After that, a quick trip to the Windows Control Panel > System > Advanced System Settings > Environment Variables dialog to create or edit the system variables:

• ANT_HOME = C:\dev-tools\ant-1.7.1
• CHECKSTYLE_HOME = C:\dev-tools\checkstyle-5.0-beta01
• CLASSPATH = .;
• FINDBUGS_HOME = C:\dev-tools\findbugs-1.3.5
• JUNIT_HOME = C:\dev-tools\junit4.5
• Path = Added %ANT_HOME%\bin;
• PMD_HOME = C:\dev-tools\pmd-bin-4.2.3
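For readers on a Unix-like system, the equivalent setup is a handful of export lines. The paths below simply mirror my Windows layout and are assumptions; adjust them to wherever you actually unpacked each tool:

```shell
# Assumed install locations -- adjust to match your own machine.
export ANT_HOME="$HOME/dev-tools/apache-ant-1.7.1"
export CHECKSTYLE_HOME="$HOME/dev-tools/checkstyle-5.0-beta01"
export FINDBUGS_HOME="$HOME/dev-tools/findbugs-1.3.5"
export JUNIT_HOME="$HOME/dev-tools/junit4.5"
export PMD_HOME="$HOME/dev-tools/pmd-bin-4.2.3"

# Put Ant's launcher scripts at the front of the search path.
export PATH="$ANT_HOME/bin:$PATH"

# Sanity check: the first PATH entry should now be Ant's bin directory.
echo "$PATH" | cut -d: -f1
```

These lines belong in a shell startup file (such as ~/.bashrc) so the variables survive across sessions.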
JUnit took a little more work to build successfully. After consulting the Installation section of the JUnit manual (%JUNIT_HOME%\README.html), I entered the commands listed there and the build finally worked. Now I was ready to attempt another Ant build, and this time it succeeded. I downloaded the necessary assignment files and completed the remaining chores in the first task.

Revision: The reference to the JUnit jar file has been removed from the CLASSPATH environment variable. It was pointed out that this was a suboptimal strategy for maintaining the prime directives of software engineering. See my blog entry JFreeChart: Open Source Charting Solutions for more on the prime directives.

5, 6... Stack Gets Fixed
The second task was basically a repeat of the first (minus the lengthy installation step). It required making a copy of the downloaded assignment files, a simple Stack implementation seeded with a handful of errors. These errors were left in the source files on purpose, as the objective of the assignment was to become familiar with using QA tools to improve the stability of Java projects.

The third and final task picked up from there, requiring that the errors identified by the various QA tools be corrected. Once all the errors had been appropriately addressed, the corrected files were to be compressed and posted to the web for review. My compressed, corrected Stack implementation is available here:

• Stack-Reeves-6.0.919.zip

7, 8... E-LIM-I-NATE!
The biggest issue I had with using the QA tools was my assumption that I could run all of them first, then go back and review the errors each had identified. Unfortunately, some QA tools clean and rebuild the \build directory when they run, erasing the reports of any tool run before them. It took a few attempts before I realized what was happening. Once I figured it out, it was simply a matter of correcting the errors identified by each QA tool before running the next one.

Most of the errors were simple to eliminate: move a { to the line above, add final to the declaration of a member field, and other errors of that ilk. The one that caused me the most frustration was deciding what to do with an empty catch block in the testIllegalPop() method of the TestStack class. My first attempt was to throw an EmptyStackException from the catch block, but this caused the build's JUnit task to fail. Next I tried calling fail() there, but that too failed the JUnit task. Finally, I realized the problem: the catch block is actually the valid case of the method. Since the method tests an illegal pop, the try block is supposed to drive execution into the catch block. Because creating a log file was too much overhead for this simple assignment, I settled on a System.out.println() call to report the successful catch of the illegal pop attempt to the shell. Eliminating this last error allowed all QA tools to pass the Stack-Reeves implementation. The verify task completed successfully, and I ran the dist task to create the compressed Stack-Reeves files.
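For illustration, here is a minimal, self-contained sketch of the pattern, using java.util.Stack in place of the assignment's Stack class (the class and method names here are my own, not the assignment's): the catch block is the success path, and the println reports it.

```java
import java.util.EmptyStackException;
import java.util.Stack;

public class IllegalPopDemo {

  /** Pops from an empty stack; returns "caught" when the expected exception fires. */
  static String attemptIllegalPop() {
    Stack<String> stack = new Stack<String>();
    try {
      stack.pop();              // illegal: the stack is empty
      return "no exception";    // would indicate a broken Stack implementation
    } catch (EmptyStackException e) {
      // The exception IS the expected outcome of this test.
      return "caught";
    }
  }

  public static void main(String[] args) {
    System.out.println("Illegal pop result: " + attemptIllegalPop());
  }
}
```

In JUnit 4 the same intent can be declared more elegantly with @Test(expected = EmptyStackException.class), letting the framework fail the test automatically if no exception is thrown.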

Revision: “Bogus” was the word used to describe my System.out.println() fix for the empty catch block. Admittedly, it was a bogus solution, chosen to resolve the issue with minimal effort. I have therefore gone back and resolved the issue using a more elegant, production-appropriate technique as described in the article An early look at JUnit 4.

9, 10... Fini. Fin.
Overall, I am impressed with Ant. While it may have been challenging to set up initially, its power became evident quickly during the build process. It is easy to see what the lowly ant and Apache's Ant have in common: each is a tiny implementation (one biological, one digital) that does far more than it seems capable of. Nature's ant can lift 100 times its body weight; Apache's Ant can verify Java projects containing 100 times (and more) the lines of its build.xml file. In addition, this being my first experience with these QA tools, I am truly amazed. It is great to have a set of standards, very close to ones I have used personally for several years, that can be verified so simply and comprehensively to create stable project builds.

Peer Review: Adhering to Java Programming Standards

Posted on Thursday, September 11, 2008

In Lessons Learned From CodeRuler, I explored how that project influenced my understanding of the Java language, the Eclipse IDE, and especially, peer programming. In this article I continue that exploration of peer programming by examining and evaluating the CodeRuler implementation from a group of peers in my ICS 413 course this semester. The main aspect of the evaluation is the adherence to Java programming standards defined by these sources:

• The Elements of Java Style by Vermeulen, Ambler, Bumgardner, Metz, Misfeldt, Shur, & Thompson
• ICS 413 Supplemental Standards from Philip Johnson
The Elements of Java Style (EJS) had been informally introduced to the class prior to the CodeRuler assignment, but had not yet been mandated as the formal coding style to which course work should adhere. The latter standards, referred to as ICS-SE-Java and ICS-SE-Eclipse, were introduced to the class after completion of the CodeRuler project. Sequencing the assignments this way gives each developer in the class the opportunity to discover his or her baseline adherence to the expected standards through code reviews of peers' course work. By spotting deviations from the standards in our peers' code, we learn to look for the same deviations in our own programming. Additionally, this assignment begins our exposure to the expectations that will be placed on us as programmers entering the workforce, while laying a foundation of programming best practices and tools we will need for our eventual rise into management and executive positions.

Peers Being Reviewed
For the purpose of this review, I will be examining the CodeRuler implementation provided by

• Arthur Shum (Check out his CodeRuler blog entry.)
• Aric West (Check out his CodeRuler blog entry.)
Their implementation may be downloaded directly from either’s blog or from here:

• aricwest-atshum.zip
The Good
To say that this implementation provides an effective solution to the CodeRuler challenge would be an understatement. Even its designers describe it as "aggressive"; ruthless is probably the better word. After watching it conquer ruler after ruler within moments of game play, even in 3- to 6-ruler competitions, I had difficulty finding a better-suited description of the West-Shum implementation. Their ruler dispatches opposing rulers and their forces in very short order. The salt in the wound for opposing rulers is West-Shum's boxing in of enemy peasants with their own, leaving the opposing peasants unable to move, and this on top of having already decimated all opposing offensive and defensive units.

The supporting JavaDocs are in order with short, concise descriptions of how to use their implementation. The documentation is easy to read, well-organized, and defines the class and implementation features appropriately.

For the most part, the class definition (MyRuler.java) is organized and arranged so that readability is reinforced.

The Bad
The following table displays the violations of the EJS, ICS-SE-Java, and ICS-SE-Eclipse standards occurring within the West-Shum implementation of CodeRuler.

Source Code File:  MyRuler.java

Lines            | Violation                  | Comments
-                | ICS-SE-Java-1              | Code not contained in package hierarchy edu.hawaii.
6                | ICS-SE-Java-2              | Wildcard * used in import statement (com.ibm.ruler.*).
113, 189         | ICS-SE-Java-9              | Use for-each control structure to iterate over myKnights and myPeasants arrays.
9, 86, 87, *     | ICS-SE-Eclipse-2 / EJS-6   | Keep lines under 100 characters by breaking them into shorter lines.
44, 45, 49, *    | EJS-9                      | Use descriptive names for variables to convey implementation concepts.
114, 154, 217, * | EJS-9                      | Create class- or method-level variables in place of "magic" numbers.
102, 103, 104, * | EJS-29                     | Use fully qualified field variable names by prefixing with the this keyword.
99, 185          | EJS-69                     | Design smaller methods by refactoring similar code blocks out and having each do only one task.
* Denotes this violation occurs in additional lines of code not indicated.
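Two of the table's entries, the for-each rule (ICS-SE-Java-9) and the "magic number" rule (EJS-9), can be illustrated with a small self-contained sketch. The class, method, and constant names here are my own invention, not part of the West-Shum code:

```java
public class StyleFixes {

  // EJS-9: name the "magic" number once instead of scattering 100 through the code.
  private static final int MAX_LINE_LENGTH = 100;

  // ICS-SE-Java-9: iterate over the array with for-each rather than an index loop.
  static int countLongLines(String[] lines) {
    int count = 0;
    for (String line : lines) {
      if (line.length() > MAX_LINE_LENGTH) {
        count++;
      }
    }
    return count;
  }

  public static void main(String[] args) {
    // Build one line that is clearly over the limit (120 'x' characters).
    String longLine = new String(new char[120]).replace('\0', 'x');
    String[] sample = { "short", longLine };
    System.out.println(countLongLines(sample));
  }
}
```

Naming the limit once means a future change to the standard touches a single line, and the for-each form removes the off-by-one opportunities an index loop invites.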
And What I’d Suggest
By and large, these violations of the standards are minor issues that can easily be addressed by such adept programmers. Their ability to program well is evident in the success of their solution, so addressing these minor infractions will require little more than an awareness of them. However, I have two observations regarding the implementation that I personally would change: 1) keep the methods small and focused on a single task, and 2) make more appropriate use of member fields. Regarding the first observation, the orderSubjects method performs six tasks within its body: gathering information, creating knight teams, prioritizing enemies, attacking enemies, ordering peasants, and controlling castle production. I would suggest breaking most of these code blocks into separate methods, so that orderSubjects reads similar to
    public void orderSubjects(int lastMoveTime) {
      // Gather information
      this.myPeasants = getPeasants();
      this.myKnights = getKnights();
      this.myCastles = getCastles();
      this.enemyCastles = World.getOtherCastles();
      this.enemyKnights = World.getOtherKnights();
      this.enemyPeasants = World.getOtherPeasants();

      createTeams();
      prioritizeEnemies();
      attackEnemies();
      orderPeasants();
      controlProduction();
    }
Of course, this is just my personal preference, but I favor this arrangement for its readability and for the ability to focus on the specific components of the larger method.

The second observation is that some variables seem to be unnecessarily long-lived. For example, several member fields (e.g., myCastles, myKnights, myPeasants) are created and then reinitialized on each call of the orderSubjects method. They are then passed as method parameters throughout the rest of the implementation, rather than being accessed directly. There is one instance where the myKnights member field is referenced directly, but it could just as easily have been passed as a parameter by the calling method. I see no benefit to making these member fields, since they are not used in that capacity within the implementation. Personally, I would make them method-level variables and pass them as parameters from the orderSubjects method as needed.
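As a sketch of that second suggestion, with hypothetical names since the real CodeRuler types are not available here, the subject collections can live as locals inside the per-turn method and be passed down as parameters instead of being stored in member fields:

```java
import java.util.Arrays;
import java.util.List;

public class RulerSketch {

  // Helper receives the data as a parameter; it holds no state of its own.
  static int countIdle(List<String> peasants) {
    int idle = 0;
    for (String peasant : peasants) {
      if (peasant.startsWith("idle")) {
        idle++;
      }
    }
    return idle;
  }

  public static void main(String[] args) {
    // Local variable scoped to this call, not a long-lived member field.
    List<String> peasants = Arrays.asList("idle-1", "busy-2", "idle-3");
    System.out.println(countIdle(peasants));
  }
}
```

Scoping the data's lifetime to a single call makes it obvious that nothing depends on stale values left over from a previous turn.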
