Javadoc

In our current project there is a guideline that all of the public API must have Javadoc. I disagree with this guideline. Our project does not have a public API: we’re the only team working on this codebase, and no other team is allowed to use it. Public classes and methods are not the only criterion for what counts as a public API.

So I asked: how often is the Javadoc actually read, versus just looking at the code to see what a method does? Personally, I never read the Javadoc, and I suspect I’m not alone in this.

So why don’t I read Javadoc in our project? Because I don’t trust it. We’re using IntelliJ IDEA Ultimate, which also flags errors in Javadoc, and it finds lots and lots of them.

When you’re working on the code, you’re usually not updating the accompanying Javadoc. The reason is the same as why you don’t write unittests – unless you’re using TDD: you’re solving a problem, and that is where your attention goes.

Javadoc is useful when you’re writing a library, and actually want third parties to use it. They need this form of documentation.

As for our project, there are other ways of documenting the code. You can (and must, in my opinion) factor the code to be as clear as possible. Good naming practices are a good step towards self-documenting code.

Another way of documenting is writing clear unittests. You don’t have to write them first, though that is often the easiest way to get them written. As long as you run them, you have a form of documentation that fails when it becomes incorrect.

For the same reason, you can use the assert keyword. Assertions are disabled by default; you enable them with a JVM runtime argument. This means assert statements are not executed in production – but neither is Javadoc. During development, however, asserts can provide a treasure of information and verify the correctness of your program. The assert keyword IS documentation.
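As a minimal illustration (the class and the rule it checks are made up for this example), an assert states an assumption right where it matters:

public class Discounts {

    // Documents and checks the assumption that a discount is a fraction between 0 and 1.
    public double applyDiscount(double price, double discount) {
        assert discount >= 0.0 && discount <= 1.0 : "discount must be between 0 and 1, was " + discount;
        return price * (1.0 - discount);
    }
}

Start the JVM with -ea (or -enableassertions) during development to execute these checks; without the flag they are skipped entirely.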

A nice Stackoverflow thread about assert can be found here: https://stackoverflow.com/questions/1208404/do-you-use-assertions

No documentation is bad, wrong documentation is worse. Javadoc tends to turn wrong when the codebase changes, and should therefore be avoided if possible.

Internet of Things

On 19 April 2017 about 1300 tech enthusiasts gathered in Utrecht for the IoT tech day. During this day I noticed two themes: on the one hand there were project demos, on the other hand there were talks about why you should use certain products and services to connect your projects. This showed me the state of IoT at this moment: it’s still early on the technology curve. This means that the tools and frameworks need to be developed further, and, more importantly, that standards still need to emerge. This in turn will lead to new ways of thinking about computing.

Part of the problem is that the term Internet of Things (IoT) is not very well defined yet. Is virtual/augmented/mixed reality part of IoT? How about cloud computing? Artificial Intelligence or Machine Learning? Mobile apps?

In my view, IoT consists of three parts: data collection using sensors, data processing using a cloud infrastructure, and presentation. In this regard, IoT is not different from any computer program. The difference is in the way the data is collected and processed, the infrastructure of IoT. In traditional computing, the data is sourced from a few sources, processed, and presented to the user. In IoT data is generated from thousands, perhaps millions of sensors. Due to the volume of data that is potentially generated, processing speed and storage becomes a challenge.

This is where the infrastructure comes in. Instead of one monolithic program, running on a giant machine, we build and connect any number of (micro-) services, running on multiple servers, located all around the world. These services aren’t written in a single language: we can choose whatever seems best for that particular step.

This is a game-changer. The role of a software engineer is not just to translate functional requirements to source-code. Maintaining changeability is also a key objective. We do this by abstracting complexity away, and organizing the code so its intention is as clear as possible. Methods should do only one thing, and they should do it well. Classes should have one responsibility, one reason to change. Now we can add services to this list, although I’m not sure what role these services will play in the abstraction and organization of code.

For now, the Internet of Things is the domain of tech-enthusiasts and big companies cashing in on infrastructure. However, this is the next revolution in computing. The personal computer, the Internet, and mobile each supplied their own challenges and solutions. Each revolution built on the previous one. IoT is the next revolution moving computing forward.

Technical Debt

Technical debt is acquired when you take shortcuts while developing your software. It helps you get the changes in place faster, but it results in code that is harder to understand.

Ward Cunningham coined this term while he was working in a financial institution to explain why they were refactoring. His boss at the time was a financial guy, and this was financial software, so a financial metaphor was the best way to explain this principle.

When you want to buy a car, and you don’t have the money, there are basically two things you can do. You can wait and save until you do have the money to buy the car, or you can borrow it. Translating this to writing code, you can implement the feature correctly, with clean and clear design. This way, when it needs to change, it is easy to understand and changing it will be faster. Or, you can do it the quick and dirty way. This is faster, for now. But when the time comes to change the code, it will take more time.

The two components of financial debt also apply to technical debt. There is interest, and there is principal. We pay interest when we need to change dirty code, and we pay down the principal when we clean that code up.

Martin Fowler divides Technical Debt into four quadrants, along two axes: Reckless vs. Prudent, and Deliberate vs. Inadvertent:
Deliberate, Reckless: “Design is boring and time consuming. This works, so who cares?”
Deliberate, Prudent: “We really should do this, but the deadline is approaching fast. We’ll note our shortcuts and refactor after the deadline.”
Inadvertent, Reckless: “What’s wrong with an EntityManager in the web layer?”
Inadvertent, Prudent: “This seemed like a good idea at the time. Now we know how we should have done it.”

Here are a few things to help get technical debt under control.

Naming

Choose good names for your functions and variables, and rename to better names when your understanding of the code changes.
You are not just telling the computer what to do. You are also writing a document for future developers. You are creating a language in which to communicate your thoughts about the problem at hand. This language is a mixture of reserved words of the programming language (for, if), terms that are used in the domain (account, customer), and terms that are common in our industry, such as design patterns (action, command, listener). The easy part is getting the computer to do what we want it to do. The hard part is telling our future selves what we are thinking.
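A small, made-up example of the difference a rename can make:

// Before: the reader has to decode the intent.
if (d > 30 && !c.isB()) {
    flag(c);
}

// After: the same check, written in the language of the domain.
if (daysOverdue > PAYMENT_TERM_DAYS && !customer.isBlocked()) {
    flagForReminder(customer);
}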

Shorter methods

“Rule 1 of methods: they should be short. Rule 2 of methods: they should be shorter than that!”
“Functions should do one thing. They should do it well. They should do it only.”
Longer methods tend to do more than they should. Splitting methods into smaller methods makes them more readable, manageable, and reusable. Another advantage of smaller methods is that you get lots of them, and that makes it easier to group them in the correct classes.

Unittests

While the production source code is the primary document we deliver, there is another document: the unittests. When written correctly, unittests are quite helpful in further explaining the code: they show how objects are created and used. This is one of the reasons Test Driven Development is useful. They are also quite useful for spotting design flaws. For example, when you find an uncovered private method, and it’s just too hard to reach it through the public interface of the class, chances are that the class has too many responsibilities and should be split into multiple classes.
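As a sketch of what such a documenting test might look like (the NameFormatter class is invented for this example):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class NameFormatterTest {

    @Test
    public void formatsLastNameFirst() {
        // The test shows how the object is created...
        NameFormatter formatter = new NameFormatter();

        // ...and how it is meant to be used, and what the expected outcome is.
        assertEquals("Doe, Jane", formatter.format("Jane", "Doe"));
    }
}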

Remove duplications

With shorter methods comes a more fragmented codebase: functionality is split over more, smaller pieces. This usually means that duplication becomes more visible. The duplication was already in the code, but sometimes it was hidden. Duplication is generally considered bad, because it usually means you need to apply the same change to multiple pieces of code – and that’s easy to forget. According to Kent Beck, eliminating duplication is a powerful way of getting a good design.
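A made-up example: the same formatting rule hiding in two places can be pulled into one method, so a future change only has to happen once.

// Before: two methods each build the display name themselves.
String header = contact.getLastname().toUpperCase() + ", " + contact.getFirstname();
String label = contact.getLastname().toUpperCase() + ", " + contact.getFirstname();

// After: one method owns the rule.
private String displayName(Contact contact) {
    return contact.getLastname().toUpperCase() + ", " + contact.getFirstname();
}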

Single Responsibility Principle

What does the class do? What reasons does it have to change? How many responsibilities does it have? A class should have only one role, only one responsibility. For example, it should only validate input, or format some text. Some ways of finding the responsibilities of a class include grouping of method names and drawing the relations of the class members.
Responsibilities can live on the interface level, and on the implementation level. Classes can either delegate a responsibility to other classes, or implement it themselves. When a class implements too many responsibilities, it’s easier to move just the implementations of those responsibilities to other classes, and keep the original methods. These methods then act as a gateway to the new classes. Over time, we can modify the system so it only uses the new classes, and then we can remove the gateway methods.
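A small sketch of that gateway idea (the classes are invented for this example): the implementation moves to a new class, while the original method stays behind and delegates.

public class ReportService {

    // The formatting responsibility now lives in its own class...
    private final ReportFormatter formatter = new ReportFormatter();

    // ...while this method remains as a gateway, so existing callers keep working.
    public String formatTitle(String title) {
        return formatter.formatTitle(title);
    }
}

class ReportFormatter {

    String formatTitle(String title) {
        return title.trim().toUpperCase();
    }
}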

Test driven development

TDD is the ultimate agile practice. You can’t get faster feedback cycles than with TDD: a cycle often lasts well under a minute. Because of this, you always know that you’re moving toward the target.

However, I’ve met very few people who know about TDD, and even fewer who actually practice it. I was at a technical meeting a couple of months ago, where the presenter asked the question who knew about Test driven development. Out of about 50 developers who were present, only about 10 raised their hands. The question who had experience with it managed to raise only three or four hands.

But it’s even worse than that. I’ve worked on quite a few projects, and most of them had a limited set of unittests. The project that I’m working on now only has about 42% coverage, and a good deal of code is covered by useless or unreadable tests. Useless, because the tests don’t actually check the behavior. Unreadable, because of duplications, long setups and multiple asserts per test.

As an industry we know that unittests are important to catch bugs early on. But unittests are often written after the production code, which is boring and complicated, so they are often skipped altogether. Writing the tests before the production code – or rather, alternating between writing tests and production code – helps.

The main sequence of TDD is as follows:

  • Red: Write a test to specify the behavior. This test will fail because the behavior doesn’t exist yet, resulting in a red bar.
  • Green: Write the behavior specified by the test. Get it working as soon as possible, it is OK to make a mess.
  • Refactor: Eliminate duplications. Both from the production code and from the tests. This last point is important, because the tests need to be readable to be useful.
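As a minimal illustration of one such cycle (the PriceCalculator class is invented for this example):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Red: this test fails first, because totalWithVat does not exist yet.
public class PriceCalculatorTest {

    @Test
    public void addsVatToTheNetPrice() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(121.0, calculator.totalWithVat(100.0), 0.001);
    }
}

// Green: the simplest implementation that makes the test pass (in PriceCalculator.java).
public class PriceCalculator {

    public double totalWithVat(double netPrice) {
        return netPrice * 1.21;
    }
}

// Refactor: remove duplication, for example by extracting the 1.21 into a named constant,
// while keeping the test green.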

When you follow this sequence over and over again, you’ll start to notice that the code you produce is quite nice to read, and well factored. The code will tell you how it wants to be designed: design emerges, instead of being imposed on the code.

Fixing bugs is also faster with TDD. Once you’ve located the problem, write a test that reproduces it. Then adjust the test so it verifies the correct behavior. At that point you don’t need to go through the compile/deploy/test cycle again: you just run the unittest to see if you’re making progress. And once you’ve fixed the behavior according to the test, you can still verify it manually.

The biggest downside of TDD is that it is hard to learn. Perhaps this is the reason it is not practiced as much as it should be. It is a mindset, a habit that needs to be formed, and it takes months to get good at it. Therefore, it is recommended to practice on a side-project, or at least on small, clearly defined pieces of behavior. But once you do use TDD, you will be a better developer.

Maven, Hibernate, PrimeFaces and Jetty

The final project structure

In this post, we will make a (very) simple CRUD application using Maven, Hibernate, PrimeFaces and Jetty. It won’t be the best application out there, but enough to get started. Run ‘mvn jetty:run’ to start the application.

Click here to download the actual project.
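The screenshot of the final structure isn’t reproduced here, but under standard Maven conventions the layout will look roughly like this (file names taken from the code and configuration below):

addressbook/
  pom.xml
  src/main/java/nl/ghyze/contacts/
    Contact.java
    Address.java
    repository/ContactRepository.java
    repository/DatabaseSessionProvider.java
    service/ContactService.java
    web/ContactListAction.java
    web/ContactDetailAction.java
  src/main/resources/
    hibernate.cfg.xml
    nl/ghyze/contacts/Contact.hbm.xml
    nl/ghyze/contacts/Address.hbm.xml
  src/main/webapp/
    WEB-INF/web.xml
    contactList.xhtml
    contactDetail.xhtml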

In this case, we’ll be making a simple addressbook, with each contact having a first and last name, and an address. The address will have a city, street, housenumber and postalcode. They will each have an id, because they need to be stored somewhere. This is what they look like:

package nl.ghyze.contacts;

public class Contact {

    private Long id;
    private String firstname;
    private String lastname;
    private Address address;

    public void setId(Long id){
        this.id = id;
    }

    public Long getId(){
        return id;
    }

    public void setFirstname(String firstname){
        this.firstname = firstname;
    }

    public void setLastname(String lastname){
        this.lastname = lastname;
    }

    public String getFirstname() {
        return firstname;
    }

    public String getLastname() {
        return lastname;
    }

    public void setAddress(Address address) {
        this.address = address;
    }

    public Address getAddress() {
        return address;
    }
}

And the Address class looks like this:

package nl.ghyze.contacts;

public class Address {

    private Long id;
    private String city;
    private String street;
    private String housenumber;
    private String postalcode;

    public void setId(Long id) {
        this.id = id;
    }

    public Long getId() {
        return id;
    }

    public void setCity(String city) {
        this.city = city;
    }

    public String getCity() {
        return city;
    }

    public void setStreet(String street) {
        this.street = street;
    }

    public String getStreet() {
        return street;
    }

    public void setHousenumber(String housenumber) {
        this.housenumber = housenumber;
    }

    public String getHousenumber() {
        return housenumber;
    }

    public void setPostalcode(String postalcode) {
        this.postalcode = postalcode;
    }

    public String getPostalcode() {
        return postalcode;
    }
}

Of course there needs to be a database schema to store the records in. We will use a MySQL database for this. For simplicity, all keys are bigints, and all other fields will be varchars.

create schema if not exists contacts;
use contacts;

create table if not exists address
(id bigint primary key,
city varchar(100),
postalcode varchar(10),
housenumber varchar(10),
street varchar(100)
);

create table if not exists contact
(id bigint primary key,
firstname varchar(255),
lastname varchar(255),
addressId bigint);

Now we need to tell Hibernate how to map the database tables to objects. These days you’d probably use annotations for that, but to keep database-related code out of our entities, we’re going for XML configuration. Since the contact table stores its reference to an address in the addressId column, we map that association as a unique many-to-one.
The mapping for Contact will look like this:

<?xml version="1.0"?>

<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd">

<hibernate-mapping package="nl.ghyze.contacts">
    <class name="Contact" table="contact">
        <id name="id" column="id">
            <generator class="increment" />
        </id>
        <property name="firstname" column="firstname" />
        <property name="lastname" column="lastname" />
        <many-to-one name="address" column="addressId" unique="true" cascade="save-update" />
    </class>
</hibernate-mapping>

And the mapping for Address:

<?xml version="1.0"?>

<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd">

<hibernate-mapping package="nl.ghyze.contacts">
    <class name="Address" table="address">
        <id name="id" column="id">
            <generator class="increment" />
        </id>
        <property name="city" column="city" />
        <property name="postalcode" column="postalcode" />
        <property name="housenumber" column="housenumber" />
        <property name="street" column="street" />
    </class>
</hibernate-mapping>

Now we need to configure Hibernate, telling it how to connect to our database and which mapping files to use.

<?xml version='1.0' encoding='utf-8'?>

<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">

<hibernate-configuration>
    <session-factory>
        <!-- Database connection settings -->
        <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="connection.url">jdbc:mysql://localhost:3306/contacts?useUnicode=true&amp;useJDBCCompliantTimezoneShift=true&amp;useLegacyDatetimeCode=false&amp;serverTimezone=UTC</property>
        <property name="connection.username">root</property>
        <property name="connection.password">password</property>

        <!-- JDBC connection pool (use the built-in) -->
        <property name="connection.pool_size">1</property>

        <!-- SQL dialect -->
        <property name="dialect">org.hibernate.dialect.MySQLDialect</property>

        <!-- Disable the second-level cache -->
        <property name="cache.provider_class">org.hibernate.cache.internal.NoCacheProvider</property>

        <!-- Do not show executed SQL -->
        <property name="show_sql">false</property>

        <!-- Validate the database schema on startup -->
        <property name="hbm2ddl.auto">validate</property>

        <!-- Use these mapping files -->
        <mapping resource="nl/ghyze/contacts/Contact.hbm.xml"/>
        <mapping resource="nl/ghyze/contacts/Address.hbm.xml"/>
    </session-factory>
</hibernate-configuration>

Next step, setting up the connection from our application. Since we want only one place to manage the database connectivity in the application, this will be set up as a Singleton.

package nl.ghyze.contacts.repository;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.boot.MetadataSources;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;

public class DatabaseSessionProvider {

	private static final DatabaseSessionProvider instance = new DatabaseSessionProvider();

	private SessionFactory sessionFactory;
	
	private DatabaseSessionProvider(){
		setUp();
	}
	
	public static DatabaseSessionProvider getInstance(){
		return instance;
	}
	
	public Session getSession(){
		Session session = sessionFactory.openSession();
		session.beginTransaction();
		return session;
	}

	public void end(Session session){
		if (session.getTransaction().getRollbackOnly()){
			rollback(session);
		} else {
			commit(session);
		}
	}

	private void commit(Session session){
		session.getTransaction().commit();
		session.close();
	}

	private void rollback(Session session){
		session.getTransaction().rollback();
		session.close();
	}

	public void shutDown(){
		sessionFactory.close();
	}

	private void setUp() {
		final StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
				.configure()
				.build();
		try {
			sessionFactory = new MetadataSources( registry ).buildMetadata().buildSessionFactory();
		}
		catch (Exception e) {
			e.printStackTrace();
			StandardServiceRegistryBuilder.destroy( registry );
		}
	}
}

At this point we’re ready to read, write and delete contacts in the database. We’re going to use HQL for this purpose.

package nl.ghyze.contacts.repository;

import nl.ghyze.contacts.Contact;
import org.hibernate.Session;
import org.hibernate.query.Query;

import java.util.List;

public class ContactRepository {

    private final DatabaseSessionProvider provider;
    private Session session;

    public ContactRepository(){
        this.provider = DatabaseSessionProvider.getInstance();
    }

    public Contact saveContact(Contact contact){
        session = provider.getSession();
        session.saveOrUpdate(contact);
        provider.end(session);
        return contact;
    }

    public List<Contact> getAllContacts(){
        String queryString = "from Contact";
        Query<Contact> query = createQuery(queryString);
        List<Contact> result = query.getResultList();
        provider.end(session);
        return result;
    }

    public void deleteContact(Contact contact){
        session = provider.getSession();
        session.delete(contact);
        provider.end(session);
    }

    public Contact getContactWithId(Long id){
        if (id == null){
            return null;
        }
        String queryString = "from Contact where id = :id";
        Query<Contact> query = createQuery(queryString);
        query.setParameter("id", id);
        Contact contact = query.getSingleResult();
        provider.end(session);
        return contact;
    }

    private Query<Contact> createQuery(String queryString) {
        session = provider.getSession();
        return session.createQuery(queryString, Contact.class);
    }
}

With the database stuff out of the way, let’s create a simple service for communicating with the repository. This is also where JSF comes in: we make the service a ManagedBean, so it can be injected into the action classes that we’ll discuss next.

package nl.ghyze.contacts.service;

import nl.ghyze.contacts.Address;
import nl.ghyze.contacts.Contact;
import nl.ghyze.contacts.repository.ContactRepository;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;
import java.util.List;

@ManagedBean(name="contactService")
@SessionScoped
public class ContactService {

    private ContactRepository contactRepository = new ContactRepository();

    public List<Contact> getContactList(){
        return contactRepository.getAllContacts();
    }

    public Contact saveContact(Contact contact){
        return contactRepository.saveContact(contact);
    }

    public Contact getOrCreateContact(Long id){
        Contact contact =  contactRepository.getContactWithId(id);
        if (contact == null){
            contact = new Contact();
        }
        if (contact.getAddress() == null){
            contact.setAddress(new Address());
        }
        return contact;
    }

    public void deleteContact(Contact contact) {
        contactRepository.deleteContact(contact);
    }
}

Before we continue, let’s take a moment to think about the front-end of the application. We’re going to make two pages: the first one lists all the contacts we have, and the second one shows the details of a single contact. Each page will have its own backing class, handling the server-side actions of the requests.
The list action is simple: it only needs to provide the list of Contacts currently in our database.

package nl.ghyze.contacts.web;

import nl.ghyze.contacts.Contact;
import nl.ghyze.contacts.service.ContactService;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.ManagedProperty;
import javax.faces.bean.SessionScoped;
import java.util.List;

@ManagedBean(name="contactListAction")
@SessionScoped
public class ContactListAction {

    @ManagedProperty(value="#{contactService}")
    private ContactService contactService;

    public List<Contact> getContactList(){
        return contactService.getContactList();
    }

    public void setContactService(ContactService contactService){
        this.contactService = contactService;
    }

    public ContactService getContactService(){
        return contactService;
    }
}

The details action will be a bit more complicated. It needs to be able to:
– get the correct contact
– save the edited contact
– delete a contact
– clear its values when the user goes back to the list, since everything is SessionScoped
The methods that receive an ActionEvent as a parameter are called from PrimeFaces. Pay attention to the method signature, as this is something lots of developers struggle with.

package nl.ghyze.contacts.web;

import nl.ghyze.contacts.Contact;
import nl.ghyze.contacts.service.ContactService;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.ManagedProperty;
import javax.faces.bean.SessionScoped;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;
import javax.faces.event.ActionEvent;
import java.io.IOException;

@ManagedBean
@SessionScoped
public class ContactDetailAction {

    @ManagedProperty(value="#{contactService}")
    private ContactService contactService;

    private Contact contact;

    private Long contactId;

    public Contact getContact(){
        if (contact == null){
            contact = contactService.getOrCreateContact(contactId);
        }
        return contact;
    }

    public void save(ActionEvent actionEvent){
        contact = contactService.saveContact(contact);
        redirectToList();
    }

    public void delete(ActionEvent actionEvent){
        contactService.deleteContact(contact);
        redirectToList();
    }

    public void back(ActionEvent actionEvent){
        redirectToList();
    }

    private void redirectToList() {
        reset();
        ExternalContext context = FacesContext.getCurrentInstance().getExternalContext();
        try {
            context.redirect(context.getRequestContextPath() + "/contactList.xhtml");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private void reset() {
        contact = null;
        contactId = null;
    }

    public void setContactService(ContactService contactService){
        this.contactService = contactService;
    }

    public ContactService getContactService(){
        return contactService;
    }

    public void setContactId(Long id){
        this.contactId = id;
    }

    public Long getContactId(){
        return contactId;
    }
}

Now we’re ready to create the .xhtml pages. For the list, we’re going to create a table for listing all contacts. The last column will have buttons, so we can edit the contact in that row. Clicking this button will navigate to the details page, supplying the ID of the contact. Below the table there will be a button for adding new contacts.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:p="http://primefaces.org/ui">

    <h:head>
        <title>Addressbook</title>
    </h:head>
    <h:body>
        <h:form>
            <p:panel header="Contact list">
               <p:dataTable var="contact" value="#{contactListAction.contactList}">
                   <p:column headerText="First name">
                       <h:outputText value="#{contact.firstname}"/>
                   </p:column>
                   <p:column headerText="Last name">
                       <h:outputText value="#{contact.lastname}"/>
                   </p:column>
                   <p:column headerText="Street">
                       <h:outputText value="#{contact.address.street}"/>
                   </p:column>
                   <p:column headerText="Housenumber">
                       <h:outputText value="#{contact.address.housenumber}"/>
                   </p:column>
                   <p:column headerText="Postal code">
                       <h:outputText value="#{contact.address.postalcode}"/>
                   </p:column>
                   <p:column headerText="City">
                       <h:outputText value="#{contact.address.city}"/>
                   </p:column>
                   <p:column style="width:32px;text-align: center">
                       <p:button icon="ui-icon-edit" title="Edit" outcome="contactDetail">
                            <f:param name="contactId" value="#{contact.id}" />
                       </p:button>
                   </p:column>
               </p:dataTable>
                <p:button outcome="contactDetail" value="new Contact"/>
            </p:panel>
        </h:form>
    </h:body>
</html>

The details page is, again, a bit more complicated. This page receives the contactId and sets it on the action; the f:metadata tag is responsible for this, and it can take a while to wrap your head around.
The p:commandButton tags call the methods in ContactDetailAction. This is how the methods that take an ActionEvent parameter are invoked.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:p="http://primefaces.org/ui" xmlns:c="http://java.sun.com/jstl/core" xmlns:f="http://java.sun.com/jsf/core">

    <h:head>
        <title>Edit Contact</title>
    </h:head>

    <h:body>
        <f:metadata>
            <f:viewParam name="contactId" value="#{contactDetailAction.contactId}" />
        </f:metadata>
        <h:form>
            <c:set var="contact" value="#{contactDetailAction.contact}"/>
            <p:panel header="Edit Contact">
                <h:panelGrid columns="2" cellpadding="4">
                    <h:outputText value="First name"/>
                    <h:inputText value="#{contact.firstname}"/>

                    <h:outputText value="Last name"/>
                    <h:inputText value="#{contact.lastname}"/>

                    <h:outputText value="Street" />
                    <h:inputText value="#{contact.address.street}"/>

                    <h:outputText value="Housenumber" />
                    <h:inputText value="#{contact.address.housenumber}"/>

                    <h:outputText value="Postal code" />
                    <h:inputText value="#{contact.address.postalcode}"/>

                    <h:outputText value="City" />
                    <h:inputText value="#{contact.address.city}"/>
                </h:panelGrid>
                <p:commandButton value="Save" actionListener="#{contactDetailAction.save}" ajax="false" />
                <p:commandButton value="Delete" actionListener="#{contactDetailAction.delete}" ajax="false" />
                <p:commandButton value="Back to list" actionListener="#{contactDetailAction.back}" ajax="false" />
            </p:panel>
        </h:form>
    </h:body>
</html>

The Deployment Descriptor (web.xml) tells the application server how to handle incoming requests.

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd" version="3.1">

    <!-- File(s) appended to a request for a URL that is not mapped to a web component -->
    <welcome-file-list>
        <welcome-file>contactList.xhtml</welcome-file>
    </welcome-file-list>

    <!-- Define the JSF servlet (manages the request processing life cycle for JavaServer Faces) -->
    <servlet>
        <servlet-name>faces-servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <!-- Map following files to the JSF servlet -->
    <servlet-mapping>
        <servlet-name>faces-servlet</servlet-name>
        <url-pattern>*.xhtml</url-pattern>
    </servlet-mapping>
</web-app>

The Maven project configuration (pom.xml) tells Maven how to build this project. It contains all the dependencies. Pay attention to the last part of the configuration, the jetty-maven-plugin: this is how we start, and test, the application.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>nl.ghyze</groupId>
  <artifactId>addressbook</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>addressbook</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <java.version>1.8</java.version>

    <servlet.version>3.1.0</servlet.version>
    <jsf.version>2.2.13</jsf.version>
    <primefaces.version>6.0</primefaces.version>

    <maven-compiler-plugin.version>3.5.1</maven-compiler-plugin.version>
    <jetty-maven-plugin.version>9.4.0.M0</jetty-maven-plugin.version>

    <hibernate.version>5.2.1.Final</hibernate.version>
    <mysql.version>6.0.3</mysql.version>
  </properties>

  <dependencies>
    <!-- PrimeFaces -->
    <dependency>
      <groupId>org.primefaces</groupId>
      <artifactId>primefaces</artifactId>
      <version>${primefaces.version}</version>
    </dependency>
    <!-- Servlet -->
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>javax.servlet-api</artifactId>
      <version>${servlet.version}</version>
      <scope>provided</scope>
    </dependency>
    <!-- JSF -->
    <dependency>
      <groupId>com.sun.faces</groupId>
      <artifactId>jsf-api</artifactId>
      <version>${jsf.version}</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.sun.faces</groupId>
      <artifactId>jsf-impl</artifactId>
      <version>${jsf.version}</version>
      <scope>compile</scope>
    </dependency>

    <!-- Database-->
    <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-core</artifactId>
      <version>${hibernate.version}</version>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>${mysql.version}</version>
    </dependency>

  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>${maven-compiler-plugin.version}</version>
        <configuration>
          <source>${java.version}</source>
          <target>${java.version}</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-maven-plugin</artifactId>
        <version>${jetty-maven-plugin.version}</version>
        <configuration>
          <httpConnector>
            <port>9090</port>
          </httpConnector>
          <webApp>
            <contextPath>/address</contextPath>
          </webApp>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
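With all of this in place, running ‘mvn jetty:run’ from the project root starts the application. Given the port and context path configured above, the contact list should be reachable at http://localhost:9090/address/contactList.xhtml, assuming the MySQL database from the earlier script is up and the credentials in hibernate.cfg.xml match.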

Split condition

Intuitively it makes sense to group several statements in an if/else clause. In that case you only need to execute the check once. However, this is not always the cleanest solution, and it makes methods longer than they should be.

For example, take the following piece of code.

    public void updateDocument(Attachment attachment){
        Document document = new Document();
        if (AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType())) {
            DocumentType documentType = repository.findByCode(DocumentType.class, DocumentType.CODE_SPECIAL);
            document.setType(documentType);
            document.setName(documentType.getName() + " " + getDateAsString());
        } else {
            document.setType(repository.findByCode(DocumentType.class, DocumentType.CODE_OTHER));
            document.setName("Attachment " + getDateAsString());
        }
    }

So, what’s wrong with this? It does what it’s supposed to do, and it doesn’t look too complicated.

But it does too many things:

  1. It checks the type of the attachment
  2. Based on this type, it finds the DocumentType from the database
  3. It sets the DocumentType of the Document
  4. It generates a name
  5. It sets the name of the document

To cite Uncle Bob (Clean Code, page 35):

Functions should do one thing. They should do it well. They should do it only.

So, let’s refactor.

First, copy the condition, and move the statements around so that each condition handles only one property of the document.

    public void updateDocument(Attachment attachment){
        Document document = new Document();
        if (AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType())) {
            DocumentType documentType = repository.findByCode(DocumentType.class, DocumentType.CODE_SPECIAL);
            document.setType(documentType);
        } else {
            document.setType(repository.findByCode(DocumentType.class, DocumentType.CODE_OTHER));
        }

        if (AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType())){
            document.setName(documentType.getName() + " " + getDateAsString());
        } else {
            document.setName("Attachment " + getDateAsString());
        }
    }

This doesn’t compile, because documentType isn’t available in the second block. So, let’s move the declaration of DocumentType outside of the first block.

    public void updateDocument(Attachment attachment){
        Document document = new Document();

        DocumentType documentType = null;
        if (AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType())) {
            documentType = repository.findByCode(DocumentType.class, DocumentType.CODE_SPECIAL);
            document.setType(documentType);
        } else {
            document.setType(repository.findByCode(DocumentType.class, DocumentType.CODE_OTHER));
        }

        if (AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType())) {
            document.setName(documentType.getName() + " " + getDateAsString());
        } else {
            document.setName("Attachment " + getDateAsString());
        }
    }

Ah, that’s better. But there’s duplication: document.setType is called on two lines. Let’s move that outside of the if block. And, while we’re at it, let’s do the same for setName.

    public void updateDocument(Attachment attachment){
        Document document = new Document();

        DocumentType documentType = null;
        if (AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType())) {
            documentType = repository.findByCode(DocumentType.class, DocumentType.CODE_SPECIAL);
        } else {
            documentType = repository.findByCode(DocumentType.class, DocumentType.CODE_OTHER);
        }
        document.setType(documentType);

        String name = null;
        if (AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType())) {
            name = documentType.getName() + " " + getDateAsString();
        } else {
            name = "Attachment " + getDateAsString();
        }
        document.setName(name);
    }

This is getting in better shape. Now we have separate logic for getting the DocumentType, setting the DocumentType, getting the name, and setting the name. Let’s extract some methods and see what happens.

    public void updateDocument(Attachment attachment){
        Document document = new Document();
        DocumentType documentType = getDocumentType(attachment);
        document.setType(documentType);

        String name = getName(attachment, documentType);
        document.setName(name);
    }

    private DocumentType getDocumentType(Attachment attachment) {
        DocumentType documentType = null;
        if (AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType())) {
            documentType = repository.findByCode(DocumentType.class, DocumentType.CODE_SPECIAL);
        } else {
            documentType = repository.findByCode(DocumentType.class, DocumentType.CODE_OTHER);
        }
        return documentType;
    }

    private String getName(Attachment attachment, DocumentType documentType) {
        String name = null;
        if (AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType())) {
            name = documentType.getName() + " " + getDateAsString();
        } else {
            name = "Attachment " + getDateAsString();
        }
        return name;
    }

Our original method looks a lot simpler now. But the methods we’ve extracted contain duplication! The condition itself, the calls to repository.findByCode, and the calls to getDateAsString(). So, let’s fix that.

    public void updateDocument(Attachment attachment){
        Document document = new Document();

        DocumentType documentType = getDocumentType(attachment);
        document.setType(documentType);

        String name = getName(attachment, documentType);
        document.setName(name);
    }

    private DocumentType getDocumentType(Attachment attachment) {
        String documentTypeCode = null; 
        if (isAttachmentTypeSpecial(attachment)) {
            documentTypeCode = DocumentType.CODE_SPECIAL;
        } else {
            documentTypeCode = DocumentType.CODE_OTHER;
        }
        return repository.findByCode(DocumentType.class, documentTypeCode);
    }

    private String getName(Attachment attachment, DocumentType documentType) {
        String name = null;
        if (isAttachmentTypeSpecial(attachment)) {
            name = documentType.getName();
        } else {
            name = "Attachment";
        }
        return name + " " + getDateAsString();
    }

    private boolean isAttachmentTypeSpecial(Attachment attachment) {
        return AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType());
    }

This code still looks a bit odd. getDocumentType() determines the document type code, and then uses that to find the documentType. And getName() has a similar issue: it determines the prefix for the name, and then appends the date as String.

    public void updateDocument(Attachment attachment){
        Document document = new Document();

        DocumentType documentType = getDocumentType(attachment);
        document.setType(documentType);

        String name = getName(attachment, documentType);
        document.setName(name);
    }

    private DocumentType getDocumentType(Attachment attachment) {
        String documentTypeCode = getDocumentTypeCode(attachment);
        return repository.findByCode(DocumentType.class, documentTypeCode);
    }

    private String getDocumentTypeCode(Attachment attachment) {
        if (isAttachmentTypeSpecial(attachment)) {
            return DocumentType.CODE_SPECIAL;
        } 
        return DocumentType.CODE_OTHER;
    }

    private String getName(Attachment attachment, DocumentType documentType) {
        String name = getNamePrefix(attachment, documentType);
        return name + " " + getDateAsString();
    }

    private String getNamePrefix(Attachment attachment, DocumentType documentType) {
        if (isAttachmentTypeSpecial(attachment)) {
            return documentType.getName();
        } 
        return "Attachment";
    }

    private boolean isAttachmentTypeSpecial(Attachment attachment) {
        return AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType());
    }

At this point we have some variables and perhaps some methods that are not adding anything in terms of clarity. We should inline them and compact our code a little.

    public void updateDocument(Attachment attachment){
        Document document = new Document();

        DocumentType documentType = getDocumentType(attachment);
        document.setType(documentType);

        document.setName(getName(attachment, documentType));
    }

    private DocumentType getDocumentType(Attachment attachment) {
        return repository.findByCode(DocumentType.class, getDocumentTypeCode(attachment));
    }

    private String getDocumentTypeCode(Attachment attachment) {
        if (isAttachmentTypeSpecial(attachment)) {
            return DocumentType.CODE_SPECIAL;
        }
        return DocumentType.CODE_OTHER;
    }

    private String getName(Attachment attachment, DocumentType documentType) {
        return getNamePrefix(attachment, documentType) + " " + getDateAsString();
    }

    private String getNamePrefix(Attachment attachment, DocumentType documentType) {
        if (isAttachmentTypeSpecial(attachment)) {
            return documentType.getName();
        }
        return "Attachment";
    }

    private boolean isAttachmentTypeSpecial(Attachment attachment) {
        return AttachmentModel.TYPE_SPECIAL.equals(attachment.getAttachmentType());
    }

This refactoring introduced a couple of new methods. Each method call takes a little extra time, so in theory this reduces performance a bit. There is a counterpoint, though: on modern processors, cache misses are the biggest performance killers, and keeping data as local as possible reduces cache misses.
In practice, however, I doubt you’ll even notice a performance difference either way.

As for clarity: the code expanded from 11 lines to 34 lines, a three-fold increase. The original method itself, however, got shorter, and because we’ve added a couple of layers of abstraction, it now communicates only what it does. The details of how are delegated to the other methods.

Working with multiple screens in Java

When you want to set the location of a Java Window, you just call setLocation(int x, int y). But what if your screen isn’t big enough? Or when you want to display the window on a second screen?

The desktop is represented by a virtual device, with physical screens providing a viewport onto that device. The top-left corner of the primary screen is always located at coordinates (0, 0), and the other physical screens are positioned relative to this coordinate. See the API documentation for java.awt.Frame for more information.

To get all available graphics devices, which represent physical screens, use the following code:

GraphicsDevice[] graphicsDevices = 
    GraphicsEnvironment
        .getLocalGraphicsEnvironment()
        .getScreenDevices();

Then, to get the bounds – the offset and size – of each device in that array, use the following code:

Rectangle bounds = 
    device
        .getDefaultConfiguration()
        .getBounds();

This works for all screens _except_ the primary screen. The primary screen also contains a taskbar, and you usually don’t want to place a window over the taskbar.

To get the bounds of the primary screen, without the taskbar, use the following code:

GraphicsEnvironment
    .getLocalGraphicsEnvironment()
    .getMaximumWindowBounds();
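Putting these pieces together, here is a small sketch (the window size and offsets are arbitrary) that opens a frame on the second screen when one is present, and otherwise within the usable area of the primary screen:

import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.Rectangle;
import javax.swing.JFrame;

public class SecondScreenDemo {

    public static void main(String[] args) {
        GraphicsEnvironment environment = GraphicsEnvironment.getLocalGraphicsEnvironment();
        GraphicsDevice[] graphicsDevices = environment.getScreenDevices();

        // Use the second screen if there is one, otherwise the primary screen minus the taskbar.
        Rectangle bounds;
        if (graphicsDevices.length > 1) {
            bounds = graphicsDevices[1].getDefaultConfiguration().getBounds();
        } else {
            bounds = environment.getMaximumWindowBounds();
        }

        JFrame frame = new JFrame("Second screen demo");
        frame.setSize(400, 300);
        // The bounds are in virtual device coordinates, so this also works
        // for screens positioned left of or above the primary screen.
        frame.setLocation(bounds.x + 50, bounds.y + 50);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}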

Pomodoro 0.2 released

It has been a while since the first version of this pomodoro tracker was released. Development has continued, and there have been a few changes:

  • Backported to Java 1.7.
  • Added a popup dialog when a Pomodoro is done, asking whether to register it as a completed one
  • Added a popup dialog when a break is done, asking whether to start a new Pomodoro
  • Added the ability to manually reset the current number of completed pomodori
  • The current number of completed pomodori can be automatically reset after a configurable number of minutes (default 60)
  • Popup dialogs will auto-close after 5 minutes
  • Reduced font size of wait screen so it will display correctly on Linux

The updated version can be downloaded here. Source code can be found here.

WAS Deploy Script

When you’re developing for IBM WebSphere, you can update your application through the web interface. But that is a bit slow, and you risk trashing your WebSphere installation. You can also use a script; below is a Windows batch file that does just that.

@setlocal

REM Location of the wsadmin scripts in your WebSphere profile
set WSADMIN_BIN=C:\was_profile\bin
REM Name of the application as it is known in WebSphere
set APP_NAME={My Web Application}
REM Location of the EAR file to deploy
set EAR_LOC=C:\location\application.ear

REM Run wsadmin and update the deployed application with the new EAR
cd  /D %WSADMIN_BIN%
call wsadmin -c "$AdminApp update %APP_NAME% app {-operation update -contents %EAR_LOC%}"
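Adjust the three variables at the top to match your environment: the bin directory of your WebSphere profile, the name under which the application is installed, and the location of the EAR file. After each build, running the script updates the deployed application without going through the web interface.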

Installing and using Citrix under Linux Mint

Installing and using Citrix under Linux can be a bit trickier than it appears at first glance. Here are a few steps to make it work.

1. Remove old installation, if needed.

sudo apt-get remove icaclient

2. Download Receiver for Linux. Look for it here: http://www.citrix.com/go/receiver.html
Make sure you have the correct version. This is probably under “For 64-bit Systems”

3. Open the download, and install it through the package manager.

4. Link certificates:

sudo ln -s /usr/share/ca-certificates/mozilla/* /opt/Citrix/ICAClient/keystore/cacerts

5. In Firefox, set the Citrix Receiver for Linux plugin to Always Activate.