Thursday, April 12, 2007

From maven to mvn : Part 7 -- A Groovier Timestamp

With a bit of help from the maven users list I've sorted out the details of creating a maven plugin written in Groovy. It isn't as clean or elegant as I had hoped but things are still brewing so I suspect they will only get better. I'm told that there is a discussion about this on the Groovy mailing list as well.

Before I continue I would like to thank Dennis Lundberg and Martin Gilday on the maven users list. You can see our conversation here. Dennis turned me on to the groovy compiler and Martin's blog entry got me hooked up with javalike-maven-plugin-tools for generating my project descriptor.

So... Let's get down to it and figure out how to write a simple plugin in Groovy. My goal will be to replace my earlier attempt with a compiled Groovy class.

To begin with, let's take a look at TimestampMojo.groovy:
import java.text.MessageFormat;

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;

/**
 * Set the current time into a system property for use by resource
 * filtering.
 *
 * @goal execute
 * @phase generate-resources
 * @requiresDependencyResolution runtime
 */
class TimestampMojo extends AbstractMojo
{
    /**
     * The System property which will hold the current time.
     *
     * @parameter default-value="timestamp"
     */
    String propertyName;

    /**
     * The format string given to MessageFormat.format()
     *
     * @parameter default-value="{0,date,yyyy-MM-dd HH:mm:ss}"
     */
    String formatString;

    public void execute()
        throws MojoExecutionException
    {
        String timestamp = MessageFormat.format(formatString, new Date())
        System.setProperty(propertyName, timestamp)
    }
}
Well, that's simple enough. Note that there appear to be quite a few extra semi-colons for a Groovy script. It turns out that they're necessary because of the javalike plugin we will use to build the plugin descriptor. Martin's blog entry goes into that in detail so I won't repeat it here.
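If you're curious what that default format string actually produces, here's a quick sanity check in plain Java, independent of the mojo:

```java
import java.text.MessageFormat;
import java.util.Date;

public class FormatCheck
{
    public static void main(final String[] args)
    {
        // Same pattern as the formatString default above.
        final String timestamp =
            MessageFormat.format("{0,date,yyyy-MM-dd HH:mm:ss}", new Date());
        System.out.println(timestamp);
    }
}
```

The `{0,date,...}` subformat hands the pattern off to SimpleDateFormat, so any SimpleDateFormat pattern works there.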

Next we need a pom to build the plugin. It's a bit large so I'll break it down into bite-sized pieces beginning with the typical header parts:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>myco.util.mvn.timestamp-plugin</groupId>
  <artifactId>myco-timestamp-plugin</artifactId>
  <packaging>maven-plugin</packaging>
  <version>1.1-SNAPSHOT</version>

  <name>myco-timestamp-plugin Maven Mojo</name>
No surprises there so let's move on to specify some extra repositories. We do this because some of the things we need are hosted at Codehaus rather than the default repository.
  <repositories>
    <repository>
      <id>Codehaus</id>
      <url>http://repository.codehaus.org/org/codehaus/mojo/</url>
    </repository>
  </repositories>

  <pluginRepositories>
    <pluginRepository>
      <id>Snapshots</id>
      <url>http://repository.codehaus.org/org/codehaus/mojo/</url>
    </pluginRepository>
  </pluginRepositories>
And now we come to the build section where we define the plugins we need in order to build our plugin. However, we first need to tell the groovy plugin where it can find its sources:
  <build>
    <sourceDirectory>${basedir}/src/main/groovy</sourceDirectory>
Did someone say groovy plugin? Yup. If we're going to write the source as groovy then we need some way to compile that into a .class...
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>groovy-maven-plugin</artifactId>
        <version>1.0-alpha-2</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
Remember the annotations in my *.groovy file above? Well, we're going to use the maven-plugin-plugin to turn those into a plugin descriptor. Out of the box that plugin doesn't know anything about groovy, though, so we use javalike-maven-plugin-tools to enhance it a bit. As of this writing (April 12, 2007) the javalike plugin hasn't escaped from the Codehaus mojo sandbox so you'll have to check it out of their svn repo and build it yourself. Martin's blog covers that but I'll repeat it here for convenience:
svn co http://svn.codehaus.org/mojo/trunk/mojo/mojo-sandbox/groovy-maven-tools/javalike-maven-plugin-tools/

      <plugin>
        <artifactId>maven-plugin-plugin</artifactId>
        <dependencies>
          <dependency>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>javalike-maven-plugin-tools</artifactId>
            <version>2.0-SNAPSHOT</version>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>
</build>
Now that we have our plugins defined we need to define our dependencies. Of course, the plugins will pull in whatever they need, but because somebody (we hope) is going to use us as a plugin, we need to include groovy in our dependencies so that they will get it transitively when they build.
  <dependencies>
    <dependency>
      <groupId>groovy</groupId>
      <artifactId>groovy-all</artifactId>
      <version>1.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.maven</groupId>
      <artifactId>maven-plugin-api</artifactId>
      <version>2.0</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
And that's it! Drop the above into a normal maven project structure (remember, TimestampMojo.groovy goes into src/main/groovy). Compile and install and you're ready to go.



Using the plugin is as simple as using any other. For this kind of thing I would recommend putting it into your ancestral pom.xml that all of your other projects extend but that's a personal choice. In any case, usage looks like this:
    <plugin>
      <groupId>myco.util.mvn.timestamp-plugin</groupId>
      <artifactId>myco-timestamp-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>execute</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
The execute goal is tied to the generate-resources lifecycle phase so that the property will be available to you prior to the copying of resources. This means that any ${timestamp} in any filtered resource will be replaced by the current timestamp. If you don't remember how to set up filtered resources, well, neither do I, so here it is again:
  <resources>
    <resource>
      <directory>${basedir}/src/main/java</directory>
      <filtering>true</filtering>
      <includes>
        <include>**/*.xml</include>
      </includes>
    </resource>
  </resources>
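At heart, filtering is just token substitution over each copied file. This sketch illustrates the mechanism (it is not maven's actual implementation, just the idea):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FilterSketch
{
    private static final Pattern TOKEN = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace each ${name} token with its value from the given properties;
    // unknown tokens are left untouched, as maven does.
    public static String filter(final String text, final Map<String, String> props)
    {
        final Matcher m = TOKEN.matcher(text);
        final StringBuffer out = new StringBuffer();
        while (m.find())
        {
            final String value = props.get(m.group(1));
            m.appendReplacement(out,
                Matcher.quoteReplacement(value != null ? value : m.group()));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

Maven's real filtering also consults pom values and system properties, which is why the System.setProperty(...) call in the mojo is visible to it.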
And if you want to filter your JSPs before creating the war file (I like to have a ${timestamp} in my footer.jsp...) then you want something like this:
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <configuration>
        <webResources>
          <resource>
            <directory>${basedir}/src/main/webapp</directory>
            <filtering>true</filtering>
            <includes>
              <include>**/*.xml</include>
              <include>**/footer.jsp</include>
            </includes>
          </resource>
        </webResources>
      </configuration>
    </plugin>
Remember these bits are in the pom of the application using the timestamp plugin, not in the pom of the plugin itself.

Tuesday, April 10, 2007

From maven to mvn : Part 6 -- Classloader Madness & A Custom Plugin

Wow. I had no idea it has been a month since my last post. I would plead busyness but that's cheating. After all, we're all busy...

Tonight's post is going to be a bit eclectic & is the result of considerable (mythical) man-hours of work. Much of that time was spent figuring out why things were broken rather than actually fixing them. The fixing part was really quite short though it was interrupted from time to time by more figuring parts.

The Setup:

I have a utility that I wrote quite some time ago that implements the notion of Evolutionary Database Design. It's an internal tool that I may or may not be able to publish someday but that's not really relevant right now. What is relevant is that it is simply a jar that relies on a number of dependencies, most notably Spring. It runs happily from the command line, maven 1 and Eclipse.

The Plan:

Create a maven 2 plugin around my utility by firing its Main class with the appropriate parameters.

The Pain:

Classloaders. Man I hate classloaders.

It's a long and unpleasant tale and I won't bore you with the details. You can read some about it here though the rafb posts are long gone.

The short version is that maven creates a custom classloader for itself (a reasonable thing to do) and puts a pom's dependencies into it. In my case, one of those is Spring, and when the context begins loading classes it gets tripped up in classloader evilness and begins to believe that [org.springframework.beans.factory.xml.SimplePropertyNamespaceHandler] does not implement the NamespaceHandler interface, which is patently not true.

The Solution:

I can't explain exactly why but it took me *hours* to figure out that the problem was classloader related and *more* hours to work out the solution. I can only say that it was most likely a Monday (and possibly a Tuesday) and hope that you will forgive me for being somewhat dense.

What you really want to know, though, is how I solved the problem. It is my expectation that someone else may very well run into the same thing so here we go...

First we have my mojo:
/**
 * Invoke EDD to generate SQL from XML.
 *
 * @goal generateSql
 * @phase compile
 * @requiresDependencyResolution runtime
 */
public class GenerateSqlMojo extends AbstractMojo
{
    /**
     * Where to get the *.xml/*.sql files to be processed.
     *
     * @parameter expression="${project.build.directory}/classes"
     * @required
     */
    private String inputDirectory;

    /**
     * Prefix for the files created by EDD.
     *
     * @parameter expression="${project.build.directory}/classes/${project.artifactId}"
     * @required
     */
    private String outputPrefix;

    /**
     * The classpath elements of the project. We need this so that we can
     * get the JDBC driver for the project.
     *
     * @parameter expression="${project.runtimeClasspathElements}"
     * @required
     * @readonly
     */
    private List classpathElements;

    public void execute() throws MojoExecutionException
    {
        try
        {
            new File(outputPrefix).getParentFile().mkdirs();
            new UtilityExecutor().execute(inputDirectory,
                outputPrefix, classpathElements);
        }
        catch (final Exception cause)
        {
            throw new MojoExecutionException("Failed to invoke EDD", cause);
        }
    }
}
OK... Not very exciting... In particular we see nothing about classpaths or classloaders other than the classpathElements parameter/attribute. All of that evilness is hidden in my UtilityExecutor. I'll tell you now: It could be done better! A more robust / reusable solution would extract the classloader rot out of UtilityExecutor and into a more generic helper class. That's a great idea... I'll do it later.
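For what it's worth, that more generic helper might look something like this. The IsolatedRunner name and signature are invented for illustration; this is a sketch of the pattern, not the code I'm actually running:

```java
import java.lang.reflect.Constructor;
import java.net.URL;
import java.net.URLClassLoader;

public class IsolatedRunner
{
    /**
     * Load className from the given URLs (parented by the system
     * classloader, so the plugin's own loader is bypassed) and invoke
     * the matching constructor with the supplied arguments.
     */
    public static Object run(final URL[] urls,
                             final String className,
                             final Class<?>[] argTypes,
                             final Object[] args)
        throws Exception
    {
        final ClassLoader previous = Thread.currentThread().getContextClassLoader();
        final URLClassLoader cl =
            new URLClassLoader(urls, ClassLoader.getSystemClassLoader());
        try
        {
            Thread.currentThread().setContextClassLoader(cl);
            final Class<?> clazz = cl.loadClass(className);
            final Constructor<?> ctor = clazz.getConstructor(argTypes);
            return ctor.newInstance(args);
        }
        finally
        {
            // Always restore the original context classloader.
            Thread.currentThread().setContextClassLoader(previous);
        }
    }
}
```

Restoring the context classloader in a finally block is the one improvement over what you'll see below; the rest is the same trick.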

So, let's see UtilityExecutor:
public class UtilityExecutor
{
    public UtilityExecutor()
    {
        super();
    }

    public void execute(final String inputDirectory,
                        final String outputPrefix,
                        final Boolean saveOnlyMode,
                        final List<String> classpathElements)
        throws ...
    {
        final URL[] urls = buildUrls(classpathElements);
        final URLClassLoader cl =
            new URLClassLoader(urls, ClassLoader.getSystemClassLoader());
        Thread.currentThread().setContextClassLoader(cl);

        final Class clazz = cl.loadClass(getClass().getName());
        final Constructor ctor =
            clazz.getConstructor(
                new Class[] { String.class, String.class, Boolean.class });
        ctor.newInstance(new Object[] { inputDirectory, outputPrefix, saveOnlyMode });
    }

    public UtilityExecutor(final String inputDirectory,
                           final String outputDirectory,
                           final Boolean saveOnlyMode)
    {
        final Main main = new Main();
        main.setSaveOnlyMode(saveOnlyMode.booleanValue());
        main.setSaveTarget(outputDirectory);
        main.execute(new String[] { inputDirectory });
    }

So let's break this down...

1) GenerateSqlMojo fires new UtilityExecutor().execute(...) (a convenience method which simply delegates to the execute(...) method shown).

2) execute(...) builds a URL[] for our custom classloader (more on that in a minute).

3) The custom classloader is created.

Now it gets a bit weird...

4) We ask the custom classloader to load the Class for ourselves. This causes the class to be loaded from the custom classloader against whatever classpath we built in #2 then

5) fetch the "do some work" constructor for ourself and

6) finally invoke the constructor to launch the utility. (I chose to do the "work" in the constructor to reduce the need for reflection BTW.)


Now a note about the buildUrls(...) method. There isn't anything really magic about what we're doing here. Simply take the URLs from our current classloader (the classloader that loaded the mojo) and combine them with the classpath elements provided from our execution environment. These classpath elements are the dependencies of the pom in which the plugin is being invoked. In my case I need this because the JDBC driver to be used is specific to the client of the plugin rather than to the plugin itself. I also need these because the utility expects to find some of its runtime configuration in a properties file in ${basedir}/target/classes.

Pay particular attention to the isDirectory() check in the code below... If your classpath element is a directory you need to append the trailing slash or your resources there (e.g. -- ${basedir}/target/classes) will not be found. This was something of a painful and frustrating lesson...
    private URL[] buildUrls(final List<String> classpathElements)
        throws MalformedURLException
    {
        final URL[] mojoUrls = ((URLClassLoader) getClass().getClassLoader()).getURLs();
        final URL[] urls = new URL[mojoUrls.length + classpathElements.size()];
        int ndx = 0;
        for (final URL url : mojoUrls) urls[ndx++] = url;
        for (String cpe : classpathElements)
        {
            final File file = new File(cpe);
            if (file.isDirectory()) cpe += "/";
            urls[ndx++] = new URL("file:/" + cpe);
        }
        return urls;
    }
}
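You can watch the trailing-slash rule in action outside of maven entirely: URLClassLoader treats a file: URL without a trailing slash as a jar file, so resources in a directory become silently invisible. The temp-directory setup here is only for the demonstration:

```java
import java.io.File;
import java.io.FileWriter;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;

public class SlashDemo
{
    public static void main(final String[] args) throws Exception
    {
        // Create a directory containing one resource file.
        final File dir = Files.createTempDirectory("slashdemo").toFile();
        try (FileWriter w = new FileWriter(new File(dir, "app.properties")))
        {
            w.write("key=value\n");
        }

        final URL withoutSlash = new URL("file:" + dir.getAbsolutePath());
        final URL withSlash = new URL("file:" + dir.getAbsolutePath() + "/");

        // No trailing slash: the URL is assumed to be a jar, so the
        // resource lookup quietly fails.
        try (URLClassLoader cl = new URLClassLoader(new URL[] { withoutSlash }, null))
        {
            System.out.println(cl.getResource("app.properties"));  // null
        }
        // Trailing slash: treated as a directory, resource found.
        try (URLClassLoader cl = new URLClassLoader(new URL[] { withSlash }, null))
        {
            System.out.println(cl.getResource("app.properties"));
        }
    }
}
```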

BTW, I also have an UpdateSchemaMojo that is basically the same as GenerateSqlMojo but invokes a different convenience method on UtilityExecutor.


A Custom Lifecycle:

Now the point of my plugin is to create SQL from XML and optionally apply it to your schema. Great. That doesn't really fit with the typical maven build lifecycle. Never one to try the shallow end of the pool first I decided to jump in the deep end (with lead weights) and create a custom package type for my plugin. It turns out not to be too difficult.

To create a custom lifecycle you only need to create a plexus components.xml file. In my case it looks like this:
<component-set>
  <components>
    <component>
      <role>org.apache.maven.lifecycle.mapping.LifecycleMapping</role>
      <role-hint>sa</role-hint>
      <implementation>
        org.apache.maven.lifecycle.mapping.DefaultLifecycleMapping
      </implementation>
      <configuration>
        <phases>
          <process-resources>
            org.apache.maven.plugins:maven-resources-plugin:resources
          </process-resources>
          <compile>
            myco.util:myco-util-edd-plugin:generateSql
          </compile>
          <package>
            org.apache.maven.plugins:maven-jar-plugin:jar
          </package>
          <install>
            org.apache.maven.plugins:maven-install-plugin:install,
            myco.util:myco-util-edd-plugin:updateSchema
          </install>
          <deploy>
            org.apache.maven.plugins:maven-deploy-plugin:deploy
          </deploy>
        </phases>
      </configuration>
    </component>
    <component>
      <role>org.apache.maven.artifact.handler.ArtifactHandler</role>
      <role-hint>sa</role-hint>
      <implementation>
        org.apache.maven.artifact.handler.DefaultArtifactHandler
      </implementation>
      <configuration>
        <type>sa</type>
        <extension>zip</extension>
        <packaging>sa</packaging>
      </configuration>
    </component>
  </components>
</component-set>

It isn't really that bad... All I'm doing here is defining the stages of the lifecycle I'm interested in and the plugins to fire at each one. Notice that in install I specify both the standard install plugin as well as my own updateSchema plugin. This means that after my sa artifact is published to the local repo updateSchema will be fired to (optionally) update my database for me. (In case you're wondering, UpdateSchemaMojo has a Boolean parameter that will enable/disable the database update.)

The type, extension and packaging tags are there to tell maven about my custom packaging type and what to call it when it is installed. We will see what this means in just a bit.


The only thing left is the pom.xml for my plugin. There is nothing magic here either:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <version>1.4-SNAPSHOT</version>
  <groupId>myco.util</groupId>
  <artifactId>myco-util-edd-plugin</artifactId>
  <packaging>maven-plugin</packaging>

  <name>EDD Maven Mojo</name>

  <dependencies>
    ... As necessary
  </dependencies>
</project>

Using The Plugin:

OK, so now we know everything there is to know about the plugin. All that's left is to use it. Since we're introducing a new packaging type there is a detail we need to be aware of.

The pom starts out as usual:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <parent>
    ... as necessary
  </parent>

  <name>contacts database</name>
  <artifactId>contacts-database</artifactId>
then we specify our custom packaging type:
  <packaging>sa</packaging>
and then define the plugin. There are two things to note here: (a) the extensions tag tells maven that the plugin extends the standard behavior and (b) the enableEDD property that enables/disables the UpdateSchemaMojo.
  <build>
    <plugins>
      <plugin>
        <groupId>myco.util</groupId>
        <artifactId>myco-util-edd-plugin</artifactId>
        <version>1.4-SNAPSHOT</version>
        <extensions>true</extensions>
        <configuration>
          <enableEDD>${enableEDD}</enableEDD>
        </configuration>
      </plugin>
    </plugins>
In my case the plugin needs some runtime configuration from a property file so I use a resource tag to be sure that gets copied into ${basedir}/target/classes. I also need the Oracle JDBC driver so that the utility can connect to the database to update the schema.
    <resources>
      ... As necessary
    </resources>
  </build>

  <dependencies>
    <dependency>
      <!--
        We have an Oracle database so we need to have
        this in our classpath to pickup the JDBC driver.
      -->
      <groupId>oracle</groupId>
      <artifactId>ojdbc14</artifactId>
      <version>10.2.0.2</version>
    </dependency>
  </dependencies>
Modifying the schema with every build is probably not a good idea so we disable the update by default and invoke 'mvn -DenableEDD=true install' on demand. You could also control this with your ~/.m2/settings.xml and profiles.
  <properties>
    <enableEDD>false</enableEDD>
  </properties>

</project>
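If you'd rather flip that switch per-machine than per-invocation, a profile in ~/.m2/settings.xml along these lines would do it (the profile id here is arbitrary):

```xml
<settings>
  <profiles>
    <profile>
      <id>edd</id>
      <properties>
        <enableEDD>true</enableEDD>
      </properties>
    </profile>
  </profiles>
</settings>
```

Then 'mvn -Pedd install' enables the schema update without the -D flag.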

Postfix:

That's it my friend. It is a bit of a long post (hopefully making up for the previous one's shortness) but there was rather a lot to cover. Let's review what I've got here:

A) How to create a Java Mojo implementing your custom Maven 2 plugin functionality.

B) How to create a custom classloader to deal with oddball classloader issues.

C) A clever, IMO, way to isolate the classloader issues and minimize reflection. (Though, honestly, UtilityExecutor could be a bit more clever and less hackish.)

D) How to specify a custom package type with its own custom lifecycle.

E) How to use your shiny new plugin and its custom packaging type.


Yes, it took me a bit of time to get everything working. That's one of the problems with jumping into the deep end of the pool when you're first learning something new. On the other hand, I'm reasonably confident that I've faced many (hopefully most) of the odd edge-cases when creating plugins. With any luck this will be a true statement and I can lean on these experiences when I create my next, and hopefully simpler, plugin.

As always, thanks for reading. Feel free to post feedback & questions. Peace ya'll.

Monday, March 05, 2007

From maven to mvn : Part 5 -- A Groovy Timestamp

Good afternoon and happy Monday! I hope you had a great weekend, I know I did. I have to apologize for today's entry up front. It isn't about writing a plugin as I had hoped it would be. It's also somewhat brief. Well, it's Monday and there have been more than the usual number of distractions...

I've been bothered about abusing the maven-buildnumber-plugin to set a timestamp property used in ${foo} replacement. I happened to be doing some reading over the weekend and came across the groovy-maven-plugin, which lets you run a random bit of groovy script during your build process. Now I still intend to explore the exciting world of creating my own maven plugins but I think this is a good place to start. In fact, when I *do* write my own plugins in the future they will very likely be written in groovy using this technique.

So, today's brief entry will (a) drop the buildnumber plugin and (b) use the groovy-maven plugin to replace its functionality.

Begin by adding repository entries to your pom:
<repositories>
  <repository>
    <id>Codehaus Snapshots</id>
    <url>http://snapshots.repository.codehaus.org/</url>
  </repository>
</repositories>

<pluginRepositories>
  <pluginRepository>
    <id>Codehaus Snapshots</id>
    <url>http://snapshots.repository.codehaus.org/</url>
  </pluginRepository>
</pluginRepositories>

Note: If you already have repositories declared just add these new entries, don't blow away what you've got.

Next, declare the plugin and some script for it to execute:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>groovy-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>generate-resources</phase>
      <goals>
        <goal>execute</goal>
      </goals>
      <configuration>
        <source>
          <body>
            import java.util.Date
            import java.text.MessageFormat
            def timestamp = MessageFormat.format("{0,date,yyyy-MM-dd HH:mm:ss}", new Date())
            System.setProperty("timestamp", timestamp)
          </body>
        </source>
      </configuration>
    </execution>
  </executions>
</plugin>


That's it! Now, during the generate-resources phase of the maven build lifecycle groovy will execute the script to set the system property timestamp to the current time.

That's it for today. I have to run off to take care of some other things. I promise you a better Part 6.

Thursday, March 01, 2007

From maven to mvn : Part 4 -- Transitive Dependencies

So I took a break from maven yesterday to do some performance investigation using JAMon and AspectJ. We don't need no fancy-pants monitoring tools. Just a bit of advice, a Monitor or two and an editor that can handle huge log files.

But that's not what I'm here to talk about today.

Today I'm continuing the m1-to-m2 migration and turning my attention to my nasty pile of dependencies. With m2's way cool transitive dependency management I hope to drop my 300 lines of dependencies in pom.xml to something a bit more manageable.

Now you probably don't want me to dump the original 300 lines here so I can't really show you what I'm starting with. However, the high level framework bits include: spring, hibernate, acegi, aspectj and misc Jakarta commons. If you've worked with any of these you know that each one introduces still other dependencies you have to deal with.

To get started I simply commented out all of the dependencies we didn't produce in-house and started compiling. AspectJ was the first thing mvn complained about so now I have to go figure out how to add it correctly... I mentioned this link earlier (Part 2 I believe). I'm not a big fan of repeating someone else's words so please take a moment to read the link. I'll wait.

Did you see this bit: "site:www.ibiblio.org maven2 log4j" If you didn't, go back and look for it. That is the secret to finding your m2 dependencies. I will caution you, however, that maven-metadata.xml may not always list the most recent available versions. It's worth doing a directory listing to see what is there.

For instance, "site:www.ibiblio.org maven2 aspectj" gets me to a maven-metadata.xml that does not list version 1.5.2a which, as it turns out, is the one I want. The directory listing, however, shows me a 1.5.2a subdirectory which has exactly what I need.

Also in maven-metadata.xml you will find the appropriate groupId and artifactId values you need for your dependency. Armed with all of this information I can now add the appropriate pom.xml entry:
<dependency>
  <groupId>aspectj</groupId>
  <artifactId>aspectjrt</artifactId>
  <version>1.5.2a</version>
</dependency>

Good. That was easy. Now rinse & repeat for my other dependencies and lets see where we end up...

Well, that didn't last very long. My very next dependency is hibernate. My google search showed up beanlib (which is really cool BTW) and the old net.sf.hibernate stuff but not the 3.2.1.ga version I need. Fortunately, I know how the m1 repository is laid out and a bit about how the m2 repository is laid out. On a hunch I try a reasonable URL and find something useful. (I also noticed that 3.2.2.ga is available so I may consider an upgrade soon. But I digress...) Now I know the groupId and artifactId values for my hibernate dependency so that gets added to my pom.xml.
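Guessing URLs works because the m2 repository layout is completely mechanical: dots in the groupId become directories, then artifactId, then version, then artifactId-version.extension. Just to illustrate the rule in code:

```java
public class RepoPath
{
    // Build the repository-relative path for an artifact, m2-style.
    public static String toPath(final String groupId,
                                final String artifactId,
                                final String version,
                                final String extension)
    {
        return groupId.replace('.', '/') + "/" + artifactId + "/" + version
            + "/" + artifactId + "-" + version + "." + extension;
    }

    public static void main(final String[] args)
    {
        System.out.println(toPath("org.hibernate", "hibernate", "3.2.1.ga", "jar"));
        // org/hibernate/hibernate/3.2.1.ga/hibernate-3.2.1.ga.jar
    }
}
```

Append that to the repository root and you have your download URL.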

Now rinse & repeat. Occasionally a dependency shows up with a groupId or artifactId that I wasn't quite expecting from my m1 days but those things are easy to work with.

javax.transaction introduced a new wrinkle. Apparently you have to go get that from Sun yourself. Thanks guys. <sigh/> Fortunately m2 tells us exactly what to do:
Missing:
----------
1) javax.transaction:jta:jar:1.0.1B

Try downloading the file manually from:
http://java.sun.com/products/jta

Then, install it using the command:
mvn install:install-file -DgroupId=javax.transaction -DartifactId=jta \
-Dversion=1.0.1B -Dpackaging=jar -Dfile=/path/to/file

Path to dependency:
1) com.myCompany.personal.jcej:contacts:war:1.0-SNAPSHOT
2) org.hibernate:hibernate:jar:3.2.1.ga
3) javax.transaction:jta:jar:1.0.1B
Oh what fun... However, if you run across other dependencies that are not available at the venerable ibiblio you can use this same technique to install them to your local repository. Certainly not ideal but tolerable until they do become available.

It is worth pointing out that as you make your way through your dependencies you may be able to remove some you added previously. For instance, I added org.hibernate/hibernate because my build broke without it. Later I added org.hibernate/hibernate-annotations because the build broke on that. The latter requires the former, so I can remove my explicit org.hibernate/hibernate dependency and rely on the transitive dependency from org.hibernate/hibernate-annotations. It is a bit more work to backtrack this way but in the long run it will make for a much smaller and easier-to-maintain pom.xml. It is worth my time to inspect my dependencies' poms.

While exploring that (because I know acegisecurity depends on spring) I discovered that some projects specify their dependency versions via properties. Now that's a great idea, because it centralizes dependency version management and lets me override their required version by overriding the property.

Thus I now have a properties section in my pom.xml something like this:
<properties>
  <spring.version>2.0.1</spring.version>
  ...
I just have to pay attention to my dependencies' dependencies. But I was going to do that anyway. Using the properties is easy:
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring</artifactId>
  <version>${spring.version}</version>
</dependency>
[ time passes ]

So after about an hour (which isn't bad considering the normal afternoon interruptions) I get to the point where things are successfully compiling. But my unit tests are failing!? Looking at the surefire reports clues me in to the thought that maybe some of my dependencies aren't consistent:
    java.lang.NoSuchMethodError: org.springframework.util.ObjectUtils.nullSafeToString(Ljava/lang/Object;)Ljava/lang/String;
That don't look good...

Build again with the tests disabled (mvn -Dmaven.test.skip=true install) and see what we have in our war's lib directory. Yea... Well... Who put spring 1.2.8 jars in there? This is a case where transitive dependencies hurt rather than help.

OK, roll the sleeves up and crank up the tunes. Let's go figure this one out. First, we need to tell maven to tell us more: mvn -X -Dmaven.test.skip=true install. It's probably wise to capture that via nohup or script since it will be a tad verbose. I know that acegisecurity depends on spring and after a few minutes I realize that the spring.version property I was so excited about didn't really help me. Sure, I set it to 2.0.1 in my own pom.xml but that didn't override the value in the acegisecurity dependency's pom. Drat. After a bit of reading I come to the conclusion that I need to exclude spring from acegisecurity's dependencies. Once I do this I can specify the spring versions I want in my own pom.xml and we should be in good shape.
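An exclusion looks something like this (the acegi coordinates and version here are from memory; use whatever your own pom already declares -- only the groupId and artifactId go in the exclusion):

```xml
<dependency>
  <groupId>org.acegisecurity</groupId>
  <artifactId>acegi-security</artifactId>
  <version>1.0.3</version>
  <exclusions>
    <exclusion>
      <groupId>org.springframework</groupId>
      <artifactId>spring</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

With spring excluded from acegi, my own top-level spring dependency is the only one in play.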

Better but still not good. We have the correct version of spring and that's good. But my hsqldb-based unit tests are failing. They're complaining about "The database is already in use by another process" and I wasn't getting that before I started mucking about with transitive dependencies. This turned out to be similar to the spring problem in that struts-menu was pulling an old version of hsqldb.

There seems to be some difference in the way transitive dependencies are excluded. In the acegisecurity-uses-spring case spring is specified in the dependencyManagement section. When I add an exclusion to my acegisecurity dependency declaration all works as expected. Namely, my spring dependency is used instead of acegi's. However, struts-menu declares its dependency on hsqldb in its dependencies, not under dependencyManagement. When I add an exclusion to struts-menu not only does mvn not include the version of hsqldb that struts-menu wants but it does not include *any* version of hsqldb though it is clearly listed in my own dependencies. Could be operator error... Just be cautious. (If you have any word on this please drop me a comment so I can quit pulling my hair out over it.)

That's it for today. My m1-built war still has different dependencies than my m2-built war but I'm confident that those remaining dependencies can be solved by applying the steps above. It's raining and 5:00 and I would like to get home before dinner gets cold. I'll wrap this up in the morning and if I make any amazing discoveries I'll document 'em here for you. Otherwise you can safely assume that things went well.

Part 5 coming soon. I'm not quite sure what that'll be but it might have something to do with creating my own plugin. Check back soon...

(Oh, and if you're keeping score, my 300 lines of dependencies are now about 120 lines. Maybe less if I drop the odd comment here and there. I call that an improvement!)

Tuesday, February 27, 2007

From maven to mvn : Part 3 -- Filtering Resources & Timestamps

Good morning all!

We left off yesterday with an application that will build and tests that will pass. We even have a war file sitting in our target directory. Before I try to deploy that, however, there are some loose ends I want to clean up.

Our goals for Part 3 are:
- @foo@ (or ${foo}) replacement
- attempt to deploy the war

M1 grew out of a set of ant build scripts. As such, filter-when-copying worked pretty much like you would expect from ant. In other words, any @foo@ text in a filter-when-copying resource would be replaced by the foo property value. M2 uses the newer ${foo} syntax and that's probably a good thing.

The first thing I want to do is identify the files I need to change:
  grep -r '@.*@' src/ | grep -v /.svn | grep -v /images/

It turns out that I'm only using a few properties. One no-brainer is the application version that comes from the pom. The others are application specific properties I need to deal with.

So, @pom.artifactId@ should simply become ${pom.artifactId}. Enabling filters is easy enough to do by adding <filtering>true</filtering> to the appropriate resource tag(s).
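For example (assuming the conventional src/main/resources layout; adjust the directory to wherever your filtered files actually live):

```xml
<build>
  <resources>
    <resource>
      <directory>${basedir}/src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
  ...
</build>
```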

That gets me very close but not quite done... You see, I'm filtering /WEB-INF/geronimo-web.xml. The one that gets copied into target/test-classes (because of my pom's testResources as discussed in Part 2) is fine. However, the one in target/contacts-1.0-SNAPSHOT (which is managed by the war plugin) is not filtered. Hrm...

It turns out that the war plugin is responsible for managing that bit of the project. Well, that makes sense. So, to change its behavior we need to configure the plugin:

<build>
  ...
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <configuration>
        <webResources>
          <resource>
            <directory>${basedir}/src/main/webapp</directory>
            <filtering>true</filtering>
            <includes>
              <include>**/*.xml</include>
            </includes>
          </resource>
        </webResources>
      </configuration>
    </plugin>
  </plugins>
</build>
Now if you read the plugin documentation about this topic you will probably reach the conclusion that I'm doing it wrong. That is probably the correct conclusion. What I should do is create a ${basedir}/otherResources directory and put the to-be-filtered things there. Well, as with my dependency ick, I'm simply going to defer that to later. I still haven't quite decided for myself whether I like the idea of separating the resources from everything else anyway...

So, now I have my ${pom.*} properties filtered but I still have my application-specific properties to contend with. In particular, I have a myApp.timestamp property supplied by the venerable ant <tstamp/> tag that I slam into footer.jsp to display the build time (very handy for our QA folks.) It turns out that getting the timestamp (oh so easy with m1) is a bit of a trick with m2. Thanks to some good folks on the forum for pointing out a clever use of the build number plugin.

First -- tell maven where to find the plugin:
<pluginRepositories>
  <pluginRepository>
    <id>tlc</id>
    <name>TLC Repository</name>
    <url>http://commons.ucalgary.ca/pub/m2</url>
  </pluginRepository>
</pluginRepositories>

Second -- configure the plugin to provide a timestamp:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>maven-buildnumber-plugin</artifactId>
  <version>0.9.4</version>
  <configuration>
    <format>{0,date,yyyy-MM-dd HH:mm:ss}</format>
    <items>
      <item>timestamp</item>
    </items>
    <doCheck>false</doCheck>
    <doUpdate>false</doUpdate>
  </configuration>
  <executions>
    <execution>
      <phase>validate</phase>
      <goals>
        <goal>create</goal>
      </goals>
    </execution>
  </executions>
</plugin>
In my case I also have to add an <include...> to my filtering since I want my timestamp in footer.jsp. With all of that in place I now get the date/time in my footer.jsp via ${buildNumber}. It isn't nearly so easy or flexible as with m1 but right now I'm interested in results, not cleverness. Perhaps I'll find something better in the future. (Or maybe write my own timestamp plugin!)

(BTW: If you have simple, static properties they are very easy to define in an external file or directly in pom.xml as described by the link above.)
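For the record, the external-file variant amounts to listing a plain name=value properties file as a filter in the pom. A sketch (the path here is hypothetical; put yours wherever makes sense):

```xml
<build>
  <filters>
    <!-- hypothetical path to a plain name=value properties file -->
    <filter>${basedir}/src/main/filters/filter.properties</filter>
  </filters>
</build>
```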

OK, so now I'm filtering and building and testing and packaging so I should have a deployable war. I'm not brave enough to try to deploy via m2 (perhaps another day) so I'll fire up my application server of choice and deploy the war manually. Nothing really to write about here. You probably don't care. I just thought you would like to know that it actually deploys correctly.

You will have to excuse me now; there are some other, non-maven things I need to address this afternoon. (Yes, it is afternoon now. What began in the morning has bled over the international lunch line due to the usual distractions.) If my meeting schedule allows I'll pick up Part 4 tomorrow and try to do a bit of cleanup on my dependencies.

Monday, February 26, 2007

From maven to mvn : Part 2 -- Java 5 & AspectJ

In this morning's Part 1 I walked through last Thursday morning's activities that began my process of converting a simple project from maven 1.0.4 to 2.0.4. This afternoon (actually, after I come back from the meeting I'm about to go off to) I'll run through last Thursday afternoon's activities where I dealt with dependencies, AspectJ and other details to get my application's war file built.

[ time passes ]

OK, I'm back from my meeting and ready to dig into Part 2. My goals for Part 2 are carried over from Part 1:
- compile the classes and aspects
- build a war (not necessarily deployable)
- test cases should execute and pass
- @foo@ replacement is not necessary

As you recall, Part 1 ended with build failures. I was intentionally vague about the details since your details will differ from mine. However, we have to solve the specific case so we will tackle my specific issues. Hopefully this will give you some insight into your own solution.

We start with this noise:

path/to/UserDelete.java:[26,5] annotations are not supported in -source 1.3
(try -source 1.5 to enable annotations)
@Override

Great! I'm using the shiny new Java 5 stuff and maven thinks I'm stuck in dinosaur land. Well, clearly, a lot of people are using m2 to build Java 5 applications so this must be an easy thing to solve.

As it turns out, the solution is documented on the maven site quite clearly. Rather than paste all of that here I'll just point you there. Be sure you specify 1.5 (or whatever comes after that) and you'll be good to go.
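For the impatient, the documented fix boils down to configuring the compiler plugin. A minimal sketch:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <!-- accept Java 5 source and emit Java 5 bytecode -->
        <source>1.5</source>
        <target>1.5</target>
      </configuration>
    </plugin>
  </plugins>
</build>
```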

Now my compile is marginally better. No complaints about @Foo but lots of complaints because of missing dependencies. Time to address that...

Recall that I'm starting with a project that currently builds under m1. This means that (a) I already know what my dependencies are and (b) I have a copy of them in ~/.maven/repository. By inspecting my previous project.xml (or the /WEB-INF/lib directory of the m1-built war) I can quickly identify my dependencies.

Again, the maven documentation comes to the rescue. Dependency management is at the heart of what maven does for you. Follow the link to get the word on defining your dependencies.

M2 does this really cool thing called transitive dependency management. What that means is that if you depend on foo and foo depends on bar then you only need to define foo in your pom.xml. This is vastly different from (and much easier than) m1 where you had to define not only your own dependencies but your dependencies' dependencies as well.
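To illustrate with that hypothetical foo/bar pair (the coordinates below are made up):

```xml
<!-- declare only your direct dependency; if foo depends on bar,
     m2 resolves and downloads bar for you -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>foo</artifactId>
  <version>1.0</version>
</dependency>
```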

At this point in our series I am not going to be that clever. The project.xml from my m1-buildable project lists my entire dependency tree so I'm going to simply replicate that in my pom.xml. The result will be hideous but I'll get smarter later (possibly in Part 3 if I'm lucky).

One thing you will notice is that you no longer need to worry about the <property.../> tags to indicate what is or is not in the final war. (You will notice this because if you copy the entire dependencies section from your project.xml to pom.xml you will get xml violations when you try to compile your project.) M2 is smart enough to figure that out based on the dependency's scope but we'll worry about that later as well.
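As a sketch of what that scope business looks like (version hypothetical): the servlet API is a classic case, needed at compile time but supplied by the container, so "provided" keeps it out of WEB-INF/lib:

```xml
<dependency>
  <groupId>javax.servlet</groupId>
  <artifactId>servlet-api</artifactId>
  <version>2.4</version>
  <!-- provided = on the compile classpath but not packaged in the war -->
  <scope>provided</scope>
</dependency>
```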

So, after adding my dependencies (300 lines worth) I can try again to build my application. Unfortunately many of my dependencies are not available in the m2 repository. Truthfully, I probably specified them incorrectly. What I should do is find them in the m2 repository and respecify them with the correct group and application id values. Well, I'm in a hurry and I know I have them in my local m1 repository so I'm not going to do the right thing. Instead, m2 suggests I can do this:

10) ojdbc:ojdbc:jar:1.4

Try downloading the file manually from the project website.

Then, install it using the command:
mvn install:install-file -DgroupId=ojdbc -DartifactId=ojdbc \
-Dversion=1.4 -Dpackaging=jar -Dfile=/path/to/file

Path to dependency:
1) foo.bar.baz:contacts:war:1.0-SNAPSHOT
2) ojdbc:ojdbc:jar:1.4

And that is exactly what I intend to do... For every missing dependency I will locate it in my ~/.maven/repository directory and invoke the magic charm above to put it into my ~/.m2/repository directory. As I stated above, this is not the Right Way to do it but it is expedient. Once I get smarter (by reading more at the maven site) I will apply what I've learned.

Now my dependencies are in place and I can actually build the code! Sadly, however, my unit tests are failing. <sigh/> Well, this is a Spring-based application and that means a pile of configuration xml files. Unsurprisingly, they're not in target/test-classes and that's why our tests are failing.

You can find the answer to resources in the getting started guide. Since I'm coming from an m1 project mine were originally in src/java (now at src/main/java) as well as src/test (src/test/java). M2 seems to want them at src/*/resources but I'm in no mood at this time of day to extrude them from their current location. Therefore... I will use what the GSG has told me and blaze my own path:
<build>
  ...
  <resources>
    <resource>
      <directory>${basedir}/src/main/java</directory>
      <includes>
        <include>**/*.xml</include>
      </includes>
    </resource>
  </resources>
  <testResources>
    <testResource>
      <directory>${basedir}/src/test/java</directory>
      <includes>
        <include>**/*.xml</include>
      </includes>
    </testResource>
    <testResource>
      <directory>${basedir}/src/main/webapp</directory>
      <includes>
        <include>**/*.xml</include>
      </includes>
    </testResource>
  </testResources>
</build>
Are you still with me? We're getting closer. Really. I promise. If we compile now we will find our Spring resources in target/test-classes where they're needed. Our tests are still going to fail, however, because... we are not compiling my AspectJ aspects. Oops...

Maven (both m1 and m2 but more so with m2) is built around plug-ins. It turns out that there is one just for AspectJ. You can read all about it at codehaus. Configuring pom.xml to include it is quite simple:

<build>
  <plugins>
    ...
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>aspectj-maven-plugin</artifactId>
      <configuration>
        <source>1.5</source>
        <target>1.5</target>
      </configuration>
      <executions>
        <execution>
          <goals>
            <goal>compile</goal>       <!-- use this goal to weave all your main classes -->
            <goal>test-compile</goal>  <!-- use this goal to weave all your test classes -->
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

(I would like to say that's the last time you will see a pom.xml snippet in this series but that would probably be a lie.)

So, now we have all of that in our pom.xml, the compiler set to 1.5 compliance and our 300 lines of dependency declaration... mvn install reveals... A successful compile, test, package cycle! We now have a war! I doubt very much that it will deploy but, my friends, it is past time for me to go home and that is exactly what I intend to do.

Tomorrow I will begin exploring Part 3. If that goes well and I don't have too many meetings I will endeavor to post that here shortly thereafter. Please be patient, you never quite know what a Tuesday may bring...

From maven to mvn : Part 1 -- Building a war

I've been a huge fan of maven since the 0.4 days when it was a pile of reusable ant scripts. I encourage any and all Java developers I come across to convert their projects and use it. However, I'm not going to try to convince you here that you need to use maven. If you do then you're already on the path. If you don't then, well, there are other sites that will attempt to convince you. What I intend to do here is describe my journey from maven 1.0.4 to maven 2.0.4.

For a long time now (and I say "long time" because I really have no sense of time) I've been using a modified version of maven 1.0.4. Modified in the sense that I've tweaked a few of the built-in plug-ins, added my own plug-ins and also written a fair amount of value-add in maven.xml. I also want to point out that we use this as the core build framework where I work. Prior to 1.0.4 we used, of course, 1.0.3 and prior to that was some other 1.x. In other words, we've been using some version of maven for a Very Long Time.

I've been following the maven 2 (henceforth simply "m2") effort since its inception. The fine folks there have made some huge improvements. IMO m2 is so very different from m1 (maven 1.x, that is) that it very nearly should have been called something other than "maven". That may be a bit extreme but I simply want to point out that migrating from maven 1.x to 2.x is not a drop-in scenario.

For quite some time now (where "quite some time" is something less than "long time") I've been pondering conversion to m2. I've had a few false starts where a plug-in I needed wasn't there. I've also had a pretty busy schedule such that I just couldn't dedicate the time to learn m2 well enough to give it a fair shot. Fortunately, some other folks I work with have had a bit of time and have begun their own effort to move to m2.

So, with that being said, this series will track my efforts to convert a very simple web application ("Contacts") from m1 to m2. While simple, the project does have some requirements that do not fit m2's out of the box behavior. My theory is that if I can build this application I can probably then convert our core m1-based build framework to m2.

In this episode my goals are:
- compile the classes and aspects
- build a war (not necessarily deployable)
- test cases should execute and pass
- @foo@ replacement is not necessary

Step 1

Build, test and deploy Contacts using the current m1-based build system. I want to be sure that what I think is working is actually working.

Step 2

The application has several components (sub-projects in maven-speak). For this bit I will focus only on the war. Therefore, copy Contacts/view to Contacts-m2/view. Subsequently clean up Contacts-m2/view to get rid of m1 artifacts (maven.xml, etc.)

mkdir -p Contacts-m2/view/
cp --recursive Contacts/view/* Contacts-m2/view/
cd Contacts-m2/view ; rm -f maven.* project.*


Step 3

The project layout has changed. Therefore we have to do some simple relocation:

mkdir src/main
mv src/java src/main/java
mv src/aj src/main/aspect
mv src/webapp src/main/webapp
mv src/test src/test-tmp
mkdir src/test
mv src/test-tmp src/test/java


Step 4

Now we need to have some m2 artifacts so that m2 knows what to do with all of these things. Specifically, we need a pom.xml that describes our project and how to build it.

The easiest way to bootstrap this is to use m2's archetype system:

mvn archetype:create -DgroupId=com.mycompany.app -DartifactId=my-app


Step 5

Well, Step 4 didn't go so well because we're behind a corporate firewall. To deal with this we need to tell m2 how to deal with the proxy. So now we have a ~/.m2/settings.xml file. We may have to come back to this later but we're good for now.
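For the curious, the proxy bit of ~/.m2/settings.xml looks roughly like this (host and port are placeholders; use your own corporate proxy's values):

```xml
<settings>
  <proxies>
    <proxy>
      <active>true</active>
      <protocol>http</protocol>
      <!-- placeholders for your corporate proxy -->
      <host>proxy.example.com</host>
      <port>8080</port>
    </proxy>
  </proxies>
</settings>
```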

Try the archetype again... all good. It will take a while since m2 needs to pull dependencies for its plug-ins.

Step 6

Now that we have the archetype we will take its pom.xml and throw everything else away.

mv my-app/pom.xml .
rm -rf my-app

The changes to pom.xml will eventually be significant. For now, we just need to change the groupId, artifactId, name and packaging tags. The first three are whatever you need for your application. packaging will be war since we're building one of those.
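In my case the edited bits of pom.xml end up looking something like this, using the project's own coordinates (yours will obviously differ):

```xml
<groupId>foo.bar.baz</groupId>
<artifactId>contacts</artifactId>
<packaging>war</packaging>
<name>Contacts</name>
<version>1.0-SNAPSHOT</version>
```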

Note that we could have used the webapp archetype. I really wanted to start with the basics, though, so I went with jar.

Step 7

Now let's build it and see what happens! It's magic, right? Everything should Just Work?

Start with mvn clean just to make sure we didn't flub the pom.xml. There's nothing to clean, of course, but it gives us a good feeling because it cannot possibly fail!

Next try mvn install to build, test and package the application. Ah well... success was short-lived. My application uses Java 5 stuff, AspectJ and a pile of dependencies. With the default pom.xml this build isn't going to complete.

Conclusion

Thus concludes Part 1. I didn't complete all of my goals but I did make some significant progress.

Actual execution took place over a four-hour period during which I was also dealing with my inbox and the inevitable hourly "emergencies" of software development. During that window of time I probably spent between 90 and 120 minutes doing the things documented above. (The actual documentation -- four days later -- took about 45 minutes.)

I'm off to lunch now. When I get back I'll proof-read this post and document Part 2 where I sort out those build failures.

Sunday, February 25, 2007

Specialization

pteropus is something of a stream of consciousness for me. Ideas come to me and some of them come out here. My ideas tend to fall into four broad categories: Christianity, Java geek stuff, other geek stuff, everything else.

For a while now I've been focusing some of my thoughts on my faith into a dedicated blog. I mention it now because I haven't mentioned it before. I struggled a bit with the decision to create the dedicated blog because I'm proud of my faith and want to put it "out there" for all to see. As such, you'll still see some of those thoughts here; thoughts much along the lines of what I've already posted. If you're one of my geek readers I hope you find something of interest in these posts. If so I invite you to visit John-14-6.

On the theory that I may someday feel the need to fire up other specialized blogs I've set up a focal point for my blog-related interests.

Thanks for visiting, come again soon and invite your friends.

Thursday, February 22, 2007

Information Overload

I'm not the first one to notice this and I won't be the last but here it is: We are overloaded with the amount of information we attempt to process each and every day. I'll even narrow it down and say we're overloaded with just the number of websites we attempt to consume every day. Never mind the shows stacking up on TiVo or the CDs we got at Christmas and haven't listened to yet. And you can simply forget the dead-tree-ware (books, that is) that was given to us on our birthdays months and months ago! At latest count I have over 50 blogs I monitor every day. It's no wonder I rarely have time to update this one!

So what are we going to do about it?

Well, the naive would say that we simply cut back. Drop my 50 to 5 and be done with it. OK, fine, but the problem with that is I've spent a lot of time and energy to be the best Java programmer (and C++ programmer before that) I can possibly be. I have a passion for what I do and if I'm going to bother to do it then I want to be good at it. To do that means that I need to keep up with the technology and all that's new. After all, you don't get better by doing the same thing the same way all the time. You get better by learning from folks who know things you don't know. To do that means reading lots and lots of stuff put out by people who are a lot smarter than I am. And that sets me up for the information overload.

So, again, what are we going to do about it?

Well, these days any website (be it blog or other) worth visiting has some kind of feed available. Even my own poor blog here has an atom feed and I'm certainly not in the top set of sites to visit! (Actually, I'm not sure that anyone visits at all. Maybe I'm just typing in a vacuum...)

Armed with some sort of site feed we can begin to leverage some pretty cool tools called aggregators. If you don't know what I'm talking about go ask Wikipedia. From my limited investigation they fall into two broad categories: local installables and web-based. (Of course I guess you could say that most applications these days fall into those categories...) My daily digital life involves three to six computers so a local install is not really useful thus I've begun to look into the online variety.

My first attempt to fence in my mass of feeds was Google's customized home page. If you're not doing this then you're missing the boat. And not just any boat. I mean the boat. Google's customized home page is what every ISP or portal-thing vendor has been trying to do since the www got started. There are hundreds, nay, thousands of widgets you can drop on there to customize your web experience. Beyond that, anything with a feed can be dropped on just as easily. Soon, however, I reached a critical mass where I had a half-dozen customized tabs with ten or twenty widgets each and things were beginning to be ignored.

My new favorite toy is Google Reader. I played with it a year or more ago when, I think, it went by the name Google Lens. I'm probably wrong about that name because every time I say it nobody knows what I'm talking about. Whatever you name it, it's cool.

Now I'm not going to turn this into a Reader tutorial. You're probably a pretty smart person and can figure it out for yourself. I just wanted to take a moment to make you aware of it in case you're not already. However... here are some of the cool things I can do: tag sets of related feeds, mark come-back-to-read-later articles with a big yellow star and share my favorite articles in an automatically maintained meta-blog thing. That last one is wicked cool because it actually has its own RSS feed, so it's like a feed of feeds kind of thing, and as a coder I really dig the recursion of it.

Along the same lines as Reader is Bloglines. I tried it for a while before I settled on Reader. It has an equally cool set of features. There is obviously overlap as well as some interesting features that set it apart. (FWIW I settled on Reader because of some odd usability issues I had with Bloglines. YMMV)

I encourage you to try 'em both out and any others you run across. It isn't really about which one you settle on as long as you settle on something. In my opinion (of course it's mine since it's my blog) there is just no way to keep up with the torrent of information without some sort of tool that will help us organize it all.

My other favorite toy is del.icio.us. del.icio.us does for bookmarks what Reader does for feeds. It lets me aggregate all of my bookmarks into one web-accessible location and organize them by any relevant set of tags I want. What's more, my friends with a del.icio.us account can drop things onto my pile and vice-versa. They even have these cool browser buttons that let me tag a link without visiting the page. So, now, instead of maintaining an html page of my bookmarks or trying to sync six to twelve browser instances' bookmarks I just install those cool buttons everywhere I go and link 'em online.

Clearly these are not the only tools available to help manage the information overload. Even if they were the only tools for their niche we're still faced with all of our other input sources. Rather than simply sigh and succumb to the avalanche, take a moment to find tools that help get things under control. There are a lot of smart folks out there with the same problem and some of them are giving us some powerful tools to take back control. If I wore a hat I'd definitely take it off to 'em!

Wednesday, January 24, 2007

Convergence

Bob just wrote an interesting entry that dovetails nicely with some things on my mind that fit with the current message series at church. Weird.

Wednesday, January 10, 2007

WOW

Let me repeat that: WOW

I just got back home from what our church calls Wednesday Night Live. It is a periodic meeting for those of us who call Mountain Lake home. Our pastor generally lets us in on some of the things that are coming up over the next few months. Tonight was no exception to that and, in fact, tonight was a totally pumped-up version of a normal WNL (if ever any of our WNLs could be considered "normal" by anyone).

Mountain Lake will be seven years old this coming weekend. We have been attending for the last three of those. God has done some amazing things in all of that time. He has changed the life of everyone who has walked through those doors. Ask anyone who has been there, we've all seen and experienced it personally.

The new chapter introduced tonight by Shawn is going to be an order of magnitude beyond the amazing things we've already seen. Fasten your seat belts and hold on tight. It's going to be an awesome journey!

Tuesday, January 09, 2007

You need a blog...

This is what a good friend of mine told me today at lunch.

I promptly replied, "I have a blog. I just never have time to update it."

And that really brings us to the crux of the matter doesn't it? I mean, I'm just so busy that I never seem to have any "me" time for things like blogging.

I'm not busy in the single parent of six working two jobs to make ends meet while going to night school to get a PhD in quantum physics kinda busy. I'm about as busy as you are. Maybe a little more or a little less but mostly about as busy as your average person.

Now don't misunderstand me. I'm grateful that I have a full schedule. I'm active in my church, I have an awesome family, a career I actually enjoy and friends I can hang out with from time to time. I'm grateful for all of these things and the time to enjoy them. If I didn't have a full schedule it would be because some of these, and other important, things were missing. That would be a Bad Thing.

When I say I'm too busy to blog I guess I mean that after all of the high-priority life things are taken care of and I find myself with an hour or two of responsibility-less time on my hands I have trouble choosing the thing to do next. Do I catch up on that book I've been putting off? (I have about a dozen in the queue.) Or maybe the latest episode of whatever that TiVo has been holding for a couple of months now? Then there's that DVD still in the shrink-wrap. Or maybe those pet Java projects I've been tapping away at for a year or more. And, oh yeah, the blog.

My friend who recommended that I get a blog knows me well. I have many opinions, insights and so forth that I would like to share. Some are truly useful while others are probably useless.

So, here is my first entry of '07. With luck it will be the first of many. Let's just hope I don't get distracted by... Oh! Shiny thing. Gotta go!