
NullPointerException, probably…

Unified (as much as possible) Logging Using SLF4J


Integrating, integrating, integrating. That’s what we do in Java enterprise development. Persisting objects with Hibernate wrapped by JPA using C3P0 (or JTA?) (or MongoDB over Morphia?), processed with jBPM, created by JAXB (jackson-json?) from JAX-RS, scheduled by Quartz … (a few dozen frameworks later) … all this glued together with Spring (or Guice?), deployed on Jetty (or Tomcat, JBoss, Resin?) into a cluster by Terracotta (or Hadoop, GigaSpaces, JBoss Cache, Infinispan?). Ah, and all this built using Maven (or Gradle?) with Artifactory on Jenkins. I surely forgot half of the frameworks we constantly use.

Generally, we don’t care much about the internals of the frameworks we use (as long as they are good) – the whole encapsulation idea is an undoubtedly good thing. But besides the API (part of which is the configuration), frameworks have another user-facing end – the logging. When we build a system, we want it to behave as one system – a single configuration at one end, and a single log at the other (split it into different files if you wish, but it should still be one unified logging system).

The reality is that there is no de-facto standard for logging. The de-jure standard – JUL (java.util.logging) – is not very popular because of its lack of functionality (compared to the alternatives) and its suboptimal performance. Then there is Log4J, which almost became the standard, but did not. There is Logback, which is a from-scratch rewrite of Log4J, and there are facades (JCL and SLF4J), which try to unite this whole zoo, and some others, which you have probably never heard of, like syslog4j*, the logging framework by the Object Guy, jLo, MonoLog, Lumberjack, Houston, JTraceDump, qflog, LN2, TracingClassLoader, SMTPHandler, Log4Ant, Simple Log, Log Bridge, Craftsman Spy, Pencil, JDLabAgent, Trace Log, JDBC Logger, LimpidLog and Microlog.

So be it, you’d say – why not have many logging tools, good and diverse! Well, the problem, as I’ve already mentioned, is that they leak out of the frameworks. Their diverse configuration leaks from one end, their diverse output from the other. Spring uses Log4J over JCL. So does Hibernate. Jetty uses Logback over SLF4J. Some (like Terracotta modules) use plain Log4J, Jersey uses JUL. This means we end up with 5 separate configurations (Log4J, SLF4J, Logback, JCL and JUL) and 3 different types of log files (Log4J, Logback and JUL). What a system!

To make a long story short – how can we achieve the desired consolidation? Clearly, we need a facade. There are two in common use – SLF4J and JCL. JCL is known for its classloader hell; SLF4J is newer, better performing, smarter, simpler to use and generally provides better quality for the same buck (well, no buck – both are open source, of course), so we’ll stick with it. SLF4J is an adapter – a thin API layer to and from different logging implementations. Yep, both ways. It means that with SLF4J we can use the JUL API on top and log using Log4J at the bottom!

First we need to pick an actual logger. Log4J was considered the best choice up until recently (2006), when Ceki Gülcü decided he needed a fresh start and wrote from scratch a new Java logging framework, just better than Log4J, called Logback. We’ll give it a try as our underlying logging implementation (we can switch it in a moment, as we are using a good facade, remember?).

So, here’s what we have to do:

    1. Establish our own good logging:
        1. Add Logback to our classpath
        2. Add SLF4J API to our classpath

      Done here. Now our own brand new code will use top-notch logging.
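
      For example, here’s a minimal sketch of such brand new code (the class and the message are made up; the API is the real SLF4J one):

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        public class OrderService {
            // we code against the SLF4J API; Logback picks it up at runtime
            private static final Logger log = LoggerFactory.getLogger(OrderService.class);

            public void placeOrder(String orderId) {
                // parameterized message: no string concatenation when DEBUG is off
                log.debug("Placing order {}", orderId);
            }
        }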

    2. Now for the tricky part. Let’s make the example stack I listed above take its configuration from one source (our config files) and write to one target (the files listed in our configuration):
      1. All the tools using SLF4J will just work. That includes dozens of Apache projects, including Camel and Mina, some SpringSource projects and many others.
      2. Now let’s start rolling with all the rest. This is how you do it:
        [figure: bridging architecture]
        1. Jakarta Commons Logging:
          1. Remove commons-logging.jar from your classpath. Usually it is a transitive dependency of the framework, so you need to instruct your build tool on how to do it. What a lucky coincidence – I just wrote a short and instructive blog post about how to do it!
          2. Add jcl-over-slf4j.jar instead. It contains an alternative implementation of the commons-logging API, so the code will run just fine.
        2. Log4J:
          1. Same goes here! Remove log4j.jar from your classpath (again, it would usually be a transitive dependency of the framework – look here).
          2. Add log4j-over-slf4j.jar instead. It contains an alternative implementation of the Log4J API, so the code will run just fine.
        3. JUL:
          1. Well, you can’t remove JUL from the classpath (it’s a part of the JRE, dude). For the same reason, SLF4J can’t reimplement JUL’s API.
          2. Add jul-to-slf4j.jar. It will translate java.util.logging.LogRecord objects into their SLF4J equivalents.
          3. Install SLF4JBridgeHandler and LevelChangePropagator (see the sketch right after this list).
          4. Expect a 20% decrease in performance (so use it wisely).
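
          Here’s a minimal bootstrap sketch for step 3. The class name is made up; SLF4JBridgeHandler.install() is the real jul-to-slf4j API, and LevelChangePropagator is registered separately as a context listener in logback.xml:

            import java.util.logging.LogManager;
            import org.slf4j.bridge.SLF4JBridgeHandler;

            public class LoggingBootstrap {
                public static void init() {
                    // drop JUL's default handlers, then route all JUL records to SLF4J
                    LogManager.getLogManager().reset();
                    SLF4JBridgeHandler.install();
                }
            }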

All done. Now both our code and all the 3rd parties are configured from a single source and write to a single target. Hooray!

* syslog4j claims it is cross-platform. Well, I’ll just quote: “Is Syslog4j cross-platform? Yes! Syslog4j UDP/IP and TCP/IP clients should work in any typical Java JRE environment.”

Written by JBaruch

22/06/2011 at 08:40

Posted in Frameworks


Banning Transitive Dependencies With Maven2/3, Gradle and Ivy


Oh, you are using a build tool with dependency management? Good! Be it Maven2/3, Gradle or Ivy, your life as a devops engineer or developer is much easier. Until you hit it. The evil transitive dependency. How can it be evil, you ask? When the classes in it clash with the classes you really need. Here are some use-cases:

  1. Same dependency, different jar names; two examples here:
    1. The Jakarta Commons renaming effort: commons-io:commons-io:1.3.2 and org.apache.commons:commons-io:1.3.2
    2. The Spring Framework artifact naming convention alternatives: spring-beans, spring-context, etc. in repo1 versus org.springframework.beans, org.springframework.context, etc. in SpringSource EBR.
  2. Different packagings of the same classes, many examples here:
    1. OSGi repackagings: asm:asm:3.2 and org.objectweb.asm:com.springsource.org.objectweb.asm:3.2.0
    2. Modularization of Spring 2.5.6: as single jar and as spring-whatever multiple modules
    3. Xerces and Xalan have been included in the JDK since 1.5. They are still present as transitive dependencies in all the tools which support JDK 1.4.
    4. Alternative packagings with and without dependencies: cglib:cglib and cglib:cglib-nodep
    5. Project merges, like Google Collections, which is now included in Google Guava
  3. Deliberately reimplemented interfaces, for example for bridging legacy APIs to a new implementation, as in SLF4J.
  4. Your patches for 3rd-party tools.

All of those may end up with 2 or more classes with the same name in the classpath. Why is it bad? A Java class is identified by its fully-qualified name plus the classloader that loaded it, so if two classes with the same name reside in the same classpath, the JVM considers them to be the same class, and only one of them will be loaded. Which one? The first one the classloader encounters. Which one will that be? You have no idea.
When the duplicated classes are exactly the same, you will never notice. But if the classes are different, you’ll start getting runtime exceptions, such as NoSuchMethodError, NoClassDefFoundError and friends. That’s because other classes expect to find one API, but encounter another one – the wrong class was loaded first. Not fun.
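
To see which copy actually won, you can ask the class where it came from. A tiny diagnostic sketch (the probed class name is just an example):

    // prints the location of the jar a possibly-duplicated class was loaded from
    public class WhichJar {
        public static void main(String[] args) throws ClassNotFoundException {
            Class<?> c = Class.forName("org.apache.commons.logging.Log");
            // note: getCodeSource() is null for classes loaded by the bootstrap classloader
            System.out.println(c.getProtectionDomain().getCodeSource());
        }
    }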

Now that you know how evil they are, let’s take those bastards down!

Maven 2/3

There is no simple way (so much for Maven’s tagline) to exclude a dependency from all the scopes. I’ll show two options – manual exclusion and working with IntelliJ IDEA:

    1. Stage 1: exclude all the banned dependencies one by one:
      1. Manually edit Maven’s poms
        1. Repeat the following for each evil dependency:
        2. Find which top-level dependency brings the evil transitive hitchhiker with it. This is done using the Maven Dependency Plugin:
          mvn dependency:tree -Dincludes=commons-logging:commons-logging
        3. You’ll get something like this:
          [INFO] com.mycompany.myproduct:rest-client:1.0
          [INFO] \- org.springframework:spring-webmvc:jar:3.0.5.RELEASE:compile
          [INFO]    \- org.springframework:spring-core:jar:3.0.5.RELEASE:compile
          [INFO]       \- commons-logging:commons-logging:jar:1.1.1:compile
        4. Go to the pom.xml with your dependency management (you do use dependency management, don’t you? If you don’t, don’t tell anyone – go and start using it), find the spring-webmvc dependency and add an exclusion to it:
          <dependency>
              <groupId>org.springframework</groupId>
              <artifactId>spring-webmvc</artifactId>
              <version>3.0.5.RELEASE</version>
              <exclusions>
                  <exclusion>
                      <artifactId>commons-logging</artifactId>
                      <groupId>commons-logging</groupId>
                  </exclusion>
              </exclusions>
          </dependency>
      2. Working with IntelliJ IDEA:
        [figure: the Maven dependencies graph in IntelliJ IDEA]
          1. Open Maven Dependencies Graph.
          2. Filter it by the dependency you are looking for.
          3. Select it and press Shift-Delete.
    2. Good job! You nailed them down in the current version of your build. But what happens when someone adds a new 3rd-party dependency and brings some bad stuff with it as transitives? You need to protect your build from this scenario. So, stage 2: fail the build if one of the banned dependencies is ever added to it, using the Maven Enforcer Plugin. Add the plugin to your root project pom:
      <project>
        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-enforcer-plugin</artifactId>
              <version>1.0</version>
              <executions>
                <execution>
                  <id>enforce-banned-dependencies</id>
                  <goals>
                    <goal>enforce</goal>
                  </goals>
                  <configuration>
                    <rules>
                      <bannedDependencies>
                        <excludes>
                          <exclude>commons-logging</exclude>
                          <exclude>cglib:cglib</exclude>
                        </excludes>
                      </bannedDependencies>
                    </rules>
                    <fail>true</fail>
                  </configuration>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>
      </project>
    3. As I mentioned, the Enforcer plugin won’t exclude the unwanted dependencies – it will only fail the build. Once that happens (and trust me, it will), you need to go and exclude them manually, as described in Stage 1 above.

And we are done with Maven. Not fun? Switch your build tool!

Ivy

Well, compared to Maven it’s embarrassing how easy it is to add a global exclusion in Ivy. All you need to do is add an exclude tag, and it will do the job for all the transitive dependencies, both current and future:

    <dependencies>
        <dependency org="org.springframework" name="spring-webmvc"
                    rev="3.0.5.RELEASE" conf="compile->default"/>
        <exclude org="commons-logging"/>
    </dependencies>

Done.

Gradle

Since Gradle uses Ivy under the hood, the same ease applies, but in an even groovier way:

    configurations {
        all*.exclude module: 'commons-logging'
        all*.exclude group: 'cglib', module: 'cglib-nodep'
    }

That’s all! Now your code is bullet-proof against classloading conflicts, and you can do nasty class-replacing stuff, for logging or pleasure.

Written by JBaruch

22/06/2011 at 08:39

Posted in Build


PAX 2010


Just back from Project Automation Experience 2010.

Guess what? It was awesome!

Here’s my summary:

General:

The conference was combined with Rich Web Experience 2010, which was probably the right thing to do for the first-ever project automation seminar. It was announced 2 months ago, bringing 52 registrants (out of ~400 total for both events). Pretty impressive for such short notice on such a narrow subject. The organization was fantastic; everything felt well-planned and well-orchestrated. It was my second NFJS conference, and like with the first one, they delivered!

Sessions:

The first speaker of the conference was the one and only Douglas Crockford with his famous Quality talk. Top quality (pun intended) – funny, entertaining, and touching the right points. Bottom line – read The Mythical Man-Month and Literate Programming.

Another keynote was devoted to Project Automation itself. Hans Dockter laid out his vision on the topic, the idea being – we are entering very interesting times; a better understanding of the needs, combined with the right tools, will enable a whole new level of project automation, way beyond what we are used to today.

Lots of fun and enriching talks, like Tim Berglund‘s Complexity Theory and Software Development, or an experts panel moderated by Ted Neward (who honestly tried to recall who I was and where we had met), and tons of useful and deep sessions and workshops, just to name a few: Fred‘s modularity and smart BRMs talks, Kohsuke’s updated Doing More with Hudson, and the Git, Sonar and Liquibase sessions from Matthew McCullough, Olivier Gaudin and Tim Berglund. The integration with Rich Web Experience gave the participants the opportunity to mix and match the sessions, and there was a lot of stuff to attend – HTML5, CSS3, Flash, iOS and Android development, Grails, Wicket and what not! 10 parallel sessions at any given time – now go choose one!

The sessions I gave:

First of all, it was my first speaking experience outside of Israel. I think I did well. At least the feedback says so :) The high speaker-to-attendee ratio produced small classes with live interaction; I loved it. As I suspected, 90 minutes is too long – it looks like 60 minutes is my favorite format – but I managed to keep the audience awake :)

Here are my sessions:

By the way, Prezi rocks, as usual.

My next goal – speaking at a bigger conference around spring 2011, probably on a different topic, more relevant to my new job.

Florida:

December 1st – 30ºC, sun and ocean – a typical room view. No additional comments needed, I guess.

Everglades and alligators rock too.

Written by JBaruch

06/12/2010 at 14:50

Maven2 to Gradle Convertor – Take II


Well, it’s time for another solution for what I see as the biggest missing feature of Gradle – a decent migration tool from Maven2. Gradle provides some cool Maven2 integration features – you can use Maven repositories, Gradle (well, the Ivy inside Gradle) understands your dependencies’ poms in terms of transitive dependencies, you can even generate a pom for your artifact and deploy it to a Maven repo – but what about the build itself? For now it has to be thrown away and rewritten completely. That is a show-stopper for a lot of projects. They worked so hard to make their Maven work (you know what I mean… Maven == working hard), and now I have to tell them to just throw it away and rewrite? No way! Some time ago I took @psynikal’s script for generating Gradle-style dependencies from Maven ones and improved it a bit to generate a usable Gradle build file out of the pom. The full story is here. That solution, while definitely better than nothing, is far from perfect for a number of reasons:

  1. It is fragile.
  2. It uses maven-help-plugin. Did I say fragile?
  3. Changes in pom.xml aren’t reflected in your build – you need to regenerate the Gradle build files (overwriting them and destroying any changes you made – the script isn’t perfect in that sense).
  4. Probably some annoying bugs.

Now it’s time for something more serious – the m2metadata plugin. In essence, it takes metadata from Maven’s Project Object Model and builds a Gradle project out of it.

More specifically, it does the following:

  1. Asks Maven to parse the poms and settings XMLs, as it does during a regular Maven build.
  2. Sets the group, version and status (snapshot/release) of the Gradle project.
  3. Applies Gradle plugins according to the packaging (jar -> java, war -> war). Currently those two are the only ones supported, but more are coming. (See the sketch below.)
  4. Gets some metadata from well-known Maven plugins and configures Gradle plugins with it. This step currently includes setting the Java compiler level and configuring the sources and resources directories.
  5. Adds repositories.
  6. Adds dependencies (both external and inter-project).

That’s about it.
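
As an illustration of step 3, here’s a hedged sketch of what such a packaging-to-plugin mapping can look like against Gradle’s Java API (the class and property names are made up; the real plugin does more):

    import org.gradle.api.Plugin;
    import org.gradle.api.Project;

    // illustrative only: map the Maven packaging, read from the pom earlier, to a Gradle plugin
    public class PackagingMappingPlugin implements Plugin<Project> {
        @Override
        public void apply(Project project) {
            // assume the packaging was extracted from the pom elsewhere in the plugin
            Object packaging = project.getProperties().get("mavenPackaging");
            project.getPlugins().apply("war".equals(packaging) ? "war" : "java");
        }
    }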

Now for the dark side. Currently, the m2metadata-plugin clashes with the maven-plugin (classloading issues). It can be worked around, but:

  1. The maven-plugin is bundled, so it must be explicitly removed by deleting jars from Gradle’s lib directory.
  2. The true power of the m2metadata plugin is in using it together with the maven-plugin: m2metadata-plugin retrieves the metadata part of the Maven build, while maven-plugin runs Maven’s runtime to execute goals like generating poms and deploying to Maven repositories.

Yet another, more methodological than technical, downside of the m2metadata-plugin is that it preserves the usage of pom.xml. It works, so you don’t touch it, and it stays forever instead of being replaced with a fully-blown build.gradle. In that regard, I see clear benefits in the script solution, which trashes the pom.xml, leaving you with a pure Gradle build, and in conjunction with the idea-plugin gives you all you need to get going.

All in all, once the classloading issues are sorted out, it looks to me that the mission of creating a migration tool can be considered accomplished.

You can find my work here (usage guide in the Wiki, TODOs in the issues). I am going to present it (together with the script, which, as mentioned, has its own benefits) at The Project Automation Experience 2010 in the Java Build Automation Tools Jungle session. The presentation will be posted here once it is ready.

Written by JBaruch

11/10/2010 at 10:27

Integrating MongoDB with Spring


Apparently, most of the visitors to my “Integrating MongoDB with Spring Batch” post can’t find what they are looking for, because they are after instructions on how to integrate MongoDB with plain Spring Core.
Well, the source includes that integration, but it’s on GitHub, and anyway that wasn’t the focus of that post.
So, here’s the integration – short, plain and simple:

  • A properties file with server and database details (resides in the classpath in this example):

    db.host=localhost
    db.port=27017
    app.db.name=app
  1. application-config.xml (or whatever you call it):
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:context="http://www.springframework.org/schema/context"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
               http://www.springframework.org/schema/context
               http://www.springframework.org/schema/context/spring-context-3.0.xsd">
        <context:property-placeholder location="classpath:db.properties"/>
        <bean id="mongo" class="com.mongodb.Mongo">
            <constructor-arg value="${db.host}"/>
            <constructor-arg value="${db.port}"/>
        </bean>
        <bean id="db" class="com.mongodb.spring.config.DbFactoryBean">
            <property name="mongo" ref="mongo"/>
            <property name="name" value="${app.db.name}"/>
        </bean>
    </beans>
  2. The com.mongodb.spring.config.DbFactoryBean class:
    public class DbFactoryBean implements FactoryBean<DB> {

        private Mongo mongo;
        private String name;

        @Override
        public DB getObject() throws Exception {
            return mongo.getDB(name);
        }

        @Override
        public Class<?> getObjectType() {
            return DB.class;
        }

        @Override
        public boolean isSingleton() {
            return true;
        }

        public void setMongo(Mongo mongo) {
            this.mongo = mongo;
        }

        public void setName(String name) {
            this.name = name;
        }
    }
    
  3. Alternatively, if you prefer Java config, here is the same wiring as a @Configuration class (it replaces both the XML bean definitions and the FactoryBean):

    @Configuration
    public class ApplicationConfiguration {

        @Value("${app.db.name}")
        private String appDbName;

        @Value("${db.host}")
        private String dbHost;

        @Value("${db.port}")
        private int dbPort;

        @Bean
        public DB db() throws UnknownHostException {
            return mongo().getDB(appDbName);
        }

        @Bean
        public Mongo mongo() throws UnknownHostException {
            return new Mongo(dbHost, dbPort);
        }
    }
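
A quick usage sketch, assuming the XML flavor above (the collection and the document are made up; the calls are the plain MongoDB Java driver and Spring APIs):

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class Main {
        public static void main(String[] args) {
            // fetch the DB bean from the context and write a test document
            ApplicationContext ctx =
                    new ClassPathXmlApplicationContext("application-config.xml");
            DB db = ctx.getBean("db", DB.class);
            db.getCollection("test").insert(new BasicDBObject("works", true));
        }
    }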

That’s actually it – enjoy. If you feel some part of the puzzle is missing, please leave a comment.

Written by JBaruch

30/05/2010 at 16:25

Posted in Frameworks, Friendly Java Blogs


Towards My Gradle Talk In Beyond Java AlphaCSP Seminar


Maven2 was to build tools what AWT was to UI frameworks: revolutionary, but not without downsides. Concepts such as standardization of the project layout and centralized dependency management are preserved in almost every new and future build tool.

Written by JBaruch

20/05/2010 at 22:23

Integrating MongoDB with Spring Batch


Update (May 30th 2010):
If you are looking for plain Spring Core integration with MongoDB, here’s a post for you.

Spring Batch is a superb batch framework from, well, Spring. It covers all the concepts of batch architecture and, generally, spares you from reinventing the wheel. It’s cool, really. If you have a batch-oriented application, you must go and take a look at Spring Batch. And if you don’t know what a batch-oriented application is, just think about reading-validating-saving-to-db a zillion text files every night, unattended. Now that you know what a batch-oriented application is, go and look at Spring Batch.

Welcome back. As you’ve seen, Spring Batch constantly saves its state in order to be able to recover/restart exactly where it stopped. JobRepository is the bean in charge of saving the state, and its sole implementation uses a layer of data access objects, which currently has two implementations – in-memory maps and JDBC. It looks like this:

[figure: JobRepository class diagram]

Of course, the maps are for testing; the JDBC implementation is the one to use in your production environment, since you have an RDBMS in your application anyway, right? Or not…

Today, when NoSQL is gaining momentum (justifiably, if you ask me), the assumption that “you always have an RDBMS in an enterprise application” is not true anymore. So, how can you work with Spring Batch now? Using the in-memory DAOs? Not good enough. Installing, setting up, maintaining and baby-sitting an RDBMS only for Spring Batch meta-data? Hmm, you’d rather not. There is a great solution – just keep the meta-data in the NoSQL database you use for the application itself. Thanks to Spring, the Spring Batch architecture is modular and loosely coupled, and all you have to do in order to make it work is to re-implement the four DAOs.

So, here’s the plan:

  • Implement each *Dao with a NoSqlDb*Dao
  • Add them to the Spring application context
  • Create a new SimpleJobRepository, injecting your new NoSqlDb DAOs into it (see the sketch right after this list)
  • Use it instead of the one you would create with JobRepositoryFactoryBean
  • Profit
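
Here’s a minimal wiring sketch of that third step. SimpleJobRepository and its four-DAO constructor are real Spring Batch; the Mongo*Dao names are illustrative stand-ins for your own implementations:

    import com.mongodb.DB;
    import org.springframework.batch.core.repository.JobRepository;
    import org.springframework.batch.core.repository.support.SimpleJobRepository;

    public class MongoJobRepositoryFactory {
        public static JobRepository create(DB db) {
            // inject the four Mongo-backed DAOs instead of the JDBC ones
            return new SimpleJobRepository(
                    new MongoJobInstanceDao(db),
                    new MongoJobExecutionDao(db),
                    new MongoStepExecutionDao(db),
                    new MongoExecutionContextDao(db));
        }
    }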

That is exactly what I did for our customer, implementing the DAOs using MongoDB. Guess what – you must go and take a look at MongoDB. It’s a lightning-fast, schema-less, document-oriented database that kicks ass. When you suddenly have a strange feeling that an RDBMS might not be the best solution for whatever you do, chances are you’d love MongoDB, as I do now. There are use-cases in which you just can’t implement whatever you need with relational storage. Well, I lied. You can. It will take a year, it will look ugly and it will perform even worse. That’s my case, and I am just happy the year is 2010 and we know by now that one size doesn’t fit all.

I have to admit – implementing the Spring Batch DAOs with MongoDB was fun. Even the Spring Batch meta-data model, which was designed with relational storage in mind, persists nicely in MongoDB. Should I even mention that the code is cleaner compared to JDBC? Even on top of JdbcTemplate?

Now go and grab the Spring Batch over MongoDB implementation and the reference configuration: http://github.com/jbaruch/springbatch-over-mongodb. I have used the samples and the tests from the original Spring Batch distribution, trying to make as few changes as necessary. You’ll need a MongoDB build for your platform and Gradle 0.9p1 to build and run. (Why Gradle? Because it is truly a better way to build.)

If you use MongoDB – enjoy the implementation as is. If you use some other document-oriented DB, the conversion should be straightforward. In any case, I’ll be glad to hear your feedback.

Written by JBaruch

27/04/2010 at 15:10
