Sunday
Jan 27 2013

Code Impressions Episode 1 - Is Spring coming yet?

In civil architecture practitioners study the work of the masters: cathedrals, bridges, even pyramids. In software engineering we rarely have things that last so long, as most software is built in a rush. Still, I enjoy studying sources. You would expect well-established frameworks to be of top quality, and often this is the case. But sometimes even the best pieces have their smells.

I recently came across such a smell where I would least expect it. Smells can be found everywhere and happen to the best of us, but in this case it is the kind of smell that made me blog about it. Just to make things clear: this is not finger pointing in any way, and I am sure that the folks behind the piece of code I am about to comment on are otherwise good engineers.

NOTE: Code under investigation taken from git version 8472a2b.

Let's have a look at spring-context:org.springframework.cache.interceptor.CacheAspectSupport lines 184-187:

Class<?> targetClass = AopProxyUtils.ultimateTargetClass(target);
if (targetClass == null && target != null) {
    targetClass = target.getClass();
}

This sequence of statements is semantically terrible: if AopProxyUtils.ultimateTargetClass(target) returns an ULTIMATE target class (whatever that might be), what is the "if" block doing there? My guess: somebody simply got lazy. The only situation in which this call can return null is when the target object is a CGLib proxy created from an interface, as then candidate.getClass().getSuperclass() returns null (spring-aop:org.springframework.aop.framework.AopProxyUtils:67):

result = (AopUtils.isCglibProxy(candidate) ? candidate.getClass().getSuperclass() : candidate.getClass());

Should the authors of CacheAspectSupport consider modifying AopProxyUtils.ultimateTargetClass instead? Possibly. We cannot, however, forget the regression cost associated with such a change, and that other frameworks which rely on spring-aop might depend on this utility method. Yet as of the considered version, the method is used only once in production code, in the discussed CacheAspectSupport.execute; the 5 other usages in spring-context and spring-aop are in tests. That does qualify as a smell, doesn't it?
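For illustration, here is a minimal sketch of what folding that fallback into the utility itself could look like, so that the method's contract finally matches its name. This is a hypothetical refactoring built around the single line quoted above, not the actual Spring implementation:

// Hypothetical refactoring sketch - not the actual Spring code.
// Folds the null fallback into the utility so callers never see null.
public static Class<?> ultimateTargetClass(Object candidate) {
    Class<?> result = (AopUtils.isCglibProxy(candidate) ?
            candidate.getClass().getSuperclass() : candidate.getClass());
    // A CGLib proxy created from an interface has no meaningful superclass,
    // so fall back to the proxy class itself rather than returning null.
    return (result != null ? result : candidate.getClass());
}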

Now the truly shocking bit: AopProxyUtils DOES NOT HAVE a single line of test code! Mr. Johnson R. and Mr. Hoeller J., shame on you for that one :)
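To put some weight behind that, here is a minimal sketch of the kind of test that is missing (JUnit 4; the test class name and the plain-object expectation are my assumptions, derived from the utility line quoted earlier):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AopProxyUtilsTests {

    @Test
    public void ultimateTargetClassOfPlainObjectIsItsClass() {
        // A plain, non-proxied object falls through to candidate.getClass().
        String target = "not a proxy";
        assertEquals(String.class, AopProxyUtils.ultimateTargetClass(target));
    }
}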

Thursday
Jan 10 2013

Storing object graphs - native vs. relational storage

On my last two projects I have been dealing with highly hierarchical data that had to be stored into and retrieved from a persistent store at a later stage of the process.

In both cases, because of organisational and operational contexts, there was a strong preference - or should I rather say, a constraint - for the Oracle DBMS. As a result we were left with little choice beyond mapping our hierarchical data onto the relational model.

It would probably not have been much of an issue had we not had to deal with tree-like structures whose dimensions were not known at design time.

Because other kinds of storage were out of the question, we never really had a chance to compare how much more performant, if at all, an alternative - say, a native object/graph store - would prove to be.

My role as an architect is to make technology choices that maximise quality. Sometimes you have no choice but "to do the right thing" and operate within the strategically chosen technology stack, but sometimes you get this fantastic opportunity "to do it right".

I do not feel it would be enough to follow Gartner reports in order to make an informed decision. I need to see with my own eyes in order to believe. Hence I always prove any concept for which I do not have existing proof.

This time around I'd like to prove that using a native graph backend for tree-like structures results in significant performance improvements over relational storage, even with relatively simple structures.

I initially chose OrientDB (NuvolaBase) and MySQL (CloudBees) for my test, primarily because both are available in the cloud under free plans, which makes it possible for everybody to take my test code and run it themselves. After the initial test, whose results were quite shocking, I also added Amazon RDS MySQL to verify that the results against CloudBees MySQL made any sense.

I'd like to avoid starting a war over which database system is better. However, if one has certain advantages over another and you know how to prove it in the context of my test - using publicly and freely available infrastructure, or software that can easily be installed locally - please feel free to join the conversation.

My intent is to start simple and possibly evolve and improve the test over time, as long as there is enough interest in the results.

We will start with a basic question: given a relatively simple tree, how long does it take to persist and load the graph? Later, the series will focus on usability aspects of the competing technologies, such as traversal, searching and general engineer friendliness - although the last one might be a bit subjective, I guess.

For now let's focus on save and load performance.

Let's state some initial assumptions, as they will allow us to put the results into perspective later on.

  • For now we will ignore differences in network latency between the test machine and the selected SQL and NoSQL stores. We will eventually have to incorporate this factor into the overall picture, but first we need to understand the chattiness of both technologies to even consider whether network latency differences have any significance.
  • No backend-specific optimisations will be performed at the domain model level - both solutions will be tested against structurally indistinguishable sets of classes.
  • Object identifiers will be generated by the respective databases using their default id generation strategies.
  • No client-side cache is enabled for either reads or writes.

The source code can be downloaded from GitHub.

The test tree is modelled in the following way:

DOCUMENT <-[0.*]-> CHAPTER <-[0.*]-> PARAGRAPH <-[0.*]-> LINE <-[0.*]-> WORD <-[0.*]-> LETTER

Every node is an instance of class Node, where DOCUMENT, CHAPTER, PARAGRAPH, LINE, WORD and LETTER represent the NodeType property of the node. A tree is fully grown if it contains nodes of all NodeTypes, i.e. each subtree extends from the DOCUMENT down to the LETTER. LETTER is a terminating node, i.e. it does not have any children.
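For reference, here is a minimal sketch of the shared domain model as described above (the id field and accessors are my assumptions; on the relational side the id would additionally carry the usual @Id/@GeneratedValue mapping to honour the default-generation assumption stated earlier):

import java.util.ArrayList;
import java.util.List;

class Node {

    private Long id;               // generated by the backing store
    private NodeType type;         // DOCUMENT, CHAPTER, ..., LETTER
    private List<Node> children = new ArrayList<Node>();

    public NodeType getType() { return type; }
    public void setType(NodeType type) { this.type = type; }
    public List<Node> getChildren() { return children; }
}

enum NodeType { DOCUMENT, CHAPTER, PARAGRAPH, LINE, WORD, LETTER }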

Each database has been subjected to saving and loading 3 gradually more complex fully-grown tree configurations, differing in width from 1 up to 3. This test defines width as the number of children at each non-terminating node of a tree.
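To make the setup concrete, a fully-grown tree of a given width could be built along these lines (a sketch reusing the Node and NodeType classes above; the class and method names are my invention):

class TreeBuilder {

    // Recursively grows a fully-grown tree: every non-terminating node
    // gets `width` children of the next NodeType down the hierarchy.
    static Node buildTree(NodeType type, int width) {
        Node node = new Node();
        node.setType(type);
        if (type != NodeType.LETTER) {
            NodeType childType = NodeType.values()[type.ordinal() + 1];
            for (int i = 0; i < width; i++) {
                node.getChildren().add(buildTree(childType, width));
            }
        }
        return node;
    }
}

Calling buildTree(NodeType.DOCUMENT, 3) then produces the widest of the three configurations: 1 + 3 + 9 + 27 + 81 + 243 = 364 nodes.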

The following table presents metrics gathered from an example test run.

I will need to investigate a bit further, but my guess is that this significant difference can be, at least partially, attributed to the massive chattiness of the ORM technology (Hibernate) when compared to the slick interface of OrientDB's native object API, which, rather than inserting each object individually, serialises and transfers the entire graph in one go. I am curious whether generating identifiers manually and pushing Hibernate to perform batch inserts would make a difference. I also intend to test relational graph persistence against Oracle, to see whether that makes any difference.
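If I do revisit this, switching Hibernate to batched inserts would look roughly as follows (these are standard Hibernate property keys and the batch size is an arbitrary example; note that database-generated IDENTITY identifiers disable JDBC batching, which is exactly why manual identifier generation comes into the picture):

import java.util.Properties;

// Sketch: nudging Hibernate towards batched JDBC inserts.
Properties props = new Properties();
props.setProperty("hibernate.jdbc.batch_size", "50");  // batch up to 50 statements
props.setProperty("hibernate.order_inserts", "true");  // group inserts by entity type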

I would be very keen to hear your feedback.

Tuesday
Dec 18 2012

Do we need architecture?

We don't need no architecture! We don't need no code control! We are agile, we know better! Architects, leave us Devs alone!

"Wait a second. Aren't you an architect?" - you ask. Hell, yeah! And I'm quite fed up with what is happening in the agile space. It seems like the human nature inherently pushes - if not all, at least some of - us to screw up all sort of good ideas, be it because of laziness, stupidity or ego. 

Would you ever want bridges or skyscrapers to be built with only a rough idea of what is being built? Why then do we allow software to be built like this?

By all means, I am not against agile; on the contrary, agile done right is invaluable. The problem is that more and more delivery organisations assume an immature model where agile becomes an excuse for little or no architecture work.

Architecture is there to set the structure, to guarantee that the end quality will satisfy the required levels. It is like rails for a train.

I recently heard from a developer on a team: "We do not need architects." Well, sure, you do not need architects - or any other label for that matter - as long as the team can self-regulate and impose a structure on the product. But wait, isn't that architecture?

There is a fundamental problem with the responsibility for defining architecture being spread thin across the whole team: no one is ultimately responsible for it, and quite often egos take over - please do not be surprised, we software engineers are well known for our big egos - the result being an accidental architecture. Of course, if the agile lead has a decent background and experience he/she will not let it happen, but then they become an acting architect.

Finally, architecture brings not just structure, but also reusable approaches to solving common problems. A system built this way is cheaper to maintain. We must not forget that software is a business, and while having fun doing it is great, fun alone is not enough.

Wednesday
Feb 16 2011

Intalio|BPMS (Apache ODE), Process Management API (PMAPI), unfortunate message queuing behavior and again Groovy to the rescue

On my project I get to play a lot with Intalio|BPMS, which is essentially a "supported" version of Apache ODE bundled with a couple of extra components. Most of the processes I have to deal with typically require a lot of interaction with the environment. For those of you familiar with BPMN notation, this translates to Intermediate Message Events somewhere within the process, an equivalent of receive in BPEL. An overview of the architecture of our solution is outlined in the diagram below.

Without getting into too much detail: we have a central routing component (implemented on Mule ESB) which intelligently resolves the target endpoint (process endpoint + operation to invoke) and the target instance of the respective process to which the event should be delivered. Simplifying (but not much), all our processes have an "update" interface which allows us to communicate with a running instance in a standard way. The trouble is that when an update event is received, the process might still be busy processing a previous update (we have instances where different updates trigger different branches in the process, as shown in the BPMN diagram below, and one process can also receive multiple updates of the same type when it orchestrates a number of activities of the same kind). While the process is busy doing its job, the respective update interface is not "open". And it might actually take a while before it becomes available again.

Apache ODE (on which Intalio|BPMS is based) has an annoying feature (at least one not desirable for us): if the interface is not "available", the message will be queued internally, while at the same time you will actually get a read timeout somewhere in your web service stack. This puts us in a very unfortunate position - even though an exception is raised while the message is being delivered, the message is in fact accepted and queued in the process engine itself. This is naughty. We don't like it. Or, let's be honest, it sucks. Big time.

One way to solve it would be to expose the respective SOAP endpoints over JMS (instead of the default HTTP/SOAP), but quite often our clients require events to be delivered synchronously - not to mention all the trouble that comes with SOAP over JMS. We could of course play with the timeout settings on Axis (which ODE utilises to expose process endpoints), but that would not save us from timeouts in the client's WS stack.

Therefore we decided to enhance our central router with logic checking whether the target process instance is "ready" to accept the message before dispatching it (see the architecture overview diagram above). If the process is not currently "listening" on the update interface, we return a meaningful exception back to the caller.

Given that ODE provides a process management API, which among other things allows you to check the status of an instance (is it still active?) and list all events generated during its execution - which in turn would allow us to figure out whether the last activity in the process is indeed our incoming interface - we expected this to be a simple exercise. Well, it is not as simple as you would expect it to be.

OK, so let's explain why it is not...

First, the pmapi.wsdl is not quite representative of the actual messages. So we first attempted to make it right: the listEventsOutput message declares a bpel-event-list part, while the service actually returns event-info-list (of EventInfoList type, as defined in pmapi.xsd). OK, we got that fixed.

Then another surprise. The generated client (Axis 1.4) still had problems deserialising the response and was throwing NullPointerExceptions. Bad service! Fortunately, SoapUI allows you to validate responses against the schema - and you discover yet another set of discrepancies, such as elements not being declared nillable while the service responds with nils.

OK, we got fed up and decided that if we have to keep fixing the broken WSDL/XSD of this RPC/encoded service, we might as well generate our requests by hand and post them to the respective PMAPI endpoint using HttpClient. We actually reused the autogenerated client where it worked fine, and only handled calls by hand where needed. Since we work with Groovy, processing the response is nice and easy thanks to XmlSlurper.

The snippet below illustrates how this was achieved:

// Commons HttpClient 3.x imports needed by the snippet:
import org.apache.commons.httpclient.HttpClient
import org.apache.commons.httpclient.methods.PostMethod
import org.apache.commons.httpclient.methods.StringRequestEntity

public InstanceInfo getInstanceInfo(long instanceId, transactionId) throws ServiceFault {

   def getInstanceInfoRequestStr = ""
   getInstanceInfoRequestStr += '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" '
   getInstanceInfoRequestStr += 'xmlns:pmap="http://www.apache.org/ode/pmapi">'
   getInstanceInfoRequestStr += '<soapenv:Header/><soapenv:Body><pmap:getInstanceInfo>'
   getInstanceInfoRequestStr += "<iid>${instanceId}</iid>"
   getInstanceInfoRequestStr += '</pmap:getInstanceInfo></soapenv:Body></soapenv:Envelope>'
   
   def url = "http://${Configuration.getConfigProperty('endpoint.bpms')}/ode/processes/InstanceManagement"
   def soapAction = ""

   def getInstanceInfoRequestEntity = new StringRequestEntity(getInstanceInfoRequestStr);
   def httpMethod = new PostMethod(url);
   httpMethod.addRequestHeader("Content-Length", Long.toString(getInstanceInfoRequestEntity.getContentLength()));
   httpMethod.setRequestEntity(getInstanceInfoRequestEntity);
   httpMethod.addRequestHeader("Content-Type", "text/xml;charset=UTF-8");
   httpMethod.addRequestHeader("User-Agent", "Jakarta Commons-HttpClient/3.1");
   httpMethod.addRequestHeader("Accept-Encoding", "gzip,deflate");
   httpMethod.addRequestHeader("SOAPAction", soapAction);
   httpMethod.addRequestHeader("Host", "Configuration.getConfigProperty('endpoint.bpms'));
   def client = new HttpClient()
   int responseCode = client.executeMethod(httpMethod)

   def slurper = new XmlSlurper().parseText(httpMethod.getResponseBodyAsString()).declareNamespace(
                      soapenv: 'http://schemas.xmlsoap.org/soap/envelope/',
                      pmapi: 'http://www.apache.org/ode/pmapi',
                      'pmapi-types': 'http://www.apache.org/ode/pmapi/types/2006/08/02')
   
   if (responseCode != 200) {
      def faultString = slurper.'**'.findAll { it.name() == 'faultstring'}
      if (faultString.size()) {
         throw FaultsHandler.createFault(transactionId, "getInstanceInfo", faultString.get(0).toString(), LOG)
      } else {
         throw FaultsHandler.createFault(transactionId, "getInstanceInfo", "HTTP Responce Code ${responseCode} while connecting to BPMS", LOG)
      }
   }

   def instanceInfo = new InstanceInfo()
   def getInstanceInfoResponseNode = slurper.'soapenv:Body'.'pmapi:getInstanceInfoResponse'   

   if (getInstanceInfoResponseNode.size()) {
      instanceInfo.with {
         iid = getInstanceInfoResponseNode.'**'.findAll { it.name() == 'iid'}?.get(0)
         pid = getInstanceInfoResponseNode.'**'.findAll { it.name() == 'pid'}?.get(0)
         status = getInstanceInfoResponseNode.'**'.findAll { it.name() == 'status'}?.get(0)
         firstEventOn = getInstanceInfoResponseNode.'**'.findAll { it.name() == 'event-info'}.'first-dtime'.get(0)
         lastEventOn = getInstanceInfoResponseNode.'**'.findAll { it.name() == 'event-info'}.'last-dtime'.get(0)
      }

      def instanceManagementClient = new InstanceManagementPortTypeProxy()
      int maxCount = 0
      TEventInfo[] tEventInfos = instanceManagementClient.listEvents("iid=${instanceId}", "", maxCount)
      if (tEventInfos.size()) {
         TEventInfo lastEvent = (TEventInfo)tEventInfos.toList().last()
         instanceInfo.callbackOperation = lastEvent.activityType == 'OPickReceive' ? lastEvent.activityName : ''
      }
   }

   return instanceInfo
}

Now, if instanceInfo.status == "ACTIVE" && instanceInfo.callbackOperation == "tUpdate", where tUpdate is the technical name we have given to our Intermediate Message Event activity, we know that the process is ready to accept messages.
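Inside the router this boils down to a guard of roughly the following shape (a Java-flavoured sketch of our Groovy logic; the dispatch call and the fault wording are illustrative assumptions):

// Dispatch only when the instance is active AND currently blocked on the
// tUpdate Intermediate Message Event; otherwise fail fast towards the caller.
InstanceInfo info = getInstanceInfo(instanceId, transactionId);
if ("ACTIVE".equals(info.getStatus()) && "tUpdate".equals(info.getCallbackOperation())) {
    dispatch(event);  // hypothetical: forward the update to the process endpoint
} else {
    throw FaultsHandler.createFault(transactionId, "dispatch",
            "Target process instance is not ready to accept the update", LOG);
}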

As always, things are not so simple. In fact, we also had to manage concurrency, to avoid races between events targeted at the same instance. But that is a whole different story.

Now, the real question is: why bother providing WS APIs (or any APIs for that matter) if out of the box they are completely useless and you have to resort to manually constructed HTTP POSTs? Fortunately we use Groovy. I do not even want to think about the XML madness we would have to deal with if we were to use Java.


Sunday
Jan 30 2011

Building Hadoop Cluster, Part 1: The Beginning 

A piece of software that I have been writing for quite a while now requires a decent framework for making sense out of large sets of data. I have decided to go with Apache Hadoop. I have always liked pork dumplings, and Pig has proven to be a tasty addition to Hadoop.

Anyway, after playing for a while in a single-node sandbox, I've decided that the time has come to build my own cluster. Given a limited budget and limited space on my desk, I was looking for a solution that would satisfy these two requirements.

My initial configuration should allow me to run one NameNode/JobTracker node and three DataNode/TaskTracker nodes. Should be enough for a start.

Initially I was considering a single, rather juicy (i7, 16GB RAM) box running ESXi, equipped with 1 HDD for a general-purpose management node and 4 more for Linux VMs running Hadoop. The problem with that setup is that such enclosures are rather bulky, noisy (fans) and ugly (gamers have a very specific taste, which I do not share). Also, no matter whether I am coding (for which I need only one node) or running my map/reduce jobs, all devices are active, consume power and generate noise.

Therefore I decided to build a "network" of mini-PCs. Long story short (there are a number of options available, and if you are interested in my findings I will be happy to share), I went for the following setup:

  • IN-WIN BQ Series BQ656 slim chassis (£50)
  • Asus AT5NM10-I Integrated dual-core, four-thread Intel® Atom™ processor D525 (£65)
  • Corsair 2GB DDR2-800MHz (£28)
  • Seagate Momentus 7200rpm 250GB (£34)

Total price per unit: £177. Not bad. The size and cost criteria have been met. So I've got 2 of these to start with, as I also have a Mac Mini and a decent unit from Fujitsu-Siemens's business laptop line which I can initially use in my cluster.

Assembly was quite straightforward, no major surprises, although I had to refer to Google a couple of times - but I do not assemble computers for a living, so it might just be my lack of experience. After approximately an hour my units were assembled, connected to the GB switch and ready for OS install.

Another hour later and I am writing this post on one of these boxes.

I will keep you posted with the progress.