small-time SOA: lessons learnt so far…

Some time ago, in the midst of migrating at least parts of our application from an all-integrated proprietary monolith to a more “open” approach, we quickly came to embrace the SOA architectural model, at least in parts, as it simply seemed to fit. Now, a few years later, it seems we have learnt a few things from that, which I want to share here – maybe they will provide a good subject for discussion, or perhaps help others gain insight into the matter…

Lesson #1: Reconsider why you actually wanted to go for services…

Well… it might sound obvious, but personally I learnt that it actually isn’t. We started out with two different applications, both based on a Java EE web tier and Spring, implementing the notion of a “backend” providing data access and a “frontend” providing user interaction with these data, with just a small layer of business logic mixed into both of them (the complexity of the application was fairly limited initially). Back then, this “frontend/backend” separation arose largely out of the requirement to keep data access “hidden” in a well-protected, restricted environment, whereas the environment available to end users needed to be publicly accessible. The backend system offering services to the frontend system (actually stateless beans exposed via Spring remoting) was our first take on SOA – born, in other words, out of the demand for a distributed application (distributed across several machines, mainly for administrative reasons).
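
Just to illustrate what this looked like in practice, here is a minimal sketch of such a Spring remoting client on the frontend side, using the Hessian flavour mentioned below – the CustomerService contract and the URL are invented for the example, but the Spring plumbing is the real one:

    import org.springframework.remoting.caucho.HessianProxyFactoryBean;

    public class BackendClient {

        // Hypothetical service contract shared between frontend and backend,
        // typically packaged in a small API jar both sides depend upon.
        public interface CustomerService {
            String findCustomerName(long id);
        }

        public static CustomerService connect(String serviceUrl) throws Exception {
            // Spring's Hessian support turns the remote endpoint into a
            // plain Java proxy implementing the shared interface.
            HessianProxyFactoryBean factory = new HessianProxyFactoryBean();
            factory.setServiceUrl(serviceUrl);
            factory.setServiceInterface(CustomerService.class);
            factory.afterPropertiesSet();
            return (CustomerService) factory.getObject();
        }

        public static void main(String[] args) throws Exception {
            CustomerService customers =
                    connect("http://localhost:8080/backend/remoting/customerService");
            System.out.println(customers.findCustomerName(42L));
        }
    }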

This changed. While our initial environment ran on several Apache Tomcat servers, we moved away from that pretty soon, as it turned out that the administrative effort of maintaining a couple of servlet containers, partly in cluster configuration, for front- and backend communicating with each other by far exceeded the benefit gained from this structural approach. So the next logical step was to “consolidate” this environment, which we did simply by moving everything to one big Java EE application server (GlassFish v2) in clustered operation, fronted by a fast web proxy restricting external user access. Our application, by then, still had this “frontend/backend” structure and still used the same kind of remoting (by this time mainly based upon Hessian web services) – no longer distributed, though, but running on the same machine and in the same application server. So the original reason for the system structure we had chosen had suddenly disappeared… and, adding to that, we realized that by now our requirements were somewhat different, even though we sort of “expected” our application structure to be up to them: We wanted parts of our application to be maintainable and redeployable independently of each other as far as possible – ideally, being capable of deploying a new version of one part of the system without taking the whole application offline or affecting it.

It turned out that, overall, our architecture was not up to that requirement: Though we were capable of deploying “frontend” and “backend” separately, deploying “backend” (and, subsequently, having it offline for a few moments) would inevitably cause “frontend” to fail, simply because most of the functionality it required wasn’t around. Worse than that, we figured out that “backend” had actually grown to provide a lot of different service components that generally would do rather well on their own but, right now, could only be separated with quite some effort because they shared a lot of common code. The bottom line: resolving this issue meant completely restructuring the whole “backend” module – extracting several services into modules of their own, extracting common code into shared libraries or other services, all without breaking the overall structure, some glue code included.
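
To give a rough idea of where this was heading (module names invented for illustration), the target structure looked something like this:

    backend-commons/        shared .jar: database helpers, common utility code
    customer-service/       deployable service module
    customer-service-api/   .jar: service interface + data transfer classes
    billing-service/        deployable service module
    billing-service-api/    .jar: service interface + data transfer classes
    frontend/               web application, depending only on the *-api jars

Which immediately gets me to…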

Lesson #2: Careful with hierarchies of interdependent services!

I think this is a point where your mileage may vary, depending upon why you want to go for a service-oriented approach. In an environment like ours, however, where the whole point is being able to maintain services separately without affecting the system as a whole too much, services depending too heavily upon each other tend, in the worst case, to break things quite successfully. In our scenario, for example, we were initially tempted to restructure “backend” by ripping out most of the common code and putting it into a “generic” database access service pulling data off tables (we can’t make meaningful use of an O/R mapper in our structure, so this is the most comfortable way to do database access), and making all other services providing access to “special” kinds of data use this “databaseAccessService”.

Well… it didn’t take a prototype, just a second look, to see that this was not a good idea: This kind of structure would immediately make the “databaseAccessService” module the one that could not be maintained or redeployed easily without shutting down most of the rest of the system (most of what the system does is, after all, database access work). It wouldn’t have locked up the whole system, so it would still have been an improvement over the monolithic “backend” – but it was bad enough not to use it. So, shared code again: Instead of building a custom “databaseAccessService”, most of the shared code dealing with the database backend was extracted into a .jar project and used to create a bunch of specialized data access services, each providing access to a specific kind of data.
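
As a rough sketch of that outcome (all names invented for the example): the shared .jar contributes plain helper code, and each specialized service embeds it instead of calling into a central database service:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Lives in the shared .jar – a plain helper, not a deployable service.
    class TableReader {

        private final DataSource dataSource;

        TableReader(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Reads a single column value by primary key; simplified for the sketch
        // (table and column names come from trusted code, not user input).
        String readValue(String table, String column, long id) throws SQLException {
            String sql = "SELECT " + column + " FROM " + table + " WHERE id = ?";
            try (Connection con = dataSource.getConnection();
                    PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }

    // One of several specialized, independently deployable data access
    // services, each embedding the shared helper code directly.
    public class CustomerDataService {

        private final TableReader reader;

        public CustomerDataService(DataSource dataSource) {
            this.reader = new TableReader(dataSource);
        }

        public String findCustomerName(long customerId) throws SQLException {
            return reader.readValue("customers", "name", customerId);
        }
    }

Yet this raised another caveat which I find interesting: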

Lesson #3: Actually keep thinking on a service level throughout the project.

So, we have these shiny new services, all featuring well-defined, (hopefully) well-documented interfaces, along with the client stub libraries required to interact with them (where needed – EJB or Spring remoting). And, along with this, we have the service implementation code available in the same workspace / project group (as we’re using NetBeans for this kind of work)… so, while creating a new service module from scratch, one is always tempted to “directly” integrate, configure and use functionality and classes from another service rather than developing against that service’s external interface. Aside from contracts and conventions, I haven’t yet found a better way of working against this than creating a project structure which tries to make this approach difficult – but even that is not all that convenient yet.
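
As a contrived example of what this temptation looks like (all names invented):

    public class FrontendComponent {

        // The published contract of another service, as shipped in its
        // client stub / API jar.
        public interface CustomerService {
            String findCustomerName(long id);
        }

        // Hypothetical locator handing out client proxies (EJB lookup,
        // Spring remoting proxy, ...) for published service contracts.
        public interface ServiceLocator {
            <T> T lookup(Class<T> contract);
        }

        private final ServiceLocator serviceLocator;

        public FrontendComponent(ServiceLocator serviceLocator) {
            this.serviceLocator = serviceLocator;
        }

        public String customerLabel(long id) {
            // Tempting but wrong: instantiating, say, the other service's
            // CustomerDaoImpl directly, just because its sources happen to
            // sit in the same IDE workspace. Intended instead: go through
            // the published contract only.
            CustomerService customers = serviceLocator.lookup(CustomerService.class);
            return "Customer: " + customers.findCustomerName(id);
        }
    }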

Lesson #4: Reduce the amount of shared code.

In some ways, this is a different facet of #3, looking at it from a different point of view and outlining other problems. Sure, library code shared between services can be a good thing where they solve the same issues (generic frameworks come to mind), but aside from the danger of no longer thinking on the service level, in my opinion there is another pain arising from this: As soon as library or framework code written for one service is re-used to implement another, chances are good that, while tweaking the framework to better suit the needs of one service, you make another service depending upon the same code cease working in strange ways when it is redeployed – simply because, even if your API didn’t change, some small yet important aspect of the shared code’s behaviour changed without all dependent services being modified to know about (and honour) that change. Simply put: Loads of shared code are likely to make your everyday development life considerably more difficult, because once you change shared code, sooner or later you’ll have to ensure that all services still work cleanly with it (or you use different SVN branches or different versions of maven2 artifacts, which, however, doesn’t eliminate the problem but merely postpones it).
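
A trivial illustration of the kind of breakage meant here (names and behaviour invented): the signature stays the same, the behaviour doesn’t.

    import java.util.Map;

    public class SharedHelperDrift {

        // Version 1 of a shared helper: a missing row yields null.
        // Service A was written against exactly this behaviour.
        static String lookupNameV1(Map<Long, String> table, long id) {
            return table.get(id);
        }

        // Version 2, tweaked to suit service B: a missing row now throws.
        // The signature is unchanged, so service A still compiles fine...
        static String lookupNameV2(Map<Long, String> table, long id) {
            String name = table.get(id);
            if (name == null) {
                throw new IllegalStateException("no such row: " + id);
            }
            return name;
        }

        public static void main(String[] args) {
            Map<Long, String> table = Map.of(1L, "Some Customer");
            // Service A's code path, written against version 1:
            System.out.println(lookupNameV1(table, 2L) == null ? "no customer" : "found");
            // ...but once service A is redeployed against the new jar,
            // the very same call blows up at runtime:
            lookupNameV2(table, 2L); // throws IllegalStateException
        }
    }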

Lesson #5: Try keeping extensibility at a sane level.

While working in a semi-agile process – starting with a small set of specified requirements and refining them to a more fine-grained state over several iterations and prototypes – I have seen (or, better: followed) two different approaches:

The first, “dirty” one is a quick “front-to-back hack” of what is intended to happen: Start with the user interface (after all, that’s the only part of the application the customer is likely to see…), introduce the functionality demanded, and then hack up whatever services are needed to provide the data that makes this functionality work (which includes the external service interface, the service implementation and anything “beyond” it). Do so carefully, try not to break the service interface defined so far, and try to keep a consistent semantic view of the interface throughout all the functionality added. The other one, seemingly more “clean” at first sight, is to keep things as “generic” and extensible as possible, trying to predict in advance any modification or requirement likely to appear. This includes building “generic” service interfaces providing “generic” data, possibly along with vast loads of (redundant?) transformation logic needed to get actual “non-generic” data out of the service and ready to work with.
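
To make the contrast concrete (interfaces invented for the example):

    import java.util.List;
    import java.util.Map;

    public class ServiceStyles {

        // The "generic" extreme: the service merely shovels untyped rows
        // around, and every caller needs its own transformation logic to
        // turn them into something meaningful.
        public interface GenericDataService {
            List<Map<String, Object>> fetch(String entity, Map<String, Object> criteria);
        }

        // The specific alternative: a narrow contract stating, in domain
        // terms, what the service actually does for its callers.
        public interface CustomerService {
            String findCustomerName(long customerId);
            List<Long> findCustomerIdsByCity(String city);
        }
    }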

To me, the two approaches seem diametrically opposed. The first has proven to end up in, indeed, “hacked-up” code which is hard to refactor and hard to get straight, simply because, in the worst case, we ended up with a whole bunch of different views on the same service, with completely different understandings of what the service should be doing. The second, however, ended up with the service doing nothing except passing data around, which rendered the service itself useless and again moved most of the “essential” code into some processing layer inside, say, the user interface or frontend. In the end, it’s probably a matter of finding a balance between the two…

Conclusions

To come to an end on this: We learnt that a service-oriented approach offers good chances of cleaning up a “grown”, heterogeneous software infrastructure in a way that is pretty good in terms of modularization, independence of components, reuse of shared functionality and so on. We also learnt, however, that there are indeed some more or less obvious pitfalls to take care of, and that some of the benefit provided by this kind of approach might, in the long run, be offset by additional complexity arising from other aspects unique to the service-oriented way of doing things (like administering a bunch of loosely coupled services so that, in the end, they actually make up the system they are intended to make up). Was it worth it? From a short-term point of view – definitely yes, if only for the sake of gaining clarity on some of the points listed here. From a long-term point of view… we will still have to learn, I guess. 🙂 So, other experiences, thoughts and ideas on this are welcome…

11 March 2009

Filed under: english, tech, thoughts, tools