Category Archives: Patterns

Forrester on Dynamic Case Management Q1 2011

What Oracle and some other BPM and ECM vendors call Adaptive Case Management (ACM) is called Dynamic Case Management by Forrester and others. The notion of a case and the need for these systems emerged from requirements elicited by existing Business Process Management (BPM) and Enterprise Content Management (ECM) implementations. Forrester states:

We found a clear recognition that older process automation approaches based on traditional mass production concepts no longer fit an era of people-driven processes.

Types of Dynamic Case Management

Forrester divides Case Management into three categories:

  • Investigative – Examples are audit requests, fraud detection and regulatory queries. All of these aim at risk mitigation and cost control.
  • Service request – Think claims, customer service, underwriting and customer onboarding. Processes like these are aimed at customer experience and risk mitigation.
  • Incident management – Think managing complaints, order exceptions and acute health care. This category is aimed at customer experience and cost control.

Dynamic Case Management extends BPM

In contrast to traditional BPM products, DCM software supports:

  • The ability to run multiple procedures against a given case of work – An individual case instance can be influenced by multiple processes.
  • The ability to associate different types of objects with a case – A set of data (structured, unstructured, assets, customer calls, etc.) provides the context for an individual case.
  • Mechanisms that allow end users to handle variation – Humans working on the case use their skills and expertise to interpret what is needed to handle the case, and see the results of this reflected in the supporting system.
  • Mechanisms to selectively restrict change on a process – Locking down changes on certain assets is required for compliance on the one hand, while facilitating goal-centric behavior on the other.
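As a rough illustration of the capabilities above, here is a minimal sketch in plain Python (with hypothetical names, not tied to any DCM product) of a single case instance that multiple processes act on, with heterogeneous objects attached as context and selective lock-down of assets:

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical sketch of the DCM capabilities listed above: one case,
# multiple processes influencing it, arbitrary context objects, and a
# lock set that selectively restricts change for compliance reasons.
@dataclass
class Case:
    case_id: str
    processes: list[str] = field(default_factory=list)     # multiple procedures per case
    context: dict[str, Any] = field(default_factory=dict)  # structured and unstructured objects
    locked: set[str] = field(default_factory=set)          # assets locked down for compliance

    def attach(self, name: str, obj: Any) -> None:
        if name in self.locked:
            raise PermissionError(f"{name} is locked for compliance")
        self.context[name] = obj

claim = Case("C-42")
claim.processes += ["fraud-check", "payout"]          # two processes on one case
claim.attach("customer_call", "recording-17.wav")     # unstructured context
claim.locked.add("audit_trail")                       # selectively restricted
```

The point of the sketch is only the shape of the data: the case, not a single process definition, is the unit everything else hangs off.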

Beware of the untamed processes

Every organization harbors anywhere from a handful to loads of untamed processes, along with a growing demand to track them, meet compliance regulations and gain insight into their effectiveness (and efficiency). Dynamic Case Management aligns with these untamed processes since it supports:

  • both structured and unstructured content
  • both human and system controlled processes
  • facilitating knowledge and expert guidance

Forrester Wave – Dynamic Case Management Q1 2011

DCM has a very strong point in bringing flexibility and manageability together. It provides visibility and control over the tasks that have to be performed. Key drivers for DCM initiatives are both agility and traceability.

Oracle and ACM

As Forrester states: Many ECM and BPM tools form the basis for Dynamic Case Management solutions. With PS6 and release 12c of the Oracle BPM Suite, Oracle will take a leap into the Adaptive Case Management segment, as they call it. Check out the other vendors in the Forrester Wave for Dynamic Case Management.

Service Bus definition

While preparing guidelines for the usage of the Oracle Service Bus (OSB) I was looking for a definition of a Service Bus. There wasn’t one on my blog yet (more posts on integration), so I decided to use the following definitions and share them with you.

Forrester Service Bus definition

Since 2009 Forrester has used this one:

An intermediary that provides core functions to make a set of reusable services widely available, plus extended functions that simplify the use of the ESB in a real-world IT environment.

Erl Service Bus definition

Thomas Erl offers the following description of a Service Bus:

An enterprise service bus represents an environment designed to foster sophisticated interconnectivity between services. It establishes an intermediate layer of processing that can help overcome common problems associated with reliability, scalability, and communications disparity.

Erl et al. regard an Enterprise Service Bus as a pattern. That makes it even more important to share what that pattern is. Later on I’ll also briefly describe the VETRO pattern, which is also very useful when comparing integration tools or developing guidelines.

Erl Enterprise Service Bus pattern

On the SOA patterns site we learn that an enterprise service bus represents an environment designed to foster sophisticated interconnectivity between services. The Enterprise Service Bus pattern is a composite pattern based on:

  • Asynchronous Queuing – basically an intermediary buffer, allowing services and consumers to process messages independently by remaining temporally decoupled.
  • Intermediate Routing – message paths can be determined dynamically through the use of intermediary routing logic.
  • Service Broker – itself a composite of the following patterns:
      • Data Model Transformation – to convert data between disparate schema structures.
      • Data Format Transformation – to dynamically translate one data format into another.
      • Protocol Bridging – to enable communication between different communication protocols by dynamically converting one protocol into another at runtime.

Optionally the following patterns can be added: Reliable Messaging, Policy Centralization, Rules Centralization, and Event-Driven Messaging. Also have a look at slide 12 and onwards of the SOA Symposium Service Bus presentation.
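To make the composite concrete, here is an illustrative sketch in plain Python (not Oracle Service Bus code; the message fields and destinations are made up) showing how Asynchronous Queuing, Data Model Transformation and Intermediate Routing work together:

```python
import queue

# Asynchronous Queuing: an intermediary buffer that lets producer and
# consumer process messages independently, temporally decoupled.
bus = queue.Queue()

def to_canonical(msg: dict) -> dict:
    # Data Model Transformation: map a source schema onto a canonical one.
    return {"customerId": msg["cust_no"], "amount": msg["amt"]}

def route(msg: dict) -> str:
    # Intermediate Routing: the path is determined at runtime
    # by routing logic inspecting the message content.
    return "fraud-check" if msg["amount"] > 1000 else "auto-approve"

bus.put({"cust_no": "8915", "amt": 2500})  # producer side

raw = bus.get()                            # consumer side, decoupled in time
canonical = to_canonical(raw)
destination = route(canonical)             # → "fraud-check"
```

In a real bus each of these steps would be a configured pipeline stage rather than a function, but the division of responsibilities is the same.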

VETRO pattern for Service Bus

The VETRO pattern was introduced by David Chappell, writer of the 2004 book Enterprise Service Bus.

  • V – Validate: Validation of messages, e.g. based on XSD or Schematron.
  • E – Enrich: Adding data from applications the message doesn’t originate from.
  • T – Transform: Transform the data model, the data format or the protocol used to send the message.
  • R – Route: Determine at runtime where to send the message.
  • O – Operate: You can see this as calling the actual service implementation.

We also used this pattern to compare Oracle integration tools and infrastructure. It can very well be used when choosing the appropriate tools for a job and deciding on guidelines for how to use these tools.
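The five VETRO steps can be sketched as a simple pipeline in plain Python (illustrative names only, not from Chappell’s book or any Oracle product):

```python
# Minimal VETRO pipeline sketch; each function mirrors one letter.
def validate(msg: dict) -> dict:       # V: e.g. XSD/Schematron in a real bus
    assert "order_id" in msg, "invalid message"
    return msg

def enrich(msg: dict) -> dict:         # E: add data from another application
    return {**msg, "customer_tier": "gold"}

def transform(msg: dict) -> dict:      # T: change the data model/format
    return {"orderId": msg["order_id"], "tier": msg["customer_tier"]}

def route(msg: dict) -> str:           # R: decide the destination at runtime
    return "priority-queue" if msg["tier"] == "gold" else "default-queue"

def operate(msg: dict, destination: str) -> str:  # O: invoke the implementation
    return f"sent {msg['orderId']} to {destination}"

msg = validate({"order_id": "42"})
msg = transform(enrich(msg))
result = operate(msg, route(msg))      # → "sent 42 to priority-queue"
```

Walking a concrete message through the steps like this is also a handy way to check, per integration tool, which of the five responsibilities it actually covers.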

Dis-economies of centralization

In a previous post I argued that we should handle industry models with care because of their very inconvenient side effects. This week I’ll blog in a similar way about centralization. Among the effects of centralization are the often overlooked or neglected dis-economies of scale.

Dis-economies of scale

One of the main reasons for centralization is to gain economies of scale. Less known are the dis-economies of scale. I’ll give some examples in the paragraphs below.

First, there is the cost of communication between the central group and the rest of the organization. Although there are lots of tools that make communication easier, distance, whether physical or organizational, can create boundaries. These have to be dealt with, and costs are incurred for that. Besides that, it has to be clear whom to communicate with on which matters. In my experience this is not always the case, and with greater (organizational) distance more effort has to be put into this.

There is a large possibility that top-heavy management in a centralized department becomes isolated from the effects of its decisions. In other words, the feedback loop is broken. Because the feedback loop is broken, decisions become more and more dysfunctional, due to the lack of real-world knowledge that should be incorporated into them.

Centralization can lead to reduced agility. On the one hand standardization is a great asset. The larger part of architecture, whether it is enterprise architecture, process architecture or infrastructure architecture, is about standards and reducing the “solution space”. This has several advantages, among which the reduction of software and systems entropy. The downside of a centralized body that maintains standards is that it will probably lead to inertia and an unwillingness to change.

I’m a big fan of (open) standards. They simplify life! However, we should not neglect that standardization comes at a cost. There are costs for implementing, adapting to and maintaining standards in our organization. Say, for example, that we use a canonical (data) model. There are maintenance costs (at least some effort) when adapting to change outside and within our organization. These costs of standardization tend to be hidden.

What to do?

Bring the effects described above into the business case for centralization. You did make sure there was some sort of trade-off analysis when you decided to centralize a certain part of your organization, didn’t you?

Take measures to prevent these risks. It goes without saying that these measures will take effort, time and possibly money. Now that you know, you are going to take those measures, aren’t you?

Industry Data Models, Processes and Architectures

Recently, while listening to OTN ArchBeat podcasts featuring a panel discussion on Reference Architectures (part 2 and part 3), I was thinking back to some pieces I wrote on industry data models and processes that I hadn’t shared with you yet. There are some similar arguments for using these and reference architectures.

The value of reference models, whether they contain data models, standardized messages or processes, or form a reference architecture, is, or should be, a faster time to market and a better quality of the solution.

Handle with care

What makes it hard to achieve this value is the fact that these models always contain far more than is needed. That can be considered waste. Even the parts that are not used still require attention during implementation and maintenance. This incurs work to understand the complex model, hide the details you don’t need, and customize and extend the parts you do need.

Implementing a reference model requires spending time to determine how, and to what extent, the model meets the needs of your business. That is typically something you have to discover for yourself, and it is where the majority of the time is spent! If you don’t go through the effort of understanding your business requirements, you will lack understanding of how the business can and should use the model. That makes it very hard to determine the value of the end solution to the business.

When using a reference model you should be aware that your business is not average. In some shape or form it delivers value to your customers in a way a reference model doesn’t provide. Reference models should be used with the care your business deserves.

Presenting at Seminar “Lean & Agile IT: beter resultaat, betrokkenheid en IT volwassenheid” (Dutch)

Martin van Borselaer asked me to present at a seminar he is organizing on Lean and Agile IT. I’ll be presenting on Lean Integration and will probably also offer a peek into the Integration Factory.

This seminar will take place on Thursday September the 15th at our Whitehorses head office in Nieuwegein, the Netherlands. It’s in Dutch and aimed at our customers or potential customers. More information on the seminar program.

We’re looking forward to sharing our ideas with you. Hope to see you there!

Kscope 11 FMW Symposium

Sharing some highlights from Symposium Sunday of Kscope 2011. The two most remarkable quotes of the day are:

ADF is the “Paint by the Numbers” for web front end development.

The most common application integration tools/solutions used are Post-it and a paper notepad.

However the real gem I discovered during this first day was:

User Experience Design Patterns

Madhuri Kolhatkar delivered a great presentation on the effort Oracle has put into creating and implementing User Experience Design Patterns. Extended information is available on the Usable Apps pages of the Oracle website. Great insight into how this can help you in developing and delivering your applications can be found, for OBIEE for example, on Design Patterns and Guidelines for Oracle Applications. Take special note of the Pattern Selection Tool.

Kscope 2011 Solid Service Bus implementations

From now on I’m counting down the days to the upcoming ODTUG Kscope 2011. ODTUG is a user group for a wide range of technologists working with Oracle platforms. During this conference I’ll be presenting on solid Service Bus implementations using the Oracle Service Bus, Mediator, or both. The full schedule of Kscope is here.

Program SOA Symposium 2010 available

The agenda for the SOA Symposium 2010 has been posted. Again there are very interesting sessions during this two-day conference, the largest and most comprehensive in the field of SOA and Cloud Computing. The Real World SOA Case Studies track offers a great opportunity to learn from the experience of others. In this track you will find:

Real-life accounts of successful and failed SOA projects discussed first-hand by those that experienced the project lifecycles and have a story to tell. These veteran practitioners will provide advice and insights regarding challenges, pitfalls, proven practices, and general project information that demonstrates the intricacies of implementing and governing service-oriented solutions in the real world.

I will be presenting the first session in this track on Using a Service Bus to Connect the Supply Chain. If you have any topics or questions in advance that you think I should address, please post them in the comments. Hope to meet you in Berlin.

SOA Symposium 2010 Call For Presentations

On October 5 and 6, 2010 the world’s largest SOA and Cloud Computing event will be held in Berlin: the SOA Symposium. The International SOA and Cloud Symposium brings together lessons learned and emerging topics from SOA and Cloud projects, practitioners and experts.

There is a call for presentations:

The SOA and Cloud Symposium 2010 program committees invite submissions on all topics related to SOA and Cloud, including but not limited to those listed in the preceding track descriptions. While contributions from consultants and vendors are appreciated, product demonstrations or vendor showcases will not be accepted.

All submissions must be received no later than June 30, 2010. An overview of the tracks can be found here. Other resources: