Category Archives: Service Orientation


web scale patterns in the bol.com back office – Mixed SQL – NoSQL

In the previous weeks, we started a series of blog posts that shows you how we use “web scale” patterns to achieve scalability and flexibility in our back office software. The previous patterns discussed were Event Sourcing and CQRS. This week we will dive into mixed SQL – NoSQL, showing you how these patterns don’t just solve technical problems; they help us solve our business problems!

Mixed SQL – NoSQL

Where needed in our services, we are moving away from pure SQL and creating a mix with other types of storage. So we are using NoSQL (Not only SQL). One could also call this polyglot persistence: the notion that your application can write to or query multiple databases, or one database with multiple models. It follows the same idea as polyglot programming, which expresses the idea that applications should be written in a mix of languages to take advantage of the fact that different languages are suitable for tackling different problems.

RTN – Billing platform for our retailers

RTN stores all kinds of transactions to charge and pay partners in our LvB (Fulfilment by bol.com) operations. In this part of our operation, we store and fulfil products in our warehouses for retailers that sell their goods on our platform.

The transactions to create invoices for our LvB partners stem from a number of services. There are all kinds of attributes we want to record, both to know why decisions have been made and for auditing purposes. These attributes depend on the transaction type. It was decided that the attributes wouldn’t be part of the transactions table, since they are only filled for a portion of the records.

To accommodate the attributes that depend on the transaction type, we created an additional column in the table that stores key-value pairs in JSON format. A pure SQL solution would have resulted in a weak design, as would a pure NoSQL solution. In cases like these, they work great together.
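
As a minimal sketch of this idea (using SQLite and invented table and column names, not our actual schema), the fields every transaction shares stay relational, while the type-dependent attributes live in a JSON text column:

```python
import json
import sqlite3

# Hypothetical simplified schema: fixed columns for the fields every
# transaction has, plus a JSON text column for type-dependent attributes.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE transactions (
        id         INTEGER PRIMARY KEY,
        type       TEXT NOT NULL,
        amount_eur REAL NOT NULL,
        attributes TEXT  -- JSON key-value pairs, only filled where relevant
    )
""")

# A storage transaction carries different attributes than a shipment one.
con.execute(
    "INSERT INTO transactions (type, amount_eur, attributes) VALUES (?, ?, ?)",
    ("STORAGE", 1.20, json.dumps({"warehouse": "WH1", "days_stored": 30})),
)
con.execute(
    "INSERT INTO transactions (type, amount_eur, attributes) VALUES (?, ?, ?)",
    ("SHIPMENT", 3.50, json.dumps({"carrier": "PostNL"})),
)

# Regular SQL handles the relational part; the JSON is parsed only when needed.
for type_, amount, attrs in con.execute(
    "SELECT type, amount_eur, attributes FROM transactions ORDER BY id"
):
    details = json.loads(attrs) if attrs else {}
    print(type_, amount, details)
```

The relational columns remain queryable and indexable as usual, while no sparse, type-specific columns clutter the table.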

FNK – Warehouse orders

FNK processes our customer orders to create warehouse orders. It determines the warehouse that will fulfil the customer demand and instructs the warehouse to fulfil the order. Besides regular warehouses, it also communicates with our warehouse for digital products (e-books and software downloads) and with retailers that sell their products on our platform and take care of the fulfilment themselves.

These retailers have requirements that differ from the other warehouses. To accommodate these without adding retailer-specific structures to this service, we introduced an additional column that stores XML. This mixture of SQL (one table for all warehouse orders) and NoSQL (stored XML) results in a simple model that can handle requirements that are only needed for a part of the orders. Since the data in the XML is hardly needed in this service, but mostly in downstream services, there is no performance penalty.
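
A small sketch of the same mixture with XML (again with invented names, not the real FNK schema): the order table stays relational, while the retailer-specific details travel in an XML column that this service treats as opaque and only a downstream consumer parses:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical warehouse-order table: shared relational columns plus an
# XML column for retailer-specific details this service only passes along.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE warehouse_orders (
        order_id  INTEGER PRIMARY KEY,
        warehouse TEXT NOT NULL,
        details   TEXT  -- retailer-specific XML, opaque to this service
    )
""")
con.execute(
    "INSERT INTO warehouse_orders VALUES (?, ?, ?)",
    (1, "RETAILER_X", "<details><giftWrap>true</giftWrap></details>"),
)

# This service only forwards the XML; a downstream service parses it.
(xml_blob,) = con.execute(
    "SELECT details FROM warehouse_orders WHERE order_id = 1"
).fetchone()
gift_wrap = ET.fromstring(xml_blob).findtext("giftWrap")
print(gift_wrap)  # "true"
```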

What we learned

The NoSQL parts in these mixed data stores are mostly used for reading. If you need to filter on these fields or use them in joins, performance will degrade.

Next in web scale patterns in the bol.com back office

Next week, an episode on the following subject will be published:

  • Micro services

web scale patterns in the bol.com back office – Event Sourcing

Last week we started a series of blog posts in which we show you how we use “web scale” patterns to achieve scalability and flexibility in our back office software. Last week’s pattern was CQRS. This week we will dive into Event Sourcing, showing you how these patterns don’t just solve technical problems; they help us solve our business problems!

Event Sourcing

The idea of Event Sourcing is that every change to the state of a system is captured in sequence and that these events can be used to determine the current state. Consequently, the state of the system for any point in time can be determined by replaying the events. The structure of the service changes from storing state to storing events.

The most obvious thing we gain by using Event Sourcing is that we have a log of all the changes. We can see everything that happened. This enables us to:

  • Do a complete rebuild;
  • Determine the state of the system at any point in time;
  • Event replay – Compute the consequences of a change in a past event, or recalculate the consecutive states based on the proper sequence of events (in case messages in an asynchronous communication weren’t received in the proper order).
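
A minimal, invented sketch of these ideas: the service stores only events, and both the current state and the state at any point in time are derived by replaying them. Sorting by timestamp before folding also covers the case of events that arrived out of order:

```python
from dataclasses import dataclass

# Minimal event-sourcing sketch with invented event/state types: state is
# never stored directly, only events; any state is derived by replay.
@dataclass(frozen=True)
class StockEvent:
    timestamp: int   # e.g. epoch seconds; any ordered value works
    delta: int       # +received, -shipped

def replay(events, until=None):
    """Fold the event log into a stock level, optionally up to a point in time."""
    level = 0
    for event in sorted(events, key=lambda e: e.timestamp):
        if until is not None and event.timestamp > until:
            break
        level += event.delta
    return level

# The second event arrived late (out of order); sorting restores the sequence.
log = [StockEvent(1, +100), StockEvent(3, -40), StockEvent(2, -10)]
print(replay(log))           # current state: 50
print(replay(log, until=2))  # state as of time 2: 90
```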

Using Event Sourcing can feel a little awkward for some developers. However, it offers a variety of opportunities. One could replay the events on a test environment to see exactly what happened in production, with the ability to stop, rewind and replay the events while running a debugger. This also provides a way to do parallel testing before promoting an upgrade to production.

Where do we use it at bol.com?

One of the examples where we use Event Sourcing is Condition Management, especially the calculation of accruals and invoices for (purchasing) conditions. A large set of our purchasing conditions is based on purchasing or sales amounts and values. In general, these purchasing conditions have to be attributed to (sets of) single products, product categories, suppliers and brands.

Storing the events that represent the purchases and sales of goods allows us to implement functionality that would be very hard to develop otherwise. Typically, a purchasing condition isn’t agreed with a supplier or a brand on the first of January, while it could be valid from the first of January. The Event Sourcing model allows us to handle conditions that are entered into the system somewhere in March or April but are valid from the first of January. These conditions are handled by processing all the events from the start date, so the appropriate accruals and invoices can be created.
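
A toy illustration of this (with made-up numbers and a hypothetical accrual function, not our actual condition logic): a 2% condition that is only entered in March, but valid from January 1st, is computed by replaying all purchase events from the validity date:

```python
from datetime import date

# Invented purchase event log: (event date, purchase value in EUR).
purchases = [
    (date(2016, 1, 15), 10_000.0),
    (date(2016, 2, 3), 5_000.0),
    (date(2016, 4, 20), 2_000.0),
]

def accrual(events, valid_from, rate):
    """Replay purchase events from valid_from and apply the condition rate."""
    return rate * sum(value for day, value in events if day >= valid_from)

# Condition agreed in March: 2% bonus on purchases, valid from January 1st.
# Because all events are stored, the late entry poses no problem.
print(accrual(purchases, valid_from=date(2016, 1, 1), rate=0.02))
```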

With the Event Sourcing model, we are also more loosely coupled to the source services for purchasing and sales. Our calculations can handle events that are captured out of sequence or even very late. Condition values are still calculated properly and handled as accounting and controlling have prescribed.
For the future, we are planning to implement scenario run-throughs and comparisons. This would support our buyers while negotiating with suppliers.

Next in web scale patterns in the bol.com back office

In the next weeks, episodes on the following subjects will be published:

web scale patterns in the bol.com back office – CQRS

web scale patterns in the bol.com back office – CQRS

In this series of blog posts, we show you how we use “web scale” patterns to achieve scalability and flexibility in our back office software. We will guide you through how we apply patterns like CQRS, event sourcing and micro services to solve puzzles in our back office services. These patterns don’t just solve a technical problem, they help us solve our business problems!

We need web scale in the back office since more and more functionality from the back office is needed on the web site to offer better service to our customers. For example, more parts of our web shop query our stock levels and warehouse configuration to determine how fast products can be delivered to our customers and with what options. Consequently, the services that know our stock levels and warehouse configuration also have to be scaled to handle these volumes. To enable this we don’t just need more hardware; we also need to apply patterns to our services to create a proper structure.

CQRS

CQRS is short for Command Query Responsibility Segregation. At the core of CQRS is the notion that a different model can be used to alter data than the model that is used to query data. Updating and reading information have different requirements on a model. There are enough cases where it serves to split these. The downside of this separation is that it introduces complexity. So this pattern should be applied with caution.

The most common approach for people to interact with data in a service or system is CRUD. Create, Read, Update and Delete are the four basic operations on persistent storage. The term was likely popularised by James Martin in his 1983 book Managing the Data-Base Environment. Although other variations like BREAD and MADS exist, CRUD is widely used in systems development.

When a need arises for multiple representations of information, and users interact with these multiple representations, we need something that extends CRUD. This is because the model to access the data tends to be split over several layers and becomes overly complicated.

What CQRS adds

CQRS introduces a split into separate models for update and display: Command and Query respectively. The rationale is that for many problems in more complex domains, having the same model for commands and queries leads to a more complex model, one that does neither well.

Where do we use it at bol.com?

One of the examples of where we use CQRS in the back office services at bol.com is in our Inventory Management. Inventory Management handles all updates on stock levels and serves them to several services in our landscape, including our web shop.

The updates of stock levels come from our warehouse management and include reservations based on customer orders, shipments and received goods. The queries on the stock level originate in the web shop, checkout and fulfilment network. As you can imagine, these queries have quite a different profile compared to the updates. Besides that, the number of queries far exceeds the number of updates.

Given these different requirements, we decided to split command (updates) and query for inventory management. All updates are handled by a technically isolated part of the service. Stock levels are served to other services by another isolated part.

Implementation

The part that handles the updates has several models. The incoming changes, like shipments and received goods, have to be handled in, for example, stock mutations, stock levels and stock valuation. These models receive updates and process them into a new stock level and stock valuation. Once a new stock level is calculated, it is published on a messaging queue to the query part. This message is also consumed by other services that need it.

The query part is a simple single table. The messages from the update part are stored in this table and there is no additional logic or processing. Queries from other services are handled by a REST interface. Due to this design, this call has a very high cache hit ratio, which of course benefits performance.
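
An in-memory sketch of this split (with invented class and queue names; the real service uses a messaging queue and a REST interface): the command side processes mutations and publishes recalculated levels, and the query side just stores and serves them:

```python
from collections import defaultdict, deque

# Stand-in for the messaging queue between the command and query parts.
queue = deque()

class CommandSide:
    """Handles stock mutations and publishes recalculated levels."""
    def __init__(self):
        self._levels = defaultdict(int)

    def handle_mutation(self, product_id, delta):
        self._levels[product_id] += delta
        # Publish the new level; other services may consume this message too.
        queue.append((product_id, self._levels[product_id]))

class QuerySide:
    """A single table with no extra logic, fed by the queue."""
    def __init__(self):
        self._table = {}

    def consume(self):
        while queue:
            product_id, level = queue.popleft()
            self._table[product_id] = level

    def stock_level(self, product_id):  # served over REST in practice
        return self._table.get(product_id, 0)

commands, queries = CommandSide(), QuerySide()
commands.handle_mutation("book-123", +20)  # goods received
commands.handle_mutation("book-123", -3)   # customer reservation
queries.consume()
print(queries.stock_level("book-123"))  # 17
```

The query side stays trivially simple and cacheable precisely because all processing happens on the command side.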

Next in web scale patterns in the bol.com back office

In the next weeks, episodes on the following subjects will be published:

Presenting on pragmatic microservices at GOTO Night, Thursday, May 12, 2016

On Thursday, May 12, 2016 I will be presenting on pragmatic microservices at the GOTO Night organised at bol.com. The presentation will be the support act for Randy Shoup. Check some of his previous presentations on SlideShare.

Pragmatic microservices

We have been around in e-commerce for years. However, compared to other companies, we’re young. Some would say we are in the scale-up phase. In a number of ways we are experiencing rapid growth. What does our IT need to stay innovative and scale to enable all this? What are the tradeoffs made for innovation in IT?

This year we won the Best Web Shop award because of our “efforts to get the difficult to achieve basics right that make the difference for customers”. IT has a large role in achieving this at the scale of a web shop like bol.com. Did (micro)services make the difference in achieving this?

At bol.com we have a pragmatic, business-value-driven approach to (micro)services. In this presentation we share insights and the tradeoffs we made so IT enables us to scale and innovate.

Presentation Pragmatic (Micro)Services

Here is the presentation I used:

Article on integration infra components published in OTech magazine

During Oracle Open World 2013, OTech magazine was launched. OTech is a new independent magazine for Oracle professionals. The magazine’s goal is to offer a clear perspective on Oracle technologies and the way they are put into action. As a trusted technology magazine, OTech Magazine provides opinion and analysis on the news in addition to the facts.

My article in OTech magazine addresses one of the most frequently asked questions: how to pick the right integration infrastructure component to solve the problem at hand.

Download the fall issue of OTech magazine.

Gartner Magic Quadrant for SOA Infrastructure Projects

In June, Gartner published its Magic Quadrant for Application Infrastructure for Systematic SOA Infrastructure Projects. Due to the nature of SOA initiatives, the selection of technologies and products aimed at supporting the implementation of the SOA infrastructure is done upfront. The resulting platform is shared among SOA applications and other integration initiatives in the enterprise.

To address the need for SOA infrastructure, vendors typically offer “SOA suites” or “SOA platforms” that package multiple products.

Magic Quadrant for Application Infrastructure for Systematic SOA Infrastructure Projects
All open source vendors are in the Visionary quadrant. In some cases their offerings are more modern than the Leaders’ products, since they are hardly burdened with backward compatibility issues. However, these vendors are constrained by their small size or sometimes inconsistent execution.

In general, the open source platforms are less expensive and easier to implement and deploy. However, their offerings are generally less comprehensive than the Leaders’ offerings. If these offerings fit your requirements, this could be an easy-to-use, low-cost SOA infrastructure for your organisation. The open source platforms are a strong technology offering.

Other recent Magic Quadrants for SOA and integration

Service Bus definition

While preparing guidelines for the usage of the Oracle Service Bus (OSB), I was looking for a definition of a Service Bus. There wasn’t one on my blog yet (more posts on integration), so I decided to use the following definitions and share them with you.

Forrester Service Bus definition

Since 2009, Forrester has used this one:

An intermediary that provides core functions to make a set of reusable services widely available, plus extended functions that simplify the use of the ESB in a real-world IT environment.

Erl Service Bus definition

Thomas Erl offers the following description of a Service Bus:

An enterprise service bus represents an environment designed to foster sophisticated interconnectivity between services. It establishes an intermediate layer of processing that can help overcome common problems associated with reliability, scalability, and communications disparity.

An Enterprise Service Bus is seen by Erl et al. as a pattern. That is why it is even more important to share what that pattern is. Later on, I’ll also briefly describe the VETRO pattern, also a very useful pattern when comparing integration tools or developing guidelines.

Erl Enterprise Service Bus pattern

On the SOA patterns site we learn that an enterprise service bus represents an environment designed to foster sophisticated interconnectivity between services. The Enterprise Service Bus pattern is a composite pattern based on:

  • Asynchronous Queuing – basically an intermediary buffer, allowing services and consumers to process messages independently by remaining temporally decoupled.
  • Service Broker – composed of the following patterns:
  • Data Model Transformation – to convert data between disparate schema structures.
  • Data Format Transformation – to dynamically translate one data format into another.
  • Protocol Bridging – to enable communication between different communication protocols by dynamically converting one protocol to another at runtime.
  • Intermediate Routing – meaning message paths can be dynamically determined through the use of intermediary routing logic.
  • Optionally, the following patterns: Reliable Messaging, Policy Centralization, Rules Centralization, and Event-Driven Messaging. Also have a look at slide 12 and onwards of the SOA Symposium Service Bus presentation.

VETRO pattern for Service Bus

The VETRO pattern was introduced by David Chappell, writer of the 2004 book Enterprise Service Bus.

  • V – Validate: Validation of messages, e.g. based on XSD or Schematron.
  • E – Enrich: Adding data from applications the message doesn’t originate from.
  • T – Transform: Transform the data model, the data format or the protocol used to send the message.
  • R – Route: Determine at runtime where to send the message.
  • O – Operate: You can see this as calling the implementation, i.e. invoking the target service.
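
As a rough sketch, the VETRO steps can be chained as small functions; all names, fields and routing rules here are invented for illustration, not taken from any real bus configuration:

```python
# Invented sketch of the VETRO steps as a message pipeline.
def validate(msg):
    assert "order_id" in msg, "schema validation failed"  # XSD stand-in
    return msg

def enrich(msg):
    # Add data from an application the message doesn't originate from.
    return {**msg, "customer_segment": "retail"}

def transform(msg):
    # Transform the data model to the target's schema.
    return {"orderId": msg["order_id"], "segment": msg["customer_segment"]}

def route(msg):
    # Determine at runtime where to send the message.
    return "warehouse_queue" if msg["segment"] == "retail" else "b2b_queue"

def operate(msg, destination):
    # Call the implementation: invoke the target service.
    return f"delivered {msg['orderId']} to {destination}"

def service_bus(msg):
    msg = transform(enrich(validate(msg)))
    return operate(msg, route(msg))

print(service_bus({"order_id": 42}))  # "delivered 42 to warehouse_queue"
```

Keeping each step a separate function mirrors how the pattern helps compare tools: you can ask of any product how well it supports each individual step.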

We also used this pattern to compare Oracle integration tools and infrastructure. It can very well be used when choosing the appropriate tools for a job and deciding on guidelines for how to use these tools.

SOA Cloud Service Technology Symposium 2012 London

The world’s largest conference dedicated to SOA, cloud computing and service technology will have its 2012 edition in London, hosting the 5th SOA Symposium and the 4th International Cloud Computing Symposium on September 24-25. This brings the symposium back to Europe after last year’s visit to Brasilia, Brazil. The SOA Symposium’s website has been rebranded to Service Tech Symposium.

There are several blog posts on previous editions of the SOA Symposium available on this blog. During this year’s event the following books will be launched:

  • Cloud Computing: Concepts & Technology
  • SOA with REST: Principles, Patterns & Constraints
  • Next Generation SOA: A Real-World Guide to Modern Service-Oriented Computing

Call for presentations

The 2012 program committee invites submissions on all topics related to SOA, cloud computing and service technologies. The primary tracks are:

  • Cloud Computing Architecture & Patterns
  • New SOA & Service-Orientation Practices & Models
  • Service Modeling & Analysis Techniques
  • Service Infrastructure & Virtualisation
  • Cloud-based Enterprise Architecture
  • Real World Case Studies
  • Service Engineering & Service Programming Techniques
  • Interactive Services & the Human Factor
  • New REST & Web Services Tools & Techniques

Additional information on the 2012 SOA Symposium Call for Papers is available online. Download the Speaker Form. All submissions must be received no later than July 15, 2012.

Book review: Do more with SOA integration

Recently I read Do more with SOA integration, which was published in December 2011. This book is a mash-up of eight earlier published works from Packt, including Service Oriented Architecture: An Integration Blueprint, Oracle SOA Suite Developer’s Guide, WS-BPEL 2.0 for SOA Composite Applications with Oracle SOA Suite 11g, and SOA Governance. More details on this title:

Target audience according to the publisher:

If you are a SOA architect or consultant who wants to extend your knowledge of SOA integration with the help of a wide variety of Packt books, particularly covering Oracle tools and products, then “Do more with SOA Integration: Best of Packt” is for you. You should have a good grasp of Service Oriented Architecture, but not necessarily of integration principles. Knowledge of vendor-specific tools would be an advantage but is not essential.

My thoughts

My assumption is that most people won’t read the roughly 700 pages of this book cover to cover. In my view it is a good reference book that gives a solid introduction to SOA and integration in general.

To deepen your knowledge of real-world scenarios, there are good examples, e.g. in the chapters on Extending enterprise application integration and Service oriented ERP integration. The first gives an example of BPEL orchestrating various web services exposed on ERP systems (SAP, Siebel) using EAI (TIBCO, webMethods). This sample includes an example of centralized error handling. The latter shows an integration of PeopleSoft CRM 8.9 and Oracle Applications 11g using BPEL 10g. The ideas and mechanisms of the integration also hold in the 11g version.

Chapter 14, SOA Integration – a Scenario in Detail, offers another example of how to use Oracle SOA technology (10g again) to integrate legacy systems into a more modern application landscape. It does a thorough job.

The chapter on Base Technologies has parts that are based on the Trivadis Integration Architecture Blueprint. Besides that, it offers a good introduction to transactions, JCA, SCA and SDO. Their fundamentals are well explained without getting too technical. Should you be looking for coding examples on these topics, there are other great sources.

When reading about XML for integration, I noticed that it answers questions we get from our customers on a regular basis, such as how to design XSDs (XML Schema Definitions): when to use a type or an element, whether to choose the targetNamespace or XMLSchema as the default namespace, and how many namespaces to use. These are all well addressed in the book.

On the other hand, a complete view on the following statement could fill at least a whitepaper:

Adopt and develop design techniques, naming conventions, and other best practices similar to those used in object-oriented modelling to address the issues of reuse, modularization, and extensibility. Some of the common design techniques are discussed in the later sections.

The chapter on loose coupling offers an example of how to achieve this using the Oracle Service Bus. It is hard to overrate the importance of loose coupling, since many of both the technical and the business advantages rely on whether or not this loose coupling is achieved.

Bottomline

As a reference, this is a good starting point to learn about SOA and integration in general. It could be more consistent on some details, and with the great BPEL and BPM tooling available these days I wouldn’t implement processes in an ESB. Of course there is a good chapter (12) with an example of using both BPM and BPEL. As mentioned before, it has some great illustrative examples of real-world scenarios. The bottom line is that I would recommend this book to people looking for a reference on SOA and integration.

Cons:
Some text seems a little dated.

Pros:
Good description of SOA and integration in general; practical; solid introduction to XML, transactions, JCA and SCA; nice real-world integration examples.

Additional reviews

If you’re interested in other reviews on this book, visit the ADF Code Corner blog by Frank Nimphius, AMIS blog by Lucas Jellema, or this SOA / BPM on Fusion Middleware blog by Niall Commiskey.