Microservices: Right Tool, Right Job

Microservices is one of today’s hot buzzwords in integration technology. Essentially, it boils down to a style of software architecture in which applications, whether they serve the web, wearable devices or the Internet of Things, are composed of small, independent processes that communicate with one another through application programming interfaces (APIs).

The term microservice, widely believed to have been popularized (if not coined) by Martin Fowler, is increasingly discussed in the digital realm and has come to represent many concepts – not all of which align with the original idea.

As with many new technology buzzwords, microservices has attracted some naysayers, hence the pejorative “microservices-washing.” According to Wikipedia, the term is derived from whitewashing, “meaning to hide some inconvenient truth with bluster and nonsense.”

But microservices are not nonsense; they are better thought of as using the right tool for the right job. The notion of a microservice, at its core, is that each piece of a complex system is discrete: business functionality can be broken down into much smaller pieces, and those pieces can be engineered, tested and deployed independently, without having to cross-coordinate the members of a large engineering team.

In this five-part series of posts, “Microservices 101,” I will identify and explain what I believe to be the foundational elements of microservices and their practical applications.

Conceptually, a microservice is a portion of a complex application that delivers a discrete, self-contained piece of business functionality, meaning the run-time server, the configuration and the business logic are deployed as a single unit. These units can be developed, tested, deployed, upgraded, retired and so on without having to deploy the entire application at the same time. Additionally, microservices can scale up or out individually as needed.
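To make this concrete, here is a minimal, hypothetical sketch in Python of one such self-contained unit: a tiny service that owns a single piece of business logic, carries its own configuration and exposes it over HTTP. The framework choice, endpoint and field names are illustrative assumptions, not a prescription.

```python
# A minimal, illustrative microservice: one discrete piece of business
# functionality (an order-total calculation) bundled with its own
# configuration and run-time server, deployable and scalable on its own.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Configuration travels with the service rather than with a shared monolith.
CONFIG = {"port": 8080, "currency": "USD", "tax_rate": 0.07}


def order_total(items):
    """The discrete piece of business logic this service owns."""
    subtotal = sum(item["price"] * item["quantity"] for item in items)
    return round(subtotal * (1 + CONFIG["tax_rate"]), 2)


class OrderHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        response = {"total": order_total(body["items"]), "currency": CONFIG["currency"]}
        payload = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    # Run-time server, configuration and business logic ship as one unit.
    HTTPServer(("", CONFIG["port"]), OrderHandler).serve_forever()
```

Because the unit is this small and self-contained, it can be rebuilt, redeployed or scaled out behind a load balancer without touching any other part of the application.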

There are additional benefits. The independence gained by breaking business functionality into discrete deployment units lends itself very well to the idea of using the right tool for the right job, and it provides greater agility and flexibility. Developers can choose whichever programming language, storage technology and framework best solve the problem, because each part of the overall system no longer has to conform to the standards imposed by a “monolithic architecture.”

Microservices are not necessarily just small services; they, in fact, can be rather large depending on the functionality they provide. Even so, they are discrete and self-contained. Next time we’ll look at the IT architecture approach that preceded microservices, called Monolithic Architecture.


Considerations around MiFID II Transaction Reporting

Within the last year or so, there has been a marked increase in the fines levied on financial institutions by the FCA for transaction reporting breaches. Last year, for example, Deutsche Bank was fined £4.7m for incorrectly reporting over 29 million swap transactions in which the buy and sell indicators were the wrong way around[1]. Then in May of this year, the FCA levied its largest fine yet for transaction reporting failures, £13.3 million, on Merrill Lynch for a range of transgressions, including incorrectly reporting more than 35 million transactions and failing completely to report another 121,387[2]. Some of the failures included incorrect client and counterparty identifiers, wrong trade dates and times, missing maturity dates and, once again, reversed buy/sell indicators.


Whilst reporting whether a trade is a buy or a sell might seem simple and fundamental, the details in these and other similar cases highlight the increasingly complex nature of transaction reporting today. Firms might be trading on behalf of all sorts of clients and counterparties, across asset classes covering various instrument types, on a wide range of markets falling under multiple jurisdictions. And they might be managing a range of trading desks across multiple geographic locations, each with its own trading systems, trade/data formats and local reporting requirements. So it is not surprising that figuring out what needs to be reported to whom, by when and in what format can become a system integration nightmare where things can go badly wrong.

MiFID II will only compound this situation. From January 2017, transactions will need to be reported across many more instruments and asset classes, on a much wider range of trading venues. The details of what must be reported will grow substantially too, as firms will have far stricter obligations regarding the identification of transaction counterparties, individual traders and even computer algorithms. The number of firms affected by these new regulations will also grow, with the removal of buy-side exemptions and the requirement for any firm that is party to a trade in Europe, regardless of geographic location, to report transactions.

Best practices

Unfortunately there is no “silver bullet” when it comes to addressing the challenges of transaction reporting under MiFID II. But there are some best practices that firms can adopt. By following these principles, firms will not only minimise the risk of being fined by competent authorities for reporting breaches; they will also have the business intelligence they need to proactively manage their reporting requirements in the future.

Much of this revolves around how a firm captures and stores data and applies business logic to it. It is important that firms not only capture all of the relevant transaction data in the first place, but also have the tools and infrastructure to make sense of that data and translate it into accurate, meaningful transaction reports.

Top-down Approach

Given the growing complexity of what needs to be reported, one of the key elements here is to ensure that any reporting solution implemented for MiFID II can be modified to factor in specific local regulatory or asset/instrument-based requirements. Although there is no excuse for getting things like buy and sell indicators the wrong way around, it is an easy mistake to make when trying to fit the square peg of, say, a payment/receipt-based equity swap transaction into the round hole of an exchange-traded derivative.
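To make that mapping problem concrete, here is a small, hypothetical sketch in Python of the kind of normalisation logic involved. The record fields and the pay/receive-to-buy/sell mapping are assumptions for illustration only, not MiFID II field definitions; the point is that the convention has to be validated per asset class, because reversing it is exactly how millions of transactions end up reported the wrong way around.

```python
# Hypothetical sketch: normalising a payment/receipt-based swap record into
# the buy/sell indicator that a derivative-style transaction report expects.
# Field names and the mapping convention are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class SwapLegRecord:
    instrument_id: str
    pay_receive: str  # "PAY" if the firm pays the return leg, "RECEIVE" otherwise


def to_buy_sell_indicator(record: SwapLegRecord) -> str:
    """Map the pay/receive convention onto a buy/sell convention."""
    # Assumed convention; must be verified per asset class and jurisdiction.
    mapping = {"PAY": "BUY", "RECEIVE": "SELL"}
    flag = record.pay_receive.upper()
    if flag not in mapping:
        raise ValueError(f"Unknown pay/receive flag: {record.pay_receive!r}")
    return mapping[flag]


if __name__ == "__main__":
    print(to_buy_sell_indicator(SwapLegRecord("EQ-SWAP-123", "PAY")))  # -> BUY
```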

Firms need to take a top-down approach to transaction reporting, starting with the Chief Data Officer (or equivalent) driving the firm’s transaction reporting strategy. And given the fast pace of change around regulatory reporting in the financial sector (MiFID II and MiFIR transaction reporting is one example, but there are others such as BCBS 239), it is important that any systems initiatives they undertake follow an integration-style approach in which data can be transformed, rather than a complete rebuilding of their enterprise data infrastructure for each new regulatory requirement.

Technology as Enabler

As is often the case when addressing complex business issues, technology can act as an enabler, as long as the right tools are used in the right way. Complex transaction reporting requirements can often result in spaghetti code that is hard to maintain and correct. This challenge can be addressed with Business Process Analysis (BPA) tools, such as Software AG’s ARIS, which can model transformation logic in a way that can be easily understood, verified and even modified by business analysts.

From a reporting and workflow perspective, integration and Business Process Management (BPM) tools such as webMethods can help firms not only monitor the end-to-end processes from their multiple systems to the Approved Reporting Mechanism (ARM), but also define actionable workflows if, for example, a specific system extract doesn’t get produced or the ARM reports errors.
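The sketch below is a hypothetical illustration of the kind of exception check such a workflow automates: flag any upstream system whose daily extract is missing and any batch the ARM has rejected. It is plain Python rather than webMethods configuration, and the system names and statuses are invented for the example.

```python
# Hypothetical exception check for a daily transaction reporting cycle:
# which upstream extracts never arrived, and which report batches did the
# ARM reject? Names and statuses are illustrative only.

from datetime import date


def find_reporting_exceptions(expected_systems, received_extracts, arm_acknowledgements):
    """Return human-readable exceptions for today's reporting cycle."""
    exceptions = []

    # 1. Extracts that never arrived from an upstream trading system.
    for system in sorted(expected_systems - received_extracts):
        exceptions.append(f"{date.today()}: no extract received from {system}")

    # 2. Batches the ARM rejected (e.g. schema or validation errors).
    for batch_id, status in sorted(arm_acknowledgements.items()):
        if status != "ACCEPTED":
            exceptions.append(f"{date.today()}: ARM rejected batch {batch_id} ({status})")

    return exceptions


if __name__ == "__main__":
    issues = find_reporting_exceptions(
        expected_systems={"equities_oms", "rates_desk", "fx_desk"},
        received_extracts={"equities_oms", "fx_desk"},
        arm_acknowledgements={"BATCH-001": "ACCEPTED", "BATCH-002": "REJECTED_SCHEMA"},
    )
    for issue in issues:
        print(issue)  # in practice each exception would trigger a workflow task or alert
```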

In conclusion, the transaction reporting obligations under MiFID II, EMIR, Dodd-Frank, Basel III and so on are only going to grow over time. But if firms use technology wisely to simplify what is becoming an increasingly complex area, they will not only save themselves a great deal of time, they will also reduce the risk of non-compliance along with the associated fines and bad press.


[1] https://www.fca.org.uk/news/deutsche-bank-fined-transaction-reporting-failures

[2] https://www.fca.org.uk/news/fca-fines-merrill-lynch-international-for-transaction-reporting-failures


Anticipate, Influence and Respond Using Apama 9.9

With its latest release of Apama streaming analytics, Software AG has taken an exciting step forward by incorporating predictive analytics capabilities into the product. In this post, I’m going to look at why this is such an important step for our customers.

Predictive analytics gives organizations the ability to predict, with a reasonable degree of certainty, events that are likely to happen in the future, based on spotting patterns in, around and leading up to similar events in the past.

Predictive analytics allows organizations to build models that can be used to:

  • Reduce downtime by predicting mechanical and part failures
  • Reduce the risk of fraud by spotting unusual spending patterns
  • Increase customer lifetime value by being able to monitor lifestyle changes as they happen

Being able to deploy these models alongside the streaming analytics capabilities of Apama makes it possible not only to understand the events that might, for example, lead to a customer lapsing or to a fraudulent transaction, but also to respond to those events in a way that lets you influence the outcome before the main event has actually taken place. In other words, while it still matters.
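As a rough illustration of the idea, the sketch below scores each incoming event against a predictive model and triggers an intervention when the risk crosses a threshold. It is plain Python rather than Apama EPL, and the model, threshold and action are all stand-ins.

```python
# Hypothetical sketch: score streaming events against a predictive model and
# act before the predicted outcome (e.g. a customer lapsing) occurs.

import random


def churn_risk(event):
    """Stand-in for a trained predictive model scoring one customer event."""
    return random.random()  # a real model would return a calibrated probability


def intervene(event):
    """Stand-in for the action taken while it still matters (offer, alert, ...)."""
    print(f"Intervening for customer {event['customer_id']} (risk above threshold)")


RISK_THRESHOLD = 0.8  # assumed cut-off


def process_stream(events):
    for event in events:
        if churn_risk(event) > RISK_THRESHOLD:
            intervene(event)


if __name__ == "__main__":
    sample_events = [{"customer_id": i, "action": "downgrade_plan"} for i in range(5)]
    process_stream(sample_events)
```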

In this way, organizations are able to increase efficiency, reduce costs and improve revenue while increasing customer satisfaction at the same time.

The technology used to build these predictive models is not new. In fact, many of us already use the same or similar technologies with the voice recognition capabilities of our phones, or in cars that constantly monitor our behaviour for signs that we might be getting drowsy.

What is new is the ability to seamlessly combine predictive capabilities with the power of streaming analytics—and this is what we have delivered with the release of Apama 9.9.

By using proven and tested technology from Zementis, we have made it possible for organizations to take predictive analytics models from any of the tools they may be using, or plan to use in future, that support PMML (the industry standard for exchanging predictive analytics models) and to quickly and easily re-use those models within Apama.
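As a rough illustration of what the PMML hand-off looks like on the model-building side, the sketch below exports a scikit-learn model to a PMML file using the sklearn2pmml package (an assumed choice; any PMML-capable tool would do, and the converter itself relies on a Java runtime). The resulting .pmml file is the portable artefact a PMML-aware engine can then load and score.

```python
# Hedged sketch, assuming scikit-learn and the sklearn2pmml package are
# installed: train a simple classifier and export it as a PMML document.
# The dataset, model and file name are illustrative.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)

# Wrap the estimator in a PMML-aware pipeline so it can be converted.
pipeline = PMMLPipeline([
    ("classifier", DecisionTreeClassifier(max_depth=3)),
])
pipeline.fit(X, y)

# Write the trained pipeline out as PMML; any PMML-capable scoring engine
# can consume the resulting file.
sklearn2pmml(pipeline, "example_model.pmml")
```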

Our support for predictive analytics actually goes one step further, allowing organizations to refine their predictive models over time by looking at the effects of different interventions and feeding those results back into the models. That way, Apama can determine the best way to influence different types of events, taking into account all the factors that might be relevant. This means that your predictive analytics solution becomes more intelligent the more you use it. Anticipate, influence, respond. This is the power of Apama 9.9.

To find out more about how our predictive analytics capabilities can help your organization, please visit our predictive analytics pages.


Government is a Major Driver of B2B Commerce

While organizations of all kinds the world over benefit from B2B technology and practices, one of the most prominent is government, according to Bob Cohen, North American vice-president for Basware.

“Governments are forerunners in pushing connected commerce,” he wrote recently in his Paythink column. “More than fifty governments around the world are in the process of implementing e-invoicing mandates and are pushing for a supportive infrastructure to enable agile e-commerce.”

He cited tax compliance and fraud reduction as significant drivers of the push, noting that efficiency and savings were major factors domestically, with the U.S. multi-agency Invoice Processing Platform delivering a $20 reduction in federal invoice processing costs.

For the European Union, he said, the goal has been borderless commerce built on an interoperable e-invoicing standard. Government suppliers must now conform to that standard, which is causing it to proliferate within private industry as well.

Other benefits emanating from government e-commerce include the untethering of financial processes, he added, through the use of cloud technology and mobile platforms. Business and supplier networks, which are growing rapidly, are in turn simplifying and accelerating governmental requisition processes.

Finally, he noted, the increasing demand from businesses for real-time payment and financing options has made government uptake of B2B prudent in any case. And the analytics that modern B2B methodologies and systems enable are making both governmental purchasing systems and suppliers ever more efficient over time.
