Distributed systems pose reliability problems that are rarely encountered in centralized systems. A distributed system consisting of a number of computers connected by a network is subject to independent failure of any of its components: the computers themselves, network links, operating systems, or individual applications. Decentralization allows parts of the system to fail while other parts remain functioning, which can lead to abnormal behaviour in executing applications.
Consider the case of a distributed system where the individual computers provide a selection of useful services that can be utilized by an application. It is natural that an application that uses a collection of these
services requires that they behave consistently, even in the presence of failures. A very simple consistency requirement is that of failure atomicity: the application either terminates normally, producing the intended results, or is aborted, producing no results at all. This failure atomicity property is supported by atomic transactions, which have the following familiar ACID properties:
Atomicity: The transaction completes successfully (commits), or, if it fails (aborts), all of its effects are undone (rolled back);
Consistency: Transactions produce consistent results and preserve application-specific invariants;
Isolation: Intermediate states produced while a transaction is executing are not visible to other transactions. Furthermore, transactions appear to execute serially, even if they are actually executed concurrently. This is typically achieved by locking resources for the duration of the transaction so that they cannot be acquired in a conflicting manner by another transaction;
Durability: The effects of a committed transaction are never lost (except by a catastrophic failure).
A transaction can be terminated in two ways: committed or aborted (rolled back). When a transaction is committed, all changes made within it are made durable (forced onto stable storage such as disk). When a transaction is aborted, all changes made during the lifetime of the transaction are undone. In addition, it is possible to nest atomic transactions, where the effects of a nested action are provisional upon the commit/abort of the outermost (top-level) atomic transaction.
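The commit/abort mechanics described above can be sketched in plain Java with a toy in-memory store that keeps an undo log: every write records its inverse operation, so commit simply discards the log while abort replays it in reverse. This is an illustrative sketch only; the class and method names are hypothetical and do not correspond to any JTA or JEE API, and a real transaction manager would additionally force committed state to stable storage for durability.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of failure atomicity: all writes inside begin()/commit()
// take effect together, or abort() undoes every one of them.
public class AtomicUpdate {
    private final Map<String, Integer> store = new HashMap<>();
    private final List<Runnable> undoLog = new ArrayList<>();
    private boolean active = false;

    public void begin() {
        undoLog.clear();
        active = true;
    }

    public void put(String key, int value) {
        if (!active) throw new IllegalStateException("no active transaction");
        final Integer old = store.get(key);
        // Record the inverse operation so this write can be undone on abort.
        undoLog.add(old == null ? () -> store.remove(key)
                                : () -> store.put(key, old));
        store.put(key, value);
    }

    public void commit() {
        // Changes become permanent; the undo log is no longer needed.
        undoLog.clear();
        active = false;
    }

    public void abort() {
        // Replay undo records in reverse order to restore the prior state.
        for (int i = undoLog.size() - 1; i >= 0; i--) undoLog.get(i).run();
        undoLog.clear();
        active = false;
    }

    public Integer get(String key) {
        return store.get(key);
    }
}
```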
In this article we will discuss different methods of improving performance in an enterprise Java-based application, concentrating specifically on the Web services integration layer.
Let’s start by briefly discussing the different methodologies for performing Remote Procedure Calls (RPC) from a JEE application integration-tier perspective, and then delve into the performance pros and cons of each:
Document and literal style Web services using JAX-RPC and JAX-WS
Wire-speed solutions, for example DataPower devices
Concurrency using asynchronous communication, passing contextual information to spawned threads, and asynchronous beans
Web service interface best practices for performance gains
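The concurrency item above can be illustrated with plain java.util.concurrent. This is a minimal sketch, not the container-managed approach the article discusses (WebSphere asynchronous beans or a managed executor would be used in practice): two backend calls are submitted in parallel, and the hypothetical RequestContext class stands in for the security and transaction context that a container would normally propagate to spawned threads. Here the context travels with each task because it is captured by value at submission time.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of asynchronous invocation with explicit context propagation.
public class AsyncCallSketch {

    // Hypothetical stand-in for container-propagated contextual information.
    static final class RequestContext {
        final String userId;
        final String correlationId;
        RequestContext(String userId, String correlationId) {
            this.userId = userId;
            this.correlationId = correlationId;
        }
    }

    // Simulated backend call; the context arrives with the task.
    static String callBackend(RequestContext ctx, String payload) {
        return ctx.correlationId + ":" + payload.toUpperCase();
    }

    public static String invokeAsync(String payload) throws Exception {
        RequestContext ctx = new RequestContext("alice", "req-42");
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // Submit two independent calls concurrently; block only when
            // both results are actually needed.
            Future<String> a = pool.submit(() -> callBackend(ctx, payload));
            Future<String> b = pool.submit(() -> callBackend(ctx, "ping"));
            return a.get() + "|" + b.get();
        } finally {
            pool.shutdown();
        }
    }
}
```

The design point is that neither spawned task reaches back to any thread-local state: everything the task needs crosses the thread boundary explicitly, which is the same discipline a container enforces when it propagates context to asynchronous beans.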
Introduction to SOAP and Web services
SOAP was originally intended to be a cross-Internet form of DCOM or CORBA. An early SOAP-like technology was named WebBroker – a Web-based object broker. It made perfect sense to model an inter-application protocol on DCOM, CORBA, RMI, and the like, because they were the current models for solving inter-application interoperability problems.
These technologies achieved only limited success before they were adapted for the Web. RPC models are great for closed-world problems. A closed-world problem is one where you know all of the users, you can share a data model with them, and you can all communicate your needs directly to one another.
Scalability was comparatively easy in such an environment: you just tell everybody that the RPC API is going to change on such and such a date and perhaps you have some changeover period to avoid downtime. When you want to integrate a new system you do so by building a point-to-point integration.
On the other hand, when your user base is too large to communicate with coherently, you need a different strategy. You need a pre-arranged framework that allows for evolution on both the client and server sides. You need to depend less on a shared, global understanding of the rights and responsibilities of a participant. You need to put in hooks where compliant clients and servers can innovate without contacting you. You need to leave in explicit mechanisms for interoperating with systems that do not have the same API. RPC protocols are usually poorly suited for this kind of evolution: changing interfaces tends to be extremely difficult, and integrating services typically takes complicated software “glue”. This is the motivating factor behind extending the capabilities of the first-generation Web services framework for the enterprise.
Second-generation SOE
The vast number of second-generation Web service specifications that have emerged position SOA as a viable successor to prior distributed platforms. Their feature sets continue to broaden, as do vendor-sponsored variations of the specifications themselves. The continuing maturity of these standards and their implementations sets the stage for the viable evolution of a Service-Oriented Enterprise (SOE). Some of the second-generation Web service specifications, which can be embedded and used as part of either JAX-RPC or JAX-WS, are listed below:
Business Process Execution Language for Web services (BPEL4WS)
SOAP with Attachments (SwA)
eXtensible Access Control Markup Language (XACML)