Getting Started: Apache UIMA Asynchronous Scaleout

The "Getting Started: Apache UIMA™ Asynchronous Scaleout" guide should help you understand UIMA AS: how it relates to the core UIMA framework, what its main concepts are, and how it might be used in an example application scenario.

UIMA AS Relationship to UIMA

UIMA AS is the next-generation scalability replacement for the Collection Processing Manager (CPM). UIMA AS provides more flexible and powerful scaleout capability and extends support to the UIMA components the CPM does not support: the flow controller and the CAS multiplier. UIMA components can be run within UIMA AS with no code or component descriptor changes.

UIMA AS introduces a new XML descriptor, the Deployment Descriptor. Unlike the CPM descriptor, which specifies component aggregation, error handling, and scalability, the UIMA AS Deployment Descriptor specifies only error handling and scalability options; component aggregation is done using the standard aggregate descriptor. The UIMA Component Descriptor Editor (CDE) has been enhanced to support the UIMA AS Deployment Descriptor.
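As a sketch, a minimal Deployment Descriptor for a primitive service might look like the following; the queue name, broker URL, descriptor path, and instance counts are placeholders, and the UIMA AS reference documentation defines the full schema:

```xml
<analysisEngineDeploymentDescription
    xmlns="http://uima.apache.org/resourceSpecifier">
  <name>Example Async Service</name>
  <description>Deploys an existing annotator as a UIMA AS service.</description>
  <deployment protocol="jms" provider="activemq">
    <!-- Number of CASes in the service's CAS pool -->
    <casPool numberOfCASes="5"/>
    <service>
      <!-- Queue the service listens on, and the broker that hosts it -->
      <inputQueue endpoint="ExampleQueue" brokerURL="tcp://localhost:61616"/>
      <!-- Standard UIMA analysis engine descriptor, unchanged -->
      <topDescriptor>
        <import location="descriptors/ExampleAnnotator.xml"/>
      </topDescriptor>
      <analysisEngine>
        <!-- Run three instances of the annotator in this process -->
        <scaleout numberOfInstances="3"/>
      </analysisEngine>
    </service>
  </deployment>
</analysisEngineDeploymentDescription>
```

Note that the analysis engine descriptor itself is imported unchanged; only scaleout and deployment options live in this file.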

To use UIMA AS, download the UIMA AS binary package from the UIMA Download Page and unzip it into a directory of your choice. The package includes the base UIMA jars to simplify installation. In addition to base UIMA, the UIMA AS package also includes the Apache ActiveMQ implementation of JMS, which provides connectivity between UIMA AS clients and services.

UIMA AS Client - Service Architecture

Like UIMA Vinci services, UIMA AS provides a service wrapper that turns a UIMA analysis engine into a shared UIMA service. UIMA services enable UIMA clients to use analysis engines running in separate processes on the same or different machines. UIMA AS services have a number of advantages:

  1. UIMA AS services are asynchronous. A client can send multiple requests before receiving any responses. This can increase processing throughput over Vinci services which, being synchronous, prevent the client or service from doing work during request/reply transmission.
  2. Load balancing for UIMA AS services is excellent because available service instances pull requests from a shared queue. Service instances can be dynamically added or removed at runtime.
  3. Thanks to Apache ActiveMQ, connectivity to UIMA AS services is much more flexible than with Vinci. At most one port needs to be opened to expose one or more services through a firewall, and the HTTP protocol can be used for robust wide-area network connectivity.
  4. The UIMA AS design is based on JMS, a widely adopted standard. One goal for UIMA AS is that it can run with different JMS implementations, so that it can be integrated into popular middleware platforms.
  5. UIMA AS exposes numerous performance parameters via JMX, extending in a consistent manner the JMX parameters already exposed by core UIMA. Custom UIMA AS monitoring tools based on these JMX parameters are being developed to help identify bottlenecks in complex deployments.

The shared queue in front of each UIMA AS service is implemented using an Apache ActiveMQ broker. A separate reply queue is created for each client, and every request contains the address of the client's unique reply queue.

UIMA AS services are compatible with core UIMA applications such as the Document Analyzer. However, the base UIMA interface to services is synchronous and not thread-safe. A new UIMA AS application API exposes both synchronous and asynchronous thread-safe interfaces to UIMA AS services.
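As an illustration of the asynchronous API, a client might look roughly like the following sketch; the broker URL, queue name, and document text are placeholders, and it requires a UIMA AS installation and a running broker to actually execute:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.uima.aae.client.UimaAsBaseCallbackListener;
import org.apache.uima.aae.client.UimaAsynchronousEngine;
import org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl;
import org.apache.uima.cas.CAS;
import org.apache.uima.collection.EntityProcessStatus;

public class AsyncClientSketch {
  public static void main(String[] args) throws Exception {
    UimaAsynchronousEngine client = new BaseUIMAAsynchronousEngine_impl();

    // Replies arrive asynchronously on a callback listener
    client.addStatusCallbackListener(new UimaAsBaseCallbackListener() {
      @Override
      public void entityProcessComplete(CAS aCas, EntityProcessStatus aStatus) {
        // consume analysis results here
      }
    });

    // Connection parameters: broker URL and the service's input queue
    Map<String, Object> ctx = new HashMap<>();
    ctx.put(UimaAsynchronousEngine.ServerUri, "tcp://localhost:61616");
    ctx.put(UimaAsynchronousEngine.ENDPOINT, "ExampleQueue");
    ctx.put(UimaAsynchronousEngine.CasPoolSize, 5);
    client.initialize(ctx);

    // Multiple CASes can be in flight before any reply is received
    CAS cas = client.getCAS();
    cas.setDocumentText("Some text to analyze.");
    client.sendCAS(cas);

    client.collectionProcessingComplete();
    client.stop();
  }
}
```

The callback listener is what makes the client asynchronous: `sendCAS` returns immediately, and results are delivered on the listener as services complete them.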

Figure 1 - UIMA AS Client-Server Architecture

UIMA Aggregates vs. UIMA AS Aggregates

A UIMA aggregate analysis engine is implemented as a synchronous, single-threaded object. That is, when the aggregate AE's process method is called, only one delegate can be working on that CAS at a time.

For a UIMA AS aggregate, each delegate has its own input queue, and the aggregate controller sends requests to the delegates via their queues. By default a delegate is assumed to be collocated with the aggregate controller, but the Deployment Descriptor allows a delegate to be "remote", i.e. mapped to the input queue for another UIMA AS service. For collocated delegates, the Deployment Descriptor can specify how many instances of the delegate should be instantiated; each instance will have a listener thread to receive requests and execute user code.
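The delegate wiring can be sketched in the Deployment Descriptor along these lines; the keys must match the delegate keys in the standard aggregate descriptor, and the queue names, hosts, and instance counts here are illustrative:

```xml
<analysisEngine async="true">
  <delegates>
    <!-- Collocated delegate: four instances in this process,
         each with its own listener thread -->
    <analysisEngine key="Tokenizer">
      <scaleout numberOfInstances="4"/>
    </analysisEngine>
    <!-- Remote delegate: requests go to another service's input queue -->
    <remoteAnalysisEngine key="NamedEntityDetector">
      <inputQueue endpoint="NeQueue" brokerURL="tcp://otherhost:61616"/>
      <serializer method="xmi"/>
    </remoteAnalysisEngine>
  </delegates>
</analysisEngine>
```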

When calling "remote" delegates, the CAS is serialized to XMI format for transfer; collocated delegates share the in-process CAS object and incur no serialization overhead. Remote delegates with no dependencies on each other's results can be called in parallel on the same CAS, providing opportunities to reduce latency in real-time applications.

UIMA AS aggregates optionally provide extensive error handling for service calls, including retry, ignore, delegate disable, service termination, and more. When combined with the UIMA flow controller, users can implement complex application flow/error handling logic.
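For example, the error handling for a remote delegate might be configured along these lines; the element and attribute names follow the Deployment Descriptor schema, while the retry counts, timeouts, and thresholds are illustrative values:

```xml
<remoteAnalysisEngine key="NamedEntityDetector">
  <inputQueue endpoint="NeQueue" brokerURL="tcp://otherhost:61616"/>
  <asyncAggregateErrorConfiguration>
    <!-- Retry a failed getMetadata call twice, then disable the delegate -->
    <getMetadataErrors maxRetries="2" timeout="30000" errorAction="disable"/>
    <!-- Retry a failed CAS once; continue the flow past retry failures;
         disable the delegate if 5 of the last 20 CASes fail -->
    <processCasErrors maxRetries="1" timeout="60000"
                      continueOnRetryFailure="true"
                      thresholdCount="5" thresholdWindow="20"
                      thresholdAction="disable"/>
    <collectionProcessCompleteErrors timeout="0"
                                     additionalErrorAction="terminate"/>
  </asyncAggregateErrorConfiguration>
</remoteAnalysisEngine>
```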

Figure 2 - UIMA AS Aggregate

UIMA AS - Example Application Scenarios

One problem that has been raised on the UIMA mailing lists is how to "push" documents into a UIMA processing pipeline. The simple answer with UIMA AS is to implement the processing pipeline as a UIMA AS service, and push requests from a custom application. Multiple instances of the processing pipeline service could be instantiated to increase throughput.

Included with UIMA AS is a sample application, RunRemoteAsyncAE. This application demonstrates most of the features of the new UIMA asynchronous API, including the ability to use a UIMA Collection Reader to push documents to a specified service.
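An invocation might look like the following sketch; the broker URL, queue name, and collection reader descriptor path are placeholders, and the script is the one shipped in the UIMA AS bin directory:

```shell
# Push documents from a Collection Reader into a deployed UIMA AS service
runRemoteAsyncAE.sh tcp://localhost:61616 ExampleQueue \
    -c $UIMA_HOME/examples/descriptors/collection_reader/FileSystemCollectionReader.xml
```

The first two arguments identify the broker and the service's input queue; the `-c` option supplies the collection reader whose CASes are pushed to the service.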

Figure 3 - Using RunRemoteAsyncAE to push documents into a UIMA AS service

A scaleout corresponding to that provided by the Collection Processing Manager (CPM) uses a single instance of the collection reader and CAS consumer(s) and scales out the other analysis components. Throughput here is limited by the collection reader / CAS consumer bottleneck and by the deserialization work on the central driver. Scaleout efficiency is determined by the ratio of the processing done by the scaled-out analysis engines to the serialization overhead in the services.

Figure 4 - Scaleout using a single set of Collection Reader and CAS Consumers

For very large scaleout using UIMA AS, multiple copies of the collection reader and CAS consumers are needed. A central driver would distribute [references to] subsets of the input collection to the scaled-out processing pipelines, where each pipeline contains a collection reader and the CAS consumers.

Scaleout limits in this scenario can arise in several places, for example a common source of input documents or a shared writable resource used by the CAS consumers.

Figure 5 - Scaleout using multiple Collection Readers and CAS Consumers

UIMA AS - What next?

A reference manual for UIMA AS is available on the documentation page. Go to the UIMA Download Page and get the "UIMA AS Asynchronous Scaleout" package in zip or tar format. Unpack it into a directory of your choice, then see the README file in the top-level directory for instructions on deploying and testing the standard UIMA example annotators as UIMA AS services.
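As a sketch of the first steps the README walks through, using the scripts shipped in the package's bin directory (the deployment descriptor path is an illustrative example; substitute one from your unpacked distribution):

```shell
# Start the bundled ActiveMQ broker
startBroker.sh

# Deploy a service described by a UIMA AS deployment descriptor
deployAsyncService.sh $UIMA_HOME/examples/deploy/as/MeetingDetectorTaeUimaAS.xml
```

Once the service is up, a client such as RunRemoteAsyncAE can send it CASes via the broker.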