TAO is increasingly being used to support high-performance distributed real-time and embedded (DRE) applications. DRE applications constitute an important class of distributed systems where predictability and efficiency are essential for success. This document describes how to configure TAO to increase its throughput and scalability and reduce its latency for a variety of applications. We also explain various ways to speed up the compilation of ACE+TAO and of applications that use ACE+TAO.
As with most software, including compilers, enabling optimizations can often introduce side-effects that may not be desirable for all use-cases. TAO's default configuration therefore emphasizes programming simplicity rather than top speed or scalability. Our goal is to ensure that CORBA applications work correctly ``out-of-the-box,'' while also enabling developers to further optimize their CORBA applications to meet stringent performance requirements.
TAO's performance tuning philosophy reflects the fact that there are trade-offs between speed, size, scalability, and programming simplicity. For example, certain ORB configurations work well for a large number of clients, whereas others work better for a small number. Likewise, certain configurations minimize internal ORB synchronization and memory allocation overhead by making assumptions about how applications are designed.
This document is organized as follows: we first describe how to optimize client and server throughput, then how to improve ORB scalability, and finally how to reduce the time required to compile ACE+TAO and applications that use them.
In this context, ``throughput'' refers to the number of events occurring per unit time, where ``events'' can refer to ORB-mediated operation invocations, for example. This section describes how to optimize client and server throughput.
It is important to understand that enabling throughput optimizations on the client may not affect server performance, and vice versa; after all, the client and server ORBs may come from different ORB suppliers.
Client ORB throughput optimizations improve the rate at which CORBA requests (operation invocations) are sent to the target server. Depending on the application, various techniques can be employed to improve the rate at which requests are sent and/or the amount of work the client can perform while requests are being sent and replies received. These techniques, explored below, include asynchronous method invocation (AMI) and a number of Client_Strategy_Factory configuration options.
For two-way invocations, i.e., those that expect a reply (including ``void'' replies), asynchronous method invocation (AMI) can be used to give the client the opportunity to perform other work while a CORBA request is sent to the target, handled by the target, and the reply received.
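As a rough illustration, the sketch below assumes a hypothetical IDL interface Quoter with a single operation long get_quote (in string stock) that has been compiled with the TAO IDL compiler's AMI option (-GC); the sendc_* stub and reply-handler names follow the usual implied-IDL naming pattern, but the exact generated signatures depend on your interface and TAO version.

    // Reply handler servant: the ORB invokes this callback when the
    // asynchronous reply for get_quote() arrives.
    class Quote_Handler : public virtual POA_AMI_QuoterHandler
    {
    public:
      virtual void get_quote (CORBA::Long ami_return_val)
      {
        ACE_DEBUG ((LM_DEBUG, "quote = %d\n", ami_return_val));
      }
      // A get_quote_excep() callback for exceptional replies must also
      // be overridden; its exact signature varies across TAO versions.
    };

    // Client code, after activating a Quote_Handler in a POA and
    // obtaining its object reference "handler" (an AMI_QuoterHandler_var):
    quoter->sendc_get_quote (handler.in (), "ACME");
    // The client is free to do other work; the ORB delivers the reply
    // to Quote_Handler::get_quote() when it is received.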
A TAO client ORB can be optimized for various types of applications:
A single-threaded client application may not require the internal thread synchronization performed by TAO. It may therefore be useful to add the following line to your svc.conf file:

    static Client_Strategy_Factory "-ORBProfileLock null"

If such an entry already exists in your svc.conf file, simply add -ORBProfileLock null to the list of options between the quotes following Client_Strategy_Factory.
Other options include disabling synchronization in the components of TAO responsible for constructing and sending requests to the target and for receiving replies. These components are called ``connection handlers.'' To disable synchronization in the client connection handlers, simply add

    -ORBClientConnectionHandler ST

to the list of Client_Strategy_Factory options. Other values for this option, such as RW, are more appropriate for ``pure'' synchronous clients. See the -ORBClientConnectionHandler option documentation for details.
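As an illustration only, a single-threaded synchronous client might combine the two options discussed so far in a single entry (choose the options that actually match your application):

    static Client_Strategy_Factory "-ORBProfileLock null -ORBClientConnectionHandler ST"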
Clients with lower scalability requirements can dedicate a connection to one request at a time, which means that no other requests or replies will be sent or received over that connection while a request is pending. The connection is exclusive to a given request, thus reducing contention on the connection. However, that exclusivity comes at the cost of a smaller number of requests that may be issued at a given point in time. To enable this behaviour, add the following option to the Client_Strategy_Factory line of your svc.conf file:

    -ORBTransportMuxStrategy EXCLUSIVE
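For example, a ``pure'' synchronous client might pair exclusive connections with the RW connection handler mentioned earlier; the combination shown here is only illustrative:

    static Client_Strategy_Factory "-ORBTransportMuxStrategy EXCLUSIVE -ORBClientConnectionHandler RW"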
Throughput on the server side can be improved by configuring TAO to use a thread-per-connection concurrency model. With this concurrency model, a single thread is assigned to service each connection. That same thread is used to dispatch the request to the appropriate servant, meaning that thread context switching is kept to a minimum. To enable this concurrency model in TAO, add the following option to the Server_Strategy_Factory entry in your svc.conf file:

    -ORBConcurrency thread-per-connection
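For example, a minimal server-side svc.conf entry enabling this concurrency model would look like:

    static Server_Strategy_Factory "-ORBConcurrency thread-per-connection"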
    
While the thread-per-connection concurrency model may improve throughput, it generally does not scale well due to limitations of the platform on which the application is running. In particular, most operating systems cannot efficiently handle more than 100 or 200 threads running concurrently. Hence performance often degrades sharply as the number of connections grows beyond those numbers.
Other concurrency models are further discussed in the Optimizing Server Scalability section below.
In this context, ``scalability'' refers to how well an ORB performs as the number of CORBA requests increases. For example, a non-scalable configuration will perform poorly as the number of pending CORBA requests on the client increases from 10 to 1,000, and similarly on the server. ORB scalability is particularly important on the server, since it must often handle many requests from multiple clients.
In order to optimize TAO for scalability on the client side, connection multiplexing must be enabled. Specifically, multiple requests may be issued and pending over the same connection. Sharing a connection in this manner reduces the amount of resources required by the ORB, which in turn makes more resources available to the application. To enable this behavior, use the following Client_Strategy_Factory option:

    -ORBTransportMuxStrategy MUXED
    This is the default setting used by TAO.
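Because MUXED is the default, no svc.conf entry is strictly required; stating it explicitly, for instance to document the intent, would look like:

    static Client_Strategy_Factory "-ORBTransportMuxStrategy MUXED"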
Scalability on the server side depends greatly on the concurrency model in use. TAO supports two concurrency models: thread-per-connection and reactive.
The thread-per-connection concurrency model is described above in the Optimizing Server Throughput section.
A reactive concurrency model employs the Reactor design pattern to demultiplex incoming CORBA requests. The underlying event demultiplexing mechanism is typically one of the mechanisms provided by the operating system, such as the select(2) system call. To enable this concurrency model, add the following option to the Server_Strategy_Factory entry in your svc.conf file:

    -ORBConcurrency reactive
    This is the default setting used by TAO.
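As with the client-side default above, an explicit entry is only needed to document the intent or to override a different setting, e.g.:

    static Server_Strategy_Factory "-ORBConcurrency reactive"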
The reactive concurrency model provides improved scalability on the server side because fewer resources are used, which in turn allows the server-side ORB to handle a very large number of requests. It therefore scales much better than the thread-per-connection model described above.
Further scalability tuning can be achieved by choosing a Reactor appropriate for your application. For example, if your application is single-threaded, then a reactor optimized for single-threaded use may be appropriate. To select a single-threaded select(2)-based reactor, add the following option to the Advanced_Resource_Factory entry in your svc.conf file:

    -ORBReactorType select_st
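For example, assuming the TAO_Strategies library is built as a shared library (see the note on TAO_Strategies at the end of this section), the factory and its option can be loaded with a directive along these lines:

    dynamic Advanced_Resource_Factory Service_Object * TAO_Strategies:_make_TAO_Advanced_Resource_Factory() "-ORBReactorType select_st"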
If your application uses thread pools, then the thread-pool reactor may be a better choice. To use it, add the following option instead:

    -ORBReactorType tp_reactor

This is TAO's default reactor. See the -ORBReactorType documentation for other reactor choices.
Note that you may have to link the TAO_Strategies library into your application in order to take advantage of Advanced_Resource_Factory features, such as the alternate reactor choices.
A third concurrency model, unsupported by TAO, is thread-per-request, in which a separate thread services each request as it arrives. This concurrency model generally provides neither scalability nor speed, which is why it is rarely used in practice.
Disabling optimization for your application comes at the cost of run-time performance, so you should normally only do this during development, keeping your test and release builds optimized.

In order for code built with -DACE_NO_INLINE to link, you will need to be using a version of ACE+TAO built with the ``inline=0'' make flag.

To accommodate both inline and non-inline builds of your application, it will be necessary to build two copies of your ACE+TAO libraries, one with inlining and one without. You can then use your ACE_ROOT and TAO_ROOT environment variables to point at the appropriate installation.
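For instance, with the GNU-make-based ACE+TAO build system the arrangement might look like the following; the paths are purely hypothetical:

    # Non-inlined copy of ACE+TAO, built once with inlining disabled,
    # e.g. by running "make inline=0" in its build directories.
    # A fast-compiling application debug build then points at that copy:
    export ACE_ROOT=/opt/ACE_noinline/ACE_wrappers
    export TAO_ROOT=$ACE_ROOT/TAO
    # ...and adds -DACE_NO_INLINE to its compiler flags.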
Ossama Othman
Last modified: Wed Dec 25 06:23:55 CST 2002