Monday, May 12, 2008

13.05.2008 MUXERS, WORK MANAGERS, TUNING EXECUTE QUEUES

View from the Trenches: Looking at Thread-Dumps

http://dev.bea.com/products/wlplatform81/articles/thread_dumps.jsp

****

http://edocs.bea.com/wls/docs92/perform/WLSTuning.html

****

Tuning Execute Queues

Note: Execute Queues are deprecated in this release of WebLogic Server. BEA recommends migrating applications to use work managers.

In previous versions of WebLogic Server, processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. See Using the WebLogic 8.1 Thread Pool Model.

Understanding the Differences Between Work Managers and Execute Queues

The easiest way to conceptually visualize the difference between the execute queues of previous releases and the work managers of this release is to correlate execute queues (or rather, execute-queue managers) with work managers and to decouple the one-to-one relationship between execute queues and thread pools.

For releases prior to WebLogic Server 9.0, incoming requests are put into a default execute queue or a user-defined execute queue. Each execute queue has an associated execute queue manager that controls an exclusive, dedicated thread-pool with a fixed number of threads in it. Requests are added to the queue on a first-come-first-served basis. The execute-queue manager then picks the first request from the queue and an available thread from the associated thread-pool and dispatches the request to be executed by that thread.
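
To picture that older model, here is a rough Java analogy only (not the actual WebLogic implementation): a dedicated, fixed-size thread pool fed by a FIFO queue, with the pool size of 15 threads chosen arbitrarily for the example.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Rough analogy only (not WebLogic internals): each pre-9.0 execute queue
    // behaves like a dedicated, fixed-size thread pool fed by a FIFO queue.
    // The pool size of 15 is an arbitrary, assumed value.
    public class ExecuteQueueAnalogy {
        public static void main(String[] args) {
            ExecutorService executeQueue = Executors.newFixedThreadPool(15);
            for (int i = 0; i < 100; i++) {
                final int requestId = i;
                // Requests are queued first-come-first-served and handed to
                // whichever of the 15 dedicated threads becomes free.
                executeQueue.execute(new Runnable() {
                    public void run() {
                        System.out.println("Handling request " + requestId
                                + " on " + Thread.currentThread().getName());
                    }
                });
            }
            executeQueue.shutdown();
        }
    }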

For releases of WebLogic Server 9.0 and higher, there is a single priority-based execute queue in the server. Incoming requests are assigned an internal priority based on the configuration of work managers you create to manage the work performed by your applications. The server increases or decreases the number of threads available to the execute queue depending on the demand from the various work managers. The position of a request in the execute queue is determined by its internal priority:

  • The higher the priority, the closer it is placed to the head of the execute queue.
  • The closer a request is to the head of the queue, the sooner it is dispatched to a thread (a conceptual sketch follows this list).
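
As a purely conceptual sketch (not WebLogic's internal code), the priority ordering behaves like a priority queue from which worker threads pull the highest-priority request first:

    import java.util.Comparator;
    import java.util.concurrent.PriorityBlockingQueue;

    // Conceptual sketch only: a single queue ordered by priority, where the
    // highest-priority request sits at the head and is dispatched first.
    public class PrioritizedDispatchSketch {
        static class Request {
            final String name;
            final int priority;
            Request(String name, int priority) { this.name = name; this.priority = priority; }
        }

        public static void main(String[] args) throws InterruptedException {
            PriorityBlockingQueue<Request> queue = new PriorityBlockingQueue<Request>(
                    16, new Comparator<Request>() {
                        public int compare(Request a, Request b) {
                            return b.priority - a.priority; // higher priority first
                        }
                    });
            queue.put(new Request("batch-report", 1));
            queue.put(new Request("interactive-request", 10));
            queue.put(new Request("background-cleanup", 2));
            while (!queue.isEmpty()) {
                System.out.println("Dispatching to a thread: " + queue.take().name);
            }
        }
    }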

Work managers give you finer control over thread utilization (and therefore server performance) than execute queues, primarily because of the many ways in which you can specify scheduling guidelines for the priority-based thread pool. These scheduling guidelines can be set either as numeric values or as the capacity of a server-managed resource, such as a JDBC connection pool.
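
An application can hand work to one of these configured work managers through the CommonJ API (commonj.work) that WebLogic 9.x provides. The sketch below is illustrative only and assumes a work manager named MyWorkManager has already been defined in the server configuration and mapped into the component's JNDI environment:

    import javax.naming.InitialContext;
    import commonj.work.Work;
    import commonj.work.WorkManager;

    // Sketch, not a complete application. Assumes a work manager named
    // "MyWorkManager" is defined in the server configuration and mapped
    // into the component's JNDI environment.
    public class WorkManagerClient {
        public void submit() throws Exception {
            InitialContext ctx = new InitialContext();
            WorkManager wm = (WorkManager) ctx.lookup("java:comp/env/wm/MyWorkManager");
            wm.schedule(new Work() {
                public void run() {
                    // Work scheduled here competes for threads according to the
                    // scheduling guidelines configured on MyWorkManager.
                    System.out.println("Running under MyWorkManager");
                }
                public void release() {
                    // Called if the server asks this unit of work to stop early.
                }
                public boolean isDaemon() {
                    return false; // short-lived unit of work
                }
            });
        }
    }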

****


MANY OF US DON'T KNOW WHAT A MUXER IS!


DON'T MISS THIS ARTICLE



Tuning Muxers

WebLogic Server uses software modules called muxers to read incoming requests on the server and incoming responses on the client. There are two primary types of muxer: the Java muxer and the native muxer.

A Java muxer has the following characteristics:

  • Uses pure Java to read data from sockets.
  • Is the only muxer available for RMI clients.
  • Blocks on reads until there is data to be read from a socket. This behavior does not scale well when there are a large number of sockets, or when data arrives at sockets infrequently. It is typically not an issue for clients, but it can create a huge bottleneck on a server (see the sketch below).
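
Here is a tiny, illustrative-only sketch of what "blocking on reads" means in plain Java:

    import java.io.InputStream;
    import java.net.Socket;

    // Illustration of the blocking model described above: InputStream.read()
    // parks the calling thread until the client sends data, so each thread can
    // watch only one socket at a time. With many mostly idle connections,
    // threads sit blocked doing no useful work.
    public class BlockingReadSketch {
        public static void readRequest(Socket socket) throws Exception {
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[8192];
            int bytesRead = in.read(buffer); // blocks here until data arrives
            System.out.println("Read " + bytesRead + " bytes from "
                    + socket.getRemoteSocketAddress());
        }
    }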

Native muxers use platform-specific native binaries to read data from sockets. Most platforms provide some mechanism to poll a socket for data; for example, Unix systems use the poll system call and the Windows architecture uses completion ports. Native muxers provide superior scalability because they implement a non-blocking thread model. When a native muxer is used, the server creates a fixed number of threads dedicated to reading incoming requests. BEA recommends keeping the Enable Native IO parameter at its default setting (selected), which allows the server to automatically select the appropriate muxer to use.
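
As a rough analogy for that non-blocking model (not the native muxer code itself), here is how a poll-style reader looks with java.nio, where one thread watches many sockets and wakes up only when data is ready:

    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.util.Iterator;

    // Analogy only: the poll / completion-port model used by native muxers is
    // similar in spirit to java.nio, where a small, fixed number of threads
    // multiplex many sockets and wake up only for sockets with data ready.
    public class NonBlockingSketch {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(7001)); // 7001: WebLogic's default listen port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select(); // waits until at least one channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        // accept the connection and register it for OP_READ here
                    }
                }
            }
        }
    }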

If the Enable Native IO parameter is not selected, the server instance exclusively uses the Java muxer. This may be acceptable if there is a small number of clients and the rate at which requests arrive at the server is fairly high. Under these conditions, the Java muxer performs as well as a native muxer and eliminates the Java Native Interface (JNI) overhead. Unlike with native muxers, the number of threads used to read requests is not fixed; for Java muxers it is tunable by configuring the Percent Socket Readers parameter in the Administration Console. See Changing the Number of Available Socket Readers. Ideally, you should configure this parameter so that the number of socket-reader threads roughly equals the number of concurrently connected remote clients, up to 50% of the total thread pool size. Each thread waits a fixed amount of time for data to become available at a socket; if no data arrives, the thread moves on to the next socket.
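
A small worked example of that sizing guideline, using assumed numbers (20 execute threads, 8 concurrently connected remote clients):

    // Worked example of the sizing guideline above, with assumed numbers.
    public class SocketReaderSizing {
        public static void main(String[] args) {
            int threadPoolSize = 20;          // total execute threads (assumed)
            int concurrentRemoteClients = 8;  // remote clients connected at once (assumed)

            // Roughly one socket-reader thread per connected client,
            // capped at 50% of the total thread pool.
            int socketReaders = Math.min(concurrentRemoteClients, threadPoolSize / 2);
            int percentSocketReaders = Math.round(100f * socketReaders / threadPoolSize);

            System.out.println("Socket reader threads: " + socketReaders);              // 8
            System.out.println("Percent Socket Readers: ~" + percentSocketReaders + "%"); // 40%
        }
    }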



****
