iTP Secure WebServer System Administrator's Guide (iTPWebSvr 6.0+)

Using NonStop Servlets for JavaServer Pages (NSJSP)
Architecture
The iTP Secure WebServer process can be installed across any number of CPUs. It is
recommended to run three httpd processes in any one CPU; processes in other CPUs
can be used for load balancing, for scaling to meet required throughput, or as backups
for fault tolerance.
Each web container can host a number of applications, each with its own servlets,
JSPs, and other resources. These containers are accessible from any httpd process
running on any CPU. Both the containers and the httpd processes operate in the
NonStop TS/MP 2.0 environment.
The iTP Secure WebServer software, which is inherently scalable and reliable, enables
the creation of Java servlets that can take advantage of the database and transaction
services infrastructure of the HP NonStop server. Java servlets are implemented as
NonStop TS/MP server processes that can be replicated and automatically load
balanced across multiple processor nodes for scalable throughput. Consequently, large
volumes of servlet-based web transactions can be executed concurrently while
maintaining consistent response times.
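For reference, the following minimal sketch shows the kind of servlet that runs in an
NSJSP web container. The class name and response text are illustrative; the only API
assumed is the standard javax.servlet API that NSJSP supports.

   import java.io.IOException;
   import java.io.PrintWriter;

   import javax.servlet.ServletException;
   import javax.servlet.http.HttpServlet;
   import javax.servlet.http.HttpServletRequest;
   import javax.servlet.http.HttpServletResponse;

   // Illustrative servlet. Packaged in an NSJSP web application, it executes
   // inside a web container that is itself a NonStop TS/MP server process, so
   // replicated instances can service requests regardless of which httpd
   // process or CPU received them.
   public class HelloNonStopServlet extends HttpServlet {

       @Override
       protected void doGet(HttpServletRequest request, HttpServletResponse response)
               throws ServletException, IOException {
           response.setContentType("text/html");
           PrintWriter out = response.getWriter();
           out.println("<html><body>");
           out.println("<h1>Hello from an NSJSP web container</h1>");
           out.println("</body></html>");
       }
   }

Because the container is a TS/MP server process, no servlet-level code is needed to
obtain replication or load balancing; those properties come from the TS/MP
environment described above.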
The complete environment is further enhanced by the addition of Parallel Library
TCP/IP (TCP/IP/PL) support. The architecture introduced by the NonStop S-series
servers allows all processors in a system to access an adapter. Parallel Library TCP/IP
takes advantage of this architecture by using the communications adapter and the
ServerNet™ cloud to route packets directly to the processor containing the application.
By directly routing packets to the correct processor from the adapter, Parallel Library
TCP/IP eliminates the message-system hop that occurred between processes in the
conventional TCP/IP architecture.
By eliminating message-system hops, Parallel Library TCP/IP reduces the total path
length from the application to the wire. This path-length reduction reduces individual
request latency. In addition, more requests per second can be serviced using the same
processor cost, resulting in higher throughput.
In conventional TCP/IP, if you ran multiple process instances of a listening application
in multiple processors (to increase computing power), you needed a separate TCP/IP
process (one per listening application process instance) in each processor. Each
TCP/IP process required a unique physical port (PIF) and presented a unique IP
host to the outside world. Parallel Library TCP/IP allows multiple application process
instances running in different processors to be presented to the outside world as a
single IP host, because under Parallel Library TCP/IP those instances can all share the
same PIF.
ServerNet™ allows all processors in a clustered system to access the same PIF;
Parallel Library TCP/IP allows applications in different processors to access the same
PIF and share a common listening TCP port number.
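As an illustration only, the following sketch shows a trivial listening process written in
Java; the port number and message are placeholders, and only the standard java.net
API is assumed. On a conventional TCP/IP stack, a second instance of this process in
another processor would need its own TCP/IP process, PIF, and IP host. Under Parallel
Library TCP/IP, identical instances in different processors can share the same PIF and
the same listening port, so together they appear to clients as a single IP host. (Any
platform-specific configuration needed to enable that sharing is outside the scope of
this sketch.)

   import java.io.IOException;
   import java.net.ServerSocket;
   import java.net.Socket;

   // Illustrative listener. One instance of this process can run per processor;
   // when the instances share a listening port under Parallel Library TCP/IP,
   // the stack routes each incoming connection directly to the processor of
   // the instance that services it, avoiding the message-system hop of the
   // conventional architecture.
   public class SharedPortListener {

       public static void main(String[] args) throws IOException {
           try (ServerSocket listener = new ServerSocket(8080)) { // placeholder port
               while (true) {
                   try (Socket connection = listener.accept()) {
                       connection.getOutputStream()
                                 .write("served from this processor\n".getBytes());
                   }
               }
           }
       }
   }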