Open Cloud Rhino SLEE 1.4.3 Administration Manual Version 1.1 November 2, 2006 Open Cloud Limited 54-56 Cambridge Terrace Wellington 6149 New Zealand http://www.opencloud.
LEGAL NOTICE Unless otherwise indicated by Open Cloud, any and all product manuals, software and other materials available on the Open Cloud website are the sole property of Open Cloud, and Open Cloud retains any and all copyright and other intellectual property and ownership rights therein.
Contents

1 Introduction
  1.1 Intended Audience
  1.2 Chapter Overview

2 The Rhino SLEE Platform
  2.1 Introduction
  2.2 Service Logic Execution Environment
  2.3 Integration
  2.4 Service Development
  2.5 Functional Testing
7 Export and Import
  7.1 Introduction
  7.2 Exporting State
  7.3 Importing State
  7.4 Partial Imports
  12.6.2 Command Console

13 Notification System Configuration
  13.1 Introduction
  13.2 The SLEE Notification system
    13.2.1 Trace Notifications
  13.3 Notification Recorder M-Let
    13.3.1 Configuration
18 Application Environment
  18.1 Introduction
  18.2 Main Working Memory
    18.2.1 Replication Models
    18.2.2 Concurrency Control
    18.2.3 Multiple Transactions
  18.3 Application Configuration
    18.3.1 Replication
    18.3.2 Concurrency Control
  18.4 Multiple Resource Managers
  22.5.7 Removing the Proxy Service
  22.5.8 Modifying Service Source Code
  22.6 Using the Services
    22.6.1 Configuring Linphone
    22.6.2 Using the Registrar Service
    22.6.3 Using the Proxy Service
    22.6.4 Enabling Debug Output
C Resource Adaptors and Resource Adaptor Entities
  C.1 Introduction
  C.2 Entity Lifecycle
    C.2.1 Inactive State
    C.2.2 Activated State
    C.2.3 Deactivating State
  C.3 Configuration Properties
  C.4 Entity Binding
Chapter 1 Introduction Welcome to the Open Cloud Rhino SLEE Administration Manual for Systems Administrators and Software Developers. This guide is intended for use with the Open Cloud Rhino, a JAIN SLEE 1.0 compliant SLEE implementation. This document contains instructions for installing, running, and configuring the Rhino SLEE, as well as tutorials for the included examples. It also serves as a starting point for the development of new services for deployment into the Rhino SLEE.
Chapter 9 describes installation details and configuration issues of the Web Console used for Open Cloud Rhino SLEE management operations. Chapter 10 details the online and offline configuration of the Rhino SLEE logging system. The logging system is used by Rhino SLEE and application component developers to record output. Chapter 11 describes how to manage the alarms that may occur from time to time. Chapter 12 is an introduction to threshold alarms.
Chapter 2 The Rhino SLEE Platform 2.1 Introduction The Open Cloud Rhino SLEE is a suite of servers, resource adaptors, tools, and examples that collectively support the development and deployment of carrier-grade services in Java. At the core of the platform is the Rhino SLEE, a fault-tolerant, carrier grade implementation of the JAIN SLEE 1.0 specification.
[Figure 2.1: The Rhino Platform. The diagram groups the platform into Service Development (Resource Adaptor Toolkit, Example Services, Service Editing, Functional Testing, Load Testing), Integration (Enterprise Integration, Prebuilt Resource Adaptors), and the Service Logic Execution Environment (Resource Adaptor Architecture, Service Execution Management, Carrier Grade Enabling Infrastructure).]
2.4 Service Development The Service Development category provides a Federated Service Creation Environment (FSCE) which enables the development of SLEE services and Resource Adaptors for the Rhino SLEE platform. Also included in the FSCE are tools to support the following: • Functional unit testing of services. • Performance testing. • Failure recovery testing. • Example demonstration services. The key design objectives of the FSCE initiative as shown in Figure 2.
The Federated Service Creation Environment allows a SLEE component or application to be fully tested under scenarios similar to the actual deployment environment. The same service or resource adaptor binaries can be tested from pre-production staging all the way through to final production deployment.
2.7 Software Development Kit The Open Cloud Rhino SLEE SDK (Figure 2.3) is a JAIN SLEE service development solution and includes: • All software in the SLEE category. • SIP Resource Adaptors. • SIP Demonstration services: Registrar, Proxy, Find-me-follow-me. • JCC Resource Adaptors. • JCC Demonstration services: call forwarding. • Enterprise Integration features. • Example demonstration SIP and JCC applications.
Chapter 3 JAIN SLEE Overview 3.1 Introduction This chapter discusses key principles of the JAIN SLEE 1.0 specification architecture. The SLEE architecture defines the component model for structuring application logic for communications applications as a collection of reusable object-oriented components, and for assembling these components into high-level sophisticated services.
• Within the SLEE. For example: – The SLEE emits events to communicate changes in the SLEE that may be of interest to applications running in the SLEE. – The Timer Facility emits an event when a timer expires. – The SLEE emits an event when an administrator modifies the provisioned data for an application. • An application running in the SLEE – applications may use events to signal or invoke other applications in the SLEE. Every event in the SLEE has an event type.
3.7 Activities An Activity represents a related stream of events. These events represent occurrences of significance that have occurred on the entity represented by the Activity. From the perspective of a resource, an Activity represents an entity within the resource that emits events on state changes within the entity or resource. For example, a phone call may be an Activity. 3.8 Resources and Resource Adaptors A resource represents a system that is external to a SLEE.
Chapter 4 Getting Started 4.1 Introduction This chapter describes the processes required to install, configure and verify an installation of the Rhino SLEE. It is expected that the user has a good working knowledge of the Linux and Solaris command shells. The following steps explain how to install and start using the Rhino SLEE 1.4.3: 1. Checking prerequisites. 2. Unpacking the distribution. 3. Installation. • Configuring a cluster. • Transferring cluster configuration. • Configuring the nodes. 4.
– Linux 2.4
– Solaris 9
– Red Hat Linux 9

The Rhino SLEE is supported on the following Java platforms.

– Sun 1.4.2_12 or later for Sparc/Solaris and Linux/Intel

• A suitable hardware configuration.
• A suitable network configuration. Ensure the system is configured with an IP address and is visible on the network. Also ensure that the system can resolve localhost to the loopback interface.
• A PostgreSQL installation. For more information on installing PostgreSQL, refer to Chapter 21.
• The Java J2SE SDK 1.
• The “awk” command utility.

$ which awk
/bin/awk

• The “sed” command utility.

$ which sed
/bin/sed

4.2.2 PostgreSQL database configuration

The Rhino SLEE depends on the PostgreSQL RDBMS to persist its main working memory. This working memory is where the Rhino SLEE stores its current configuration and run-time state. The Rhino SLEE has been tested on PostgreSQL versions 7.4.12 and 8.0.7. Running the Rhino SLEE using these versions of PostgreSQL is supported by Open Cloud.
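The prerequisite checks above (awk, sed, Java, PostgreSQL) can be bundled into one shell sketch. The tool names in the list are assumptions drawn from the prerequisites in this section; adjust them for your platform:

```shell
# Check that each prerequisite tool is reachable on the PATH.
missing=0
for tool in java psql awk sed; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "MISSING: $tool"
    missing=1
  fi
done
# missing=1 means at least one prerequisite still needs installing.
```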
>./rhino-install.sh -h
Usage: ./rhino-install.sh [options]
Command line options:
  -h, --help - Print this usage message.
  -a         - Perform an automated install. This will perform a non-interactive install using the installation defaults.
  -r         - Reads in the properties from before starting the install. This will set the installation defaults to the values contained in the properties file.
  -d         - Outputs a properties file containing the selections made during install (suitable for use with -r).
Management Interface JMX Remote Service Port [1202]:

This port is used for the Web Console (Jetty) server and provides the remote management user interface. This is a secure port (TLS).

Secure Web Console HTTPS Port [8443]:

Enter the location of your Java J2SE/JDK installation. This must be at least version 1.4.2.

JAVA_HOME directory [/usr/local/java]:
Found Java version 1.4.2_04.
Creating installation directory.
Writing configuration to /home/user/rhino/etc/defaults/config/config_variables.
I will now generate the keystores used for secure transport authentication. Remote management connections must be verified using paired keys.
/home/user/rhino/rhino-public.keystore with a storepass of changeit and a shared keypass of changeit /home/user/rhino/rhino-private.
$ /home/user/rhino/create-node.sh
Choose a Node ID (integer 1..255)
Node ID [101]: 101
Creating new node /home/user/rhino/node-101
Deferring database creation. This should be performed before starting Rhino for the first time.
Run the "/home/user/rhino/node-101/init-management-db.sh" script to create the database.
Created Rhino node in /home/user/rhino/node-101.
$ /home/user/rhino/create-node.sh 102
Creating new node /home/user/rhino/node-102
Deferring database creation.
>cd node-101 >./start-rhino.sh -p 4.3.2 Starting a Node Subsequent nodes can be started by executing the $RHINO_NODE_HOME/start-rhino.sh shell script. During node startup, the following events occur: • A Java Virtual Machine process is launched by the host. • The node generates and reads its configuration. • The node checks to see if it should become part of the primary component.
• Or start a node with the -s switch by issuing the following command. >cd node-101 >./start-rhino.sh -s Typically, to start the cluster for the first time and create the primary component, the system administrator starts the first node with the -p switch and the last node with the -s switch. >cd node-101 >./start-rhino.sh -p >cd ../node-102 >./start-rhino.sh >cd ../node-103 >./start-rhino.sh -s 4.3.6 Stopping a Node A node can be stopped by executing the $RHINO_NODE_HOME/stop-rhino.sh shell script.
• The Command Console can be started by running the following:

$ cd $RHINO_HOME
$ ./client/bin/rhino-console
Interactive Management Shell
[Rhino (cmd (args)* | help (command)* | bye) #1] State
SLEE is in the Running state

• The Web Console can be accessed by directing a web browser to https://<hostname>:8443. The default user-name is “admin” and the default password is “password”. The port to connect to (8443 by default) can be changed during installation.
One important configuration element is to make sure that ntpd is configured to slew the time rather than step time. This can be achieved using the -x flag when running ntpd. Refer to the man page for ntpd. 4.6 Optional Configuration 4.6.1 Introduction The following suggestions can be followed to further configure the Rhino SLEE. 4.6.2 Ports The ports that were chosen during installation time can be changed at a later stage by editing the file $RHINO_HOME/etc/defaults/config/config_variables. 4.6.
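Whether a running ntpd was started with the recommended -x flag can be checked from the shell. A hedged sketch; the ps invocation assumes a procps-style ps, and the check is only advisory:

```shell
# Look for a running ntpd and check its arguments for the -x (slew) flag.
if ps -eo args 2>/dev/null | grep '[n]tpd' | grep -q '\-x'; then
  ntpd_mode="slewing (-x present)"
else
  ntpd_mode="not confirmed - ntpd may be stepping time or not running"
fi
echo "ntpd mode: $ntpd_mode"
```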
4.7 Installed Files A listing of the files in a typical Rhino installation and their descriptions is listed below. CHANGELOG client/ bin/ generate-client-configuration rhino-console rhino-export rhino-stats web-console etc/ client.policy client.properties common.xml jetty-file-auth.xml jetty-jmx-auth.xml jetty.policy rhino-common rmissl.client.properties templates/ client.properties jetty-file-auth.xml jetty-jmx-auth.xml web-console.passwd web-console.properties web-console-log4j.properties web-console.
generate-configuration generate-system-report.sh init-management-db.sh manage.sh read-config-variables rhino-common run-compiler.sh run-jar.sh start-rhino.sh stop-rhino.sh examples/ j2ee/ jcc/ sip/ lib/ Rhino.jar javax-slee-standard-types.jar jcc-1.1.jar jmxr-adaptor.jar jmxtools.jar linux/ libocio3.so log4j.jar notification-recorder.jar postgresql.jar solaris/ libocio3.so solaris_i386/ libocio3.so licenses/ JAIN_SIP.LICENSE JCC_API.LICENSE JLINE.LICENSE POSTGRESQL.LICENSE SERVICES.
node-XXX/               - Instantiated Rhino node.
  config/               - Directory containing configuration files.
    savanna/            - Savanna configuration files.
  work/                 - Rhino working directory.
  rhino.pid             - File containing the current process ID of Rhino.
  start-rhino.sh
  config/
  tmp/                  - Temporary directory.
  log/                  - Default destination for Rhino logging.
    rhino.log           - Log containing all Rhino logs.
    audit.log           - Log containing licensing auditing.
    encrypted.audit.log
    config.log
  deployments/
  lib/
  state/
milliseconds of each other then they probably have a causal relationship. Also, if there is a time-out in the software somewhere, that time-out may often be found by looking at this timestamp. Next is the log level. In this case, it is “INFO”, which is standard. It can also be “WARN” for more serious happenings in the SLEE, or “DEBUG” if debug messages are enabled. Section 10.1.2 in Chapter 10 has much more information about the log levels available and how to set them.
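The layout described above (timestamp first, log level as a later field) lends itself to quick filtering with awk. The sample line below is invented for illustration and assumes the level is the third whitespace-separated field; compare against your own rhino.log before relying on the field positions:

```shell
# Split an assumed Rhino-style log line into its timestamp and level fields.
line='2006-11-02 14:31:04.123 INFO [rhino.management] Deployment complete'
log_date=$(echo "$line" | awk '{print $1}')   # date portion of the timestamp
log_level=$(echo "$line" | awk '{print $3}')  # INFO / WARN / DEBUG etc.
echo "date=$log_date level=$log_level"
```

Pointed at rhino.log (for example `awk '$3 == "WARN"' rhino.log`), the same idea gives a quick view of warnings only.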
Chapter 5 Management 5.1 Introduction Administration of the Rhino SLEE is done by using the Java Management Extensions (JMX). An administrator can use either the Web Console or the Command Console, which act as front-ends for JMX. The JAIN SLEE 1.0 specification defines JMX MBean interfaces that provide the following management functions: • Management of Deployable Units. • Management of SLEE Services. • Management of SLEE component trace level settings.
5.1.2 Command Console Interface

The Command Console is a command line shell which supports both interactive and batch file commands to manage and configure the Rhino SLEE.

Usage: rhino-console
Valid options:
  -? or --help - Display this message
  -h           - Rhino host
  -p           - Rhino port
  -u           - Username
  -w           - Password, or "-" to prompt

If no command is specified, client will start in interactive mode. The help command can be run without connecting to Rhino.
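Because the console supports batch files of commands, a scripted deployment can be prepared ahead of time. The command names below are copied from examples elsewhere in this chapter, but the entity name "sipra" and the exact batch-file invocation are assumptions to verify against your installation:

```shell
# Write a batch file of console commands for a scripted deployment.
batch=/tmp/deploy-commands.txt
cat > "$batch" <<'EOF'
install file:examples/sip/lib/ocjainsip-1.2-ra.jar
createraentity "OCSIP 1.2, Open Cloud" sipra
activateraentity sipra
EOF
echo "wrote $(grep -c . "$batch") commands to $batch"
```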
2. Installing a Service.
3. Uninstalling a Service.
4. Uninstalling a Resource Adaptor.
5. Creating a Profile.

Tutorial sections 1, 2, 3 and 4 provide examples of how to deploy, activate, deactivate, and undeploy (respectively) the SIP resource adaptor and the demonstration SIP applications. Tutorial 5 provides an example of configuring a profile for the JCC Call Forwarding service. Management operations may have ordered dependencies on the state of other components in the SLEE.
5.4.1 Installing an RA using the Web Console To install a resource adaptor using the Web Console, first open a web browser and direct it to https://localhost:8443. The deployable unit is then deployed using the Deployment MBean, which can be navigated to from the main page: To install the resource adaptor, type in its file name or use the “Browse. . .
OCSIP Open Cloud 1.2 slee/resources/ocsip/1.2/acifactory slee/resources/ocsip/1.
$ ./client/bin/rhino-console Interactive Rhino Management Shell [Rhino@localhost (#0)] Use the install command to install the deployable unit. Alternatively, the installlocaldu command can be used. > install file:examples/sip/lib/ocjainsip-1.2-ra.jar installed: DeployableUnit[url=file:examples/sip/lib/ocjainsip-1.2-ra.jar] The resource adaptor entity is created using the createraentity command: > listResourceAdaptors ResourceAdaptor[OCSIP 1.2, Open Cloud] > createRAEntity "OCSIP 1.
Install the Location Service using the install operation. Install the Registrar Service using the install operation. Verify that the Registrar Service has been successfully deployed. Using the Service Management MBean, which can be navigated to from the main page, activate the Location and Registrar services: In order to view the services’ state, use the Service Management MBean to find the active services. The results of the operation are shown: Open Cloud Rhino 1.4.3 Administration Manual v1.
5.5.2 Installing a service using the Command Console To perform the installation using the Command Console in interactive mode: $ ./client/bin/rhino-console Interactive Rhino Management Shell [Rhino@localhost (#0)] The location service DU can be installed using either the install command or the installlocaldu command. > install file:examples/sip/jars/sip-ac-location-service.jar installed: DeployableUnit[url=file:examples/sip/jars/sip-ac-location-service.
service please see the Open Cloud SIP Users Guide. 5.6 Uninstalling a Service The operations used in this example for uninstalling the registrar application are: 1. Deactivating the registrar service. 2. Uninstalling the location and registrar services. 5.6.1 Uninstalling a Service using the Web Console The Service Management MBean is used to deactivate the service.
Remove the Registrar and Location Service deployable units. 5.6.2 Uninstalling a Service using the Command Console The following steps show how to uninstall a service using the Command Console. Firstly, the services need to be deactivated. > deactivateService "OCSIP Registrar Service 1.5, Open Cloud" Deactivated Service[OCSIP Registrar Service 1.5, Open Cloud] > deactivateService "OCSIP AC Location Service 1.5, Open Cloud" Deactivated Service[OCSIP Location Service 1.
5.7.1 Uninstalling an RA using the Web Console These activities are done using the Resource Management MBean. We deactivate the resource adaptor entity using the Resource Management MBean, so that the resource adaptor cannot create new activities. Remove any named links bound to the resource adaptor: Deactivate the resource adaptor entity: The resource adaptor entity can now be removed. Finally the resource adaptor is uninstalled using the Deployment MBean. 5.7.
5.8 Creating a Profile This example explains how to create a Call Forwarding Profile which is used by the Call Forwarding Service. Before creating the profile the JCC Call Forwarding example must be deployed, i.e. the CallForwardingProfile ProfileSpecification must be available to the Rhino SLEE. To deploy the JCC Call Forwarding service please refer to Chapter 23 or perform the operation below. $ ant -f /home/user/rhino/examples/jcc/build.xml deployjcccallfwd 5.8.
The Profile MBean can be in two modes: viewing or editing. The operations available on the profile give some hint as to which mode that profile is in. If you leave the Web Console without committing your changes, the profile will remain in “editing” mode and you will see a long-running transaction in the Rhino logs. Profiles which are still in the “editing” mode can be returned to by navigating from the main page to the “Profile MBeans” link under the “SLEE Profiles” category.
After editing the values, click applyAttributeChanges (this will parse and check the attribute values). Then click commitProfile to commit the changes. If you get an error, you will need to navigate back to the uncommitted profile from the main page again as described above. Once the profile has been committed, the buttons on the form will change and the fields will no longer be editable:
Changes made to the profile via the management interfaces are dynamic. The SBBs that implement the example Call Forwarding services will retrieve the profile every time they are invoked, so they will always retrieve the most recently saved properties. Note that Profiles are persistent across cluster re-starts. The configuration of this new profile can be tested by using the CallForwarding service. 5.8.
[Rhino (cmd (args)* | help (command)* | bye) #1] createProfile CallForwarding ForwardingProfile Created profile CallForwarding/ForwardingProfile Configure the Profile Attributes >setProfileAttributes CallForwarding ForwardingProfile ForwardingEnabled true ForwardingAddress ’E.164 88888888’ Addresses ’[E.164 00000000]’ Set attributes in profile CallForwarding/ForwardingProfile >listProfileAttributes CallForwarding ForwardingProfile ForwardingEnabled=false ForwardingAddress=E.164 88888888 Addresses=[E.
• Stopped to Starting: The Rhino SLEE has no operations to execute.
• Stopped to Does Not Exist: The Rhino SLEE processes shut down and terminate gracefully.

5.9.2 The Starting State Resource adaptor entities that are recorded in the management database as being in the activated state are activated. SBB entities are not created in this state.
Chapter 6 Administrative Maintenance 6.1 Introduction A system administrator must ensure that the Rhino SLEE maintains peak operational performance. The administrator can maximise the processing throughput and perform regular precautionary measures to ensure that, in the event of a failure, a recovery can occur effectively. 6.2 Runtime Diagnostics and Maintenance During normal Rhino SLEE operation SBB entities are removed by the SLEE when they are no longer needed.
the administrator should narrow their search results by applying -node, -cb, -ca, and -ra parameters. In the following example the administrator searches for activities belonging to the resource adaptor entity TestRA. [Rhino@localhost (#1)] findactivities pkey handle ra-entity replicated ------------------------- ---------------------------------------- --------------- ----------65.0.4E8BF.1.2A567636 ServiceActivity[HA PingService 1.0, Open Cloud] Rhino internal true 65.2.4E8BF.0.
Table 6.2 contains a summary of the fields returned by getActivityInfo: Field pkey ending events-processed handle processing-node queue-size ra-entity reference-count replicated sbbs-invoked submission-time submission-node update-time attached-sbbs events Description The Activity’s primary key – uniquely identifies this activity within the Rhino SLEE.
6.2.3 Inspecting SBBs Administrators may also search for and query for information about SBB entities. The SBB inspection commands work in the same way as the activity inspection commands with one main difference: when searching for SBBs, there is no SLEE-wide command that will find all SBBs.
Field pkey parent-pkey convergence-name creating-node-id creation-time priority replicated sbb-component-id service-component-id attached-activities Description The SBB entity’s primary key. This identifies the SBB within its service and SBB component type The pkey of the SBBs parent SBB (only applies to child SBBs) The convergence name generated by the SLEE when the SBB entity was created.
SBB Entities The removeAllSBBs command accepts a service component ID and will immediately and forcibly remove all SBB entities belonging to that service. As an additional safeguard, it is required that the service be in the deactivating state before executing this command. 6.3 Upgrading a Cluster Sometimes it will be necessary to upgrade a cluster.
• A new database name will be needed; otherwise the existing database will be overwritten by the new cluster. The existing PostgreSQL installation can be used. • The SLEE will need to be installed in a new directory. • The Management Interface RMI Registry Port (default of 1199) needs to be a free port. • The Management Interface RMI Object Port (default of 1200) needs to be a free port. • The Management Interface JMX Remote Service Port (default of 1202) needs to be a free port.
6.4 Backup and Restore During normal operation, the Rhino SLEE keeps all SLEE management and profile data in its own in-memory distributed database. The memory database is fault tolerant and can survive the failure of a node. However, for management and profile data to survive a total restart of the cluster, it must be persisted to a permanent, disk-based data store. The Open Cloud Rhino SLEE uses the PostgreSQL database for this purpose.
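Because the persistent store is PostgreSQL, standard PostgreSQL tooling applies to backups. A hedged sketch follows; the database name "rhino" and the user "postgres" are assumptions, so substitute the values chosen at install time:

```shell
# Build (but do not run) a pg_dump command for the Rhino management database.
db_name=rhino
backup_file="rhino-db-$(date +%Y%m%d).sql"
echo "pg_dump -U postgres -f $backup_file $db_name"
# Run the printed command on the database host; restore later with:
#   psql -U postgres -d $db_name -f $backup_file
```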
Chapter 7 Export and Import 7.1 Introduction The Rhino SLEE provides administrators and programmers with the ability to export the current deployment and configuration state to a set of human-readable text files, and to later import that export image into either the same or another Rhino SLEE instance. This is useful for: • Backing up the state of the SLEE. • Migrating the state of one Rhino SLEE to another Rhino SLEE instance. • Migrating SLEE state between different versions of the Rhino SLEE.
7.2 Exporting State In order to use the exporter, the Rhino SLEE must be available and ready to accept management commands. The exporter is invoked using the $RHINO_HOME/client/bin/rhino-export shell script. The script requires at least one argument, which is the name of the directory in which the export image will be written to. In addition, a number of optional command-line arguments may be specified: $ client/bin/rhino-export Valid command line options are: -h - The hostname to connect to.
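When exports are taken regularly, a dated directory name keeps one export image from overwriting another. The rhino-export invocation is commented out because it needs a running SLEE; the host and port shown are the installation defaults assumed throughout this manual:

```shell
# Derive a dated directory name for the export image.
export_dir="rhino-export-$(date +%Y%m%d)"
echo "export directory: $export_dir"
# client/bin/rhino-export -h localhost -p 1199 "$export_dir"
```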
7.3 Importing State To import the state into a Rhino SLEE, execute the ant script in the directory created by the exporter. user@host:~/rhino/rhino_export$ ant Buildfile: build.xml management-init: login: [slee-management] establishing new connection to : localhost:1199/admin install-ocjainsip-1.2-ra-du: [slee-management] Install deployable unit file:lib/ocjainsip-1.2-ra.jar install-sip-ac-location-service-du: [slee-management] Install deployable unit file:jars/sip-ac-location-service.
7.4 Partial Imports A partial import is one where only some of the import management operations are executed. This is useful when only the deployable units need to be deployed, or when the resource adaptor entities are required but do not need to be activated. To list the available targets in the build file, execute the following command: user@host:~/rhino/rhino_export$ ant -p Buildfile: build.
Note: The import script will ignore any existing components. It is recommended that the import be run against a Rhino SLEE which has no components deployed. The $RHINO_NODE_HOME/init-management-db.sh script will re-initialise the run-time state and working configuration persisted in the main working memory.
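Selected targets from the ant -p listing can be driven from a loop. A running Rhino instance is required for the real run, so this sketch only prints the commands; the two target names are copied from the listing in this chapter:

```shell
# Print the ant invocations for a deployable-unit-only partial import.
for target in install-ocjainsip-1.2-ra-du install-sip-ac-location-service-du; do
  echo "ant $target"
done
```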
Chapter 8 Statistics and Monitoring 8.1 Introduction The Rhino SLEE provides monitoring facilities for capturing statistical performance data about the cluster using the client-side application rhino-stats. To launch the client and connect to the Rhino SLEE, execute the following command: >cd $RHINO_HOME >client/bin/rhino-stats One (and only one) of -g (Start GUI), -m (Monitor Parameter Set), -l (List Available Parameter Sets) required.
Parameter Set Type Object Pools Staging Threads Memory Database Sizing System Memory Usage Lock Manager Tunable Parameters Object Pool Sizing Staging Configuration Memory Database Size limits JVM Heap Size Lock Strategy Table 8.1: Useful statistics for tuning Rhino performance • Gauges show the quantity of a particular object or item such as the amount of free memory, or the number of active activities. • Sample type statistics collect sample values every time a particular event or action occurs.
$ /home/user/rhino/client/bin/rhino-stats -l Events Parameter Set Type: Events Description: Event stats Counter type statistics: Name: Label: accepted n/a failed n/a rejected n/a successful n/a Description: Accepted events Events that failed in event processing Events rejected due to overload Event processed successfully Sample type statistics: Name: Label: eventProcessin EPT eventRouterSet ERT numSbbsInvoked #sbbs sbbProcessingT SBBT Description: Total event processing time Event router setup time Numbe
format of statistics output:

• -R will output raw (single number) timestamps.
• -C will output comma-separated statistics.
• -q will suppress printing of non-statistics information.

For example, to output a comma-separated log of event statistics, you could use: [user@host rhino]$ ./client/bin/rhino-stats -m Events -R -C -q 8.
Figure 8.1: Creating a Quick Graph
Figure 8.2: Creating a Graph with the Wizard
Figure 8.3: Selecting Parameter Sets with the Wizard • Load an existing graph configuration from a file. This allows the user to select a previously saved graph configuration file and create a new graph using that configuration. Selecting the first option, “Line graph for a counter or a gauge”, and clicking “Next” displays the graph components screen. This screen contains a table listing the statistics currently selected for display on the line plot. Initially, this is empty.
Figure 8.4: Adding Counters with the Wizard
Figure 8.5: Naming a Graph with the Wizard
Figure 8.6: A Graph created with the Wizard

2. Or, if the client application is already running, by selecting option 4 in the graph creation wizard, “Load an existing graph configuration from a file”. Note that these saved graph configurations can also be used with the rhino-stats console when used in conjunction with the -f option. This allows arbitrary statistics sets to be monitored from the command line.
Chapter 9 Web Console 9.1 Introduction The Rhino SLEE Web Console is a web application that provides access to management operations of the Rhino SLEE. Using the Web Console, the SLEE administrator can deploy applications, provision profiles, view usage parameters, configure resource adaptors, etc. The Web Console enables the administrator to interact directly with the management objects (known as MBeans) within the SLEE. 9.2 Operation 9.2.
9.2.2 Managed Objects The main page of the Web Console (see Figure 9.1) groups the management beans into several categories: Figure 9.1: Web Console Main Page • The SLEE Subsystem category is an enumeration of the "SLEE" JMX domain and provides access to the management operations mandated by the JAIN SLEE specification. • The Container Configuration category contains MBeans which provide runtime configuration of license, logging, object pools, rate limiting, the staging queue and threshold alarms.
Clicking on the "Logout" link will end the current session and redisplay the login screen. 9.2.4 Interacting with Managed Objects This section describes how the Web Console maps the MBean operations to the web interface.
• The Web Console web application archive (web-console.war) contains the J2EE web application itself, consisting of servlets, static resources (images, stylesheets and scripts) and configuration files. • Third-party library dependencies in $RHINO_HOME/client/lib, such as Jetty itself, the servlet API, etc. 9.3.2 Standalone Web Console In a production environment, it is strongly recommended that the embedded web console is disabled, and a standalone web console is installed on a dedicated management host.
9.4 Configuration 9.4.1 Changing Usernames and Passwords To edit or add usernames and passwords for accessing Rhino with the Web Console, edit either $RHINO_HOME/etc/defaults/config/rhino.passwd (if embedded or using JMX Remote authentication) or $CLIENT_HOME/etc/web-console.passwd (if using local file authentication in a standalone Web Console). The Rhino node (or standalone Web Console) will need to be restarted for changes to this file to take effect.
9.5.1 Secure Socket Layer (SSL) Connections The HTTP server creates encrypted SSL connections using a certificate in the web-console.keystore file. This means sensitive data such as the administrator password is not sent in cleartext when connecting to the Web Console from a remote host. This certificate is generated at installation time using the hostname returned by the operating system. 9.5.2 Declarative Security Declarative container based security is specified for all URLs used by the Web Console.
Chapter 10 Log System Configuration 10.1 Introduction The Rhino SLEE uses the Apache Log4J logging architecture (http://logging.apache.org/) to provide logging facilities to the internal SLEE components and deployed services. This chapter explains how to set up the Log4J environment and examine debugging messages. SLEE application components can use the Trace facility provided by the SLEE for logging. The Trace facility is defined in the SLEE 1.
Log Level: Description
FATAL: Only error messages for unrecoverable errors are produced (not recommended).
ERROR: Only error messages are produced (not recommended).
WARN: Error and warning messages are produced.
INFO: The default. Errors and warnings are produced, as well as some informational messages, especially during node startup or deployment of new resource adaptors or services.
DEBUG: Will produce a large number of log messages.
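The levels above follow the usual Log4J threshold model: a message is emitted only if its level is at least as severe as the level configured for the logger. A minimal sketch of that comparison (illustrative only, not Rhino's implementation):

```java
public class LevelFilter {
    // Severity ordering matches the table above: FATAL is most severe, DEBUG least.
    public enum Level { FATAL, ERROR, WARN, INFO, DEBUG }

    // A message is emitted only if it is at least as severe as the configured threshold.
    public static boolean isEnabled(Level threshold, Level message) {
        return message.ordinal() <= threshold.ordinal();
    }
}
```

With the default INFO level, WARN and ERROR messages pass the filter while DEBUG messages are suppressed.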
>cd $RHINO_HOME >./client/bin/rhino-console [Rhino@localhost (#1)] help createfileappender createFileAppender Create a file appender [Rhino@localhost (#2)] createFileAppender FBFILE foobar.log Done. Once the file appender has been created, log keys can be configured to direct their loggers' messages to that appender.
Figure 10.1: Creating a file appender To add an AppenderRef so that logging requests for the "savanna.stack" logger key are forwarded to the FBFILE file appender, we choose appropriate fields and click the "addAppenderRef" button as in Figure 10.2. Figure 10.2: Adding an AppenderRef
There are also Web Console commands for setting additivity for each logger key and for setting levels as in Figure 10.3. Figure 10.3: Other Logging Administration Commands
Chapter 11 Alarms Alarms are described in the JAIN SLEE 1.0 Specification and are faithfully implemented in the Rhino SLEE. Alarms can be raised by various components inside Rhino, including other vendors’ components which have been deployed in the SLEE. In most cases, it is the responsibility of the system administrator to clear alarms, although in some cases an alarm may be cleared automatically when the cause of that alarm has been resolved. Alarms make their presence known through log messages.
[Rhino@localhost (#28)] listactivealarms Alarm 56875565751424514 (Node 101, 07-Dec-05 16:44:05.435): Major [resources.cap-conductor.capra.noconnection] Lost connection to backend localhost:10222 Alarm 56875565751424513 (Node 101, 07-Dec-05 16:41:04.326): Major [rhino.license] License with serial ’107baa31c0e’ has expired. Clearing alarms can be done individually for each alarm, or for an entire group of Alarms.
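The difference between clearing an individual alarm and clearing a whole group can be sketched as follows. This is a simplified illustration of the semantics only, not Rhino's internal implementation; the active-alarm table maps each alarm's unique ID to its alarm type:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AlarmClearing {
    // Return a copy of the active-alarm table with one alarm (by ID) cleared.
    public static Map<Long, String> clearOne(Map<Long, String> active, long id) {
        Map<Long, String> result = new LinkedHashMap<>(active);
        result.remove(id);
        return result;
    }

    // Return a copy with every alarm of the given type cleared, analogous to
    // clearing an entire group of alarms in one operation.
    public static Map<Long, String> clearType(Map<Long, String> active, String type) {
        Map<Long, String> result = new LinkedHashMap<>();
        for (Map.Entry<Long, String> e : active.entrySet())
            if (!e.getValue().equals(type)) result.put(e.getKey(), e.getValue());
        return result;
    }
}
```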
• The clearAlarms button will clear all alarms in that category. • The exportAlarmTableAsNotifications button will export all alarms as JMX notifications. The results of this operation will be visible on the logs as notifications. • The logAllActiveAlarms button will write all alarms to the Rhino SLEE’s log.
Chapter 12 Threshold Alarms 12.1 Introduction To supplement the standard alarms raised by Rhino, an administrator may configure additional alarms to be raised or cleared automatically based on the evaluation of a set of conditions using input from Rhino’s statistics facility. These alarms are known as Threshold Alarms and are configured using the Threshold Rules MBean.
For parameter set type descriptions and a list of available parameter sets, use the -l option: $ client/bin/rhino-stats -l "System Info" 2006-01-10 17:34:04.
12.6 Creating Rules Rules may be created using either the Web Console or using the Command Console with XML files. The following sections demonstrate how to manage threshold rules using both methods. 12.6.1 Web Console The following example shows creation of a low memory alarm using the Web Console. This rule will raise an alarm on any node if the amount of free memory becomes less than 20% of the total memory. In Figure 9.
The alarm type and message are set with the setAlarm operation. Finally, the rule is activated using the activateRule operation. Once the rule is active it will begin to be evaluated. 12.6.2 Command Console A less resource-intensive way of viewing, exporting and importing rules is to use the Command Console. Using the Command Console, rules cannot be edited directly but must first be exported to a file, edited, and then imported again.
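The low-memory rule from the example above reduces to a per-period comparison of two statistics. A hedged sketch of the condition (the 20% figure comes from the example rule; the method name and structure are illustrative, not Rhino's rule engine):

```java
public class LowMemoryRule {
    // Threshold from the example rule: alarm when free memory drops below 20% of total.
    static final double THRESHOLD = 0.20;

    // One evaluation of the rule's condition against a statistics sample.
    // When this returns true the rule raises its alarm; when it returns false
    // again, the alarm can be cleared automatically.
    public static boolean shouldRaise(long freeMemory, long totalMemory) {
        return (double) freeMemory / totalMemory < THRESHOLD;
    }
}
```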
[Rhino@localhost (#1)] getconfig threshold-rules rule/low_memory
the resource adaptor component jar file only. They are not granted to classes loaded from any other dependent jars required by resource adaptors defined in the resource adaptor component jar’s deployment descriptor, nor to any dependent library jars used by the same. Example: grant { permission java.lang.RuntimePermission "modifyThreadGroup"; }; 18.5.
SBB This example shows fragments of the SBB deployment descriptor defined by the JAIN SLEE 1.0 specification, and a corresponding extension deployment descriptor that informs Rhino that the SBB should use optimistic concurrency control. SBB deployment descriptor fragment: Test SBB to show use of extension DD mechanism ... SBB Extension deployment descriptor fragment: optimistic ...
therefore knows that SIP transactions time out after a certain period of time. When Rhino queries the RA, the RA will tell Rhino to end the activity corresponding to the outstanding SIP transaction. Rhino will then end the activity according to the JAIN SLEE specification, which will detach the SBB and delete it. It should be noted that, with this approach, if Rhino discards the event due to a lack of CPU resources to process it, the RA will eventually be queried again.
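The stale-activity scan described above can be sketched as a periodic sweep that compares each activity's last-event time against the transaction timeout. This is an illustrative model only (the class, its names, and the use of string activity IDs are assumptions, not Rhino's internals):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ActivityScan {
    // Given each activity's last-event timestamp, return the IDs whose underlying
    // transaction has exceeded the timeout and should therefore be ended,
    // detaching and removing any SBBs still attached to them.
    public static List<String> timedOut(Map<String, Long> lastEventTime,
                                        long nowMillis, long timeoutMillis) {
        List<String> ended = new ArrayList<>();
        for (Map.Entry<String, Long> e : lastEventTime.entrySet())
            if (nowMillis - e.getValue() >= timeoutMillis) ended.add(e.getKey());
        return ended;
    }
}
```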
Chapter 19 Database Connectivity 19.1 Introduction This chapter discusses Rhino SLEE configuration, and recommendations for programming applications which connect to an external SQL database. Rhino SLEE can connect to any external database which has support for JDBC 2.0 and JDBC 2.0’s standard extensions. Application components such as Resource Adaptors and Service Building Blocks may execute SQL statements against the external database.
Each entry contains a mandatory name, type, and value element. These three elements identify the name of the JavaBean property, the Java type for the property, and the value to set the JavaBean property to, respectively. Rhino requires that there must be one entry named ManagementResource. This is the database that is used by Rhino to store state related to the management of the Rhino installation.
ExternalDataSource (org.postgresql.jdbc2.optional.SimpleDataSource)
serverName (java.lang.String) = dbhost
databaseName (java.lang.String) = db1
user (java.lang.
For further information regarding extension deployment descriptors refer to Section 18.5 in Chapter 18. The SBB is then able to obtain a reference to an object implementing the DataSource interface via a JNDI lookup as follows: import javax.slee.*; import javax.sql.DataSource; import java.sql.Connection; import java.sql.SQLException; ... public class SimpleSbb implements Sbb { private DataSource ds; ... public void setSbbContext(SbbContext context) { try { Context myEnv = (Context) new InitialContext().
It is recommended that these methods are not invoked by SLEE components.
Chapter 20 J2EE SLEE Integration 20.1 Introduction The Rhino SLEE can inter-operate with a J2EE 1.3-compliant server in two ways: 1. SBBs can obtain references to the home interface of beans hosted on an external J2EE server, and invoke those beans via J2EE’s standard RMI-IIOP mechanisms. 2. EJBs residing in an external J2EE server can send events to the Rhino SLEE via the standardised mechanism described in the JAIN SLEE 1.0 Final Release specification, Appendix F.
This element specifies the logical name of the EJB as known to Rhino. It should correspond to the logical name used by SBBs in their deployment descriptors to reference the EJB.
Access to a remote EJB stored on a J2EE server. This element identifies the JNDI path, relative to java:comp/env, to bind the EJB to: ejb/MyEJBReference Entity com.
During this configuration, the Open Cloud Rhino SLEE J2EE Connector will be configured with a list of endpoints. This list is a space- or comma-separated list of host:port pairs that identify the nodes of the Rhino SLEE that the connector should contact to deliver events; in the case of a Rhino SLEE install, there is only one possible node. The port number should correspond to the port that the J2EE Resource Adaptor has been configured to use.
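The endpoint list format described above (space- or comma-separated host:port pairs) can be parsed along these lines. This is an illustrative sketch, not the connector's actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class EndpointList {
    // Parse "host1:1234, host2:1234" or "host1:1234 host2:1234" into
    // { host, port } pairs, one per Rhino node the connector should contact.
    public static List<String[]> parse(String endpoints) {
        List<String[]> result = new ArrayList<>();
        for (String token : endpoints.split("[,\\s]+")) {
            if (token.isEmpty()) continue;
            int colon = token.lastIndexOf(':');
            if (colon < 0)
                throw new IllegalArgumentException("expected host:port, got " + token);
            result.add(new String[] { token.substring(0, colon), token.substring(colon + 1) });
        }
        return result;
    }
}
```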
Chapter 21 PostgreSQL Configuration 21.1 Introduction The Rhino SLEE requires a PostgreSQL RDBMS database for persisting the main working memory to non-volatile memory. The main working memory in Rhino contains the runtime state, deployments, profiles, entities and the entities’ bindings. Before installing a production Rhino cluster, PostgreSQL must be installed. For further information on downloading and installing PostgreSQL, refer to http://www.postgresql.org .
21.4 TCP/IP Connections The PostgreSQL server needs to be configured to accept TCP/IP connections so that it can be used with the Rhino SLEE. Prior to version 8.0 of PostgreSQL, it was necessary to manually enable TCP/IP support by editing the tcpip_socket parameter in the $PGDATA/postgresql.conf file: tcpip_socket = 1 As of version 8.0 of PostgreSQL this parameter is no longer required, and the database accepts TCP/IP connections by default. 21.
Figure 21.1: Cluster with multiple database servers The main working memory consists of several memory databases on each node. • The ManagementDatabase which holds the working configuration and run-time state of the logging system, services, and resource adaptor entities. • The ProfileDatabase which holds the profile tables, profiles and profile indexes. Generally both of these memory databases must be configured for fail-over using multiple PostgreSQL database servers.
name, user name, password, login timeout). Variable substitution using @variable-name@ syntax substitutes variables from the $RHINO_NODE_HOME/config/config_variables file. Example Configuration For a two database host configuration, first initialize the main working memory on each database server. user@host> init-management-db.sh -h host1 user@host> init-management-db.sh -h host2 Then alter the configuration for each node in the cluster. Here is a sample configuration file.
org.postgresql.jdbc3.Jdbc3SimpleDataSource (rhino_sdk_management)
serverName (java.lang.String) = @MANAGEMENT_DATABASE_HOST2@
portNumber (java.lang.Integer) = @MANAGEMENT_DATABASE_PORT@
databaseName (java.lang.String) = @MANAGEMENT_DATABASE_NAME@
user (java.lang.String) = @MANAGEMENT_DATABASE_USER@
password (java.lang.String) = @MANAGEMENT_DATABASE_PASSWORD@
loginTimeout (java.lang.
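The @variable-name@ tokens in the sample configuration above are replaced with values from the config_variables file before the configuration is used. The substitution can be sketched as a simple scan-and-replace over the configuration text (illustrative only, not Rhino's implementation):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConfigSubstitution {
    // Replace each @variable-name@ token with its value from the variables map.
    // Unknown variables are left untouched rather than replaced with nothing.
    public static String substitute(String text, Map<String, String> variables) {
        Matcher m = Pattern.compile("@([A-Za-z0-9_-]+)@").matcher(text);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = variables.get(m.group(1));
            if (value == null) value = m.group(0);
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```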
node is configured to update multiple databases.
Chapter 22 SIP Example Applications 22.1 Introduction The Rhino SLEE includes a demonstration resource adaptor and example applications which use SIP (Session Initiation Protocol - RFC 3261). This chapter explains how to build, deploy and demonstrate the examples. The examples illustrate how some typical SIP services can be implemented using a SLEE. They are not intended for production use.
22.2 System Requirements The SIP examples run on all supported Rhino SLEE platforms. Please see Appendix A for details. 22.2.1 Required Software • SIP user agent software, such as Linphone or Kphone. – http://www.linphone.org – http://www.wirlab.net/kphone – http://www.sipcenter.com/sip.nsf/html/User+Agent+Download 22.3 Directory Contents The base directory for the SIP Examples is $RHINO_HOME/examples/sip. The contents of the SIP Examples directories are summarised in Table 22.1.
After changing the PROXY_HOSTNAMES and PROXY_DOMAINS properties so that they are correct for the environment, save the sip.properties file. 22.4.2 Building and Deploying To create the deployable units for the Registrar, Proxy and Location services run Ant with the build target as follows: user@host:~/rhino/examples/sip$ ant build Buildfile: build.
sip-fmfm: [copy] Copying 4 files to /home/user/rhino/examples/sip/jars/sip/classes/sip-examples/fmf m-META-INF [profilespecjar] Building profile-spec-jar: /home/user/rhino/examples/sip/jars/sip/jars/fmfm-p rofile.jar [sbbjar] Building sbb-jar: /home/user/rhino/examples/sip/jars/sip/jars/fmfm-sbb.jar [deployablejar] Building deployable-unit-jar: /home/user/rhino/examples/sip/jars/sip/jars/sipfmfm-service.jar [delete] Deleting: /home/user/rhino/examples/sip/jars/sip/jars/fmfm-profile.
By default, the build script will deploy the Registrar and Proxy example services, and any components these depend on, including the SIP Resource Adaptor and Location Service. To deploy these examples, run Ant with the deployexamples target as follows: user@host:~/rhino/examples/sip$ ant deployexamples Buildfile: build.xml init: build: management-init: [echo] OpenCloud Rhino SLEE Management tasks defined login: [slee-management] establishing new connection to : 127.0.0.
user@host:~/rhino$ ./client/bin/rhino-console Interactive Rhino Management Shell Rhino management console, enter ’help’ for a list of commands [Rhino@localhost (#0)] state SLEE is in the Running state The Registrar and Proxy services are now deployed and ready to use. See Section 22.6 for details on using SIP user agents to test the example services. 22.4.3 Configuring the Services Configuring the services is done by editing the sip.properties file.
The deployment descriptors for the JDBC Location SBB are located in the src/com/opencloud/slee/services/sip/ location/jdbc/META-INF directory. The default data source in the oc-sbb-jar.xml extension deployment descriptor is as follows: jdbc/SipRegistry javax.sql.
Name | Type | Default
ListeningPoints | java.lang.String | 0.0.0.0:5060/[udp|tcp]
ExtensionMethods | java.lang.String |
OutboundProxy | java.lang.String |
UDPThreads | java.lang.Integer | 1
TCPThreads | java.lang.Integer | 1
RetransmissionFilter | java.lang.Boolean | False
AutomaticDialogSupport | java.lang.Boolean | False
Keystore, KeystoreType, KeystorePassword, Truststore, TruststoreType, TruststorePassword, CRLURL, CRLRefreshTimeout, CRLLoadFailureRetryTimeout, CRLNoCRLLoadFailureRetryTimeout, ClientAuthentication | java.lang.
user@host:~/rhino/examples/sip$ ant deploysipra Buildfile: build.xml management-init: [echo] OpenCloud Rhino SLEE Management tasks defined login: [slee-management] establishing new connection to : 127.0.0.1:1199/admin deploysipra: [slee-management] [slee-management] [slee-management] [slee-management] Install deployable unit file:lib/ocjainsip-1.2-ra.jar Create resource adaptor entity sipra from OCSIP 1.
# Select location service implementation. # If "usejdbclocation" property is true, JDBC location service will be deployed. # Default is to use Activity Context Naming implementation. usejdbclocation=true The PostgreSQL database that was configured during the SLEE installation is already set up to act as the repository for a JDBC Location Service. Note. The table is removed and recreated every time the $RHINO_NODE_HOME/init-management-db.sh script is executed.
sip-registrar: [copy] Copying 2 files to /home/users/rhino/examples/sip/classes/sip-examples/registrar-M ETA-INF [sbbjar] Building sbb-jar: /home/users/rhino/examples/sip/jars/registrar-sbb.jar [deployablejar] Building deployable-unit-jar: /home/users/rhino/examples/sip/jars/sip-registra r-service.jar [delete] Deleting: /home/users/rhino/examples/sip/jars/registrar-sbb.
deploylocationservice: deployregistrar: [slee-management] Install deployable unit file:jars/sip-registrar-service.jar [slee-management] Activate service SIP Registrar Service 1.5, Open Cloud [slee-management] Set trace level of RegistrarSbb 1.5, Open Cloud to Info BUILD SUCCESSFUL Total time: 48 seconds This will compile the necessary classes, assemble the jar file, deploy the service into the SLEE and activate it. 22.5.
sip-jdbc-location: [sbbjar] Building sbb-jar: /home/user/rhino/examples/sip/jars/jdbc-location-sbb.jar [deployablejar] Building deployable-unit-jar: /home/user/rhino/examples/sip/jars/sip-jdbc-loca tion-service.jar [delete] Deleting: /home/user/rhino/examples/sip/jars/jdbc-location-sbb.jar sip-registrar: [copy] Copying 2 files to /home/user/rhino/examples/sip/classes/sip-examples/registrar-ME TA-INF [sbbjar] Building sbb-jar: /home/user/rhino/examples/sip/jars/registrar-sbb.
deploy-ac-locationservice: [slee-management] Install deployable unit file:jars/sip-ac-location-service.jar [slee-management] Activate service SIP AC Location Service 1.5, Open Cloud [slee-management] Set trace level of ACLocationSbb 1.5, Open Cloud to Info deploylocationservice: deployregistrar: [slee-management] Install deployable unit file:jars/sip-registrar-service.jar [slee-management] Activate service SIP Registrar Service 1.5, Open Cloud [slee-management] Set trace level of RegistrarSbb 1.
22.5.8 Modifying Service Source Code If modifications are made to the source code of any of the SIP services, the altered services can be recompiled and deployed easily using the Ant targets in $RHINO_HOME/examples/sip/build.xml. If the service is already installed, remove it using the relevant undeploy Ant target, and then rebuild and redeploy using the relevant deploy target (use “ant -p” to list the possible targets). 22.
Linphone Setting: Description
SIP port: Default is 5060. Ensure this is different to the SIP RA’s port if running the SLEE and Linphone on the same system.
Identity: The local SIP identity on this host, the contact address used in a SIP registration.
Use sip registrar: Ensure this is selected, so that Linphone will automatically send a REGISTER request when it starts.
Server address: The SIP address of the SLEE server, e.g. sip:hostname.
Your password
Address of record
Use this registrar server. . .
address-of-record = sip:joe@siptest1.opencloud.com Updating bindings Updating binding: sip:joe@siptest1.opencloud.com -> sip:user@192.168.0.9 Contact: setRegistrationTimer(sip:joe@siptest1.opencloud.com, sip:user@192.168.0.9, 900, 2400921797@192.168.0.9, 0) set new timer for registration: sip:joe@siptest1.opencloud.com -> sip:joe@192.168.0.9, expires in 900s Adding 1 headers Sending Response: SIP/2.0 200 OK Via: SIP/2.0/UDP 192.168.0.
22.6.4 Enabling Debug Output The SIP services can write tracing information to the Rhino SLEE logging system via the SLEE Trace Facility. To enable trace logging output, log in to the Web Console. From the main page, select SLEE Subsystems, then View Trace MBean. On the Trace page are setTraceLevel and getTraceLevel buttons. On the drop-down list next to setTraceLevel, select the component to debug, for example the SIP Proxy SBB. Select a trace level; Finest is the most detailed.
Chapter 23 JCC Example Application 23.1 Introduction The Rhino SLEE includes a sample application that makes use of Java Call Control version 1.1 (JCC 1.1). This section explains how to build, deploy and use this example. JCC is a framework that provides applications with a consistent mechanism for interfacing with underlying, divergent networks. It provides a layer of abstraction over network protocols and presents a high-level API to applications.
Figure 23.1: Object model of a two-party call: a Provider, a Call, and two Connection/Address pairs • Connection: represents the dynamic relationship between a Call and an Address. A Connection object exists if the Address is a part of the telephone call. Connection objects are immutable in terms of their Call and Address references.
Figure 23.2: Diagrammatic representation of the Call Forwarding Service 2. It determines whether the called party has call forwarding enabled, and to which number. 3. If so, the call is routed: Call.routeCall(...); 4. The service completes: Connection.continueProcessing(); The Call Forwarding Profile contains the following user subscription information: • Address: address of the terminating party. • Forwarding address: address where the call will be forwarded.
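The routing decision the service makes in steps 2 and 3 above depends only on the subscriber's profile fields. A minimal sketch of that decision (the class and method names are illustrative, not the example's actual code; the real SBB reads these fields from a Call Forwarding Profile via the Profile Facility):

```java
public class CallForwardingDecision {
    // A simplified view of the Call Forwarding Profile fields described above.
    public static class Profile {
        final boolean forwardingEnabled;
        final String forwardingAddress;
        public Profile(boolean forwardingEnabled, String forwardingAddress) {
            this.forwardingEnabled = forwardingEnabled;
            this.forwardingAddress = forwardingAddress;
        }
    }

    // Returns the address the call should be routed to, or null if the call
    // should simply continue processing unchanged.
    public static String routeTarget(Profile profile) {
        if (profile == null || !profile.forwardingEnabled) return null;
        return profile.forwardingAddress;
    }
}
```

If this returns a non-null address the SBB routes the call there; either way the connection is then unblocked with continueProcessing().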
Figure 23.3: Call Attempt (initial-event-select variable="AddressProfile"). So when a “Connection Authorize Call Attempt” event arrives at the JCC Resource Adaptor, a new root SBB will be created for that service if the Address (user B address) is present in the Address Profile Table of the Service. The JCC Resource Adaptor creates an activity for the initial event. The Activity object associated with this activity is the JccConnection object.
Figure 23.4: Call Forwarding SBB creation // get profile for service instance’s current subscriber CallForwardingAddressProfileCMP profile; try { // get profile table name from environment String profileTableName = (String)new InitialContext().lookup( "java:comp/env/ProfileTableName"); // lookup profile ProfileFacility profileFacility = (ProfileFacility)new InitialContext().lookup( "java:comp/env/slee/facilities/profile"); ProfileID profileID = profileFacility.
Figure 23.5: OnCallDelivery method execution trace(Level.FINE, "Forwarding not enabled - ignoring event"); return; } // get forwarding address String routedAddress = profile.getForwardingAddress().getAddressString(); Finally, the SBB executes the Continue Processing method in the JCC connection, and the connection is unblocked.
Figure 23.6: Service finalization. SBB and activity has been removed 23.3 Directory Contents The base directory for the JCC Examples is $RHINO_HOME/examples/jcc. When referring to file locations in the following sections, this directory is abbreviated to $EXAMPLES. The contents of the examples directory are summarised below.
build.xml: Ant build script for JCC example applications.
build.properties
README
createjcctrace.sh
src/
lib/
classes/
jars/
ra/
23.4.2 Deploying the Resource Adaptor The Ant build script $EXAMPLES/build.xml contains build targets for deploying and undeploying the JCC RA. To deploy the JCC RA, first ensure that the SLEE is running. Go to the JCC examples directory, and then execute the Ant target deployjccra as shown: user@host:~/rhino/examples/jcc$ ant deployjccra Buildfile: build.xml management-init: [echo] OpenCloud Rhino SLEE Management tasks defined login: [slee-management] establishing new connection to : 127.0.0.
user@host:~/rhino/examples/jcc$ ant undeployjccra Buildfile: build.xml management-init: [echo] OpenCloud Rhino SLEE Management tasks defined login: [slee-management] establishing new connection to : 127.0.0.1:1199/admin undeployjccra: [slee-management] [slee-management] [slee-management] [slee-management] [slee-management] local-ra.jar [slee-management] a-type.
user@host:~/rhino/examples/jcc$ ant deployjcccallfwd Buildfile: build.xml management-init: [echo] OpenCloud Rhino SLEE Management tasks defined login: [slee-management] establishing new connection to : 127.0.0.1:1199/admin buildjccra: [mkdir] [copy] [jar] [delete] Created dir: /home/user/rhino/examples/jcc/library Copying 2 files to /home/user/rhino/examples/jcc/library Building jar: /home/user/rhino/examples/jcc/jars/jcc-1.1-local-ra.
user@host:~/rhino/examples/jcc$ ant undeployjcccallfwd Buildfile: build.xml management-init: [echo] OpenCloud Rhino SLEE Management tasks defined login: [slee-management] establishing new connection to : 127.0.0.1:1199/admin undeployjcccallfwd: [slee-management] Deactivate service JCC Call Forwarding 1.0, Open Cloud [slee-management] Wait for service JCC Call Forwarding 1.0, Open Cloud to deactivate [slee-management] Service JCC Call Forwarding 1.
[Rhino@localhost (#6)] listprofilespecs ProfileSpecification[AddressProfileSpec 1.0, javax.slee] ProfileSpecification[CallForwardingProfile 1.0, Open Cloud] ProfileSpecification[ResourceInfoProfileSpec 1.0, javax.
The profile is presented in edit mode. The web interface must be in “edit” mode to change profiles, and it is left in “edit” mode after a profile is created. If it is not, hit the editProfile button, then on the results page hit the profile1 link to go back to the profile. Change the value of one or more attributes by editing their value fields. The web interface will correctly parse values for Java primitive types and Strings, arrays of primitive types or Strings, and also javax.slee.
Note that the Addresses attribute is an array of addresses, hence the enclosing brackets. [Rhino@localhost (#1)] setprofileattributes CallForwardingProfiles profile1 \ Addresses "[E.164:5551212]" \ ForwardingAddress "E.164:5553434" \ ForwardingEnabled "true" Set attributes in profile CallForwardingProfiles/profile1 3. View the profile. [Rhino@localhost (#2)] listprofileattributes CallForwardingProfiles profile1 RW javax.slee.Address[] Addresses=[E.164: 5551212] RW boolean ForwardingEnabled=true RW javax.
The component executes in the same JVM as the SLEE; therefore the trace components can only be used if the SLEE process can access a windowing system. 23.6.3 Creating a Call A new call is created using the trace component’s ‘dial’ facility. The destination number is entered and the ‘dial’ button selected. The trace component at the destination address should show an incoming call alert, which can be answered or disconnected as desired. 23.6.
For example, the default Call Forwarding Profile enables forwarding from address 1111 to 2222. To test this, launch 3 trace components using addresses 1111, 2222 and 3333 respectively. On the 3333 component, dial 1111. The call will be forwarded to 2222, which can then answer or hangup the call. The screen shot below shows this in action. 23.7 Call Duration Service This service measures the duration of a call, and writes a trace with the result. Figure 23.
1. It starts when it receives a JCC event: • CONNECTION_CONNECTED 2. It stores the start time in a CMP field. 3. The service receives one of the following JCC events: • CONNECTION_DISCONNECTED • CONNECTION_FAILED 4. It calculates the call duration, reading the CMP field, and detaches from the activity. 5. The service finishes. 23.7.1 Call Duration Service - Architecture The JCC components included are: JCC Resource Adaptor: this is the same resource adaptor as used in the above examples.
Figure 23.8: Initial Event in Call Duration Service This service is executed only for the originating party (user A) because we use an initial event selector method that determines this, as shown in the source code below. public InitialEventSelector determineIsOriginating(InitialEventSelector ies) { // Get the Activity from the InitialEventSelector JccConnection connection = (JccConnection) ies.
Figure 23.9: Call Duration SBB creation public void onCallConnected(JccConnectionEvent event, ActivityContextInterface aci) { JccConnection connection = event.getConnection(); long startTime = System.currentTimeMillis(); trace(Level.FINE, "Call from " + connection.getAddress().getName()); this.setStartTime(startTime); try { if (connection.isBlocked()) connection.continueProcessing(); } catch (Exception e) { trace(Level.
CallDisconnected javax.csapi.cc.jcc.JccConnectionEvent.CONNECTION_DISCONNECTED javax.csapi.cc.jcc 1.1 CallFailed javax.csapi.cc.jcc.JccConnectionEvent.
Figure 23.10: Call Disconnected or Call Failed events After this, the SBB is not interested in any more events, so it detaches from the activity, and, after a while, the SLEE container will remove that SBB entity. The activity, which is not attached to any SBB will be removed too. Figure 23.11: Call Duration Service finalization. SBB is detached from activity The Command Console can be used to show that the SBB has been removed: [Rhino@localhost (#2)] findsbbs -service JCC\ Call\ Duration\ 1.
Chapter 24 Customising the SIP Registrar 24.1 Introduction This section provides a mini-tutorial which shows developers how to use various features of the Rhino SLEE and of JAIN SLEE. The mechanism employed to achieve this objective is writing a small extension to a pre-written example application, the SIP Registrar. A brief background on SIP Registration is provided in Section 24.2 for developers who are not familiar with the SIP Protocol.
24.3 Performing the Customisation The following steps should be carried out in order to provide the additional function. 1. Backup the existing SIP Registrar source example, which is located in the $RHINO_HOME/examples/sip/src/com/opencloud/slee/services/sip/registrar directory. 2. Install the SIP Registrar (if it is not already installed). From the examples/sip directory under the Rhino SLEE directory, run the following command: ant deployregistrar 3.
// --- EXISTING CODE --... URI uri = ((ToHeader)request.getHeader(ToHeader.NAME)).getAddress().getURI(); String sipAddressOfRecord = getCanonicalAddress(uri); // --- NEW CODE STARTS HERE --// Get myDomain env-entry from JNDI String myDomain = (String) new javax.naming.InitialContext().lookup("java:comp/env/myDomain"); String requestDomain = ((SipURI)uri).getHost(); // Check if domain in request matches myDomain if (requestDomain.equalsIgnoreCase(myDomain)) { if (isTraceable(Level.
To undeploy all the SIP example applications, including the SIP resource adaptor, run: ant undeployexamples At this point, the Ant system has been successfully used. An example SBB implementing a SIP Registrar has been modified and that SBB’s deployment descriptor has had a new environment entry added to it. The JAIN SIP API has been demonstrated, and the logging system’s management has been used to enable or disable the application’s debugging messages. 24.
Appendix A Hardware and Systems Support A.1 Supported Hardware/OS platforms Table A.1 lists the platforms that the Rhino SLEE supports. A.2 Recommended Hardware A.2.1 Introduction This subsection outlines minimum and recommended hardware configurations for different uses of Rhino SLEE. Please refer to Open Cloud Rhino SLEE 1.4.3 support for information related to supported platforms for Rhino products.
Required 3rd Party Software PostgreSQL database (supplied with Rhino) PRODUCT Hardware OS JVM Open Cloud Rhino Intel x86 (Xeon), AMD64 (Opteron), UltraSPARC III or IV CPU Intel x86 (Xeon), AMD64 (Opteron), UltraSPARC III or IV CPU Linux 2.4 or 2.6, Solaris 9 or 10 Sun 1.4.2_03, 1.5.0_05 or later Linux 2.4 or 2.6, Solaris 9 or 10 N/A Ulticom ware v9 Intel x86 (Xeon), AMD64 (Opteron), UltraSPARC III or IV CPU Same requirements for Open Cloud Rhino Linux 2.4 or 2.6, Solaris 9 or 10 Sun 1.4.2_03, 1.
requirements. Failure testing is the process of validating whether or not the combination of Rhino, Resource Adaptors and Application displays appropriate characteristics in failure conditions. Open Cloud Rhino is intended for use in performance and failure testing. There are many different performance measurements that may be of interest, and different performance targets that are required. These are typically dictated by the requirements of the application.
Load generation and network element simulation hardware:

• A single Ulticom Signalware machine with RAM, CPU, and hard disk configuration identical to the Ulticom Signalware SS7 cluster machines
• Two T1/E1 interfaces
• Three 2x900MHz UltraSPARC III machines running several instances of the switch simulator and HLR simulator
Appendix B Redundant Networking

This appendix describes how to set up redundant networks for the Rhino SLEE so that cluster members can still communicate in the event of link or switch failures.

B.1 Redundant Networking in Solaris

Solaris 8 and later include a feature called IP Multipathing (IPMP). This allows multiple Ethernet interfaces to be combined into a group, with automatic IP address failover within the group if a link failure is detected.
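As an illustration of the configuration involved, an IPMP group like the one used in the examples below can be defined in the Solaris interface configuration files. This is a sketch only: the interface names (bge0, bge1), hostnames, and group name savanna are taken from the examples that follow, and the exact file contents for a given site will differ.

```shell
# /etc/hostname.bge0 on rhinohost1 (sketch):
#   - rhinohost1 is the failover-capable data address
#   - rhinohost1-bge0 is the fixed IPMP test address for this interface
rhinohost1 netmask + broadcast + group savanna up \
addif rhinohost1-bge0 deprecated -failover netmask + broadcast + up

# /etc/hostname.bge1 on rhinohost1 (sketch): a standby interface with its
# own test address; the data address fails over here if bge0 fails
rhinohost1-bge1 netmask + broadcast + group savanna deprecated -failover standby up
```

After a reboot (or equivalent ifconfig commands), the in.mpathd daemon monitors the test addresses and migrates the data address between the interfaces in the group on failure.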
/etc/hosts:
...
192.168.1.1     rhinohost1
192.168.1.2     rhinohost2
192.168.1.101   rhinohost1-bge0
192.168.1.102   rhinohost1-bge1
192.168.1.103   rhinohost2-bge0
192.168.1.
rhinohost1: # ifconfig -a
...
bge0: flags=9040843 mtu 1500 index 3
        inet 192.168.1.101 netmask ffffff00 broadcast 192.168.1.255
        groupname savanna
        ether 0:3:ba:3c:9c:d3
bge0:1: flags=1000843 mtu 1500 index 3
        inet 192.168.1.1 netmask ffffff00 broadcast 192.168.1.255
bge1: flags=69040843 mtu 1500 index 4
        inet 192.168.1.
B.1.5 Editing the Routing Table

Finally, we need to add a multicast route so that the cluster traffic is directed over the interface group. Rather than directing all multicast traffic over the group, it is best to create a route only for the address range that the Rhino SLEE is using: other services, such as routing or time daemons, may depend on multicast traffic on a separate interface on a public network. For example, if the cluster is using the address range 224.0.55.1 - 224.0.55.
rhinohost1:/etc/rc2.d/S99static_routes:

#!/bin/sh
# Probe addresses
/usr/sbin/route add rhinohost2-bge0 rhinohost2-bge0 -static
/usr/sbin/route add rhinohost2-bge1 rhinohost2-bge1 -static
# Savanna multicast traffic
route add -interface 224.0.55.0/24 rhinohost1-public

rhinohost2:/etc/rc2.d/S99static_routes:

#!/bin/sh
# Probe addresses
/usr/sbin/route add rhinohost1-bge0 rhinohost1-bge0 -static
/usr/sbin/route add rhinohost1-bge1 rhinohost1-bge1 -static
# Savanna multicast traffic
route add -interface 224.0.
Appendix C Resource Adaptors and Resource Adaptor Entities C.1 Introduction The SLEE architecture defines the following resource adaptor concepts: • Resource adaptor type: A resource adaptor type specifies the common definitions for a set of resource adaptors. It defines the Java interfaces implemented by the resource adaptors of the same resource adaptor type. Typically, a resource adaptor type is defined by an organisation of collaborating SLEE or resource vendors, such as the SLEE expert group.
[Figure C.1: Resource Adaptor Entity lifecycle state machine — RA entity created → Inactive; activateEntity() → Activated; deactivateEntity() → Deactivating]

Each state in the lifecycle state machine is discussed below, as are the transitions between these states.

C.2.1 Inactive State

When the resource adaptor entity is created (through use of the Resource Management MBean) it is in the Inactive state.
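The transitions in Figure C.1 can be modelled as a small state machine. The sketch below mirrors the figure only: the class and method names are illustrative, not Rhino's internal API, and the transition from Deactivating back to Inactive (once all activities have ended) is assumed from the usual SLEE lifecycle rather than shown in the figure.

```java
/**
 * Illustrative model of the RA entity lifecycle in Figure C.1.
 * The class name is hypothetical; only the states and the
 * activateEntity()/deactivateEntity() transitions come from the figure.
 */
class RaEntityLifecycle {
    enum State { INACTIVE, ACTIVATED, DEACTIVATING }

    private State state = State.INACTIVE;  // entities are created Inactive

    State getState() { return state; }

    /** Inactive -> Activated, as triggered by activateEntity(). */
    void activateEntity() {
        require(State.INACTIVE);
        state = State.ACTIVATED;
    }

    /** Activated -> Deactivating, as triggered by deactivateEntity(). */
    void deactivateEntity() {
        require(State.ACTIVATED);
        state = State.DEACTIVATING;
    }

    /** Deactivating -> Inactive once all activities have ended
     *  (assumed from the usual SLEE lifecycle, not shown in the figure). */
    void allActivitiesEnded() {
        require(State.DEACTIVATING);
        state = State.INACTIVE;
    }

    private void require(State expected) {
        if (state != expected)
            throw new IllegalStateException("expected " + expected + " but in " + state);
    }
}
```

Invalid transitions (for example, deactivating an entity that is not Activated) are rejected, matching the fact that the figure defines no such edges.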
A property name must be one of the configuration properties defined by the resource adaptor. The configuration properties defined by a resource adaptor can be retrieved via the Resource Management MBean getConfigurationProperties method. Configuration properties that have no default value defined by the resource adaptor must be specified in the properties parameter when creating an RA entity. A configuration property can be specified at most once.
Appendix D Transactions

D.1 Introduction

This appendix provides a brief overview of transactions and transaction processing systems, covering the ACID properties of transactions, concurrency control models, the components of a transaction processing system, and commit protocols. Transactions are part of the JAIN SLEE event and programming models, so it is important that these concepts are understood.
entities, profiles, Activity Context state, and event queues. A transaction that has successfully acquired an exclusive lock on a unit of transacted state will not release that lock until the transaction has either committed or rolled back. Concurrent transactions may deadlock when each transaction holds more than one lock and the locks are acquired in different orders by the concurrent transactions. D.3.
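The deadlock condition described above can be avoided by always acquiring locks in a single global order, so that no two transactions ever hold locks in opposite orders. The sketch below illustrates the idea with plain Java locks; the Account and transfer names are hypothetical, and this is not how Rhino's lock manager is implemented.

```java
import java.util.concurrent.locks.ReentrantLock;

/** A unit of transacted state protected by its own exclusive lock.
 *  The id provides a global ordering for lock acquisition. */
class Account {
    final long id;
    private long balance;
    final ReentrantLock lock = new ReentrantLock();

    Account(long id, long balance) { this.id = id; this.balance = balance; }
    long balance() { return balance; }
    void adjust(long delta) { balance += delta; }
}

class Transfers {
    /** Moves funds between two accounts, always locking the account with
     *  the smaller id first. Two concurrent transfers over the same pair
     *  therefore acquire the locks in the same order and cannot deadlock. */
    static void transfer(Account from, Account to, long amount) {
        Account first = from.id < to.id ? from : to;
        Account second = (first == from) ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.adjust(-amount);
                to.adjust(amount);
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }
}
```

Without the ordering step, transfer(a, b, ...) and transfer(b, a, ...) running concurrently could each take its first lock and then block forever waiting for the other's.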
When multiple Resource Managers are combined into a single transaction, it is commonplace that each Resource Manager supports a two-phase commit protocol.

D.4.3 Commit Protocols

One-phase Commit

A one-phase commit protocol commits the transaction as a single action. It is most often used when a transaction involves a single Transacted Resource, and most Transacted Resources support it. It is sometimes also used in a last resource commit optimisation.
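The two-phase commit protocol mentioned above can be sketched as a coordinator that first collects prepare votes from every participant and then issues a uniform commit or rollback decision. The Participant and Coordinator names below are illustrative; a real transaction manager must also log its decision durably and handle failures between the phases.

```java
import java.util.List;

/** One party (e.g. a Resource Manager) in a two-phase commit. */
interface Participant {
    boolean prepare();   // phase 1: vote yes (true) or no (false)
    void commit();       // phase 2: make the changes durable
    void rollback();     // phase 2: undo the changes
}

class Coordinator {
    /** Runs both phases; returns true if the transaction committed.
     *  A real coordinator would log the decision before phase 2 so it
     *  can recover from a crash between the phases; this sketch omits
     *  that, and conservatively sends rollback to every participant
     *  on abort rather than only the prepared ones. */
    static boolean twoPhaseCommit(List<Participant> participants) {
        // Phase 1: any "no" vote (or an early exit) aborts the transaction.
        boolean allVotedYes = true;
        for (Participant p : participants) {
            if (!p.prepare()) { allVotedYes = false; break; }
        }
        // Phase 2: the same decision is delivered to every participant.
        for (Participant p : participants) {
            if (allVotedYes) p.commit(); else p.rollback();
        }
        return allVotedYes;
    }
}
```

The key property is atomicity across Resource Managers: either every participant commits or every participant rolls back, which a one-phase protocol cannot guarantee once more than one resource is involved.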
Appendix E Audit Logs

E.1 File Format

The format for license audit log files is as follows:

{ CLUSTER_MEMBERS_CHANGED [comma separated node list] }
{ INSTALLED_LICENSES nLicenses
    { [LicenseInfo field=value,field=value,field=value...] } * nLicenses }
{ USAGE_DATA start_time end_time nFunctions
    { FunctionName AccountedMin AccountedMax AccountedAvg
      UnaccountedMin UnaccountedMax UnaccountedAvg
      LicensedCapacity HardLimited } * nFunctions }

E.1.
INSTALLED_LICENSES

This is logged whenever the set of valid licenses changes. This may occur when a license is installed or uninstalled, when an installed license becomes valid, or when an installed license expires.

INSTALLED_LICENSES
[LicenseInfo name=value,name=value,name=value] (repeated nLicenses times)

For example:

INSTALLED_LICENSES 2
[LicenseInfo serial=1074e3ffde9,validFrom=Wed Nov 02 12:53:35 NZDT 2005,...
[LicenseInfo serial=2e31e311eca,validFrom=Wed Nov 01 15:01:25 NZDT 2005,...
Appendix F Glossary

Administrator – A person who maintains the Rhino SLEE, deploys services and resource adaptors, and provides access to the Web Console and Command Console.
Ant – The Apache Ant build tool.
Activity – A SLEE Activity on which events are delivered.
Command Console – The interactive command-line interface used by administrators to issue on-line management commands to the Rhino SLEE.
Configuration – The offline Rhino SLEE configuration files.
Primary Component – The group of nodes in a cluster which can process work.
Public Key – A certificate containing an aliased asymmetric public key.
Private Key – A certificate containing an aliased asymmetric private key.
Policy – A Java sandbox security policy which allocates permissions to codebases.
Ready object – An object which has been initialised and is ready to perform work.
Rhino platform – The total set of modules, components and application servers which run on JAIN SLEE.