Red Hat Cloud Foundations Reference Architecture
Edition One: Automating Private IaaS Clouds on Blades
Version 1.
1801 Varsity Drive
Raleigh NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park NC 27709 USA

Linux is a registered trademark of Linus Torvalds. Red Hat, Red Hat Enterprise Linux and the Red Hat "Shadowman" logo are registered trademarks of Red Hat, Inc. in the United States and other countries. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
Table of Contents
1 Executive Summary
2 Cloud Computing: Definitions
  2.1 Essential Characteristics
    2.1.1 On-demand Self-Service
    2.1.2 Resource Pooling
  ...
5 Reference Architecture System Configuration
  5.1 Server Configuration
  5.2 Software Configuration
  5.3 Blade and Virtual Connect Configuration
  5.4 Storage Configuration
  ...
  8.2 RHEL with Java Application
  8.3 RHEL with JBoss
  8.4 RHEL MRG Grid Execute Nodes
  8.5 RHEL MRG Grid Rendering
9 References
1 Executive Summary Red Hat's suite of open source software provides a rich infrastructure for cloud providers to build public/private cloud offerings. This reference architecture performs many of the tasks detailed in the previously released Red Hat Cloud Foundations, Edition One: Private IaaS Clouds. In this reference architecture, the application programming interfaces (APIs) and command line interfaces (CLIs) are used in the deployment of the infrastructure.
2 Cloud Computing: Definitions
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.
2.2 Service Models 2.2.1 Cloud Infrastructure as a Service (IaaS) The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and invoke arbitrary software, which can include operating systems and applications.
2.2.4 Examples of Cloud Service Models
Figure 1
2.3 Deployment Models
2.3.1 Private Cloud
The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on or off premise.
Figure 2
2.3.2 Public Cloud
The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Figure 3
2.3.3 Hybrid Cloud
The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., load-balancing between clouds).
Figure 4
2.3.4 Community Cloud
The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations).
3 Red Hat and Cloud Computing
3.1 Evolution, not Revolution – A Phased Approach to Cloud Computing
While cloud computing requires virtualization as an underlying and essential technology, it is inaccurate to equate cloud computing with virtualization. The figure below displays the different levels of abstraction addressed by virtualization and cloud computing respectively.
Figure 5: Levels of Abstraction
The following figure illustrates a phased approach to technology adoption, starting with server consolidation using virtualization, then automating large deployments of virtualization within an enterprise using private clouds, and finally extending private clouds to hybrid environments leveraging public clouds as a utility.
Figure 6: Phases of Technology Adoption in the Enterprise
3.2 Unlocking the Value of the Cloud Red Hat's approach does not lock an enterprise into one vendor's cloud stack, but instead offers a rich set of solutions for building a cloud. These can be used alone or in conjunction with components from third-party vendors to create the optimal cloud to meet unique needs. Cloud computing is one of the most important shifts in information technology to occur in decades.
3.3 Redefining the Cloud Cloud computing is the first major market wave where open source technologies are built in from the beginning, powering the vast majority of early clouds.
Today each IaaS cloud presents a unique API to which developers and ISVs need to write in order to consume the cloud service. The Deltacloud effort is creating a common, REST-based API, such that developers can write once and manage anywhere. Deltacloud is, so to speak, a cloud broker, with drivers that map the API both to public clouds like EC2 and to private virtualized clouds based on VMware vCloud or RHEV-M, as depicted in Figure 7.
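Because the Deltacloud API is plain REST over HTTP, it can be exercised with any HTTP client. A minimal sketch, assuming a deltacloudd server running on its historical default port 3001; the credentials are illustrative placeholders, not values from this configuration:

# List the instances visible through a Deltacloud front end
# (port 3001 is deltacloudd's historical default; credentials are examples)
curl --user "access_key:secret_key" \
     --header "Accept: application/xml" \
     http://localhost:3001/api/instances

The same request works unchanged whether the back-end driver points at EC2, vCloud, or RHEV-M; only the server-side driver configuration differs.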
4 Red Hat Cloud: Software Stack and Infrastructure Components
Figure 8 depicts the software stack of Red Hat Cloud Foundation components.
Figure 8: Red Hat Software Stack
4.1 Red Hat Enterprise Linux
Red Hat Enterprise Linux (RHEL) is the world's leading open source application platform. On one certified platform, RHEL offers a choice of:
• Applications – Thousands of certified ISV applications
• Deployment – Including standalone or virtual servers, cloud computing, and software appliances
• Hardware – Wide range of platforms from the world's leading hardware vendors
Red Hat released the fifth update to RHEL 5: Red Hat Enterprise Linux 5.5.
4.2 Red Hat Enterprise Virtualization (RHEV) for Servers Red Hat Enterprise Virtualization (RHEV) for Servers is an end-to-end virtualization solution that is designed to enable pervasive data center virtualization, and unlock unprecedented capital and operational efficiency. RHEV is the ideal platform on which to build an internal or private cloud of Red Hat Enterprise Linux or Windows virtual machines.
4.3 Red Hat Network (RHN) Satellite
With RHN Satellite, all RHN functionality resides on the local network, allowing much greater flexibility and customization. The Satellite server connects with Red Hat over the public Internet to download new content and updates. This model also allows customers to take their Red Hat Network solution completely off-line if desired. Features include:
• An embedded database to store packages, profiles, and system information.
4.4 JBoss Enterprise Middleware
The following JBoss Enterprise Middleware development tools, deployment platforms, and management environment are available via subscriptions that deliver not only industry-leading SLA-based production and development support, but also patches, updates, multi-year maintenance policies, and software assurance from Red Hat.
Management: • JBoss Operations Network (JON): An advanced management platform for inventorying, administering, monitoring, and updating JBoss Enterprise Platform deployments. 4.4.1 JBoss Enterprise Application Platform (EAP) JBoss Enterprise Application Platform is the market leading platform for innovative and scalable Java applications.
• manage, monitor and tune applications for improved visibility, performance and availability. One central console provides an integrated view and control of JBoss middleware infrastructure. The JON management platform (server-agent) delivers centralized systems management for the JBoss middleware product suite.
5 Reference Architecture System Configuration
To deploy the Red Hat infrastructure for a private cloud, this reference architecture used the configuration shown in Figure 9, comprised of:
1. Infrastructure management services, e.g., Red Hat Network (RHN) Satellite, Red Hat Enterprise Virtualization Manager (RHEV-M), DNS, DHCP service, PXE server, NFS server for ISO images, JON, and MRG Manager – most of which were installed in VMs within a RHCS cluster for high availability.
2.
5.1 Server Configuration

Hardware: Management Cluster Nodes [2 x HP ProLiant BL460c G6]
Specifications:
• Quad Socket, Quad Core (16 cores), Intel® Xeon® CPU X5550 @ 2.67GHz, 48GB RAM
• 2 x 146GB SATA SSD internal disk drives (mirrored)
• 2 x QLogic ISP2532-based 8Gb FC HBAs
• 2 x Broadcom NetXtreme II BCM57711E Flex-10 10Gb Ethernet Controllers

Quad Socket, Quad Core (16 cores), Intel® Xeon® CPU W5550 @2.
5.2 Software Configuration

Software | Version
Red Hat Enterprise Linux (RHEL) | 5.5 (2.6.18-194.11.3.el5 kernel)
Red Hat Enterprise Virtualization Manager (RHEV-M) | 2.2.0.47069
Red Hat Enterprise Virtualization Hypervisor (RHEV-H) | 5.5-2.2 – 6.1
Red Hat Enterprise Linux / KVM (RHEL / KVM) | 5.5.0.2 / 83-164
Red Hat Network (RHN) Satellite | 5.3.0
JBoss Enterprise Application Platform (EAP) | 5.0
JBoss Operations Network (JON) | 1.4
Red Hat Enterprise MRG Grid | 1.2

Table 2: Software Configuration
5.3 Blade and Virtual Connect Configuration
All blades use logical (Virtual Connect-assigned) serial numbers, MAC addresses, and FC WWNs. A single 10Gb network and two 8Gb FC connections are presented to each host. Appendix A.6 provides complete details of the blade and Virtual Connect configuration.

5.4 Storage Configuration

Hardware | Specifications
Storage Controller | Code Version: M110R28; Loader Code Version: 19.
LUNs were created and presented as outlined in the following table.

Volume | Size | Presentation | Purpose
MgmtServices | 1 TB | Management Cluster | Volume group for logical volumes: SatVMvol (300GB), JonVMvol (40GB), MRGVMvol (40GB), RHEVMVMvol (30GB), RHEVNFSvol (300GB)
GFS2 | 50 GB | Management Cluster | VM configuration file shared storage
RHEVStorage1 | 1 TB | Hypervisor Hosts | RHEV-M storage pool

Table 4: LUN Configuration
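A hedged sketch of how the logical volumes above could be carved out of the MgmtServices LUN once it is visible to a management node; the multipath alias and volume group name are assumptions, not values confirmed by this configuration:

# create the volume group on the multipathed MgmtServices LUN (alias assumed)
pvcreate /dev/mapper/MgmtServices_disk
vgcreate MgmtServicesVG /dev/mapper/MgmtServices_disk
# carve out the logical volumes listed in Table 4
lvcreate -L 300G -n SatVMvol   MgmtServicesVG
lvcreate -L 40G  -n JonVMvol   MgmtServicesVG
lvcreate -L 40G  -n MRGVMvol   MgmtServicesVG
lvcreate -L 30G  -n RHEVMVMvol MgmtServicesVG
lvcreate -L 300G -n RHEVNFSvol MgmtServicesVG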
6 Deploying Cloud Infrastructure Services This section provides the detailed actions performed to configure Red Hat products that constitute the infrastructure used for a private cloud. The goal is to create a set of highly available cloud infrastructure management services and the initial cloud hosts. The cloud management services and initial hosts are used in the next section to configure additional cloud hosts, VMs within the hosts, and load applications within those VMs.
• NFS service
10. Provision MGMT-2 node using temporary node from Satellite
11. Create file system management services in MGMT-2:
  • NFS service based on ext3 file system
  • GFS2 file system
12. Make MGMT-1 and MGMT-2 RHCS cluster nodes, including infrastructure management services
13. Install initial RHEV-H and RHEL/KVM hosts
14. Configure RHEV-M:
  • Software
  • RHEV data center
  • RHEV hosts
6.
MGMT1_ICIP=192.168.136.10
MGMT1_FC=mgmt_node1
MGMT1_MAC=00:17:A4:77:24:00
MGMT1_NAME=mgmt1.cloud.lab.eng.bos.redhat.com
MGMT1_PW=24^gold
MGMT2_ILO=10.16.136.232
MGMT2_ILO_PW=24^goldA
MGMT2_IP=10.16.136.15
MGMT2_IC=mgmt2-ic
MGMT2_ICIP=192.168.136.15
MGMT2_FC=mgmt_node2
MGMT2_MAC=00:17:A4:77:24:04
MGMT2_NAME=mgmt2.cloud.lab.eng.bos.redhat.com
MGMT2_PW=24^gold
MRGGRID_IP=10.16.136.50
MRGGRID_KS=mrggrid.ks
MRGGRID_MAC=52:54:00:C0:DE:22
MRGGRID_NAME=mrg-vm.cloud.lab.eng.bos.redhat.com
3. A set of Python-based XMLRPC scripts was developed for remote communication with the ricci cluster daemon; they can also be obtained from http://people.redhat.com/~mlamouri/:
• riccicmd – generalized ricci communication; various commands can be supplied
• addnfsexport – update the cluster configuration, adding NFS client resources
• delnfsexport – update the cluster configuration, removing NFS client resources
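Illustrative invocations of these tools, mirroring how they are called later in this paper (Sections 6.6 and 7.3); the client resource name is an example:

# dump the current cluster configuration via the ricci daemon on mgmt1
riccicmd -H ${MGMT1_IP} cluster configuration
# add, then remove, an NFS client resource on the rhev-nfs-fs export
addnfsexport --ricciroot=/.ricci -H ${MGMT1_IP} rhev-nfs-fs rhev-nfs-client-01
delnfsexport --ricciroot=/.ricci -H ${MGMT1_IP} rhev-nfs-fs rhev-nfs-client-01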
ks.cfg
install
cdrom
key
lang en_US.UTF-8
keyboard us
#xconfig --startxonboot
skipx
network --device eth0 --bootproto static --ip 10.16.136.15 --netmask 255.255.248.0 --gateway 10.16.143.254 --nameserver 10.16.136.1,10.16.255.2 --hostname mgmt2.cloud.lab.eng.bos.redhat.com
network --device eth1 --bootproto static --ip 192.168.136.15 --netmask 255.255.255.0 --hostname mgmt2-ic
network --device eth2 --onboot no --bootproto dhcp --hostname mgmt2.cloud.lab.eng.bos.redhat.com
@editors
@graphical-internet
@graphics
@java
@kvm
@legacy-software-support
@text-internet
@base-x
kexec-tools
iscsi-initiator-utils
bridge-utils
fipscheck
device-mapper-multipath
sgpio
emacs
libsane-hpaio
xorg-x11-utils
xorg-x11-server-Xnest
sg3_utils
ntp
pexpect

%pre
#Save some information about the cdrom device
ls -l /tmp/cdrom > /tmp/cdrom.ls

%post --nochroot
(
#If the device is no longer there by the time post starts, create it
if [ ! -b /tmp/cdrom ]
then
  #get the major number
  major=$(cat /tmp/cdrom.
#
# copy the entire content onto the created machine
#
mkdir /mnt/sysimage/root/distro
(cd /mnt/source; tar -cf - . ) | (cd /mnt/sysimage/root/distro; tar -xpf -)
) 2>&1 | tee /mnt/sysimage/root/ks_post.log

%post
(
#set the time from a server, then use this to set the hwclock. Enable ntp
/usr/sbin/ntpdate -bu 10.16.255.2
hwclock --systohc
chkconfig ntpd on
cat <<EOF >/etc/ntp.conf
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
cat /root/distro/resources/temp.rc.local.add >> /etc/rc.d/rc.local
#update to latest software
yum -y update
) 2>&1 | tee /root/ks_post2.out

b) The cvt_br.
/bin/mount -o ro /dev/scd0 /root/distro
# source env vars
if [[ -x varDefs.sh ]] ; then
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]] ; then
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.sh ]] ; then
  source /root/distro/resources/varDefs.sh
else
  echo "didn't find a varDefs.sh file"
fi
# install ssh pexpect commands
/root/distro/resources/instpyCmds.sh
label ks
  kernel vmlinuz
  append ks=cdrom:/ks.cfg initrd=initrd.img console=ttyS0,115200 nostorage
label local
  localboot 1
label memtest86
  kernel memtest
  append -

e) Create and populate a resources subdirectory with files for later use. Many of the files found in this directory are detailed in this and the following sections of this document, including the appendixes.

4. Call the mkMedia script to create an ISO image using the modified directory:

#!/bin/bash
mkisofs -r -N -L -d -J -T -b isolinux/isolinux.
ilocommand --ilourl //${LOGIN}:${MGMT2_ILO_PW}@${MGMT2_ILO} set /map1/oemhp_vm1/cddr1 oemhp_image=http://irish.lab.bos.redhat.com/pub/projects/cloud/resources/wa/rhcf.
VCMNUM=`echo $VCMFILE | wc -w`
ILONUM=`echo $ILOFILE | wc -w`
OANUM=`echo $OAFILE | wc -w`
#install storage array command tool
if [[ $SANUM -eq 0 ]]
then
  echo "No salib source!"
elif [[ $SANUM -gt 1 ]]
then
  echo "Too many salib source files!"
else
  yum -y --nogpgcheck localinstall ${SAFILE}
fi
#install Virtual Connect Manager command tool
if [[ $VCMNUM -eq 0 ]]
then
  echo "No vcmlib source!"
elif [[ $VCMNUM -gt 1 ]]
then
  echo "Too many vcmlib source files!"
else
  yum -y --nogpgcheck localinstall ${VCMFILE}
fi
#i
• creates and presents volumes from the storage array
• configures the system to access the presented volumes
• configures the LVM group to be used with the management services

#!/bin/bash
#source variables
if [[ -x varDefs.sh ]]
then
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]]
then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]]
then
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.sh ]]
then
  source /root/distro/resources/varDefs.sh
else
  echo "didn't find a varDefs.sh file"
fi
sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_IP} unmap volume MgmtServices
sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_IP} unmap volume GFS2
sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_IP} unmap volume RHEVStorage1
#Rescan the SCSI bus to discover newly presented LUNs
/usr/bin/rescan-scsi-bus.sh
#Deploy multipath configuration file preconfigured with recommended
#MSA array settings for optimal performance
/bin/cp /root/distro/resources/multipath.conf.template /etc/multipath.conf
if [[ $NUM -gt 0 ]]
then
  echo "Aliases for this host already exist!"
else
  cd /sys/class/fc_host
  for f in host*
  do
    WWN=`cat ${f}/port_name | cut -f 2 -d 'x'`
    sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_IP} set host-name id ${WWN} ${SHORTHOST}_${f}
  done
fi
fi

b) The second script, buildMpathAliases.sh, also called by the storage preparation script, searches the multipath devices and assembles a portion of the multipath configuration file that provides aliases for fibre channel devices.
#get the data from the array; assumes all virtual disks have VD in the name
sacommand --saurl //${MSA_USER}:${MSA_PW}@${array} "show volumes" | grep VD > /tmp/vols
#begin the statement construction; the WWNs do not line up, so only sections are compared
echo "multipaths {"
cat /tmp/vols | while read line ; do
  alias=`echo $line | cut -d ' ' -f 2 | cut -c7-`
  temp=`echo $line | cut -d ' ' -f 4 | cut -c 8-12``echo $line | cut -d ' ' -f 4 | cut -c 17-23`
  wwid=`grep $temp /tmp/devs2 | sort -u`
  if [ "$wwid" != "" ]; then
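The stanza this script assembles, as later deployed into /etc/multipath.conf, looks roughly like the following; the aliases match the volume names in Table 4, while the WWIDs are illustrative placeholders:

multipaths {
    multipath {
        wwid  3600c0ff000d823e5f6e8644b01000000   # illustrative WWID
        alias MgmtServices_disk
    }
    multipath {
        wwid  3600c0ff000d823e5a2f9644b01000000   # illustrative WWID
        alias GFS2_disk
    }
}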
# source env vars
if [[ -x varDefs.sh ]] ; then
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]] ; then
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.sh ]] ; then
  source /root/distro/resources/varDefs.sh
else
  echo "didn't find a varDefs.sh file"
fi
client = xmlrpclib.Server(SATELLITE_URL, verbose=0)
key = client.auth.login(SATELLITE_LOGIN, SATELLITE_PASSWORD)
#retrieve all the systems
AllSystems = client.system.listUserSystems(key)
# Initialize the list of IDs of systems that match the name
IDs = []
#Loop through all the systems; if the name matches, save the ID
for Sys in AllSystems:
    if Sys['name'] == deleteName:
        IDs.append(Sys['id'])
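The listing is cut off at this point; presumably the script finishes by deleting the matched profiles and logging out. A minimal sketch of that remainder, using the documented Satellite API call system.deleteSystems (the original script's exact tail is an assumption):

#delete every matching system profile, then close the session
if IDs:
    client.system.deleteSystems(key, IDs)
client.auth.logout(key)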
10.16.143.254 --nameserver 10.16.136.1,10.16.255.2 --hostname sat-vm.cloud.lab.eng.bos.redhat.com
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
server 10.16.136.10
server 10.16.136.15
# configure DNS
/bin/cp /root/resources/db.* /var/named/
/bin/cp /root/resources/named.conf /etc/
chkconfig named on
# cobbler preparation
/usr/sbin/semanage fcontext -a -t public_content_t "/var/lib/tftpboot/.*"
/usr/sbin/semanage fcontext -a -t public_content_t "/var/www/cobbler/images/.*"
setsebool -P httpd_can_network_connect true
# update DNS resolution
/bin/cp /etc/resolv.conf /etc/resolv.conf.orig
/bin/cp /root/resources/resolv.conf /etc/resolv.conf
echo "-" >> /var/log/rc.local.out 2>&1 echo "- begin sat install" >> /var/log/rc.local.out 2>&1 echo "-" >> /var/log/rc.local.out 2>&1 date >> /var/log/rc.local.out 2>&1 /root/resources/instSat.sh >> /var/log/rc.local.out 2>&1 # configure cobbler echo "-" >> /var/log/rc.local.out 2>&1 echo "- begin cobbler config" >> /var/log/rc.local.out 2>&1 echo "-" >> /var/log/rc.local.out 2>&1 date >> /var/log/rc.local.out 2>&1 /root/resources/configCobbler.sh >> /var/log/rc.local.
(
nice -n -15 satellite-sync --iss-parent=irish.lab.bos.redhat.com --ca-cert=/pub/RHN-ORG-TRUSTED-SSL-CERT
nice -n -15 satellite-sync --iss-parent=irish.lab.bos.redhat.com --ca-cert=/pub/RHN-ORG-TRUSTED-SSL-CERT
nice -n -15 satellite-sync
) >> /var/log/satSync 2>&1
# prepare another set of actions to be performed on next boot
# (brought down from temp node then started on provisioned mgmt1)
echo "-" >> /var/log/rc.local.out 2>&1
echo "- Preparing next boot round of actions" >> /var/log/rc.local.
yum -y update
rhn-satellite restart

4. configCobbler.sh – configure cobbler using this script that:
• performs recommended SELinux changes
• updates settings
• creates the template multipath.conf.template
• creates named and zone templates
• has cobbler create files from templates

#!/bin/bash
# This script will configure cobbler:
# perform recommended SELinux changes
/usr/sbin/semanage fcontext -a -t public_content_t "/var/lib/tftpboot/.
cat <<'EOF'>>/etc/cobbler/named.template
#for $zone in $forward_zones
zone "${zone}." {
    type master;
    file "$zone";
};
#end for
#for $zone, $arpa in $reverse_zones
zone "${arpa}." {
    type master;
    file "$zone";
};
#end for
EOF
# Create the zone templates from the existing files
# The "db." in the original zone file names is dropped
mkdir /etc/cobbler/zone_templates
(
cd /var/named
for f in db.cloud.lab.eng.bos.redhat.com db.10.16.*
do
  /bin/cp $f /etc/cobbler/zone_templates/${f#db.}
satellite-sync --step=channels --channel=rhn-tools-rhel-x86_64-server-5
satellite-sync --step=channels --channel=rhel-x86_64-server-cluster-storage-5
satellite-sync --step=channels --channel=rhel-x86_64-rhev-mgmt-agent-5
satellite-sync --step=channels --channel=rhel-x86_64-server-5-mrg-grid-execute-1
satellite-sync --step=channels --channel=rhel-x86_64-server-5-mrg-messaging-1
echo "0 1 * * * perl -le 'sleep rand 9000' && satellite-sync --email >/dev/null 2>/dev/null" >> /var/spool/cron/root
7.
6.2.4 Post Satellite Installation Actions After the satellite has completed installation, the following actions must be performed: 1. User interaction is required to add the first user, by logging into the recently installed RHN satellite (e.g., https://sat-vm.cloud.lab.eng.bos.redhat.com). This can be performed anytime after the satellite software completes installation but must be performed prior to issuing the command to start the script in step 3 of this section. 2.
3. Call post_satellite_build_up.sh, recording the output. /root/resources/post_satellite_build_up.sh 2>&1 | \ tee /tmp/post_sat.out The post_satellite_build_up.
echo "-" date /root/resources/prep_MgmtVMs.sh #Prep and create RHEL/KVM hosts echo "-" echo "Prepping first RHEL host" echo "-" date /root/resources/prep_RHELHost.sh #Prep and create RHEV hosts echo "-" echo "Prepping first RHEV host" echo "-" date /root/resources/prep_RHEVHost.sh #Prep tenant kickstarts echo "-" echo "Import Tenant kickstarts" echo "-" date /root/resources/prep_tenantKS.sh a) createOrgs.py - Create organization, allocate entitlements, and create organizational trusts.
#create the tenant org
tenantOrg = client.org.create(key, "tenant", "tenant", "24^gold", "Mr.", "Shadow", "Man", "sm@redhat.com", False)
# retrieve all the system entitlements
SysEnts = client.org.
#Double loop through the orgs setting trusts
o1 = 0
while o1 < len(Orgs) - 1:
    o2 = o1 + 1
    while o2 < len(Orgs):
        try:
            client.org.trusts.addTrust(key, Orgs[o1]['id'], Orgs[o2]['id'])
        except:
            print "Org:", Orgs[o1]['id'], " failed to add trust for org: ", Orgs[o2]['id']
        else:
            o2 += 1
    o1 += 1
client.auth.logout(key)

b) createDefActKeys.
#open channel
client = xmlrpclib.Server(SATELLITE_URL, verbose=0)
#log into infrastructure org
key = client.auth.login(INFRA_LOGIN, INFRA_PASSWD)
#create key
infra_ak = client.activationkey.create(key, 'infraDefault', 'Default key for infrastructure org', INFRA_PARENT, INFRA_ENTITLE, False)
#Add child channels
client.activationkey.addChildChannels(key, infra_ak, INFRA_CHILDREN)
#Add packages
client.activationkey.addPackageNames(key, infra_ak, INFRA_PACKAGES)
#log out from infrastructure channel
client.auth.logout(key)
INFRA_LOGIN = "infra" INFRA_PASSWD = "24^gold" INFRA_ENTITLE = [ 'monitoring_entitled', 'provisioning_entitled' ] INFRA_PARENT = 'rhel-x86_64-server-5' INFRA_CHILDREN = ['rhn-tools-rhel-x86_64-server-5', 'rhel-x86_64-server-5-mrg-grid-1', \ 'rhel-x86_64-server-5-mrg-management-1', 'rhel-x86_64-server-5-mrg-messaging-1' ] INFRA_PACKAGES = ['qpidd', 'sesame', 'qmf', 'condor', 'condor-qmf-plugins', 'cumin', \ 'perl-Frontier-RPC', 'rhncfg', 'rhncfg-client', 'rhncfg-actions', \ 'ntp', 'postgresql', 'postgresql-s
tenant_ak = client.activationkey.create(key, 'tenantMRGGridExec', 'Key for MRG Grid Exec Nodes', TENANT_PARENT, TENANT_ENTITLE, False)
client.activationkey.addChildChannels(key, tenant_ak, TENANT_CHILDREN)
client.activationkey.addPackageNames(key, tenant_ak, TENANT_PACKAGES)
client.auth.logout(key)
#print out the defined keys
print "MRG Manager activation key: ", infra_ak
print "MRG Grid Exec activation key: ", tenant_ak

d) instMgmt1.
--ksmeta="NAME=$MGMT1_NAME NETIP=$MGMT1_IP ICIP=$MGMT1_ICIP ICNAME=$MGMT1_IC" –kopts="console=ttyS0,115200 nostorage" cobbler sync # set to boot PXE echo "-" echo "Setting Mgmt to boot PXE" echo "-" date while [[ ! `ilocommand -i //${LOGIN}:${MGMT1_ILO_PW}@${MGMT1_ILO} set /system1/bootconfig1/bootsource5 bootorder=1 | grep status=0` ]]; do sleep 2; done #reset blade, power up also in case it was powered off echo "-" echo "Booting Mgmt1" echo "-" date ilocommand -i //${LOGIN}:${MGMT1_ILO_PW}@${MGMT1_ILO} po
#Add cobbler profile and system entries for the MRG VM
cobbler profile add --name=${MRGGRID_PROFILE} --distro=${MGMT_DISTRO} --kickstart=/root/resources/${MRGGRID_KS}
cobbler system add --name=${MRGGRID_NAME} --profile=${MRGGRID_PROFILE} --mac=${MRGGRID_MAC} --ip=${MRGGRID_IP} --hostname=${MRGGRID_NAME}
#Rebuild the contents of /tftpboot
cobbler sync

f) prep_RHELHost.
""" #open channel client = xmlrpclib.Server(SATELLITE_URL, verbose=0) #log into infrastructure org key = client.auth.login(INFRA_LOGIN, INFRA_PASSWD) #create key infra_ak = client.activationkey.create(key, 'RHELH', 'Key for RHEL/KVM based RHEL Hosts', INFRA_PARENT, INFRA_ENTITLE.splitlines(), False) #Add child channels client.activationkey.addChildChannels(key, infra_ak, INFRA_CHILDREN.splitlines()) #Add packages client.activationkey.addPackageNames(key, infra_ak, INFRA_PACKAGES.
    filePtr = open(fileName, 'r')
    ksName = os.path.basename(fileName).split('.')[0]
    if len(ksName) < 6:
        today = datetime.date.today()
        ksName = ksName + '_' + today.strftime("%m%d")
    #open channel
    client = xmlrpclib.Server(SATELLITE_URL, verbose=0)
    #log into infrastructure org
    key = client.auth.login(INFRA_LOGIN, INFRA_PASSWD)
    client.kickstart.importFile(key, ksName, VIRT, KSTREE, filePtr.read())
    #log out from infrastructure channel
    client.auth.logout(key)

if __name__ == "__main__":
    sys.exit(main())
#open channel
client = xmlrpclib.Server(SATELLITE_URL, verbose=0)
#log into infrastructure org
key = client.auth.login(INFRA_LOGIN, INFRA_PASSWD)
aKeys = client.activationkey.listActivationKeys(key)
for iKey in aKeys:
    if iKey['key'] == AKName or iKey['key'].split('-')[1] == AKName:
        fKey = iKey
        break
else:
    print "No activation key matched: ", AKName
    client.auth.logout(key)
    exit(-3)
aKS = client.kickstart.
then
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.sh ]]
then
  source /root/distro/resources/varDefs.sh
else
  echo "didn't find a varDefs.sh file"
fi
# Install RHEV-H
# find the rpm file names
RPMFILE=`ls /root/resources/rhev-hypervisor-*.noarch.rpm`
{RHEVM_IP} netconsole=${RHEVM_IP} rootpw=${passwd} ssh_pwauth=1 local_boot"
# Update cobbler system files
cobbler sync

h) prep_tenantKS.
if [[ -x varDefs.sh ]] ; then
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]] ; then
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.sh ]] ; then
  source /root/distro/resources/varDefs.sh
else
  echo "didn't find a varDefs.sh file"
fi
#Generate a key and sign the package
/root/resources/AppPGPKey.sh
# Create our Application channel
/root/resources/createAppSatChannel.py
# This script will generate a GPG key to use for the App Channel,
# resign the javaApp package, and make the pub key available.
# -- create profile with GPG key?
# Generate the key
gpg --batch --gen-key /root/resources/AppPGP
# Determine the key name
KNAME=`gpg --list-keys --fingerprint --with-colons | grep Vijay | cut -f5 -d':' | cut -c9-`
# Prep for signing
cat <<EOF >>~/.rpmmacros
%_signature gpg
%_gpg_name ${KNAME}
EOF
# Determine the package name
JFILE=`ls /root/resources/javaApp-*.noarch.rpm`
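The signing step itself is driven with pexpect; only its final line survives below, so the following is a hedged reconstruction of the lines that would lead up to it (the prompt string and empty passphrase are assumptions consistent with the batch-generated key above):

import pexpect
# spawn rpm's resign operation and answer its passphrase prompt
# (JFILE stands for the package path determined above; empty passphrase assumed)
signproc = pexpect.spawn("rpm --resign " + JFILE)
signproc.expect("Enter pass phrase:")
signproc.sendline("")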
match = signproc.expect([pexpect.EOF])

• createAppSatChannel.py – create the Application custom channel in Satellite

#!/usr/bin/python
"""
This script will create a custom channel for custom applications
"""
import xmlrpclib
SATELLITE_URL = "http://sat-vm.cloud.lab.eng.bos.redhat.
SATELLITE_URL = "http://sat-vm.cloud.lab.eng.bos.redhat.com/rpc/api" TENANT_LOGIN = "tenant" TENANT_PASSWD = "24^gold" TENANT_ENTITLE = [ 'monitoring_entitled', 'provisioning_entitled' ] TENANT_PARENT = 'rhel-x86_64-server-5' TENANT_CHILDREN = ['rhn-tools-rhel-x86_64-server-5', 'ourapps' ] TENANT_PACKAGES = [ 'ntp', 'javaApp' ] """ Create Key for Java Application """ #open channel client = xmlrpclib.Server(SATELLITE_URL, verbose=0) #log into tenant org key = client.auth.
SATELLITE_URL = "http://sat-vm.cloud.lab.eng.bos.redhat.com/rpc/api" INFRA_LOGIN = "tenant" INFRA_PASSWD = "24^gold" KSTREE = 'ks-rhel-x86_64-server-5-u5' VIRT = "none" def main(): if len(sys.argv) < 2: print "Usage: ",sys.argv[0]," kickstart" sys.exit(-2); # retreieve the passed parameter fileName = sys.argv[1] filePtr = open(fileName, 'r') ksName = os.path.basename(fileName).split('.')[0] if len(ksName) < 6: today = datetime.date.today() ksName = ksName + '_' + today.
TENANT_LOGIN = "tenant" TENANT_PASSWD = "24^gold" def main(): if len(sys.argv) < 3: print "Usage: ",sys.argv[0]," activationKey kickstart" sys.exit(-2); # retreieve the passed parameter AKName = sys.argv[1] KSName = sys.argv[2].split(':')[0] #open channel client = xmlrpclib.Server(SATELLITE_URL, verbose=0) #log into infrastructure org key = client.auth.login(TENANT_LOGIN, TENANT_PASSWD) aKeys = client.activationkey.listActivationKeys(key) for iKey in aKeys: if iKey['key'] == AKName or iKey['key'].
if __name__ == "__main__": sys.exit(main()) v) addGPGKey_tenant.py - loads the GPG key into satellite and associates it with a stated kickstart #!/usr/bin/python """ This script will attempt to add a GPG key to an existing kickstart """ import xmlrpclib import os.path import sys import time import datetime SATELLITE_URL = "http://sat-vm.cloud.lab.eng.bos.redhat.com/rpc/api" INFRA_LOGIN = "tenant" INFRA_PASSWD = "24^gold" def main(): if len(sys.argv) < 3: print "Usage: ",sys.argv[0]," kickstart GPGKEY_pub.
6.3 Provision First Management Node
The invocation of post_satellite_build_up.sh, among other actions, starts the installation of the first management node. The same kickstart file is used for both management nodes, with variables defining hard-coded network parameters (NAME, NETIP, ICIP, ICNAME) to differentiate between the systems. As the first management node boots, it becomes the host for the satellite VM, creates the other management VMs, and coordinates with the other management node to form the cluster.
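Cobbler renders kickstarts as Cheetah templates, so the --ksmeta values passed in instMgmt1.sh become per-system variables inside the shared kickstart. A minimal sketch of the mechanism; the exact template lines of the management-node kickstart are not reproduced here:

# Illustrative Cheetah-templated kickstart lines; $NAME, $NETIP, $ICIP and
# $ICNAME are substituted per system from cobbler's --ksmeta option
network --device eth0 --bootproto static --ip $NETIP --hostname $NAME
network --device eth1 --bootproto static --ip $ICIP --hostname $ICNAME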
device scsi cciss
zerombr
clearpart --all --initlabel
part /boot --fstype=ext3 --size=200
part pv.01 --size=1000 --grow
part swap --size=10000 --maxsize=20000
volgroup mgmtNode1VG pv.01
logvol / --vgname=mgmtNode1VG --name=rootvol --size=1000 --grow
bootloader --location mbr
timezone America/New_York
auth --enablemd5 --enableshadow
rootpw --iscrypted $1$bdeGv4w8$zkMITYuaYvSPh2.W.nD.D.
selinux --permissive
reboot
firewall --enabled
repo --name=Cluster --baseurl=http://sat-vm.cloud.lab.eng.bos.redhat.com
fipscheck
imake
kexec-tools
libsane-hpaio
mesa-libGLU-devel
ntp
perl-XML-SAX
perl-XML-NamespaceSupport
python-imaging
python-dmidecode
pexpect
sg3_utils
sgpio
xorg-x11-utils
xorg-x11-server-Xnest
xorg-x11-server-Xvfb
yum-utils
bind

%post
(
# set system time from server then set time to the hwclock
/usr/sbin/ntpdate -bu 10.16.255.2
hwclock --systohc
cat <<EOF >/etc/ntp.conf
restrict default kod nomodify notrap nopeer noquery
restrict 10.16.136.0 mask 255.255.248.0
restrict 127.0.0.1
server 10.16.255.2
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -d 192.168.122.0/255.255.255.0 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/255.255.255.
-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
EOF
#create host file with cluster members
cat <<'EOF'> /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
setsebool -P named_write_master_zones on
chmod 770 /var/named
chkconfig named on
# Set NFS daemon ports
cat <<EOF >>/etc/sysconfig/nfs
QUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020
EOF
# disable LRO on bnx2x - BZ 518531
echo "options bnx2x disable_tpa=1" >> /etc/modprobe.conf
# update software
yum -y update
# prepare action to be performed on next boot
/bin/cp /etc/rc.d/rc.local /etc/rc.d/rc.local.shipped
cat mgmt.rc.local.add >> /etc/rc.d/rc.local
if [[ -x varDefs.sh ]] ; then
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]] ; then
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.sh ]] ; then
  source /root/distro/resources/varDefs.sh
else
  echo "didn't find a varDefs.sh file"
fi
# Install CLI command libs for ILO, Storage Array,
# Virtual Console, and Onboard Administrator
/root/instpyCmds.sh
# Deploy preconfigured cluster configuration file and set SELinux label
/bin/mv /root/cluster.conf /etc/cluster/cluster.conf
/sbin/restorecon /etc/cluster/cluster.conf
# Add both interfaces of each node to known_hosts of the other
ssh ${MGMT1_NAME} date
ssh ${MGMT2_NAME} date
ssh ${MGMT1_NAME%.${FQD}} date
ssh ${MGMT2_NAME%.
EOF
# Mount the shared GFS2 storage on both nodes and make dir for VM config files
/usr/bin/ssh ${MGMT1_IP} /bin/mount -t gfs2 /dev/mapper/GFS2_disk /gfs2_vol/
/bin/mount -t gfs2 /dev/mapper/GFS2_disk /gfs2_vol/
/bin/mkdir -m 666 -p /gfs2_vol/vmconfig
# Copy JON, MRG, and RHEV-M VM config files from mgmt1 to GFS2 shared storage
/usr/bin/ssh ${MGMT1_IP} /bin/cp /etc/libvirt/qemu/*.xml /gfs2_vol/vmconfig
/usr/bin/ssh ${MGMT1_IP} /bin/mv /etc/libvirt/qemu/*.
#Acquire short hostname
SHORTHOST=`hostname --short`
#Modify hostname to match name used to define host HBAs at MSA storage array
NODE=`echo $SHORTHOST | awk '{ sub("mgmt","mgmt_node"); print }'`
#Add the HBAs of this host to the MSA storage array
/root/map_fc_aliases.
elif [[ -x /root/resources/varDefs.sh ]] ; then
  echo /root/resources/varDefs.sh
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.sh ]] ; then
  echo /root/distro/resources/varDefs.sh
  source /root/distro/resources/varDefs.sh
else
  echo "didn't find a varDefs.sh file"
fi
ii) buildMpathAliases.sh – refer to Section 6.2.1

c) createMgmtVMs.sh
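createMgmtVMs.sh is only partially reproduced below. A hedged sketch of how such a guest could be defined on its logical volume with virt-install; the sizing, bridge name, volume group name, and kickstart URL are illustrative assumptions, not the script's actual values. After defining the guests, the script rewrites each VM's libvirt XML, as the surviving jon-vm fragment shows:

#!/bin/bash
# Hedged sketch: define the JON management VM on the JonVMvol logical volume
# (RAM/CPU sizing, bridge "breth0", VG "MgmtServicesVG", and URLs are assumptions)
virt-install --name jon-vm \
  --ram 4096 --vcpus 2 \
  --disk path=/dev/MgmtServicesVG/JonVMvol \
  --network bridge=breth0 \
  --location http://sat-vm.cloud.lab.eng.bos.redhat.com/ks/dist/ks-rhel-x86_64-server-5-u5 \
  --extra-args "ks=http://sat-vm.cloud.lab.eng.bos.redhat.com/cblr/svc/op/ks/system/jon-vm console=ttyS0" \
  --os-variant=rhel5.4 --noautoconsole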
# jon-vm
nlines=`wc -l /etc/libvirt/qemu/jon-vm.xml | awk '{print \$1}'`
hlines=`grep -n "" /etc/libvirt/qemu/jon-vm.xml | cut -d: -f1`
tlines=`expr $nlines - $hlines`
/bin/mv /etc/libvirt/qemu/jon-vm.xml /tmp/jon-vm.xml
/usr/bin/head -n ${hlines} /tmp/jon-vm.xml > /etc/libvirt/qemu/jon-vm.xml
cat <<EOF >> /etc/libvirt/qemu/jon-vm.xml
EOF
/usr/bin/tail -${tlines} /tmp/jon-vm.
• waits for the RHEV-M to be configured
• adds the cluster monitor to the RHEV-M VM entry in the cluster configuration file
• removes the crontab entry

#!/bin/sh
# This script is intended to be submitted as a cron job and will
# check to see if the RHEVM has been configured. Once configured,
# add the phrase to the cluster monitor and clean the cron
# Can be submitted using a line similar to:
# echo "*/15 * * * * /root/rhevm_mon.
else echo "rhev-check.sh not found!" fi 6.4 Creating Management Virtual Machines While the satellite management VM was created early in the process, the remaining management VMs are created when the first management node is installed. This section provides the details used in the creation of the JON, MRG, and RHEV-M VMs. NOTE: Red Hat Bugzilla 593048 may occur during any of the VM installations on KVM hosts. The logs generated during this process must be checked to confirm success. 6.4.
%packages
@ Base
postgresql84
postgresql84-server
java-1.6.0-openjdk.x86_64

%post
(
# MOTD
echo >> /etc/motd
echo "RHN Satellite kickstart on `date +'%Y-%m-%d'`" >> /etc/motd
echo >> /etc/motd
# Set time
ntpdate -bu 10.16.255.2
chkconfig ntpd on
/bin/cat <<EOF >/etc/ntp.conf
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
server 10.16.136.10
server 10.16.136.15
-A RH-Firewall-1-INPUT -p udp --dport 1161 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 1162 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 3528 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 4447 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 7900 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 43333 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 45551 -m state --state NEW -j
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]] ; then
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.sh ]] ; then
  source /root/distro/resources/varDefs.sh
else
  echo "didn't find a varDefs.sh file"
fi
#script default values:
HOSTNAME=`hostname`
IP=`ifconfig eth0 | grep 'inet addr' | sed 's/.*inet addr://' | sed 's/ .
if [ -z "$SVC_SCRIPT" ]; then echo " - No previous installations found." return fi echo " - Found JON/JOPR/RHQ service script at: $SVC_SCRIPT" # find home directory echo " * Finding first-defined JON/JOPR/RHQ home directory..." for i in $SVC_SCRIPT; do for dir in `grep RHQ_SERVER_HOME= $i | sed 's/[-a-zA-Z0-9_]*=//'`; do if [ -a $dir ]; then JON_HOME=$dir; fi done if [ -z "$JON_HOME" ]; then echo " - JON/JOPR/RHQ home directory was not defined in the service script, uninstall failed.
JON_USER="`echo $i | sed 's/[-a-zA-Z0-9]*=//'`" ;; --jon-rootdir=*) JON_ROOT="`echo $i | sed 's/[-a-zA-Z0-9]*=//'`" ;; --jon-url=*) JON_URL="`echo $i | sed 's/[-a-zA-Z0-9]*=//'`" ;; --licenseurl=*) JON_LICENSE_URL="`echo $i | sed 's/[-a-zA-Z0-9]*=//'`" ;; --db-connectionurl=*) DB_CONNECTION_URL="`echo $i | sed 's/[-a-zA-Z0-9]*=//'`" ;; --db-servername=*) DB_SERVER_NAME="`echo $i | sed 's/[-a-zA-Z0-9]*=//'`" ;; --ha-name=*) HA_NAME="`echo $i | sed 's/[-a-zA-Z0-9]*=//'`" ;; --uninstall*) UNINSTALL_ONLY=1 ;; -
fi
# if specified JON user is not present, we must create it
/bin/egrep -i "^$JON_USER" /etc/passwd > /dev/null
if [ $? != 0 ]; then
  echo " - Specified JON local user does not exist; hence, it will be created."
  RECREATE_USER=1
fi
# get jon and pop it into a new jon user directory
echo " * Purging any old installs and downloading JON...
echo "
host all all 127.0.0.1/32 trust
host all all 10.0.0.1/8   md5
# IPv6 local connections:
host all all ::1/128      trust
" > /var/lib/pgsql/data/pg_hba.conf
chkconfig postgresql on
service postgresql restart
echo " * Unzipping and configuring JON..."
# unzip jon
su - $JON_USER -c 'unzip jon.zip'
su - $JON_USER -c 'rm jon.zip'
su - $JON_USER -c "mv jon-server-* $JON_ROOT"
su - $JON_USER -c "mv rhq* $JON_ROOT"
# configure jon's autoinstall
sed -i "s/rhq.autoinstall.enabled=false/rhq.autoinstall.
wget $JON_LICENSE_URL -O $JON_ROOT/jbossas/server/default/deploy/rhq.ear.rej/license/license.xml
echo " * Starting JON for the first time..."
service rhq-server.sh start
# install JON plugins
echo " * Waiting until server installs..."
sleep $AUTOINSTALL_WAITTIME  #wait for autoinstall to finish
echo " * Installing plugins..."
for i in /home/$JON_USER/*.zip ; do unzip -d /tmp/rhq-plugins $i *.jar ; done
find /tmp/rhq-plugins -name "*.
key --skip
user --name=admin --password=$1$1o751Xnc$kmQKHj6gtZ50IILNkHkkF0 --iscrypted

%packages
@ Base
pexpect
ntp
SDL
ruby

%post
(
# MOTD
echo >> /etc/motd
echo "RHN Satellite kickstart on `date +'%Y-%m-%d'`" >> /etc/motd
echo >> /etc/motd
ntpdate -bu 10.16.255.2
chkconfig ntpd on
/bin/cat <<EOF >/etc/ntp.conf
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
server 10.16.136.10
server 10.16.136.15
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 2020 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 2020 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 4672 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 4672 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 5672 -m state --state NEW -
rm -rf /var/lib/pgsql/data
su - postgres -c "initdb -D /var/lib/pgsql/data"
# update software
yum -y update
#prepare actions to be performed on next boot
wget http://10.16.136.1/pub/resources/mrggrid.rc.local.add -O /etc/rc.d/mrggrid.rc.local.add
/bin/cp /etc/rc.d/rc.local /etc/rc.d/rc.local.shipped
cat /etc/rc.d/mrggrid.rc.local.add >> /etc/rc.d/rc.local
) >> /root/ks-post.log 2>&1

mrggrid.rc.local.add
chmod +x /tmp/add_cumin_user.py
/tmp/add_cumin_user.py admin 24^gold
#Start Cumin
echo "--- Starting cumin ---"
chkconfig cumin on
service cumin start
#Restore original rc.local
echo "--- Putting original rc.local back ---"
/bin/mv /etc/rc.d/rc.local /etc/rc.d/rc.local.install
/bin/mv /etc/rc.d/rc.local.shipped /etc/rc.d/rc.local
#setup the perfect number application
wget http://sat-vm.cloud.lab.eng.bos.redhat.com/pub/resources/perfect.tgz -O /tmp/perfect.tgz
tar -xpzf /tmp/perfect.tgz
In the Home tab, click the Add Management Data Source button. Supply a Name and the Address of the MRG VM, then select the Submit button. 6.4.3 RHEV-M VM Because RHEV-M uses Windows as a platform, the media is not supplied by Red Hat. The VM is created using a previously generated image (refer to Appendix A.3) that is dropped into place. However, this is not required and a blank VM may be created to allow an operator to install using their own media.
) >> /var/log/rc.local2.out 2>&1

The instMgmt2.sh script:
• adds a cobbler system entry for the system
• sets the boot order so PXE is first
• powers on/resets the blade
• waits before setting PXE boot to last

#!/bin/bash
# source env vars
if [[ -x varDefs.sh ]] ; then
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]] ; then
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.
• configure the previously generated ssh onto the system
• prepare the mount of the GFS2 file system
• use the other cluster member as an NTP peer
• deploy the cluster configuration file
• execute round-robin ssh connections to create secure shell known hosts
• await a signal from the first node that it has created the management VMs
• shut down satellite on the first cluster node
• configure the LVM volume for cluster use
• start the CMAN and CLVMD cluster services
• create and mount the GFS2 file system
• pla
6.6 Create First Hosts
When the satellite VM boots for the third time, it creates the first of each of the RHEL/KVM and RHEV hypervisor hosts. This third boot occurs after the cluster has been formed, when the satellite VM is started as a cluster service.

sat.rc.local3.add
(
# source env vars
if [[ -x varDefs.sh ]] ; then
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.
# This script will install and prepare a system to be a RHEL host
# Source env vars
if [[ -x varDefs.sh ]] ; then
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]] ; then
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.sh ]] ; then
  source /root/distro/resources/varDefs.sh
else
  echo "didn't find a varDefs.
echo -e "\nCreating cobbler system entry ...\n" cobbler system add --name=${IPname} --profile=${RHELH_PROFILE} --mac=${rawMAC//-/:} --ip=${IPnum} --hostname=${IPname} --dns-name=${IPname} --kopts="console=ttyS0,115200 nostorage" cobbler sync #Remove semaphore /bin/rm /tmp/AvailNameIP # Present storage to hosts echo -e "\nPresenting storage to host ${pname} ...\n" /root/resources/prep_stor_host.
while [[ ! `ilocommand -i //${LOGIN}:${ILO_PW}@${iloIP} set /system1/bootconfig1/bootsource5 bootorder=5 | grep status=0` ]]; do sleep 2; done

a) In addition to using several of the *commands and addnfsexport (part of riccicmd), described in Section 6.1.1, other scripts and commands are called from within the above script. The first of these is GetAvailRhelh.sh, which returns the next available IP name and address that can be used for a RHEL/KVM host.
  indx=`expr $indx + 1`
  tIP=`printf "%s.%d" ${IP_DOMAIN} ${indx}`
  host ${tIP} > /dev/null 2>/dev/null
done
echo "${fhost} ${tIP}"

b) prep_stor_host.sh – adds/verifies each of the host's HBAs, then presents the LUN used for RHEV to all of the host's HBAs

#!/bin/bash
#source variables
if [[ -x varDefs.sh ]]
then
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]]
then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.sh ]]
then
  source /root/resources/varDefs.sh
elif [[ -x /root/distro/resources/varDefs.
echo "Aliases for this host already exist!" else vcmcommand --vcmurl //${LOGIN}:${VCM_PW}@${VCM_IP} show profile ${PNAME} | grep -A 6 "FC SAN Connections" | grep "[0-9]\+\\W\+[0-9]\+" | awk '{print $1 " " $6}' | while read PORT WWN do # if storage see HBA set the nickname, otherwise create host entry if [[ `sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_IP} show hosts | grep $ {WWN//:}` ]] then sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_IP} set host-name id $ {WWN//:} ${PNAME}_${PORT} else sacommand -
indx=1
tname=`printf "rhev-nfs-client-%02d" ${indx}`
while [[ `echo ${nfsClients} | grep ${tname}` ]]
do
  indx=`expr $indx + 1`
  tname=`printf "rhev-nfs-client-%02d" ${indx}`
done
else
  # No existing hosts
  tname="rhev-nfs-client-01"
fi
echo ${tname}

d) listRegSystems_infra.py – lists systems registered to the RHN Satellite

#!/usr/bin/python
"""
This script lists all registered systems. The org is determined by the login global variable below.
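The body of the script is cut off here; a hedged sketch of an equivalent listing script, mirroring the xmlrpclib calls used by deleteSystems_infra.py earlier in this section (URL and credentials as defined there):

#!/usr/bin/python
"""
Hedged sketch: list registered systems, reusing the API calls from
deleteSystems_infra.py earlier in this section.
"""
import xmlrpclib

SATELLITE_URL = "http://sat-vm.cloud.lab.eng.bos.redhat.com/rpc/api"
INFRA_LOGIN = "infra"
INFRA_PASSWD = "24^gold"

client = xmlrpclib.Server(SATELLITE_URL, verbose=0)
key = client.auth.login(INFRA_LOGIN, INFRA_PASSWD)
# print the name of every system registered to this org
for system in client.system.listUserSystems(key):
    print system['name']
client.auth.logout(key)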
• adds the new NFS export stanza to the cluster configuration file
• sets the boot order to boot PXE first
• registers the host with satellite after install
• sets the boot order to boot PXE last

#!/bin/bash
#
# This script will install and prepare a system for use as a RHEV host
# source env vars
if [[ -x varDefs.sh ]] ; then
  source varDefs.sh
elif [[ -x /root/varDefs.sh ]] ; then
  source /root/varDefs.sh
elif [[ -x /root/resources/varDefs.
IPnum=`/root/resources/GetAvailRhevh.sh | awk '{print $2}'`
# Delete any previously defined cobbler system entry
if [[ `cobbler system list | grep ${IPname}` ]]
then
  cobbler system delete --name=${IPname}
fi
# Create cobbler system entry
echo -e "\nCreating cobbler system entry ...
# Wait for system to register with satellite indicating installation completion
echo -e "\nWaiting for system to register with satellite ...\n"
while [[ $initReg -ge `/root/resources/listRegSystems_infra.py | grep -c ${IPname}` ]]; do sleep 5; done
echo -e "\nSatellite registration complete ...\n"
# Change system boot order to boot network last
echo -e "\nChanging system boot order to boot network last ...
# Assumes Satellite uses the lowest IP address
IP_DOMAIN=${SAT_IP%.*}
indx=1
tIP=`printf "%s.%d" ${IP_DOMAIN} ${indx}`
host ${tIP} > /dev/null 2>/dev/null
while [[ $? -eq 0 ]]
do
  indx=`expr $indx + 1`
  tIP=`printf "%s.%d" ${IP_DOMAIN} ${indx}`
  host ${tIP} > /dev/null 2>/dev/null
done
echo "${fhost} ${tIP}"

6.6.1 RHEL Configuration for a RHEV Host
When the RHEL/KVM host is started for the first time, the kickstart installs the system with the base software, including NTP.
timezone America/New_York
auth --enablemd5 --enableshadow
rootpw --iscrypted $1$1o751Xnc$kmQKHj6gtZ50IILNkHkkF0
selinux --permissive
reboot
firewall --enabled
skipx
key --skip

%packages
@ Base
device-mapper-multipath
ntp

%post
(
#set the time from a server, then use this to set the hwclock. Enable ntp
/usr/sbin/ntpdate 10.16.255.
# put standard rc.local back into place
/bin/mv /etc/rc.d/rc.local.shipped /etc/rc.d/rc.local
#reboot to get networks going (restart of network would not address crashed driver)
reboot
EOF
) >> /root/ks-post.log 2>&1

6.7 Configuring RHEV
Section 6.4.3 installed the Windows 2008 VM image used for the RHEV Manager.
Begin the install by opening a Command Prompt window, changing to the C:\saved directory, and issuing the command to install. .\RHEVM_47069.exe -s -f1c:\saved\rhevm_config_2.2u2.iss 6.7.
Xeon Core i7" -CompatibilityVersion $clusversions[$clusversions.length-1] #change cluster policy write "Changing Cluster Policy ..." $clus.SelectionAlgorithm="EvenlyDistribute" $clus.HighUtilization=80 $clus.CpuOverCommitDuratinMinutes=2 $clus = update-cluster $clus #add host write "Adding Host ..." $rhelhost = add-host -Name "rhelh-01.cloud.lab.eng.bos.redhat.com" -Address rhelh01.cloud.lab.eng.bos.redhat.com -RootPassword "24^gold" -HostClusterId $clus.ClusterId -ManagementHostName 10.16.136.
} while ( $timeout -and $stat -ne "Up" )
if ( $timeout -eq 0 ) { throw 'DATACENTERTIMEOUT' }
#Approve any rhev hosts that are present
write "Approve any waiting RHEV Hypervisor hosts ..."
foreach ($candidate in select-host | ? {$_.Status -eq "Pending Approval"}) {
  if ($candidate.Name -like "rhevh-01*") {
    $candidate.PowerManagement.Enabled = $true
    $candidate.PowerManagement.Address = "10.16.136.234"
    $candidate.PowerManagement.Type = "ilo"
    $candidate.PowerManagement.UserName = "Administrator"
    $candidate.
6.7.3 Upload ISO Images
With RHEV-M now operational, upload the guest tools ISO image and the virtio drivers virtual floppy disk: Start -> All Programs -> Red Hat -> RHEV Manager -> ISO Uploader.
7 Dynamic Addition and Removal of Hosts
As workloads ramp up and the demand for CPU cycles increases, additional hosts may be added to bear the burden of additional VMs that can provide added compute power. The scripts instRHELH.sh and instRHEVH.sh may be used to create a new host of each type.

7.1 RHEL / KVM Host Addition
The instRHELH.sh script requires a single passed parameter: the server blade profile name assigned in the Virtual Connect interface.
./instRHELH.sh
vcmcommand --vcmurl //${LOGIN}:${VCM_PW}@${VCM_IP} show server | grep ${pname} > /dev/null 2>/dev/null
if [[ $? -ne 0 ]]
then
  echo "HP Virtual Connect profile $pname not found!"
  exit -2
fi
nblade=`vcmcommand --vcmurl //${LOGIN}:${VCM_PW}@${VCM_IP} show server | grep ${pname} | awk '{print $3}'`
iloIP=`oacommand --oaurl //${LOGIN}:${OA_PW}@${OA_IP} show server info ${nblade} | grep "IP Address" | awk '{print $3}'`
rawMAC=`vcmcommand --vcmurl //${LOGIN}:${VCM_PW}@${VCM_IP} show profile ${pname} | grep public
# release semaphore
/bin/rm /tmp/UpdateNFSClient
# Get the count of systems registered with this name (should be 0)
initReg=`/root/resources/listRegSystems_infra.py | grep -c ${IPname}`
echo -e "\nNumber of systems registered with this name: ${initReg} ...\n"
# Set to boot PXE first
echo -e "\nChanging system boot order to boot network (PXE) first ...
echo "Didn't find a varDefs.
/root/resources/prep_stor_host.sh ${pname}
# Update cluster configuration for NFS presentation
# create a semaphore for unique client names
while [[ -e /tmp/UpdateNFSClient ]] ; do sleep 1; done
touch /tmp/UpdateNFSClient
#Get next available client name
nfsClient=`/root/resources/GetAvailNFSClient.
7.3 Host Removal
Subsequently, remHost.sh is used to remove a host of either type from the RHEV-M configuration.
./remHost.sh rhevh-02

remHost.sh
#!/bin/bash
# This script requires a host (hypervisor) name (also the profile name) as a passed parameter.
# It will remove a host from the cluster configuration, cobbler, and storage.
# It assumes that the host targeted for removal has been removed from RHEV-M.
# Source env vars
if [[ -x varDefs.sh ]] ; then
  source varDefs.sh
elif [[ -x /root/varDefs.
nfsClient=`riccicmd -H ${MGMT1_IP} cluster configuration | grep ${host} | cut -d\" -f4`
if [[ $? -ne 0 ]]
then
  echo "Cluster resource not found for ${host}!"
  exit -3
fi
delnfsexport --ricciroot=/.ricci -H ${MGMT1_IP} rhev-nfs-fs ${nfsClient}
# Unpresent storage from removed host and delete host HBAs
echo -e "\nUnpresenting storage from removed host ${hostFQDN} ...\n"
/root/resources/rem_stor_host.sh ${host}
# Remove Satellite registration
/root/resources/wipeSatReg.
echo "Usage: $0 \n" exit -1 else host=$1 fi # Confirm that the parameter passed is an existing host if [[ ! `sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_IP} show hosts | grep ${host}` ]] then echo ${host} " is not an existing host!" exit -2 fi # Unmap any volumes that may be mapped to the specified host for hostAlias in `sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_IP} show hosts |grep $ {host} | awk '{print $2}'` do for mappedVol in `sacommand --saurl //${MSA_USER}:${MSA_PW}@${MSA_
8 Creating VMs
This section includes the kickstart and other scripts used in the creation of these RHEV VMs:
• Basic RHEL
• RHEL with Java Application
• RHEL with JBoss Application
• RHEL MRG Grid Execute Nodes
• RHEL MRG Grid Rendering
Each of the VMs is created following a similar procedure:
1. Log in to the RHEV Manager
2. Select the Virtual Machines tab
3. Select the New Server button
4. In the New Server Virtual Machine window:
  a) In the General tab, provide at least the following:
    • Name: (e.g.
10. In the console, select the desired option when the PXE menu appears
11. The VM installs and registers with the local RHN Satellite server

8.1 RHEL
The basic RHEL VM was created with the following kickstart file, used to install RHEL 5.5 with the latest available updates from RHN.

rhel55_basic.ks
text
network --bootproto dhcp
lang en_US
keyboard us
url --url http://sat-vm.cloud.lab.eng.bos.redhat.com/ks/dist/ks-rhel-x86_64-server-5-u5
zerombr
clearpart --linux
part /boot --fstype=ext3 --size=200
part pv.
selinux --permissive
reboot
firewall --enabled
skipx
key --skip

%packages
@ Base

%post
(
/usr/bin/yum -y update
) >> /root/ks-post.log 2>&1

8.2 RHEL with Java Application
A self-starting Java workload is demonstrated with VMs created using the following kickstart to:
• install RHEL 5.5 with OpenJDK
• install the latest OS updates
• execute any queued actions on RHN to sync with the activation key

rhel55_java.ks
install
text
network --bootproto dhcp
lang en_US
keyboard us
url --url http://sat-vm.cloud.lab.
%post
(
# execute any queued actions on RHN to sync with the activation key
rhn_check -vv
/usr/bin/yum -y update
) >> /root/ks-post.log 2>&1

8.3 RHEL with JBoss
A VM demonstrating JBoss EAP functionality is created using the following kickstart to:
• install RHEL 5.
%post
(
# set required firewall ports
/bin/cp /etc/sysconfig/iptables /tmp/iptables
/usr/bin/head -n -2 /tmp/iptables > /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 1098 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 1099 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INP
/etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 4447 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 7900 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport 43333 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport 45551 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT
cd /root/rhq-agent/bin
\mv rhq-agent-env.sh rhq-agent-env.sh.orig
wget http://sat-vm.cloud.lab.eng.bos.redhat.com/pub/resources/rhq-agent-env.sh
# deploy the test app
cd /root/jboss-eap*/jboss-as/server/default/deploy
wget http://sat-vm.cloud.lab.eng.bos.redhat.com/pub/resources/jboss-seam-booking-ds.xml
wget http://sat-vm.cloud.lab.eng.bos.redhat.com/pub/resources/jboss-seam-booking.ear
# configure JBoss and JON agent to auto start
cd /etc/init.d
wget http://sat-vm.cloud.lab.eng.bos.redhat.
text
network --bootproto dhcp
lang en_US
keyboard us
url --url http://sat-vm.cloud.lab.eng.bos.redhat.com/ks/dist/ks-rhel-x86_64-server-5-u5
zerombr
clearpart --linux
part /boot --fstype=ext3 --size=200
part pv.01 --size=1000 --grow
part swap --size=2000 --maxsize=20000
volgroup MRGGRID pv.01
# Get configuration files
wget http://sat-vm.cloud.lab.eng.bos.redhat.com/pub/resources/sesame.conf -O /etc/sesame/sesame.conf
wget http://sat-vm.cloud.lab.eng.bos.redhat.com/pub/resources/condor_config.local.mrgexec -O /var/lib/condor/condor_config.local
/usr/bin/yum -y update
chkconfig condor on
chkconfig qpidd on
condor_status -any
chkconfig sesame on
chkconfig ntpd on
) >> /root/ks-post.log 2>&1

The kickstart installs a simple RHEL VM including the software and configuration of an MRG Grid Execute node.
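Once such a node boots and condor starts, it should report in to the pool. A quick check from the MRG Manager (the output formatting shown is illustrative):

# List every daemon known to the collector; the new execute node should appear
condor_status -any
# Show only execute slots, printing machine name and state
condor_status -format "%s " Name -format "%s\n" State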
3. Shut down the VM
4. In the RHEV-M Virtual Machines tab, select the VM that was just shut down and click the Make Template button, specifying a Name.
5. Create several VMs from the template using the add-vms.
if ($clus.
8.5 RHEL MRG Grid Rendering
A VM to execute the MRG Grid rendering application is created using the following kickstart to:
• install RHEL 5.
/bin/cp /etc/sysconfig/iptables /tmp/iptables
/usr/bin/head -n -2 /tmp/iptables > /etc/sysconfig/iptables
cat <<EOF >>/etc/sysconfig/iptables
-A RH-Firewall-1-INPUT -p tcp --dport 4672 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 4672 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 5672 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 5672 -m state --state NEW -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 45672 -m state --state NEW -j ACCEPT
0.1.noarch.rpm \
  http://download1.rpmfusion.org/nonfree/el/updates/testing/5/i386/rpmfusion-nonfree-release-5-0.1.noarch.rpm
wget http://irish.lab.bos.redhat.com/pub/kits/fribidi-0.10.7-5.1.i386.rpm
wget http://irish.lab.bos.redhat.com/pub/kits/fribidi-0.10.7-5.1.x86_64.rpm
yum -y localinstall fribidi-0.10.7-5.1.x86_64.rpm
yum -y localinstall fribidi-0.10.7-5.1.i386.rpm
yum -y install mencoder
) >> /root/ks-post.log 2>&1

The kickstart creates a RHEL VM that mounts the render directory from the MRG Manager.
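The mount itself is a standard NFS mount. A minimal sketch, assuming the MRG Manager exports the render tree at /home/admin/render (the export path is an assumption based on the paths used by the render scripts in Appendix A.4.8):

# Assumption: mrg-vm exports /home/admin/render over NFS
mkdir -p /home/admin/render
mount -t nfs mrg-vm.cloud.lab.eng.bos.redhat.com:/home/admin/render /home/admin/render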
9 References
1. The NIST Definition of Cloud Computing, Version 15, 07 October 2009
   http://csrc.nist.gov/groups/SNS/cloud-computing/cloud-def-v15.doc
2. Above the Clouds: A Berkeley View of Cloud Computing, Technical Report No. UCB/EECS-2009-28, Department of Electrical Engineering and Computer Science, University of California at Berkeley
   http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf
3.
Appendix A: Configuration Files, Images, Etc.
This appendix contains the various configuration files, images, and tar/zip files used in the construction of the infrastructure or in the various use cases demonstrated.

A.1 Management Nodes
A.1.1 multipath.conf.template
The multipath.conf.template file provides the multipath settings for the specific hardware in use in this configuration.
hardware_handler "0" path_selector "round-robin 0" prio_callout "/sbin/mpath_prio_alua /dev/%n" path_grouping_policy group_by_prio failback immediate rr_weight uniform no_path_retry 3 rr_min_io 100 path_checker tur } } A.1.2 cluster.conf A cluster is instantly formed by depositing the following /etc/cluster/cluster.conf file in place and starting the cluster daemons. Additions to cluster.conf to add the NFS clients are handled by the riccicmd scripting library.
A.2 Satellite
A.2.1 answers.txt

# Administrator's email address. Required.
# Multiple email addresses can be used, separated with commas.
#
# Example:
# admin-email = user@example.com, otheruser@example.com
admin-email = @.com

## RHN connection information.
#
# Passed to rhn-register to register the system if it is not already
# registered.
#
# Only required if the system is not already registered, or if the
# '--re-register' commandline option is used.
# Example:
# ssl-set-org-unit = Information Systems Department
ssl-set-org-unit = Reference Architecture

# Location information for the SSL certificates. Required.
#
# Example:
# ssl-set-city = New York
# ssl-set-state = NY
# ssl-set-country = US
ssl-set-city = Westford
ssl-set-state = MA
ssl-set-country = US

# Password for CA certificate. Required. Do not lose or forget this
# password!
#
# Example:
# ssl-password = c5esWL7s
ssl-password = 24^gold

## Database connection information.
# ssl-config-sslvhost =
ssl-config-sslvhost = Y

# *** Options below this line usually don't need to be set. ***

# The Satellite server's hostname. This must be the working FQDN of
# the satellite server.
#
hostname = sat-vm.cloud.lab.eng.bos.redhat.com

# The mount point for the RHN package repository. Defaults to
# /var/rhn/satellite
#
# mount-point =

# Mail configuration.
#
# mail-mx =
# mdom =

# 'Common name' for the SSL certificates. Defaults to the system's
# hostname, or whatever 'hostname' is set to.
A.2.2 AppPGP
When generating the application channel security key for satellite, the input to gpg is specified in this file.

%echo Generating a standard key
Key-Type: DSA
Key-Length: 1024
Subkey-Type: ELG-E
Subkey-Length: 1024
Name-Real: Vijay Trehan
Name-Comment: Cloud Foundations
Name-Email: sm@redhat.com
Expire-Date: 0
Passphrase: Cloud Foundations
# Do a commit here, so that we can later print "done" :-)
%commit
%echo done

A.2.3 DNS database files
Nine files exist for the named/DNS database.
3    PTR  cloud-138-3.cloud.lab.eng.bos.redhat.com.
[...]
253  PTR  cloud-138-253.cloud.lab.eng.bos.redhat.com.
254  PTR  cloud-138-254.cloud.lab.eng.bos.redhat.com.
255  PTR  cloud-138-255.cloud.lab.eng.bos.redhat.com.

A.2.4 dhcpd.conf
This file is placed in /etc/dhcpd.conf.

#
# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp*/dhcpd.conf.sample
#
authoritative;
ddns-update-style interim;
ignore client-updates;
subnet 10.16.136.0 netmask 255.255.248.0 {
    option routers 10.16.143.
}
host ra-c7000-01-db4-ilo {
    option host-name "ra-c7000-01-db4-ilo.cloud.lab.eng.bos.redhat.com";
    hardware ethernet D8:D3:85:63:FF:74;
    fixed-address 10.16.136.234;
}
host ra-c7000-01-db5-ilo {
    option host-name "ra-c7000-01-db5-ilo.cloud.lab.eng.bos.redhat.com";
    hardware ethernet D8:D3:85:63:DE:92;
    fixed-address 10.16.136.235;
}
host ra-c7000-01-db6-ilo {
    option host-name "ra-c7000-01-db6-ilo.cloud.lab.eng.bos.redhat.com";
    hardware ethernet D8:D3:85:63:EE:DA;
    fixed-address 10.16.136.
}
host ra-c7000-01-db14-ilo {
    option host-name "ra-c7000-01-db14-ilo.cloud.lab.eng.bos.redhat.com";
    hardware ethernet D8:D3:85:63:EE:26;
    fixed-address 10.16.136.244;
}
host ra-c7000-01-db15-ilo {
    option host-name "ra-c7000-01-db15-ilo.cloud.lab.eng.bos.redhat.com";
    hardware ethernet D8:D3:85:63:FF:1A;
    fixed-address 10.16.136.245;
}
host ra-c7000-01-db16-ilo {
    option host-name "ra-c7000-01-db16-ilo.cloud.lab.eng.bos.redhat.com";
    hardware ethernet D8:D3:85:5F:49:3E;
    fixed-address 10.16.136.
search cloud.lab.eng.bos.redhat.com bos.redhat.com
nameserver 10.16.136.10
nameserver 10.16.136.1
nameserver 10.16.255.2

A.2.7 settings
Cobbler's configuration is specified by /etc/cobbler/settings.

---
# cobbler settings file
# restart cobblerd and run "cobbler sync" after making changes
# This config file is in YAML 1.0 format
# see http://yaml.
# allow access to the filesystem as Cheetah templates are evaluated
# by cobblerd as code.
cheetah_import_whitelist:
 - "random"
 - "re"
 - "time"

# if no kickstart is specified, use this template (FIXME)
default_kickstart: /var/lib/cobbler/kickstarts/default.ks

# cobbler has various sample kickstart templates stored
# in /var/lib/cobbler/kickstarts/. This controls
# what install (root) password is set up for those
# systems that reference this variable.
# controls whether cobbler will add each new profile entry to the default
# PXE boot menu. This can be over-ridden on a per-profile
# basis when adding/editing profiles with --enable-menu=0/1. Users
# should ordinarily leave this setting enabled unless they are concerned
# with accidental reinstalls from users who select an entry at the PXE
# boot menu.
  ro: ~
  ip: off
  vnc: ~

# configuration options if using the authn_ldap module. See
# the Wiki for details. This can be ignored if you are not using
# LDAP for WebUI/XMLRPC authentication.
ldap_server: "ldap.example.com"
ldap_base_dn: "DC=example,DC=com"
ldap_port: 389
ldap_tls: 1
ldap_anonymous_bind: 1
ldap_search_bind_dn: ''
ldap_search_passwd: ''
ldap_search_prefix: 'uid='

# set to 1 to enable Cobbler's DHCP management features.
# the choice of DHCP management engine is in /etc/cobbler/modules.
# if using cobbler with manage_dhcp, put the IP address
# of the cobbler server here so that PXE booting guests can find it.
# if you do not set this correctly, this will be manifested in TFTP open timeouts.
next_server: sat-vm.cloud.lab.eng.bos.redhat.com

# if using cobbler with manage_dhcp and ISC, omapi allows realtime DHCP
# updates without restarting ISC dhcpd. However, it may cause
# problems with removing leases and make things less reliable.
# Are you using a Red Hat management platform in addition to Cobbler?
# Cobbler can help you register to it. Choose one of the following:
#    "off"    : I'm not using Red Hat Network, Satellite, or Spacewalk
#    "hosted" : I'm using Red Hat Network
#    "site"   : I'm using Red Hat Satellite Server or Spacewalk
# You will also want to read: https://fedorahosted.org/cobbler/wiki/TipsForRhn
redhat_management_type: "site"

# if redhat_management_type is enabled, choose your server
# "management.example.
restart_dhcp: 1

# if set to 1, allows /usr/bin/cobbler-register (part of the koan package)
# to be used to remotely add new cobbler system records to cobbler.
# this effectively allows for registration of new hardware from system
# records.
register_new_installs: 1

# install triggers are scripts in /var/lib/cobbler/triggers/install
# that are triggered in kickstart pre and post sections. Any
# executable script in those directories is run. They can be used
# to send email or perform other actions.
tftpd_conf: /etc/xinetd.d/tftp

# cobbler's web directory. Don't change this setting -- see the
# Wiki on "relocating your cobbler install" if your /var partition
# is not large enough.
webdir: /var/www/cobbler

# cobbler's public XMLRPC listens on this port. Change this only
# if absolutely needed, as you'll have to start supplying a new
# port option to koan if it is not the default.
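As the header of the settings file notes, edits only take effect after cobblerd is restarted and a sync is run. For example:

# Apply changes made to /etc/cobbler/settings
service cobblerd restart
cobbler sync
# Optionally verify the overall cobbler configuration
cobbler check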
b) Select Install now
c) Choose OS Version (e.g., Windows Server 2008 R2 Enterprise (Full Installation))
d) Accept License terms
e) Choose Custom install
f) Choose Disk size (e.g., 30GB)
g) Upon reboot after Windows installation, set Administrator password
h) Optionally, activate Windows
i) Set the time zone
j) Configure networking
   • Switch CD/DVD to RHEV-tools ISO image
   • Install virtio drivers (e.g., RHEV-Network64)
   • Set TCP/IP properties
k) Configure/Install updates
l) Install .
b) Update libvirtd with the changes
   virsh define /etc/libvirt/qemu/RHEVM.xml
c) Start the VM
   virsh start RHEVM
6. Set Network Properties
7. Perform System Preparation (\Windows\System32\sysprep\sysprep.exe)
   • System Cleanup Actions: Enter System Out-of-Box-Experience
   • Select the Generalize checkbox
   • Shutdown Options: Shutdown
8. On the host, create the image file
   dd if=/dev/MgmtServicesVG/RHEVMvol of=/rhevm-`date +%y%m%d`.
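The captured image can later be written back to the logical volume to restore the RHEV Manager VM. A sketch, with an illustrative image file name:

# Illustrative restore of a previously captured RHEV-M image
dd if=/rhevm-100511.img of=/dev/MgmtServicesVG/RHEVMvol bs=1M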
CCB_ADDRESS = $(PUBLIC_HOST):$(PUBLIC_PORT)
COLLECTOR.CCB_ADDRESS =

# Avoid needing CCB within the VPN
PRIVATE_NETWORK_NAME = mrg-vm

# Set TCP_FORWARDING_HOST so CCB will advertise its public
# address. Without this, it will advertise its private address
# and the Starter will not be able to connect to reverse its
# connection to the Shadow.
COLLECTOR.
CONDOR_DEVELOPERS = NONE
CONDOR_HOST = mrg-vm.cloud.lab.eng.bos.redhat.com
COLLECTOR_HOST = $(CONDOR_HOST)
COLLECTOR_NAME = Grid On a Cloud
FILESYSTEM_DOMAIN = $(FULL_HOSTNAME)
UID_DOMAIN = $(FULL_HOSTNAME)
START = TRUE
SUSPEND = FALSE
PREEMPT = FALSE
KILL = FALSE
ALLOW_WRITE = $(FULL_HOSTNAME), *.cloud.lab.eng.bos.redhat.com
DAEMON_LIST = MASTER, STARTD
NEGOTIATOR_INTERVAL = 20
TRUST_UID_DOMAIN = TRUE
IN_HIGHPORT = 9700
IN_LOWPORT = 9600
# Plugin configuration
MASTER.PLUGINS = $(LIB)/plugins/MgmtMasterPlugin-plugin.so
QMF_BROKER_HOST = mrg-vm.cloud.lab.eng.bos.redhat.com

A.4.4 cumin.conf
This file is downloaded to /etc/cumin/cumin.conf.

[main]
data: postgresql://cumin@localhost/cumin
addr: 10.16.136.50
ssl: yes

A.4.5 pg_hba.
# METHOD can be "trust", "reject", "md5", "crypt", "password", # "krb5", "ident", or "pam". Note that "password" sends passwords # in clear text; "md5" is preferred since it sends encrypted passwords. # # OPTION is the ident map or the name of the PAM service, depending on METHOD. # # Database and user names containing spaces, commas, quotes and other special # characters must be quoted.
A.4.7 Blender
Blender is an open source 3D content creation suite and is used in the movie-rendering example in this paper. Version 2.48a was used for compatibility with the version of Python in Red Hat Enterprise Linux 5.5 and was downloaded from http://download.blender.org/release/Blender2.48a/blender-2.48a-linux-glibc236-py24-x86_64-static.tar.bz2.

A.4.8 render.tgz
This render tar file contains the command scripts and source to render a small movie clip.
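A minimal sketch of staging the render kit on a node (the URL follows the pattern of the other resources served by the Satellite and is an assumption, as is the unpack location):

# Assumption: render.tgz is published alongside the other /pub/resources files
wget http://sat-vm.cloud.lab.eng.bos.redhat.com/pub/resources/render.tgz
tar xzf render.tgz -C /home/admin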
  [BLEND_FILE, FRAME_DIR, frame])
jobFile.write("Log = %s/frame%d.log\n" % [LOGS_HOME, frame])
jobFile.write("Output = %s/frame%d.out\n" % [LOGS_HOME, frame])
jobFile.write("Error = %s/frame%d.err\n" % [LOGS_HOME, frame])
jobFile.write("transfer_executable = false\n")
jobFile.write("Requirements = Arch =?= \"X86_64\"\n")
jobFile.write("QUEUE\n")
jobFile.close
end

movieFile = File.new("%s/create_movie.sub" % [JOBS_HOME], "w")
movieFile.write("Universe = vanilla\n")
movieFile.
sleep 5
echo Submitting DAG ...
condor_submit_dag /home/admin/render/production/jobs/render.dag

hello.sh – a simple script which writes the start time of the render process

#!/bin/bash
echo "`date '+%H:%M:%S'` Starting Render Job" >> /home/admin/render/production/logs/testloop.out

testjob – condor job description file used to submit the hello.sh script

#Test Job
Executable = /home/admin/render/production/jobs/hello.sh
Universe = vanilla
#input = test.
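For reference, the test job can be submitted and monitored with the standard Condor tools (paths as in the scripts above):

condor_submit /home/admin/render/production/jobs/testjob
condor_q        # watch the job move through the queue
condor_status   # confirm execute slots are available to run it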
• jon-plugin-pack-soa-2.4.0.GA.zip – plugin supporting the SOA platform server
• rhq-enterprise-agent-3.0.0.GA.jar – contains the JON RHQ agent, comprising:
  ◦ enterprise-agent-default.GA.jar – a softlink maintained to reference the latest obtained jarfile for the agent, in this case v3
  ◦ rhq-install.sh – the agent installer
  ◦ rhq-agent-env.sh – the init file residing on each JBoss instance, reporting JBoss-specific information to the JON server.
# is used. If this and RHQ_AGENT_JAVA_HOME are not set, the
# agent's embedded JRE will be used.
#
#RHQ_AGENT_JAVA_EXE_FILE_PATH="/usr/local/bin/java"

# RHQ_AGENT_JAVA_OPTS - Java VM command line options to be
# passed into the agent's VM. If this is not defined this script will
# pass in a default set of options. If this is set, it completely
# overrides the agent's defaults. If you only want to add options to
# the agent's defaults, then you will want to use
# RHQ_AGENT_ADDITIONAL_JAVA_OPTS instead.
# RHQ_AGENT_IN_BACKGROUND - If this is defined, the RHQ Agent JVM will
# be launched in the background (thus causing this script to exit
# immediately). If the value is something other than "nofile", it will
# be assumed to be a full file path which this script will create and
# which will contain the agent VM's process pid value. If this is not
# defined, the VM is launched in foreground and this script blocks
# until the VM exits, at which time this script will also exit.
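Putting those variables together, a sketch of launching the agent in the background with a pid file (the agent path matches the JBoss kickstart above; the pid file location is an assumption):

# Assumption: agent unpacked under /root/rhq-agent as in the JBoss kickstart
cd /root/rhq-agent/bin
RHQ_AGENT_IN_BACKGROUND=/var/run/rhq-agent.pid
export RHQ_AGENT_IN_BACKGROUND
./rhq-agent.sh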
#Script Generated by user
#Generated on: Tue May 11 15:37:56 2010

#Set Enclosure Time
SET TIMEZONE EST5EDT

#Set Enclosure Information
SET ENCLOSURE ASSET TAG ""
SET ENCLOSURE NAME "ra-c7000-01"
SET RACK NAME "L2E6"
SET POWER MODE REDUNDANT
SET POWER SAVINGS ON
#Power limit must be within the range of 2700-16400
SET POWER LIMIT OFF
#Enclosure Dynamic Power Cap must be within the range of 2570-7822
SET ENCLOSURE POWER_CAP OFF
SET ENCLOSURE POWER_CAP_BAYS_TO_EXCLUDE None

#Set PowerDelay Information
SET INTERCO
SET SERVER POWERDELAY 7A 0
SET SERVER POWERDELAY 8A 0
SET SERVER POWERDELAY 9A 0
SET SERVER POWERDELAY 10A 0
SET SERVER POWERDELAY 11A 0
SET SERVER POWERDELAY 12A 0
SET SERVER POWERDELAY 13A 0
SET SERVER POWERDELAY 14A 0
SET SERVER POWERDELAY 15A 0
SET SERVER POWERDELAY 16A 0
SET SERVER POWERDELAY 1B 0
SET SERVER POWERDELAY 2B 0
SET SERVER POWERDELAY 3B 0
SET SERVER POWERDELAY 4B 0
SET SERVER POWERDELAY 5B 0
SET SERVER POWERDELAY 6B 0
SET SERVER POWERDELAY 7B 0
SET SERVER POWERDELAY 8B 0
SET SERVER POWERDEL
#Set SNMP Information
SET SNMP CONTACT ""
SET SNMP LOCATION ""
SET SNMP COMMUNITY READ "public"
SET SNMP COMMUNITY WRITE ""
DISABLE SNMP

#Set Remote Syslog Information
SET REMOTE SYSLOG SERVER ""
SET REMOTE SYSLOG PORT 514
DISABLE SYSLOG REMOTE

#Set Enclosure Bay IP Addressing (EBIPA) Information
SET EBIPA SERVER 10.16.136.231 1
SET EBIPA SERVER 10.16.136.232 2
SET EBIPA SERVER 10.16.136.233 3
SET EBIPA SERVER 10.16.136.234 4
SET EBIPA SERVER 10.16.136.235 5
SET EBIPA SERVER 10.16.136.
SET EBIPA SERVER NONE 5B
SET EBIPA SERVER NONE 6B
SET EBIPA SERVER NONE 7B
SET EBIPA SERVER NONE 8B
SET EBIPA SERVER NONE 9B
SET EBIPA SERVER NONE 10B
SET EBIPA SERVER NONE 11B
SET EBIPA SERVER NONE 12B
SET EBIPA SERVER NONE 13B
SET EBIPA SERVER NONE 14B
SET EBIPA SERVER NONE 15B
SET EBIPA SERVER NONE 16B
SET EBIPA SERVER NETMASK 255.255.248.0
SET EBIPA SERVER GATEWAY 10.16.143.254
SET EBIPA SERVER DOMAIN "cloud.lab.eng.bos.redhat.com"
ADD EBIPA SERVER DNS 10.16.136.1
ADD EBIPA SERVER DNS 10.16.255.
ASSIGN OA "mlamouri" ENABLE USER "mlamouri" ADD USER "spr" SET USER CONTACT "spr" "" SET USER FULLNAME "spr" "" SET USER ACCESS "spr" ADMINISTRATOR ASSIGN INTERCONNECT 1,2,3,4,5,6,7,8 "spr" ASSIGN OA "spr" ENABLE USER "spr" ADD USER "testuser" SET USER CONTACT "testuser" "" SET USER FULLNAME "testuser" "" SET USER ACCESS "testuser" OPERATOR ENABLE USER "testuser" ADD USER "tim" SET USER CONTACT "tim" "" SET USER FULLNAME "tim" "Tim Wilkinson" SET USER ACCESS "tim" ADMINISTRATOR ASSIGN INTERCONNECT 1,2,3,4,5
# If your connection is dropped this script may not execute to conclusion.
#
SET OA NAME 2 ra-c7000-01-oa2
SET IPCONFIG STATIC 2 10.16.136.254 255.255.248.0 10.16.143.254 0.0.0.0 0.0.0.0
SET NIC AUTO 2
#
SET OA NAME 1 ra-c7000-01-oa1
SET IPCONFIG STATIC 1 10.16.136.255 255.255.248.0 10.16.143.254 0.0.0.0 0.0.0.
----------------------------------------------------------------------------
MAC Address Type : VC-Defined
Pool ID          : 10
Address Start    : 00-17-A4-77-24-00
Address End      : 00-17-A4-77-27-FF
----------------------------------------------------------------------------
Fibre Channel WWN Address Settings
----------------------------------------------------------------------------
WWN Address Type : VC-Defined
Pool ID          : 10
Address Start    : 50:06:0B:00:00:C2:86:00
Address End      : 50:06:0B:00:00:C2:89:FF
****************************************************************************
****************************************************************************
ENET-VLAN INFORMATION
****************************************************************************
VLAN Tag Control      : Tunnel
Shared Server VLAN ID : false
Preferred Speed       : Auto
Max Speed             : Unrestricted
***************************************************************************
EXTERNAL-MANAGER INFORMATION
***************************************************************************
No external manager exists
****************************************************************************
======================================================================
ID      Enclosure    Bay  Type     Firmware Version           Status
======================================================================
enc0:1  ra-c7000-01  1    VC-ENET  2.32 2010-01-06T02:07:47Z  OK
enc0:2  ra-c7000-01  2    VC-ENET  2.32 2010-01-06T02:07:47Z  OK
enc0:3  ra-c7000-01  3    VC-FC    1.01 v6.1.0_36             OK
enc0:4  ra-c7000-01  4    VC-FC    1.01 v6.1.
****************************************************************************
MAC-CACHE INFORMATION
****************************************************************************
Enabled          : true
Refresh Interval : 5
****************************************************************************
NETWORK INFORMATION
****************************************************************************
===========================================================================
Name  Status  Shared  VLAN ID  Native  Private  Preferre
BL460c G6
-------------------------------------------------------------------------
enc0:4  ra-c7000-01  4  ProLiant BL460c G6  OK  Off
-------------------------------------------------------------------------
enc0:5  ra-c7000-01  5  ProLiant BL460c G6  OK  Off
-------------------------------------------------------------------------
enc0:6  ra-c7000-01  6  ProLiant BL460c G6  OK  Off
-------------------------------------------------------------------------
enc0:7  ra-c7000-01  7  ProLiant BL460c G6  OK  O
****************************************************************************
==========================================================================
Port          Server   I/O Module  Adapter Type  ID          Profile
==========================================================================
1 (Flex NIC)  enc0:1   1           --            enc0:1:d1   --
--------------------------------------------------------------------------
1 (Flex NIC)  enc0:10  1           --            enc0:1:d10  --
--------------------------------------------------------------------------
1             enc0:11
--------------------------------------------------------------------------
2 (Flex NIC)  enc0:1   2  --                         enc0:2:d1   --
--------------------------------------------------------------------------
2 (Flex NIC)  enc0:10  2  --                         enc0:2:d10  --
--------------------------------------------------------------------------
2             enc0:11  2  Flex-10 Embedded Ethernet  enc0:2:d11  --
--------------------------------------------------------------------------
2             enc0:12  2  Flex-10 Embedded Ethernet  enc0:2:d12  --
--------------------------------------------------------------------------
--------------------------------------------------------------------------
1  enc0:10  3  QLogic QMH2562 8Gb FC HBA for HP BladeSystem c-Class  enc0:3:d10  test1
--------------------------------------------------------------------------
1  enc0:11  3  QLogic QMH2562 8Gb FC HBA for HP BladeSystem c-Class  enc0:3:d11  --
--------------------------------------------------------------------------
1  enc0:12  3  QLogic QMH2562 8Gb FC HBA for HP BladeSystem c-Class  enc0:3:d12  --
--------------------------------------------------------------------------
--------------------------------------------------------------------------
1  enc0:8  3  QLogic QMH2562 8Gb FC HBA for HP BladeSystem c-Class  enc0:3:d8  --
--------------------------------------------------------------------------
1  enc0:9  3  QLogic QMH2562 8Gb FC HBA for HP BladeSystem c-Class  enc0:3:d9  --
--------------------------------------------------------------------------
2  enc0:1  4  QLogic QMH2562 8Gb FC HBA for HP BladeSystem c-Class  enc0:4:d1  mgmt_node1
--------------------------------------------------------------------------
--------------------------------------------------------------------------
2  enc0:5  4  QLogic QMH2562 8Gb FC HBA for HP BladeSystem c-Class  enc0:4:d5  --
--------------------------------------------------------------------------
2  enc0:6  4  QLogic QMH2562 8Gb FC HBA for HP BladeSystem c-Class  enc0:4:d6  --
--------------------------------------------------------------------------
2  enc0:7  4  QLogic QMH2562 8Gb FC HBA for HP BladeSystem c-Class  enc0:4:d7  --
--------------------------------------------------------------------------
****************************************************************************
SSL-CERTIFICATE INFORMATION
****************************************************************************
============================================================================
Serial Number            Issuer                       Subject
============================================================================
80:39:6F:C6:A7:7B:A8:E3  VCEXTW29520140:Virtual       VCEXTW29520140:Virtual
                         Connect Manager:             Connect Manager:
                         Hewlett-Packard              Hewlett-Packard
----------------------------------------------------------------------------
enc0:1:X6  ra-c7000-01  Not Linked       absent   Auto         --
--------------------------------------------------------------------------
enc0:1:X7  ra-c7000-01  Linked (Active)  SFP-DAC  Auto (10Gb)  public
--------------------------------------------------------------------------
enc0:2:X1  ra-c7000-01  Not Linked       CX4      Auto         --
--------------------------------------------------------------------------
enc0:2:X2  ra-c7000-01  Not Linked       absent   Auto         --
--------------------------------------------------------------------------
enc0:2:X3  r
****************************************************************************
UPLINKSET INFORMATION
****************************************************************************
No shared uplink port sets exist
****************************************************************************
USER INFORMATION
****************************************************************************
=================================================================
User Name  Privileges  Full Name  Contact Info  Enabled
==============
Appendix B: Bugzillas
The following Red Hat Bugzilla reports were open issues at the time of this exercise.
1. BZ 518531 - Need "disable_tpa" parameter with bnx2x to get networking to work for guests
   https://bugzilla.redhat.com/show_bug.cgi?id=518531
2. BZ 593048 - Intermittent "could not query memory balloon allocation" errors with virt-install
   https://bugzilla.redhat.com/show_bug.cgi?id=593048
3. BZ 593093 - not all cobbler kickstart variables correctly applied
   https://bugzilla.redhat.com/show_bug.