Cloud controller
The cloud controller provides the central management system for multi-node OpenStack deployments. Typically the cloud
controller manages authentication and sends messages to all the systems through a message queue. For our example, the
cloud controller has a collection of nova-* components that represent the global state of the cloud, talk to services such as
authentication, maintain information about the cloud in a database, communicate with all compute nodes and storage
workers through a queue, and provide API access. Each service running on a designated cloud controller may be broken out
into separate nodes for scalability or availability. It is also possible to use virtual machines for all or some of the services
that the cloud controller manages, such as the message queuing.
In this reference architecture, we used a single cloud controller server to host the OpenStack management services. By
doing this, we trade fault tolerance for simplicity. It is possible to configure a fully redundant and highly available
cloud controller by replicating services and clustering the database storage and message queue capability. We
have chosen an implementation that runs all services directly on the cloud controller. This provides a simple and scalable
configuration that works well for small to medium-sized clouds.
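As a rough illustration of how everything funnels through this single controller, the Python sketch below uses the Icehouse-era python-novaclient to ask the controller which nova-* services it is running. The hostname ("controller"), user, password, and tenant shown are assumptions for the sketch only, not values defined by this reference architecture.

    from novaclient.v1_1 import client

    # Connect to the Nova API published by the cloud controller.
    # All credentials and the auth URL below are illustrative placeholders.
    nova = client.Client("admin", "ADMIN_PASS", "admin",
                         "http://controller:5000/v2.0")

    # Each entry corresponds to a nova-* service (scheduler, conductor,
    # compute, ...) registered with the controller.
    for svc in nova.services.list():
        print("%s %s %s" % (svc.binary, svc.host, svc.state))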
Database
Most OpenStack Compute central services, and currently also the nova-compute nodes, use the database for stateful
information. Loss of database availability leads to errors. As a result, in a production deployment you should consider
clustering your databases in some way to make them fault tolerant. As shown, this reference architecture does not
implement a clustered database configuration.
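For illustration, OpenStack services reach this database through ordinary SQLAlchemy connections. The sketch below shows a minimal connectivity check of that kind; the MySQL host, database name, and password are assumptions, not values prescribed here.

    from sqlalchemy import create_engine

    # Illustrative connection string of the form OpenStack services use to
    # reach the controller's database; host, schema, and password are
    # placeholders only.
    engine = create_engine("mysql://nova:NOVA_DBPASS@controller/nova")

    # If the database is unavailable this raises an OperationalError, which is
    # the failure mode described above.
    with engine.connect() as conn:
        print(conn.execute("SELECT 1").scalar())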
Message queue
Most OpenStack Compute services communicate with each other using the Message Queue. In general, if the message
queue fails or becomes inaccessible, the cluster grinds to a halt and ends up in a “read only” state, with information stuck
at the point where the last message was sent. Accordingly, we recommend that in a production OpenStack installation you
cluster the message queue; RabbitMQ has built-in capabilities to do this. This reference architecture does not
implement a clustered message queuing capability.
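As a minimal sketch of what "inaccessible" means in practice, the following snippet uses the generic pika AMQP client (not the library OpenStack services themselves use) to check that the controller's RabbitMQ broker accepts connections. The hostname and credentials are assumptions for the example.

    import pika

    # Placeholder broker host and credentials; adjust for a real deployment.
    params = pika.ConnectionParameters(
        host="controller",
        credentials=pika.PlainCredentials("guest", "RABBIT_PASS"))

    # BlockingConnection raises AMQPConnectionError if the broker is down,
    # the condition under which OpenStack services stop making progress.
    connection = pika.BlockingConnection(params)
    print("message queue reachable: %s" % connection.is_open)
    connection.close()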
Images
The OpenStack Image Catalog and Delivery service (Glance) consists of two parts: glance-api and glance-registry. The former is
responsible for the delivery of images; compute nodes use it to download images from the back-end. The latter
maintains the metadata associated with virtual machine images and requires a database.
The glance-api service is an abstraction layer that allows a choice of back-end for storing deployment images.
The most common back-end drivers are:
• Swift (OpenStack Object Storage): allows you to store images as objects.
• File system: uses any traditional file system to store the images as files.
• S3: allows you to fetch images from Amazon S3.
• HTTP: allows you to fetch images from a web server. You cannot write images by using this mode.
This is not an exhaustive list, however. A complete listing can be found in the online documentation for Glance
(http://docs.openstack.org/icehouse/install-guide/install/apt/content/image-service-overview.html). This reference
architecture specifies a robust Swift object storage service spread across two nodes. We recommend using this as a scalable
place to store your Glance-managed images. The built-in replication mechanism in Swift ensures the image data remains
available should a server fail.
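To make the workflow concrete, the sketch below uploads an image through glance-api using the Icehouse-era Python clients, letting Glance write the bits to whichever back-end is configured (Swift in this architecture). The credentials, auth URL, and image file name are illustrative assumptions only.

    from keystoneclient.v2_0 import client as ksclient
    from glanceclient import Client as GlanceClient

    # Authenticate and locate the image service endpoint; all credentials and
    # the auth URL are placeholders.
    keystone = ksclient.Client(username="admin", password="ADMIN_PASS",
                               tenant_name="admin",
                               auth_url="http://controller:5000/v2.0")
    glance_endpoint = keystone.service_catalog.url_for(service_type="image")
    glance = GlanceClient("1", endpoint=glance_endpoint,
                          token=keystone.auth_token)

    # Upload a local image file; glance-api stores the data in its configured
    # back-end and glance-registry records the metadata.
    with open("cirros-0.3.2-x86_64-disk.img", "rb") as f:
        image = glance.images.create(name="cirros-0.3.2",
                                     disk_format="qcow2",
                                     container_format="bare",
                                     is_public=True, data=f)
    print("image id: %s" % image.id)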
Dashboard
The OpenStack Dashboard is implemented as a Python web application that runs in Apache httpd. It is accessed with a web
browser over standard HTTP. Because it uses the service APIs of the other OpenStack components, it must also be
able to reach the API servers (including their admin endpoints) over the network.
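A simple way to picture that requirement is the reachability check sketched below, run from the Dashboard host. The endpoint URLs use the default OpenStack ports and an assumed controller hostname rather than values fixed by this reference architecture; any HTTP response, even an error status, indicates the endpoint is reachable over the network.

    import requests

    # Default API ports on an assumed host named "controller".
    endpoints = {
        "keystone public": "http://controller:5000/v2.0",
        "keystone admin": "http://controller:35357/v2.0",
        "nova": "http://controller:8774/",
        "glance": "http://controller:9292/",
    }

    for name, url in endpoints.items():
        try:
            resp = requests.get(url, timeout=5)
            print("%s %s -> %s" % (name, url, resp.status_code))
        except requests.exceptions.RequestException as exc:
            print("%s %s unreachable: %s" % (name, url, exc))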
Authentication and authorization
The concepts supporting OpenStack authentication and authorization are derived from well-understood and widely used
systems of a similar nature. Users have credentials they can use to authenticate, and they can be members of one or more
groups (known interchangeably as projects or tenants).
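As a minimal sketch of that model, the snippet below authenticates a user against Keystone with the Icehouse-era python-keystoneclient and obtains a token scoped to one of the user's tenants. The user name, password, tenant, and URL are assumptions for illustration.

    from keystoneclient.v2_0 import client

    # Placeholder credentials: a user authenticating against one of the
    # tenants (projects) it belongs to.
    keystone = client.Client(username="demo",
                             password="DEMO_PASS",
                             tenant_name="demo",
                             auth_url="http://controller:5000/v2.0")

    # A successful call yields a token scoped to the user/tenant pair; the
    # same user could authenticate against any other tenant it belongs to.
    print("token acquired: %s" % (keystone.auth_token is not None))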