Mastering OpenStack: From Installation to Auto-Scaling Your Cloud Infrastructure

Chanaka Fernando
Oct 3, 2023


Installing and configuring OpenStack can be a daunting task, but it’s a crucial step for organizations looking to harness the power of a private cloud. There are several installation options, and the best one depends on your platform and objectives. In this comprehensive guide, I’ll take you through the process of installing OpenStack using DevStack on a single virtual machine created in VirtualBox. I’ll also show you how to extend OpenStack’s functionality by enabling optional services for orchestration, metering, and alarming, and demonstrate the use of OpenStack’s CLI. But that’s not all — we’ll dive deep into the world of Infrastructure as Code (IaC) and explore the wonders of auto-scaling within OpenStack.

In this guide, a Heat stack is deployed using the OpenStack orchestration service. The stack creates a compute instance group with attached alarms and scaling policies that define scale-in and scale-out criteria. It then demonstrates that the number of compute instances increases (scale-out) and decreases (scale-in) based on resource utilization (CPU usage) as needed.

Getting Started with OpenStack

OpenStack is an open-source platform used to create and administer clouds by pooling virtual resources. It is typically used in Infrastructure-as-a-Service (IaaS) deployments. The default installation of OpenStack offers a number of handy services, including networking, storage, compute, and identity. It can be extended with additional services such as orchestration, metering, and alarming.

Infrastructure as Code (IaC) is important in a virtualized platform, especially for consistent and speedy provisioning. Additionally, auto-scaling in a cloud platform helps optimize resource utilization and reduce costs when using cloud services.

Understanding Optional Services

While the default installation of OpenStack provides essential services for managing virtual resources, extending OpenStack with optional services unlocks advanced capabilities that can enhance your cloud environment. Let’s take a closer look at the optional services used in this guide.

Orchestration (Heat): OpenStack Heat is the service responsible for orchestrating composite cloud applications. It allows you to define cloud application infrastructure using text-based Heat templates. These templates describe how resources relate to one another, making it easier to manage complex applications within your cloud environment.

Metering (Ceilometer): Ceilometer is an OpenStack service that offers measurements of cloud metrics for various resources, including compute instances, networks, and storage volumes. Ceilometer collects these data points, offering valuable insights into the performance of your OpenStack cloud.

Time Series Database (Gnocchi): Gnocchi is part of the OpenStack telemetry project and offers a time series database as a service to address the problem of storing and indexing time series data. The Ceilometer service automatically sends large volumes of data points to Gnocchi, which processes and stores them and provides access to the information. A single data point consists of a timestamp plus a value. When Gnocchi collects measurements, it correlates them to the resources from which they were sampled and to metrics such as CPU time and memory usage.

Alarming (Aodh): Aodh is OpenStack’s alarming service, closely integrated with Ceilometer and Gnocchi. It enables you to set up alerts and trigger actions based on the metrics collected by Ceilometer and Gnocchi. With Aodh, you can respond to changes in your cloud environment, ensuring optimal performance and resource management.

Installing OpenStack with DevStack

Now that we understand the importance of OpenStack and its core concepts, in this section I’ll guide you through the steps of setting up OpenStack using DevStack on a single VirtualBox virtual machine.

Requirements for Installation

Before we begin, ensure you have the following prerequisites in place.

  • VirtualBox 7.0: You’ll need VirtualBox installed on your system to create a virtual machine for hosting OpenStack.
  • Ubuntu 22.04.1 Server Image: Download and keep the Ubuntu 22.04.1 server image handy for installation.
  • SSH Client: You’ll need an SSH client for remote access. Tools like MobaXterm work well for this purpose.

Setting Up the Virtual Machine

  1. Creating the Virtual Machine:

Start by creating a virtual machine in VirtualBox with the following recommended specifications.

  • 4 GB RAM
  • 50 GB HDD
  • 2 CPUs
  • Network Adapter Type: NAT

2. Installing Ubuntu Server:

Install the Ubuntu server on the virtual machine with the following configurations (you may change these as you wish).

  • Hostname: ubuntu
  • User Name: stack
  • Password: a (or choose a secure password)

3. Port Forwarding Rules:

To reach services inside the virtual machine from the host, set up port forwarding rules as shown below.

Fig1: Network port forwarding rules
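If you prefer the command line, the same rules can be added with VBoxManage while the VM is powered off. This is a sketch: the VM name “openstack-vm” and the host ports are assumptions, though the SSH rule matches the ssh command used later in this guide.

$ VBoxManage modifyvm "openstack-vm" --natpf1 "ssh,tcp,,2220,,22"
$ VBoxManage modifyvm "openstack-vm" --natpf1 "horizon,tcp,,8880,,80"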

Getting DevStack and Configuration

With the virtual machine ready, it’s time to fetch DevStack and configure it for the OpenStack installation. Follow these steps.

  1. Download DevStack (release: stable/zed):

Open a terminal on your SSH client and connect to your Ubuntu virtual machine (e.g., ssh -p 2220 stack@localhost, using the port forwarding rule added earlier). Clone the git repository as follows.

$ sudo apt-get update
$ git clone https://git.openstack.org/openstack-dev/devstack.git -b stable/zed

2. Copy Sample Default Configurations:

Copy the sample default configurations to set up your OpenStack environment.

$ cd devstack
~/devstack $ cp samples/local.conf .
~/devstack $ vim local.conf

3. Configuration for Optional Services:

In the local.conf file, add the following configurations to explicitly enable optional services such as Orchestration (Heat), Ceilometer, Gnocchi Storage support, and Aodh. This step extends the functionality of your OpenStack setup.

Fig2: local.conf
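Fig2 shows the exact file used in this setup. For reference, a minimal local.conf enabling the optional services might look like the sketch below; the passwords are placeholders and the plugin branches are assumptions that should match your DevStack checkout (stable/zed here).

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Orchestration (Heat)
enable_plugin heat https://opendev.org/openstack/heat stable/zed

# Metering (Ceilometer) with Gnocchi storage, plus alarming (Aodh)
enable_plugin ceilometer https://opendev.org/openstack/ceilometer stable/zed
CEILOMETER_BACKEND=gnocchi
enable_plugin aodh https://opendev.org/openstack/aodh stable/zed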

4. Install DevStack:

Execute the following command to start the DevStack installation process.

~/devstack $ ./stack.sh

5. Network Configuration (if needed):

After restarting the virtual machine, you may need to manually configure the external network interface (br-ex) for OpenStack.

NOTE: The OpenStack external network interface (used by external clients to reach OpenStack virtual machines) is not added permanently to the Ubuntu virtual machine’s network configuration, so it must be configured manually. Automating this step may be included in upcoming DevStack releases.

$ sudo vim /etc/netplan/00-installer-config.yaml
Fig3: NetPlan configurations
$ sudo netplan apply
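Fig3 shows the configuration applied here. As a rough sketch, assuming DevStack’s default public network 172.24.4.0/24 with br-ex acting as its gateway at 172.24.4.1 (and enp0s3 as the primary NIC name, also an assumption), the netplan entry could look like the following. Whether netplan can manage the OVS bridge directly depends on your renderer; a one-off alternative is sudo ip addr add 172.24.4.1/24 dev br-ex.

# /etc/netplan/00-installer-config.yaml (sketch; addresses are DevStack defaults)
network:
  version: 2
  ethernets:
    enp0s3:
      dhcp4: true
    br-ex:
      addresses: [172.24.4.1/24]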

Accessing the OpenStack GUI (Horizon)

After the successful installation of OpenStack, you can access the OpenStack GUI, known as Horizon, to manage your cloud resources. Log in with the admin credentials you configured during installation; stack.sh prints the Horizon URL and the default users at the end of its run.

Fig4: The Horizon GUI — orchestration feature has been enabled

With access to the Horizon dashboard, you’ll have a visual interface for managing your OpenStack environment. This includes creating and managing instances, configuring networks, and monitoring resources.

Configure the Ceilometer Service

To make the most of Ceilometer’s capabilities, let’s explore how to configure the service.

1. List Gnocchi Time Series Data Archive Policies:

$ gnocchi archive-policy list
Fig5: Gnocchi time series data archive policies

This command provides insights into the data retention and archival policies defined in Gnocchi.
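To inspect the granularity and retention of a specific policy, for example the medium policy used in the next step, you can run:

$ gnocchi archive-policy show medium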

2. Configure Ceilometer Sink:

Configure the Ceilometer sink to use the medium archive policy in the Ceilometer pipeline configuration file, as that policy provides per-minute metric samples.

$ vim /etc/ceilometer/pipeline.yaml
Fig6: ceilometer- pipeline.yaml

Within this file, you can configure Ceilometer’s data processing pipeline. Specifically, you can set the sink to use the appropriate Gnocchi archive policy from the list of available policies. Choosing the right policy is essential for fine-grained metric sampling.
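For reference, a minimal pipeline.yaml that points the default sink at the medium archive policy might look like the sketch below; the source and sink names follow Ceilometer’s stock configuration, so verify them against the file DevStack generated for you.

sources:
  - name: meter_source
    meters:
      - "*"
    sinks:
      - meter_sink
sinks:
  - name: meter_sink
    publishers:
      - gnocchi://?archive_policy=medium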

3. Expand Metric Polling:

In addition to the default meters, you can configure Ceilometer to poll additional metrics, such as CPU and memory usage. Modify the polling configuration file.

$ vim /etc/ceilometer/polling.yaml
Fig7: ceilometer- polling.yaml

By expanding the list of polled metrics, you gain deeper insights into your cloud resources’ performance.
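For example, a polling source that samples CPU and memory every 60 seconds could be declared as follows; the meter names are standard Ceilometer meters, but confirm your hypervisor driver reports them (memory.usage in particular requires support from the compute driver).

sources:
  - name: cpu_mem_pollster
    interval: 60
    meters:
      - cpu
      - memory.usage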

4. Restart the Ceilometer service:

To apply the configuration changes, restart the Ceilometer services.

$ sudo systemctl restart 'devstack@ceilometer*'

The Ceilometer service will now collect and process metrics based on your configuration.

With Ceilometer configured, you’ll have a robust monitoring and measurement system in place, providing valuable data on your OpenStack cloud’s performance.

OpenStack CLI: Your Command-Line Companion

The OpenStack CLI is a unified command-line client that provides direct access to OpenStack services and APIs. With the CLI, you can perform various tasks, from creating and managing instances to configuring networks and monitoring resources. Here’s how to get started with the OpenStack CLI.

  1. Source the OpenRC Script:

To configure login information and environmental variables suitable for OpenStack CLI usage, you can utilize the openrc script, which is typically included as part of the DevStack scripts. Source the script using the following command.

~/devstack $ source openrc

This step ensures that the CLI tools have access to the necessary authentication information.
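The script also accepts a user name and project as arguments; to work with admin privileges, you would typically source it as shown below.

~/devstack $ source openrc admin admin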

2. Verify your setup:

To ensure that your OpenStack CLI is correctly configured and functional, you can start by running a simple command to list networks.

$ openstack network list
Fig8: OpenStack network list

3. Exploring CLI Commands:

The OpenStack CLI offers a wide range of commands that allow you to interact with different services and resources. Here are some common CLI commands to get you started.

  • openstack server list: Lists all compute instances (servers) in your OpenStack environment.
  • openstack image list: Provides a list of available images for creating instances.
  • openstack flavor list: Lists available flavors (resource configurations) for instances.
  • openstack network list: Shows a list of available networks.

These commands serve as building blocks for managing various aspects of your OpenStack cloud.

One of the strengths of the OpenStack CLI is its scripting and automation capabilities. You can create custom scripts to automate repetitive tasks, such as provisioning instances, configuring networking, or scaling resources. By leveraging the CLI in your scripts, you can streamline cloud management and save valuable time. As you delve deeper into your OpenStack journey, you’ll find that the CLI is an indispensable tool for efficiently managing your cloud resources, especially in large-scale environments.
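As a small illustration of such scripting, the sketch below prints each server’s name and status using the CLI’s machine-readable output; the openrc path and the admin/admin arguments are assumptions based on the DevStack setup above.

#!/usr/bin/env bash
# Sketch: report the status of every server in the current project.
source ~/devstack/openrc admin admin

openstack server list -f value -c Name -c Status |
while read -r name status; do
    echo "Server ${name} is ${status}"
done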

Infrastructure as Code (IaC) in OpenStack

In modern cloud environments, Infrastructure as Code (IaC) is a fundamental practice that brings automation, consistency, and efficiency to cloud resource provisioning. OpenStack embraces IaC principles, enabling you to define and manage your cloud infrastructure through code. In this section, we’ll delve into the significance of IaC in OpenStack and explore how to create Heat Orchestration Templates (HOT) for cloud resource provisioning.

Infrastructure as Code (IaC) is a paradigm shift in the way we manage cloud infrastructure. It allows you to define and provision resources programmatically, rather than relying on manual configurations. Here’s why IaC is vital in a virtualized environment like OpenStack.

  • Consistency: IaC ensures that your cloud resources are provisioned consistently every time you deploy or update your infrastructure. This eliminates the risk of configuration drift and ensures predictable results.
  • Efficiency: Automating resource provisioning through code accelerates the deployment process. You can spin up complex environments in minutes, significantly reducing deployment time.
  • Version Control: IaC code, including templates, can be version-controlled using tools like Git. This provides a historical record of changes and simplifies collaboration among team members.

Creating HOT Templates

In OpenStack, you use Heat Orchestration Templates, often referred to as HOT templates, to define your cloud infrastructure in a human-readable YAML format. HOT templates describe the resources, relationships, and properties of your cloud resources. Let’s explore how to create HOT templates:

Template Structure:

A HOT template consists of various sections, including the following (a minimal skeleton appears after the list).

  • Heat Template Version: Specifies the version of the HOT template format.
  • Description: Provides a brief description of the template’s purpose.
  • Parameters: Defines input parameters that users can provide when creating a stack from the template.
  • Resources: Lists the cloud resources (e.g., instances, networks) to be created.
  • Outputs: Specifies values or resources to be exposed as stack outputs.
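A minimal skeleton that ties these sections together might look like the following; the resource name is hypothetical, while the image and flavor are illustrative values from this guide’s environment.

heat_template_version: 2016-10-14

description: Minimal skeleton showing the main HOT sections.

parameters:
  flavor:
    type: string
    default: m1.micro

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.5.2-x86_64-disk
      flavor: {get_param: flavor}

outputs:
  server_ip:
    value: {get_attr: [my_server, first_address]}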

HOT Template:

In our exploration of OpenStack’s orchestration and auto-scaling capabilities, we’ll be delving into the heart of Infrastructure as Code (IaC) using Heat Orchestration Templates (HOT). These templates serve as the foundation for creating, configuring, and orchestrating cloud resources seamlessly within OpenStack.

To provide you with a tangible and practical understanding of IaC, I have included the YAML files that were used below.

environment.yaml: This file serves as a resource registry, mapping resources within the templates to their respective files. It plays a crucial role in maintaining the structure and organization of your OpenStack infrastructure.

resource_registry:
  "OS::Nova::Server::Cirros": cirros.yaml

cirros.yaml: This template defines the specifications for launching a Cirros instance, a lightweight Linux distribution, which will be at the core of our auto-scaling experiment. It covers parameters such as instance flavor, networking, and metadata.

heat_template_version: 2016-10-14
description: Template to spawn a Cirros instance.

parameters:
  metadata:
    type: json
  flavor:
    type: string
    description: instance flavor to be used
    default: m1.micro
  network:
    type: string
    description: project network to attach instance to
    default: private
  external_network:
    type: string
    description: network used for floating IPs
    default: public

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.5.2-x86_64-disk
      flavor: {get_param: flavor}
      metadata: {get_param: metadata}
      networks:
        - port: { get_resource: port }

  port:
    type: OS::Neutron::Port
    properties:
      network: {get_param: network}
      security_groups:
        - default

  floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: {get_param: external_network}

  floating_ip_assoc:
    type: OS::Neutron::FloatingIPAssociation
    properties:
      floatingip_id: { get_resource: floating_ip }
      port_id: { get_resource: port }

server-group.yaml: This template is the heart of our auto-scaling demonstration. It outlines the auto-scaling group, policies, and alarms necessary to dynamically adjust the number of Cirros instances based on CPU utilization. Let’s explore the details of these policies and alarms.

  • Auto-scaling group — A group of compute instances with a desired size of 1, a maximum of 3, and a minimum of 1 server.
  • Scale-out policy — When triggered, increases the number of instances by 1, up to the maximum of 3.
  • Scale-in policy — When triggered, decreases the number of instances by 1, down to the minimum of 1.
  • CPU high alarm — Triggered when the average CPU usage is > 80%; it invokes the scale-out policy.
  • CPU low alarm — Triggered when the average CPU usage is < 20%; it invokes the scale-in policy.

heat_template_version: 2016-10-14
description: Example auto scale group, policy and alarm
resources:
  instance_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 300
      desired_capacity: 1
      max_size: 3
      min_size: 1
      resource:
        type: OS::Nova::Server::Cirros
        properties:
          metadata: {"metering.server_group": {get_param: "OS::stack_id"}}

  scaleout_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: instance_group }
      cooldown: 120
      scaling_adjustment: 1

  scalein_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: instance_group }
      cooldown: 120
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      description: Scale up if CPU > 80%
      metric: cpu
      aggregation_method: rate:mean
      granularity: 60
      evaluation_periods: 2
      threshold: 800000000.0
      resource_type: instance
      comparison_operator: gt
      alarm_actions:
        - str_replace:
            template: trust+url
            params:
              url: {get_attr: [scaleout_policy, alarm_url]}
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: {get_param: "OS::stack_id"}

  cpu_alarm_low:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      metric: cpu
      aggregation_method: rate:mean
      granularity: 60
      evaluation_periods: 2
      threshold: 200000000.0
      resource_type: instance
      comparison_operator: lt
      alarm_actions:
        - str_replace:
            template: trust+url
            params:
              url: {get_attr: [scalein_policy, alarm_url]}
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: {get_param: "OS::stack_id"}

outputs:
  scaleout_policy_signal_url:
    value: {get_attr: [scaleout_policy, alarm_url]}

  scalein_policy_signal_url:
    value: {get_attr: [scalein_policy, alarm_url]}

HOT templates can be as simple or complex as your cloud infrastructure requires. You can define networks, security groups, storage, and more, all within your templates. These templates enable you to fully automate your cloud deployments, ensuring consistency and repeatability. These YAML files, when combined, demonstrate the power of IaC by automating the creation and management of cloud resources.
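Before deploying a template, you can also ask Heat to check its syntax; for example:

$ openstack orchestration template validate -t cirros.yaml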

Alarming and Auto-Scaling in OpenStack

As we delve deeper into the capabilities of OpenStack, it’s crucial to explore two advanced features that can significantly enhance your cloud environment: alarming and auto-scaling. Auto-scaling systems in the cloud are primarily based on reactive automation rules that scale a cluster once some metric, such as average CPU consumption, surpasses a predefined threshold. In this section, I’ll guide you through deploying a Heat stack using HOT templates, and we’ll explore how alarming and auto-scaling work within OpenStack.

Deploying a Heat Stack

In OpenStack, the Heat service allows you to orchestrate composite cloud applications using HOT templates. A Heat Stack represents a collection of cloud resources created from a template. Let’s go through the process of deploying a Heat Stack.

  1. Create the Heat Stack — To create a Heat stack from a HOT template, use the following command.
$ openstack stack create -t server-group.yaml -e environment.yaml server-group
Fig9: Create Heat stack

This command instructs OpenStack to create a stack named “server-group” based on the provided template and environment file. The environment file can be used to override parameter values defined in the template.
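If you prefer the command to block until the stack reaches a final state, the client also supports a --wait flag:

$ openstack stack create --wait -t server-group.yaml -e environment.yaml server-group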

2. Verify Stack creation — After initiating the stack creation, you can use the following command to list the stacks and check the status of your newly created stack.

$ openstack stack list
Fig10: Heat stack status

This command displays information about the stack, including its current status.

3. Explore Stack Resources — You can also view the resources created within the stack in the Horizon GUI under the Orchestration → Stacks section.

Fig11: The Heat stack has been deployed with the five resource elements defined in the HOT templates

This shows a detailed view of the resources provisioned by the stack.

4. Verify Server creation — After initiating the stack creation, you can use the following command to list the servers.

$ openstack server list
Fig12: Initial compute instance

A single compute instance has been created with a floating IP, in addition to the private IP, which can be used to log in to the instance.

5. Verify alarm creation — After initiating the stack creation, you can use the following command to list the alarms.

$ openstack alarm list
Fig13: Initial alarm status

Initially, the alarms indicated that there were not enough data points in the evaluation periods to meaningfully determine the alarm state (insufficient data). After some time, both alarms were evaluated as False (ok).

Understanding Alarming and Auto-Scaling

With a Heat Stack deployed, you can leverage OpenStack’s alarming and auto-scaling capabilities. These features are instrumental in optimizing resource utilization and ensuring your cloud environment remains responsive to changing workloads.

Alarming (Aodh): Alarming in OpenStack is handled by the Aodh service, which can provide alerts and trigger actions based on the metrics collected by Ceilometer and Gnocchi. Alarms are defined based on specific criteria, and when those criteria are met, actions are taken. For example, you can set up alarms to trigger when CPU usage exceeds a certain threshold.

Auto-Scaling: Auto-scaling allows your cloud resources to dynamically adjust based on predefined rules and metrics. OpenStack auto-scaling can include both scaling out (adding more resources) and scaling in (reducing resources) based on conditions like CPU load, memory usage, or other custom metrics. This ensures optimal resource utilization and responsiveness.

Real-World Experiment: Scaling a Heat Stack

Let’s walk through a real-world experiment to illustrate the power of alarming and auto-scaling.

Scale out the instance group automatically

Produce a high CPU load on the instance. As a result, the instance group will scale out to 2 instances.

  1. Increase CPU load on the instance to trigger a scale-out action.
$ cat /dev/zero > /dev/null
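Note that this command must run on the instance itself, so log in first over the floating IP. The address below is illustrative; CirrOS 0.5.x images ship with the user cirros and the default password gocubsgo.

$ ssh cirros@172.24.4.100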

2. Verify the alarm — Some time after initiating the CPU load, use the following command to list the alarms.

$ openstack alarm list
Fig14: Alarm — CPU usage high

As the CPU usage exceeds the threshold, the alarm for high CPU usage will be evaluated as true, triggering the scale-out policy.

3. Verify the Scale-out — You can use the following command to list the servers.

$ openstack server list
Fig15: The Instance group scaled out to two instances

The compute instance group has been automatically scaled out according to the scale-out policy, increasing the count to 2.

Scale in the instance group automatically

Remove the high CPU load on the instance. As a result, the instance group will scale in to 1 instance.

  1. Reduce the CPU load on the instance by killing the running load-generating process.
$ killall cat

2. Verify the alarm — Some time after removing the CPU load, use the following command to list the alarms.

$ openstack alarm list
Fig16: Alarm — CPU usage low

As the CPU usage decreases below the threshold, the alarm for low CPU usage will be evaluated as true, triggering the scale-in policy.

3. Verify the Scale-in — You can use the following command to list the servers.

$ openstack server list
Fig17: Instance group scaled-in to one instance

The compute instance group automatically scaled in according to the scale-in policy, reducing the count to 1.

Delete the Heat Stack

Clean up the deployment by deleting the deployed stack.

$ openstack stack delete server-group

This real-world experiment showcases how OpenStack’s alarming and auto-scaling features adapt to changing resource demands, ensuring optimal performance and cost efficiency.

Conclusion

To recap, we started by understanding the importance of OpenStack as a platform for creating and administering cloud resources, and walked through the installation process using DevStack, a handy tool for setting up OpenStack on a single virtual machine. We then explored the significance of optional services like orchestration (Heat), metering (Ceilometer), and alarming (Aodh); configuring these services lets you gain deeper insights into your cloud environment and respond proactively to changes. Next, we delved into the concept of IaC in OpenStack and created Heat Orchestration Templates (HOT), which enable you to define and provision cloud resources efficiently through code, ensuring consistency and automation. The deployment of computer systems on the cloud using orchestration engines like OpenStack Heat is becoming more and more common in the context of Network Functions Virtualization (NFV). Finally, we explored the world of alarming and auto-scaling within OpenStack, showcasing their real-world applications; these features empower you to optimize resource utilization, maintain performance, and manage costs effectively.

Although virtualized infrastructure supports automatic scale-in and scale-out, an application must be cloud-enabled to work properly during auto-scaling: it should handle requests statelessly, shut down and spin up gracefully, and support distributed processing, among other things. Auto-scaling also becomes extremely difficult when scaling a cluster requires non-negligible bootstrapping time for new instances, which is a common occurrence in real-world cloud services.

As you continue your journey with OpenStack, consider these potential areas for further exploration and expansion of your cloud infrastructure. The experiment could be expanded to include a load balancer component, with the compute instance group placed behind it as upstream instances, so that the group can scale in response to changes in both resource utilization and end-user request patterns. Additionally, auto-recovery is a valuable feature that every production deployment ought to have: when monitoring shows that a particular instance is having problems, a new instance is automatically started to replace it. Furthermore, metrics other than CPU usage collected by the Ceilometer service can be used when defining alarm thresholds.

OpenStack is a versatile and powerful platform that offers endless possibilities for building and managing cloud infrastructure. By mastering the skills and concepts we’ve covered in this guide, you’re well on your way to becoming an OpenStack expert. As you expand your OpenStack knowledge, remember that the cloud landscape is ever-evolving. Stay up-to-date with the latest developments in OpenStack and cloud technology to ensure your cloud environment remains efficient, secure, and aligned with your organization’s goals.

