If you have been following my blogs, you will already have had an all-in-one install up and running; now we are going to start afresh. We begin with a fresh system as explained in my first post, with all the relevant RDO packages installed and all the repositories enabled. This installation is intended to last for the foreseeable future, so we won't need to re-install our OpenStack machine.
This post is going to be quite intensive: we are going to get to the same stage we reached at the end of the previous post, but this time with extra compute nodes in mind. First, we generate a packstack answer file that we can then customise.
```
packstack --gen-answer-file /root/packstack.txt
```
Modify `/root/packstack.txt` with the following changes:
```
CONFIG_KEYSTONE_ADMIN_PW=openstack
CONFIG_HEAT_INSTALL=y
CONFIG_NTP_SERVERS=10.0.0.251
CONFIG_USE_EPEL=n
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-external
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet_internal:br-internal,physnet_external:br-external
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-internal:enp2s1f0,br-external:enp2s1f1
CONFIG_PROVISION_DEMO=n
```
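If you would rather script these edits than open the file in an editor, a couple of `sed` one-liners along these lines will do it. This is only a sketch using values from the list above; repeat the pattern for the remaining settings and adjust to your own environment.
```
# rewrite individual settings in the generated answer file (example values from above)
sed -i 's/^CONFIG_KEYSTONE_ADMIN_PW=.*/CONFIG_KEYSTONE_ADMIN_PW=openstack/' /root/packstack.txt
sed -i 's/^CONFIG_NTP_SERVERS=.*/CONFIG_NTP_SERVERS=10.0.0.251/' /root/packstack.txt
sed -i 's/^CONFIG_NEUTRON_L3_EXT_BRIDGE=.*/CONFIG_NEUTRON_L3_EXT_BRIDGE=br-external/' /root/packstack.txt
```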
We are adding Heat to the installation so that we can use it in future posts.
To initiate the OpenStack install, we run the command below:
```
packstack --answer-file=/root/packstack.txt
```
Below is example output from the packstack command run on my system:
```
[root@stack01 ~]# packstack --answer-file=packstack.txt
Welcome to Installer setup utility
Installing:
Clean Up [ DONE ]
Setting up ssh keys [ DONE ]
Discovering hosts’ details [ DONE ]
Adding pre install manifest entries [ DONE ]
Installing time synchronization via NTP [ DONE ]
Preparing servers [ DONE ]
Adding AMQP manifest entries [ DONE ]
Adding MySQL manifest entries [ DONE ]
Adding Keystone manifest entries [ DONE ]
Adding Glance Keystone manifest entries [ DONE ]
Adding Glance manifest entries [ DONE ]
Adding Cinder Keystone manifest entries [ DONE ]
Adding Cinder manifest entries [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Nova API manifest entries [ DONE ]
Adding Nova Keystone manifest entries [ DONE ]
Adding Nova Cert manifest entries [ DONE ]
Adding Nova Conductor manifest entries [ DONE ]
Creating ssh keys for Nova migration [ DONE ]
Gathering ssh host keys for Nova migration [ DONE ]
Adding Nova Compute manifest entries [ DONE ]
Adding Nova Scheduler manifest entries [ DONE ]
Adding Nova VNC Proxy manifest entries [ DONE ]
Adding Openstack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries [ DONE ]
Adding Neutron API manifest entries [ DONE ]
Adding Neutron Keystone manifest entries [ DONE ]
Adding Neutron L3 manifest entries [ DONE ]
Adding Neutron L2 Agent manifest entries [ DONE ]
Adding Neutron DHCP Agent manifest entries [ DONE ]
Adding Neutron LBaaS Agent manifest entries [ DONE ]
Adding Neutron Metering Agent manifest entries [ DONE ]
Adding Neutron Metadata Agent manifest entries [ DONE ]
Checking if NetworkManager is enabled and running [ DONE ]
Adding OpenStack Client manifest entries [ DONE ]
Adding Horizon manifest entries [ DONE ]
Adding Swift Keystone manifest entries [ DONE ]
Adding Swift builder manifest entries [ DONE ]
Adding Swift proxy manifest entries [ DONE ]
Adding Swift storage manifest entries [ DONE ]
Adding Swift common manifest entries [ DONE ]
Adding Heat manifest entries [ DONE ]
Adding Heat Keystone manifest entries [ DONE ]
Adding MongoDB manifest entries [ DONE ]
Adding Ceilometer manifest entries [ DONE ]
Adding Ceilometer Keystone manifest entries [ DONE ]
Adding Nagios server manifest entries [ DONE ]
Adding Nagios host manifest entries [ DONE ]
Adding post install manifest entries [ DONE ]
Installing Dependencies [ DONE ]
Copying Puppet modules and manifests [ DONE ]
Applying 10.0.0.1_prescript.pp
10.0.0.1_prescript.pp: [ DONE ]
Applying 10.0.0.1_ntpd.pp
10.0.0.1_ntpd.pp: [ DONE ]
Applying 10.0.0.1_amqp.pp
Applying 10.0.0.1_mysql.pp
10.0.0.1_amqp.pp: [ DONE ]
10.0.0.1_mysql.pp: [ DONE ]
Applying 10.0.0.1_keystone.pp
Applying 10.0.0.1_glance.pp
Applying 10.0.0.1_cinder.pp
10.0.0.1_keystone.pp: [ DONE ]
10.0.0.1_glance.pp: [ DONE ]
10.0.0.1_cinder.pp: [ DONE ]
Applying 10.0.0.1_api_nova.pp
10.0.0.1_api_nova.pp: [ DONE ]
Applying 10.0.0.1_nova.pp
10.0.0.1_nova.pp: [ DONE ]
Applying 10.0.0.1_neutron.pp
10.0.0.1_neutron.pp: [ DONE ]
Applying 10.0.0.1_neutron_fwaas.pp
Applying 10.0.0.1_osclient.pp
Applying 10.0.0.1_horizon.pp
10.0.0.1_neutron_fwaas.pp: [ DONE ]
10.0.0.1_osclient.pp: [ DONE ]
10.0.0.1_horizon.pp: [ DONE ]
Applying 10.0.0.1_ring_swift.pp
10.0.0.1_ring_swift.pp: [ DONE ]
Applying 10.0.0.1_swift.pp
Applying 10.0.0.1_heat.pp
10.0.0.1_swift.pp: [ DONE ]
10.0.0.1_heat.pp: [ DONE ]
Applying 10.0.0.1_mongodb.pp
10.0.0.1_mongodb.pp: [ DONE ]
Applying 10.0.0.1_ceilometer.pp
Applying 10.0.0.1_nagios.pp
Applying 10.0.0.1_nagios_nrpe.pp
10.0.0.1_ceilometer.pp: [ DONE ]
10.0.0.1_nagios.pp: [ DONE ]
10.0.0.1_nagios_nrpe.pp: [ DONE ]
Applying 10.0.0.1_postscript.pp
10.0.0.1_postscript.pp: [ DONE ]
Applying Puppet manifests [ DONE ]
Finalizing [ DONE ]
**** Installation completed successfully ******
Additional information:
* File /root/keystonerc_admin has been created on OpenStack client host 10.0.0.1. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://10.0.0.1/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* To use Nagios, browse to http://10.0.0.1/nagios username: nagiosadmin, password: 8a2817ab92b34fbf
* The installation log file is available at: /var/tmp/packstack/20141001-005853-wQawew/openStack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20141001-005853-wQawew/manifests
```
Once the packstack installation has finished, we need to reboot the machine to make sure its networking is correct, since `systemctl restart network` on its own was not working properly. Then, after the reboot, we need to run `systemctl restart network` once more to get the networking configured correctly on the machine, due to an openvswitch bug.
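For reference, the sequence once packstack has finished is simply the following (assuming you can get back onto the console after the reboot):
```
reboot
# after the machine comes back up, kick the networking once more for the openvswitch bug
systemctl restart network
```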
We also need to remove the `br-ex` value from the `external_network_bridge = br-ex` line
in `/etc/neutron/l3_agent.ini`, leaving it as `external_network_bridge =`.
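One way to make that change without opening an editor is a quick `sed`; the `grep` afterwards just confirms the result (a sketch only, check it matches your file):
```
sed -i 's/^external_network_bridge = br-ex/external_network_bridge =/' /etc/neutron/l3_agent.ini
grep external_network_bridge /etc/neutron/l3_agent.ini
```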
We will also need to restart the OpenStack services to make sure they are all in fact running correctly. We can do this by running `openstack-service restart`. After approximately 60 seconds you should have a prompt back, and all services will be running.
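Something like the following works; the `openstack-status` check is optional, and assumes the `openstack-utils` package that provides `openstack-service` also provides `openstack-status` on your install:
```
openstack-service restart
# optional sanity check of the service states afterwards
openstack-status | less
```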
In fact, if you have no other network path into the machine, the openvswitch bug will lock you out entirely. Fortunately I have IPMI SOL access to the machine, so I can do all of this manually from the console terminal.
Before we start, we need to load the admin environment in order to run admin commands.
The command below sources `keystonerc_admin`, which was created by the packstack run earlier.
```
. /root/keystonerc_admin
```
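A quick way to confirm the admin credentials loaded correctly is to run any admin-only command, for example listing the Keystone users:
```
[root@stack01 ~(keystone_admin)]# keystone user-list
```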
To start things off, we create the physical networks that are going to be available in the OpenStack environment. First we create the network that is going to provide external access:
```
[root@stack01 ~(keystone_admin)]# neutron net-create ext_net --router:external=True
Created a new network:
+—————————+————————————–+
| Field | Value |
+—————————+————————————–+
| admin_state_up | True |
| id | e4160c94-4c4b-4541-a91c-9a62dd5c4da2 |
| name | ext_net |
| provider:network_type | vxlan |
| provider:physical_network | |
| provider:segmentation_id | 10 |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 9e34668b9838449e871fca858b916c35 |
+—————————+————————————–+
```
We now attach the external subnet to the external network; in our case this is
`192.168.56.0/24`. The command below creates the subnet with DHCP disabled and the
gateway set to `192.168.56.254`, which is the main gateway of the network. We
also assign a range of IPs that we can then hand out to instances later on.
```
[root@stack01 ~(keystone_admin)]# neutron subnet-create --name ext_subnet --disable-dhcp ext_net 192.168.56.0/24 \
--gateway 192.168.56.254 --allocation-pool start=192.168.56.161,end=192.168.56.190
Created a new subnet:
+——————+——————————————————+
| Field | Value |
+——————+——————————————————+
| allocation_pools | {“start”: “192.168.56.161”, “end”: “192.168.56.190”} |
| cidr | 192.168.56.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.56.254 |
| host_routes | |
| id | 24161a58-627e-47fa-ad1e-262987a63d4d |
| ip_version | 4 |
| name | ext_subnet |
| network_id | e4160c94-4c4b-4541-a91c-9a62dd5c4da2 |
| tenant_id | 9e34668b9838449e871fca858b916c35 |
+——————+——————————————————+
```
We now create the second physical network, the network that has been allocated
to the management of the whole cluster. We have used this network to provision
the physical machines, and it is also the network that the hostnames in
`/etc/hosts` correspond to.
```
[root@stack01 ~(keystone_admin)]# neutron net-create int_net
Created a new network:
+—————————+————————————–+
| Field | Value |
+—————————+————————————–+
| admin_state_up | True |
| id | 2484f456-a516-492d-ba88-a6723d52145c |
| name | int_net |
| provider:network_type | vxlan |
| provider:physical_network | |
| provider:segmentation_id | 11 |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 9e34668b9838449e871fca858b916c35 |
+—————————+————————————–+
[root@stack01 ~(keystone_admin)]# neutron subnet-create --name int_subnet --disable-dhcp int_net 10.0.0.0/23 \
> --gateway 10.0.0.251 --allocation-pool start=10.0.0.161,end=10.0.0.190
Created a new subnet:
+——————+———————————————-+
| Field | Value |
+——————+———————————————-+
| allocation_pools | {“start”: “10.0.0.161”, “end”: “10.0.0.190”} |
| cidr | 10.0.0.0/23 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 10.0.0.251 |
| host_routes | |
| id | 7b214244-b68c-45b3-936d-4435cb62f690 |
| ip_version | 4 |
| name | int_subnet |
| network_id | 2484f456-a516-492d-ba88-a6723d52145c |
| tenant_id | 9e34668b9838449e871fca858b916c35 |
+——————+———————————————-+
```
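At this point it is worth double-checking that both networks and their subnets exist before moving on; the standard listing commands are enough for that:
```
[root@stack01 ~(keystone_admin)]# neutron net-list
[root@stack01 ~(keystone_admin)]# neutron subnet-list
```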
Before we go ahead and create a user, we need to download a simple image that we
can use to create a small instance. In our case we will be using the CirrOS
cloud image. First we grab the file with `wget` from the website, and then we
add the image into Glance; see the commands and their respective outputs below.
```
[root@stack01 ~(keystone_admin)]$ wget --no-check-certificate https://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
[root@stack01 ~(keystone_admin)]$ glance image-create --name cirros --is-public=True --disk-format=qcow2 \
--container-format=bare --file /root/cirros-0.3.3-x86_64-disk.img
+——————+————————————–+
| Property | Value |
+——————+————————————–+
| checksum | 133eae9fb1c98f45894a4e60d8736619 |
| container_format | bare |
| created_at | 2014-09-30T21:38:48 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | 58414882-bec3-4331-9e2b-91751146440d |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 9e34668b9838449e871fca858b916c35 |
| protected | False |
| size | 13200896 |
| status | active |
| updated_at | 2014-09-30T21:38:49 |
| virtual_size | None |
+——————+————————————–+
```
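If you want to confirm the upload worked, listing the images should show `cirros` in an `active` state:
```
[root@stack01 ~(keystone_admin)]$ glance image-list
```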
Now that we have the main admin work done, we need to create a new user to run
the instances under. This was created automatically in the all-in-one
installation we did in the previous posts; this time I want us to add it manually
so that we can see the steps. First we create a tenant for the user to run
under (in Horizon this is called a project), then we create the user, and finally
we add the user to the relevant role under the tenant. In this case it's the
`_member_` role.
```
[root@stack01 ~(keystone_admin)]# keystone tenant-create --name demo
+————-+———————————-+
| Property | Value |
+————-+———————————-+
| description | |
| enabled | True |
| id | cce106178b4e47febbfeacbb3bb8d302 |
| name | demo |
+————-+———————————-+
[root@stack01 ~(keystone_admin)]# keystone user-create --name demo --pass demo
+———-+———————————-+
| Property | Value |
+———-+———————————-+
| email | |
| enabled | True |
| id | b7288609185a42b781e8a4e086ffba1e |
| name | demo |
| username | demo |
+———-+———————————-+
[root@stack01 ~(keystone_admin)]# keystone user-role-add --user demo --role _member_ --tenant demo
```
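To double-check that the role assignment took, you can list the roles the user holds in the tenant (optional):
```
[root@stack01 ~(keystone_admin)]# keystone user-role-list --user demo --tenant demo
```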
In order to work in the demo user's environment, we need to create a
`keystonerc_demo` file, which has all the required variables filled out
for us so that we don't have to remember them.
{% codeblock /root/keystonerc_demo %}
export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://10.0.0.1:5000/v2.0/
export PS1='[\u@\h \W(keystone_demo)]\$ '
{% endcodeblock %}
Once the file has been created, we need to source the file to load the
environment variables.
```
[root@stack01 ~(keystone_admin)]# . /root/keystonerc_demo
```
The next step is to create an SSH key that will allow us to log in to any
instances we create, whether from the CLI or from Horizon. We can use the
`ssh-keygen` command as shown below to accomplish this.
```
[root@stack01 ~(keystone_demo)]$ ssh-keygen -t rsa -b 4096 -N '' -f /root/id_rsa_demo
Generating public/private rsa key pair.
Your identification has been saved in /root/id_rsa_demo.
Your public key has been saved in /root/id_rsa_demo.pub.
The key fingerprint is:
ba:74:8c:00:4c:1b:0b:3c:20:5d:9e:7b:b9:5c:2b:66 root@stack01.cluster
The key’s randomart image is:
+–[ RSA 4096]—-+
|*.o.. |
|.*.= . |
| * o |
| . . . |
| o o S |
| + * . |
| E + |
| + + |
| . |
+—————–+
```
Now we add the public SSH key into Nova and give it the name `demo_key`:
```
[root@stack01 ~(keystone_demo)]$ nova keypair-add --pub-key /root/id_rsa_demo.pub demo_key
```
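You can verify the key was registered with a quick listing:
```
[root@stack01 ~(keystone_demo)]$ nova keypair-list
```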
Now, for the demo tenant, we need to create a network that is accessible only
by that tenant, and we do it in the same manner as the physical networks above.
I have made sure in this case that the IP addresses do not conflict, so that we
can diagnose any issues in future labs.
```
[root@stack01 ~(keystone_demo)]$ neutron net-create stack_net_priv
Created a new network:
+—————-+————————————–+
| Field | Value |
+—————-+————————————–+
| admin_state_up | True |
| id | 9a74a6f4-0bc3-4891-8cba-a7ee1b796d6f |
| name | stack_net_priv |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | cce106178b4e47febbfeacbb3bb8d302 |
+—————-+————————————–+
[root@stack01 ~(keystone_demo)]$ neutron subnet-create --name stack_subnet_priv --dns-nameserver 8.8.8.8 stack_net_priv 10.0.8.0/24
Created a new subnet:
+——————+——————————————–+
| Field | Value |
+——————+——————————————–+
| allocation_pools | {“start”: “10.0.8.2”, “end”: “10.0.8.254”} |
| cidr | 10.0.8.0/24 |
| dns_nameservers | 8.8.8.8 |
| enable_dhcp | True |
| gateway_ip | 10.0.8.1 |
| host_routes | |
| id | e0e60efc-9753-493b-9f18-18a739043b14 |
| ip_version | 4 |
| name | stack_subnet_priv |
| network_id | 9a74a6f4-0bc3-4891-8cba-a7ee1b796d6f |
| tenant_id | cce106178b4e47febbfeacbb3bb8d302 |
+——————+——————————————–+
```
Networks on their own don't really do much, as all they let us do is attach
instances to them. So we need to create a router that will route traffic
between the physical network and the tenant network.
```
[root@stack01 ~(keystone_demo)]$ neutron router-create extnet_stackrouter
Created a new router:
+———————–+————————————–+
| Field | Value |
+———————–+————————————–+
| admin_state_up | True |
| external_gateway_info | |
| id | 5ccac9e7-fb41-44cb-9f1d-eaceab50c8ac |
| name | extnet_stackrouter |
| status | ACTIVE |
| tenant_id | cce106178b4e47febbfeacbb3bb8d302 |
+———————–+————————————–+
[root@stack01 ~(keystone_demo)]$ neutron router-gateway-set extnet_stackrouter ext_net
Set gateway for router extnet_stackrouter
[root@stack01 ~(keystone_demo)]$ neutron router-interface-add extnet_stackrouter stack_subnet_priv
Added interface 23b3e519-8ab5-4421-be4e-5dc3d8cef9c6 to router extnet_stackrouter.
```
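If you want to see the result, the router's gateway and interface ports can be inspected like this (optional check):
```
[root@stack01 ~(keystone_demo)]$ neutron router-show extnet_stackrouter
[root@stack01 ~(keystone_demo)]$ neutron router-port-list extnet_stackrouter
```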
Finally, before we actually create an instance, we need to allow ourselves to
ping and SSH to the VM. We do this through security groups; the following
commands will achieve this for us.
```
[root@stack01 ~(keystone_demo)]$ neutron security-group-rule-create --protocol icmp default
Created a new security_group_rule:
+——————-+————————————–+
| Field | Value |
+——————-+————————————–+
| direction | ingress |
| ethertype | IPv4 |
| id | e606a67d-2eed-44e8-a184-1d736c55daf2 |
| port_range_max | |
| port_range_min | |
| protocol | icmp |
| remote_group_id | |
| remote_ip_prefix | |
| security_group_id | b682568a-1357-4cce-966a-ace56dadea40 |
| tenant_id | cce106178b4e47febbfeacbb3bb8d302 |
+——————-+————————————–+
[root@stack01 ~(keystone_demo)]$ neutron security-group-rule-create --protocol tcp \
--port-range-min 22 --port-range-max 22 default
Created a new security_group_rule:
+——————-+————————————–+
| Field | Value |
+——————-+————————————–+
| direction | ingress |
| ethertype | IPv4 |
| id | 6409c463-b631-46fb-b8e5-ddc9d33d00b9 |
| port_range_max | 22 |
| port_range_min | 22 |
| protocol | tcp |
| remote_group_id | |
| remote_ip_prefix | |
| security_group_id | b682568a-1357-4cce-966a-ace56dadea40 |
| tenant_id | cce106178b4e47febbfeacbb3bb8d302 |
+——————-+————————————–+
```
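The rules we have just added can be reviewed with the rule listing command, which should now show the ICMP and TCP/22 ingress entries (optional check):
```
[root@stack01 ~(keystone_demo)]$ neutron security-group-rule-list
```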
So now, to create an instance, we use the `nova boot` command with the relevant
flavor, the network and the SSH key we want to use.
```
[root@stack01 ~(keystone_demo)]$ nova boot --poll --flavor m1.tiny --image cirros \
--nic net-id=9a74a6f4-0bc3-4891-8cba-a7ee1b796d6f --key-name demo_key test0
+————————————–+———————————————–+
| Property | Value |
+————————————–+———————————————–+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | – |
| OS-SRV-USG:terminated_at | – |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | 6DG7zT5CsKZ4 |
| config_drive | |
| created | 2014-09-30T21:40:42Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | f40b66ae-dbeb-42fc-ad9e-09997b7444fc |
| image | cirros (58414882-bec3-4331-9e2b-91751146440d) |
| key_name | demo_key |
| metadata | {} |
| name | test0 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | cce106178b4e47febbfeacbb3bb8d302 |
| updated | 2014-09-30T21:40:43Z |
| user_id | b7288609185a42b781e8a4e086ffba1e |
+————————————–+———————————————–+
Server building… 100% complete
Finished
```
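If the build seems slow or the instance misbehaves later, the console log is the first place to look; for example:
```
[root@stack01 ~(keystone_demo)]$ nova console-log test0 | tail -n 20
```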
With the instance built and ready, we still can't log in to the VM: we first need
to create a floating IP from the range that we created earlier in this article.
The commands below check which network is available for creating floating IPs,
and then we go ahead and create one.
```
[root@stack01 ~(keystone_demo)]$ nova floating-ip-pool-list
+———+
| name |
+———+
| ext_net |
+———+
[root@stack01 ~(keystone_demo)]$ nova floating-ip-create ext_net
+—————-+———–+———-+———+
| Ip | Server Id | Fixed Ip | Pool |
+—————-+———–+———-+———+
| 192.168.56.162 | | – | ext_net |
+—————-+———–+———-+———+
```
The following command then associates the floating IP with the instance:
```
[root@stack01 ~(keystone_demo)]$ nova floating-ip-associate test0 192.168.56.162
```
And to check that this has been applied, we can run `nova list`, and we should
see that it is there.
```
[root@stack01 ~(keystone_demo)]$ nova list
+————————————–+——-+——–+————+————-+—————————————–+
| ID | Name | Status | Task State | Power State | Networks |
+————————————–+——-+——–+————+————-+—————————————–+
| f40b66ae-dbeb-42fc-ad9e-09997b7444fc | test0 | ACTIVE | – | Running | stack_net_priv=10.0.8.4, 192.168.56.162 |
+————————————–+——-+——–+————+————-+—————————————–+
```
Now you should be able to ping the VM in the same manner I mentioned in the
previous article.
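For example, from a machine on the `192.168.56.0/24` network (the CirrOS image's default user is `cirros`, and since we registered our key earlier, key-based SSH should work):
```
ping -c 3 192.168.56.162
ssh -i /root/id_rsa_demo cirros@192.168.56.162
```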
This now gets us to a stage similar to the previous article, which seems a good
place to end. In the next episode we will look at how to add a new Nova compute
node, and how to get instances running on the new node as well as on the
original node we created in this article.