Welcome to the second part of this blog series. Today we are going to look at installing and configuring OpenStack using packstack, which is provided by the RDO openstack-packstack RPM.
To follow this post, you will need to have your system at the stage where all the RPMs, repositories, and base configuration are in place. See Part 1 of this series for more information.
If you have already gone through the first part, then you should have a controller node provisioned, and we can use packstack to initialise the environment. We can run the all-in-one installation using the command below; we pass `--use-epel=n` because EPEL is already part of our repos.
```
packstack --allinone --use-epel=n
```
This should install, if not already present, all the RPMs and their dependencies on the controller node, and configure OpenStack with the core infrastructure. The components that it will configure are listed below:
- AMQP
- MySQL
- Keystone
- Glance
- Cinder
- Nova
- Neutron
- Horizon
- Swift
- Ceilometer
- Nagios
We can always have packstack install the other OpenStack components by changing the relevant parameters in the packstack answers file that was created. In my case the file was `/root/packstack-answers-20140926-203610.txt`. You should also be able to find all the generated credentials in this file.
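For example, once you have edited the answers file, you can re-run packstack against it and it will apply the changes; a minimal sketch, assuming the file name above (yours will carry a different timestamp):

```
# Re-run packstack using the previously generated answers file
packstack --answer-file=/root/packstack-answers-20140926-203610.txt
```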
Once installed, you should be able to check the status of all the services by running `openstack-status`. Below is an example run on `stack01`:
```
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 active
openstack-nova-network:                 inactive (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive (disabled on boot)
neutron-ryu-agent:                      inactive (disabled on boot)
neutron-nec-agent:                      inactive (disabled on boot)
neutron-mlnx-agent:                     inactive (disabled on boot)
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
== Heat services ==
openstack-heat-api:                     inactive (disabled on boot)
openstack-heat-api-cfn:                 inactive (disabled on boot)
openstack-heat-api-cloudwatch:          inactive (disabled on boot)
openstack-heat-engine:                  inactive (disabled on boot)
== Support services ==
libvirtd:                               active
openvswitch:                            active
dbus:                                   active
tgtd:                                   inactive (disabled on boot)
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
Warning keystonerc not sourced
```
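The warning on the last line just means that no credentials file has been sourced in the current shell. packstack drops admin and demo credential files in `/root`; sourcing the admin one and re-running the command should also show the Keystone users, assuming the default packstack paths:

```
# Load the admin credentials generated by packstack, then re-check
. /root/keystonerc_admin
openstack-status
```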
The next part of this article focuses on the initial setup needed to get an instance up and verify that it is working.
Source the demo user's credentials file, so that we are able to run OpenStack commands as that user:
```
. /root/keystonerc_demo
```
Your shell prompt should automatically change to look like the one below:
```
[root@stack01 ~(keystone_demo)]#
```
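If you want to confirm exactly what the credentials file set, you can inspect the `OS_` environment variables it exports:

```
# Show the OpenStack-related variables set by keystonerc_demo
env | grep OS_
```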
Now, before we launch an instance, we need to create an SSH key that we can use to log in to the VM:
```
ssh-keygen -t rsa -b 4096 -N '' -f /root/id_rsa_demo
nova keypair-add --pub-key /root/id_rsa_demo.pub demo
```
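You can verify that the public key was registered with Nova:

```
# List the keypairs known to Nova; "demo" should appear
nova keypair-list
```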
Now, in order to be able to ping or SSH into the instance, we need to add ICMP and SSH rules to the `default` security group:
```
neutron security-group-rule-create --protocol icmp default
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 default
```
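To double-check that the rules were added, you can list them:

```
# Show the security group rules, including the two we just created
neutron security-group-rule-list
```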
We also need to find out the id of the `private` network that packstack automatically created; we can do that with the command below:
```
[root@stack01 ~(keystone_demo)]# neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| 7a9b3210-e4b7-405f-ab53-87fa51840ba2 | public  | e5e8ae5d-0f7c-4e7d-a25f-4d4f3d992ff2             |
| 80b10c99-0068-4169-91fa-ce4b5c77b078 | private | 756eb7a4-f089-4e77-920c-e467fb2746a5 10.0.0.0/24 |
+--------------------------------------+---------+--------------------------------------------------+
```
In this instance the `private` network has the id `80b10c99-0068-4169-91fa-ce4b5c77b078`, and we need to use it when we create the instance. To create a new instance, which we have named test0, we run the command below:
```
nova boot --poll --flavor m1.tiny --image cirros --nic net-id=80b10c99-0068-4169-91fa-ce4b5c77b078 --key-name demo test0
```
Note: in the above command, the cirros image was automatically downloaded and added to Glance by packstack, which also created the m1.tiny flavor. The key name is the name of the SSH key we created earlier.
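If you would rather not copy the network id by hand, you can capture it in a shell variable first; a minimal sketch, assuming the network is named `private` as packstack created it (`NET_ID` is just an illustrative variable name):

```
# Grab the id of the "private" network from the neutron net-list table
NET_ID=$(neutron net-list | awk '/ private / {print $2}')
nova boot --poll --flavor m1.tiny --image cirros --nic net-id=$NET_ID --key-name demo test0
```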
The `nova boot` command will take a few minutes, and the output should look similar to the below:
```
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | yov6BFCHikgR                                  |
| config_drive                         |                                               |
| created                              | 2014-09-26T19:59:14Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | a79e4176-647b-48c7-b335-2ac5436ef444          |
| image                                | cirros (ec8ebbd8-5cea-42b6-83f8-c35a867908cc) |
| key_name                             | demo                                          |
| metadata                             | {}                                            |
| name                                 | test0                                         |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | e6ea36f417c3455d8cacab4531a9fe00              |
| updated                              | 2014-09-26T19:59:14Z                          |
| user_id                              | 95a08d3937a748c0967b15eb28163056              |
+--------------------------------------+-----------------------------------------------+

Server building... 100% complete
Finished
```
Now, to make sure that the instance is running, run the command below:
```
[root@stack01 ~(keystone_demo)]# nova list
+--------------------------------------+-------+--------+------------+-------------+------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks         |
+--------------------------------------+-------+--------+------------+-------------+------------------+
| a79e4176-647b-48c7-b335-2ac5436ef444 | test0 | ACTIVE | -          | Running     | private=10.0.0.2 |
+--------------------------------------+-------+--------+------------+-------------+------------------+
```
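If the instance is stuck in BUILD or ends up in ERROR, the console log is usually the quickest place to look:

```
# Dump the instance's console output for troubleshooting
nova console-log test0
```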
Now, you won't be able to get to the machine right away, as we have not created any routers or bridge networks, but we can still see whether it is working from the networking side. First, run the command below to list the network namespaces:
```
[root@stack01 ~(keystone_demo)]# ip netns
qdhcp-80b10c99-0068-4169-91fa-ce4b5c77b078
qrouter-c6c62a1f-9e61-4799-92d1-ef6fe2c21d89
```
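The UUID in each namespace name tells you what it belongs to: the `qdhcp-` namespace carries the network id we saw in `neutron net-list`, and the `qrouter-` UUID matches a router id. If you have more than one router, you can cross-reference the UUIDs:

```
# Match the qrouter- namespace UUID against the router ids
neutron router-list
```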
In our case we are looking for the `qrouter` namespace, and we will run commands inside it, as shown below:
```
[root@stack01 ~(keystone_demo)]# ip netns exec qrouter-c6c62a1f-9e61-4799-92d1-ef6fe2c21d89 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
10: qg-4b581aba-bd: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:27:1d:dd brd ff:ff:ff:ff:ff:ff
    inet 172.24.4.226/28 brd 172.24.4.239 scope global qg-4b581aba-bd
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe27:1ddd/64 scope link
       valid_lft forever preferred_lft forever
11: qr-77223f8b-42: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:05:b0:75 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-77223f8b-42
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe05:b075/64 scope link
       valid_lft forever preferred_lft forever
```
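You can also look at the routing table inside the namespace, which shows how the router forwards traffic between its `qr-` (private) and `qg-` (gateway) interfaces:

```
# Show the routing table as seen from inside the router namespace
ip netns exec qrouter-c6c62a1f-9e61-4799-92d1-ef6fe2c21d89 ip route
```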
As the `ip a` output shows, `10.0.0.1` is defined in this namespace as the router's interface on the private network, so we should be able to ping `10.0.0.2`; the command below demonstrates this:
```
[root@stack01 ~(keystone_demo)]# ip netns exec qrouter-c6c62a1f-9e61-4799-92d1-ef6fe2c21d89 ping -c 3 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=2.62 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.259 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.239 ms

--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.239/1.042/2.628/1.121 ms
```
So we can now enter the namespace and SSH into the instance using the following commands:
```
[root@stack01 ~(keystone_demo)]# ip netns exec qrouter-c6c62a1f-9e61-4799-92d1-ef6fe2c21d89 bash
[root@stack01 ~(keystone_demo)]# ssh -i ~/id_rsa_demo 10.0.0.2 -l cirros
Warning: Permanently added '10.0.0.2' (RSA) to the list of known hosts.
$
$ cat /etc/issue
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
$
```
To exit out of the namespace, we use the `exit` command
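Alternatively, you can skip spawning a shell in the namespace and run the SSH client through it directly in one go:

```
# Run ssh straight through the router namespace in a single command
ip netns exec qrouter-c6c62a1f-9e61-4799-92d1-ef6fe2c21d89 ssh -i ~/id_rsa_demo cirros@10.0.0.2
```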
Hopefully this instalment of the blog shows how quickly we can get an OpenStack system up and running. In the next instalment we will look at creating the interface bridges and the networking so that we are able to reach the instances without entering the namespace. Check back for the next instalment in a few days.
If you have any questions or comments, you can contact me on IRC as arif-ali on freenode.