
Deploying Multi-Node OpenStack with Kolla

tags: openstack


I. Preparation

1. Prepare six machines (one machine per role; fewer machines also work, but at least six is recommended for a clean separation of roles)

    Node name   Role              NIC 1: ens33 (NAT)   NIC 2: ens34 (host-only)   NIC 3: ens35 (NAT)   Disk
    control     Control node      192.168.185.23       10.66.66.23                no IP                one disk
    network     Network node      192.168.185.24       10.66.66.24                no IP                one disk
    compute     Compute node      192.168.185.25       10.66.66.25                no IP                one disk
    storage     Storage node      192.168.185.26       10.66.66.26                no IP                two disks
    monitor     Monitoring node   192.168.185.27       10.66.66.27                no IP                one disk
    deploy      Deploy node       192.168.185.28       10.66.66.28                no IP                one disk

    The three NICs play the same roles on every node:
    - ens33 (NAT mode): communication with the external network.
    - ens34 (host-only mode): API network and VM (tenant) network.
    - ens35 (NAT mode, no IP): external network, used to connect virtual machine instances to the outside world.
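For reference, here is a minimal sketch of what the three NIC configurations could look like on the control node, assuming CentOS 7-style ifcfg files; the gateway and DNS values below are placeholders invented for this example, so adjust them to your own NAT network:

    # /etc/sysconfig/network-scripts/ifcfg-ens33 -- NAT mode, external communication
    DEVICE=ens33
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=192.168.185.23
    NETMASK=255.255.255.0
    GATEWAY=192.168.185.2   # placeholder: use your NAT network's gateway
    DNS1=192.168.185.2      # placeholder

    # /etc/sysconfig/network-scripts/ifcfg-ens34 -- host-only mode, API/tenant network
    DEVICE=ens34
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=10.66.66.23
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-ens35 -- NAT mode, kept up without an IP
    DEVICE=ens35
    BOOTPROTO=none
    ONBOOT=yes

The other nodes follow the same pattern with their own addresses; apply the changes with systemctl restart network.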

2. Storage node
To run the Cinder block storage service, prepare the second disk as an LVM physical volume and volume group:

    pvcreate /dev/sdb
    vgcreate cinder-volumes /dev/sdb   # The VG name must match the volume group name in the Kolla configuration (cinder-volumes by default)
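A quick way to confirm the physical volume and volume group were created (pvs and vgs are standard LVM2 tools):

    pvs   # should list /dev/sdb as a physical volume
    vgs   # should list cinder-volumes with the size of /dev/sdb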

3. Disable SELinux on all nodes

    # Temporary: takes effect immediately, but does not survive a reboot
    setenforce 0
    # Permanent: takes effect after a reboot
    sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
    # Reboot
    reboot
    # Check SELinux status
    getenforce

4. Turn off the firewall on all nodes

    systemctl stop firewalld
    systemctl disable firewalld

5. Set the host name on all nodes, following the names in the table above (other names work too, but these are recommended for convenience)

    hostnamectl set-hostname <host name>   # e.g. hostnamectl set-hostname control

6. Configure /etc/hosts on all nodes

    cat >> /etc/hosts << EOF
    192.168.185.23 control
    192.168.185.24 network
    192.168.185.25 compute
    192.168.185.26 storage
    192.168.185.27 monitor
    192.168.185.28 deploy
    EOF
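A quick sanity check that name resolution now works, using a simple loop over the hostnames from the table:

    for host in control network compute storage monitor deploy; do
        ping -c 1 -W 1 "$host" > /dev/null && echo "$host OK" || echo "$host FAILED"
    done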
7. Set up passwordless SSH from the deploy node to the other nodes

    # Generate a key pair (the defaults are fine)
    ssh-keygen -t rsa
    # Distribute the public key to the other nodes
    ssh-copy-id -i ~/.ssh/id_rsa.pub <node name>   # The node names resolve thanks to the /etc/hosts mappings configured above
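To avoid running the command once per node by hand, a small loop works (you will still be prompted for each node's password):

    for host in control network compute storage monitor; do
        ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
    done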

8. All nodes are installed Docker


yum update -y && yum upgrade -y
1 yum install -y yum-utils   device-mapper-persistent-data   lvm2
2 yum-config-manager --add-repo
3 https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docke
4 r-ce.repo
5 yum install docker-ce docker-ce-cli containerd.io -y
6  # Configure domestic source
7 mkdir -p /etc/docker
8 cat >> /etc/docker/daemon.json << EOF
9 {
10     "registry-mirrors" : [
11          "https://registry.docker-cn.com",
12          "https://docker.mirrors.ustc.edu.cn",
13          "http://hub-mirror.c.163.com",
14          "https://cr.console.aliyun.com/"
15   ]
16 }
17 EOF
systemctl restart docker
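To make sure Docker starts on boot and actually picked up the mirror configuration, a quick check:

    systemctl enable --now docker                # start Docker and enable it at boot
    docker info | grep -A 5 "Registry Mirrors"   # the mirrors from daemon.json should be listed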

9. Enable packet forwarding on all nodes

    echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf && sysctl -p

10. Install and upgrade pip on all nodes

    # Option 1:
    yum install -y epel-release
    # python2
    yum install -y python-pip
    pip install -U pip
    # python3
    yum install -y python3-pip
    pip3 install -U pip

    # Option 2:
    # python2
    curl -o get-pip.py https://bootstrap.pypa.io/pip/2.7/get-pip.py
    python2 get-pip.py
    # python3
    curl -o get-pip.py https://bootstrap.pypa.io/get-pip.py
    python3 get-pip.py

11. Deploy nodes Install ANSIBLE and KOLLA-ANSIBLE


yum install -y python-devel libffi-devel gcc openssl-devel
1 libselinux-python
2 pip install -U ansible -i http://mirrors.aliyun.com/pypi/simple/
3 pip install kolla-ansible==9.1.0 --ignore-installed PyYAML -i
http://mirrors.aliyun.com/pypi/simple/
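A quick check that both tools are installed at compatible versions (pip show works regardless of how kolla-ansible exposes its version):

    ansible --version        # should report a 2.6-2.9 release for Kolla-Ansible Train
    pip show kolla-ansible   # should report version 9.1.0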

12. Optimize the Ansible configuration on the deploy node (optional)

    vim /etc/ansible/ansible.cfg
    forks = 10                  # Around line 19: number of parallel processes; raise this first if you manage many hosts.
    host_key_checking = False   # Around line 67: skip host key verification on the first SSH connection.
    pipelining = True           # Around line 403: enable pipelining. Ansible normally opens several SSH connections to the target host while executing a module; pipelining reduces the number of connections and shortens execution time.
    # When deploying to a large number of servers, or with modules that connect repeatedly, enabling pipelining gives Ansible a significant performance boost.
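One way to confirm the overrides took effect (ansible-config ships with Ansible 2.4 and later):

    ansible-config dump --only-changed   # lists only the settings that differ from the defaults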

13. Copy the Kolla-Ansible example configuration on the deploy node

    cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/
    cp /usr/share/kolla-ansible/ansible/inventory/* .

14. Generate the password file for the OpenStack services on the deploy node

    kolla-genpwd
    # Change the web login password
    vim /etc/kolla/passwords.yml
    keystone_admin_password: admin   # around line 165
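Instead of editing by hand, the admin password can also be checked or set non-interactively; the sed one-liner below is just one way to do it:

    grep keystone_admin_password /etc/kolla/passwords.yml   # view the current value
    sed -i 's/^keystone_admin_password:.*/keystone_admin_password: admin/' /etc/kolla/passwords.yml   # set it to "admin"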
15. Modify the Kolla-Ansible global configuration on the deploy node (/etc/kolla/globals.yml). The following is my example; adjust it to your own situation.

    # Base image distribution to download
    kolla_base_distro: "centos"
    # Installation method: "binary" for binary installation, "source" for source code installation
    kolla_install_type: "source"
    # OpenStack version tag; see https://releases.openstack.org/
    openstack_release: "train"
    # OpenStack internal management network address; the OpenStack web page is reached through this IP. If high availability is enabled, this must be set to a VIP (floating IP).
    kolla_internal_vip_address: "192.168.185.20"
    # OpenStack external management network address
    kolla_external_vip_address: "10.66.66.20"
    # Docker namespace
    docker_namespace: "kolla"
    # NIC for the OpenStack internal management network
    network_interface: "ens33"
    # NIC for the OpenStack external (public) network; can be VLAN or flat mode. This NIC must be up but must NOT carry an IP address, otherwise the cloud instances will be unable to reach external networks (the br-ex bridge cannot be created when the NIC has an IP).
    neutron_external_interface: "ens35"
    # Neutron network service plugin
    neutron_plugin_agent: "openvswitch"
    # Enable Cinder (block storage)
    enable_cinder: "yes"
    # Enable the LVM backend for Cinder
    enable_cinder_backend_lvm: "yes"
    # Enable the web interface
    enable_horizon: "yes"
    # Enable Neutron provider networks
    enable_neutron_provider_networks: "yes"
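After editing, it helps to review only the lines you actually enabled, filtering out comments and blanks with plain grep:

    grep -Ev '^\s*(#|$)' /etc/kolla/globals.yml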

16. Configure the multinode host inventory file on the deploy node

    [control]
    control

    [network]
    network

    [compute]
    compute

    [monitoring]
    monitor

    [storage]
    storage

    [deployment]
    localhost       ansible_connection=local

17. Verify that all hosts can be reached from the deploy node

    ansible -i ~/multinode all -m ping

II. Deploying OpenStack (run only on the deploy node)

1. Bootstrap the servers with Kolla-Ansible (installs the base dependencies on all hosts)

    kolla-ansible -i ~/multinode bootstrap-servers

2. Run the pre-deployment checks on the hosts

    kolla-ansible -i ~/multinode prechecks

3. Pull the OpenStack images

    kolla-ansible -i ~/multinode pull

4. Deploy OpenStack

    kolla-ansible -i ~/multinode deploy

5. Run post-deploy (generates the admin credentials file)

    kolla-ansible -i ~/multinode post-deploy
6. View the generated configuration after deployment

    . /etc/kolla/admin-openrc.sh
    cat /etc/kolla/admin-openrc.sh
7. Open the web interface (Horizon) at the internal VIP, http://192.168.185.20, and log in as admin with the password set in passwords.yml
8. Install the OpenStack command-line clients

    pip install python-openstackclient python-glanceclient python-neutronclient --ignore-installed

9. Test the OpenStack commands

    # Hypervisor list
    openstack hypervisor list
    # Image list
    openstack image list
    # Nova service list
    nova service-list
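For a fuller smoke test, Kolla-Ansible ships an init-runonce script that creates a demo network, flavors, and a CirrOS image. The path below assumes the pip install layout used in this guide, and the script's defaults (for example the external network range) usually need editing before running:

    # Review and adjust the defaults first
    vim /usr/share/kolla-ansible/init-runonce
    . /etc/kolla/admin-openrc.sh
    /usr/share/kolla-ansible/init-runonce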
