OpenStack (Kilo) Installation Series: neutron (Part 9)
2015-12-17 18:05
Controller node
Before you configure the OpenStack Networking (neutron) service, you must create a database, service credentials, and API endpoint.
I. Create the neutron database and grant privileges
1. Log in to the database as root:
mysql -u root -p
2. Create the database and grant privileges:
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Replace NEUTRON_DBPASS with a suitable password.
Source the admin credentials to gain access to admin-only CLI commands:
source admin-openrc.sh
3. To create the service credentials, complete these steps:
Create the neutron user:
openstack user create --password-prompt neutron
Add the admin role to the neutron user:
openstack role add --project service --user neutron admin
Create the neutron service entity:
openstack service create --name neutron \
  --description "OpenStack Networking" network
Create the Networking service API endpoint:
openstack endpoint create \
  --publicurl http://controller:9696 \
  --adminurl http://controller:9696 \
  --internalurl http://controller:9696 \
  --region RegionOne \
  network
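All three endpoint URLs registered above point at the same neutron-server API on TCP port 9696. As a quick sanity check before running the command, the URLs can be validated with a small Python sketch (this is my own addition, not part of the guide):

```python
from urllib.parse import urlparse

# The three endpoint URLs registered for the Networking service
# (publicurl, adminurl, internalurl) -- all point at neutron-server.
endpoints = {
    "public": "http://controller:9696",
    "admin": "http://controller:9696",
    "internal": "http://controller:9696",
}

for name, url in endpoints.items():
    parsed = urlparse(url)
    # neutron-server listens on TCP 9696 by default
    assert parsed.scheme == "http", name
    assert parsed.hostname == "controller", name
    assert parsed.port == 9696, name
```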
To install the Networking components
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which
To configure the Networking server component
The Networking server component configuration includes the database, authentication mechanism, message queue, topology change notifications, and plug-in.
Edit the /etc/neutron/neutron.conf file and complete the following actions:
In the [database] section, configure database access:
[database]
...
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.
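The connection option is a standard URL: scheme (database driver), user, password, host, and database name. The following sketch shows how the pieces fit together; build_connection is a hypothetical helper of mine, not a neutron function:

```python
from urllib.parse import urlparse

# Hypothetical helper (not part of neutron): assembles the
# SQLAlchemy-style connection string used in the [database] section.
def build_connection(user, password, host, db):
    return "mysql://{}:{}@{}/{}".format(user, password, host, db)

conn = build_connection("neutron", "NEUTRON_DBPASS", "controller", "neutron")

# urlparse splits the URL back into its components.
parsed = urlparse(conn)
assert parsed.scheme == "mysql"      # database driver
assert parsed.username == "neutron"  # database user
assert parsed.hostname == "controller"
assert parsed.path == "/neutron"     # database name
```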
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
Note: Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses:
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
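As a sketch of how these values are consumed, the fragment above can be checked with Python's stock ini parser (oslo.config, which neutron actually uses, reads the same syntax; this snippet is only an illustration of mine):

```python
import configparser

# The [DEFAULT] fragment from neutron.conf, reproduced for checking.
fragment = """
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(fragment)

assert cfg["DEFAULT"]["core_plugin"] == "ml2"
assert cfg["DEFAULT"]["service_plugins"] == "router"
# "True" is parsed as a boolean, as oslo.config would do:
assert cfg.getboolean("DEFAULT", "allow_overlapping_ips")
```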
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2

[nova]
...
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
[DEFAULT]
...
verbose = True
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances. However, the controller node does not need the OVS components because it does not handle instance network traffic.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
In the [ml2] section, enable the flat, VLAN, generic routing encapsulation (GRE), and virtual extensible LAN (VXLAN) network type drivers, GRE tenant networks, and the OVS mechanism driver:
[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch
Note: Once you configure the ML2 plug-in, changing values in the type_drivers option can lead to database inconsistency.
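One relationship worth checking before restarting the service: every type listed in tenant_network_types must also appear in type_drivers, since ML2 cannot allocate tenant networks of a type whose driver is not loaded. A hypothetical sanity check (my own sketch, not part of the guide):

```python
# Values from the [ml2] section above, parsed the same way ML2 splits
# its comma-separated list options.
type_drivers = "flat,vlan,gre,vxlan".split(",")
tenant_network_types = "gre".split(",")

# Every tenant network type needs a matching loaded type driver.
missing = [t for t in tenant_network_types if t not in type_drivers]
assert not missing, "tenant type(s) without a type driver: %s" % missing
```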
In the [ml2_type_gre] section, configure the tunnel identifier (id) range:
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
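The value is a comma-separated list of inclusive start:end ranges of GRE tunnel IDs available for tenant network allocation. A hypothetical parser of mine (roughly mirroring how ML2 interprets the option) makes the format concrete:

```python
# Hypothetical parser (illustration only, not neutron code) for the
# tunnel_id_ranges option: "1:1000" -> inclusive (start, end) tuples.
def parse_tunnel_id_ranges(value):
    ranges = []
    for part in value.split(","):
        start, end = part.split(":")
        start, end = int(start), int(end)
        assert start <= end, "range start must not exceed end"
        ranges.append((start, end))
    return ranges

# The value configured above yields one range of 1000 tunnel IDs.
assert parse_tunnel_id_ranges("1:1000") == [(1, 1000)]
```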
In the [securitygroup] section, enable security groups, enable ipset, and configure the OVS iptables firewall driver:
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
To configure Compute to use Networking
By default, distribution packages configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
Edit the /etc/nova/nova.conf file on the controller node and complete the following actions:
In the [DEFAULT] section, configure the APIs and drivers:
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
In the [neutron] section, configure access parameters:
[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
To finalize installation
1. The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
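The same step can be sketched in Python, exercised here in a temporary directory rather than /etc/neutron (a minimal illustration of mine, assuming a POSIX filesystem):

```python
import os
import tempfile

# Sketch of the symlink step, using a temp dir as a stand-in for
# /etc/neutron so it can run anywhere.
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "ml2_conf.ini")  # stand-in for the ML2 config
    link = os.path.join(tmp, "plugin.ini")      # the expected symlink name
    open(target, "w").close()
    if not os.path.islink(link):                # create only if missing
        os.symlink(target, link)
    # The link must resolve to the ML2 configuration file.
    result = os.path.realpath(link) == os.path.realpath(target)

assert result
```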
2. Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Note: Database population occurs later for Networking because the script requires complete server and plug-in configuration files.
3. Restart the Compute services:
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service
4. Start the Networking service and configure it to start when the system boots:
systemctl enable neutron-server.service
systemctl start neutron-server.service
Verify operation
Note: Perform these commands on the controller node.
1. Source the admin credentials to gain access to admin-only CLI commands:
source admin-openrc.sh
2. List loaded extensions to verify successful launch of the neutron-server process:
neutron ext-list