Monday, October 27, 2014

From the lowest-level infrastructure to software development, the all-encompassing cloud (OpenStack Day 21)

This is the final day's post.


Along the way there was the Double Tenth long weekend, a bad cold, and some low spirits, but we still made it to the last day.


Today, let's keep soaring through the OpenStack cloud.



Go to http://[Controller_IP]/horizon


Connect to the host you set up earlier.



And then!!



Quickly pull up the hands-on notes from Day 26:

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS

and use them to log in.
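The two export lines above can be collected into a small credentials file so you don't retype them every session. A minimal sketch; the filename admin-openrc.sh and the extra OS_TENANT_NAME / OS_AUTH_URL values are assumptions to adapt to your deployment:

```shell
# Hypothetical credentials file; adjust the values for your own deployment.
cat > admin-openrc.sh <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://[Controller_IP]:35357/v2.0
EOF

# Load it into the current shell before running any OpenStack CLI commands.
. ./admin-openrc.sh
echo "$OS_USERNAME"
```

After sourcing the file, the CLI clients pick the credentials up from the environment automatically.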



After logging in, we again see the per-user usage information introduced earlier.

Let's click over to the Instances page and launch a virtual machine!!




The main things to adjust here are the Flavor and the OS image.

And of course, don't forget to name the virtual machine.

Once creation finishes, you'll see your new virtual machine in the interface, and its network follows the environment you configured earlier: the 192.168.1.x network managed by nova-network.





Click into the virtual machine's console tab, and you should see the login prompt served by the noVNC service installed earlier.

Remember the credentials:

cirros and cubswin:)

Now you can get a feel for how quickly the VM came up, and how its responsiveness compares with a native host.

You can also try attaching an extra volume to the virtual machine through the Cinder service.

No screenshots for this part; it is left as room for you to practice and explore on your own.
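If you'd rather try the volume from the command line than from the dashboard, the flow looks roughly like this. A sketch only; the volume name, size, and device path are assumptions, and the commands need your live deployment:

```shell
# Create a 1 GB volume (the name "testvol" is an assumption).
cinder create --display-name testvol 1

# Wait until "cinder list" shows the volume as "available", then attach it
# to your instance; the device path /dev/vdb is a typical guess.
nova volume-attach [Instance_ID] [Volume_ID] /dev/vdb

# The volume should now show as "in-use", and appear as a new disk inside the guest.
cinder list
```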


This long run of posts has finally reached the end.

Although this Ironman contest left no time to introduce ezilla, Ceph, and other open-source projects commonly used in the cloud, that's fine: there is always the 2015 IT Ironman contest.

Looking forward to the contest finisher T-shirt!!!!

Reaching this point, it really feels like a weight has been lifted off my chest.

A perfect
Day 30. Ending


Jonathan Chen 

From the lowest-level infrastructure to software development, the all-encompassing cloud (OpenStack Day 20)

The Ironman contest is drawing to a close.

I hope to finish introducing the rest of the OpenStack installation over these last few days.

What's left for today is configuring the network path that lets the virtual machines get online.

For now we'll run the VM network at small scale, so we use Nova's network module (nova-network) rather than the Neutron project introduced earlier.

Set up NAT for 192.168.1.0 on the br0 device
  1. Create Nova Network on Controller node
# nova network-create vmnet --fixed-range-v4=192.168.1.0/24 --bridge-interface=br0

  2. Launch an instance on Controller node
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# ssh-keygen
# nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey

# nova flavor-list
# nova image-list (get [Image_ID] and fill it in below)
# nova boot --flavor 1 --key_name mykey --image [Image_ID] --security_group default cirrOS
# nova list (get [VM_IP] and fill it in below)
# ssh cirros@[VM_IP]
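If you'd rather not copy the IDs out of the tables by hand, the output of these commands can be parsed with awk. A sketch; the canned sample table below stands in for a live cloud, and the image ID shown is made up:

```shell
# nova image-list prints an ASCII table; awk pulls the ID column (field 2,
# because field 1 is the leading "|"). A canned sample replaces live output here.
sample='+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 9e5c2bee-1111-2222-3333-444455556666 | cirros-0.3.2-x86_64 | ACTIVE |
+--------------------------------------+---------------------+--------+'

# On a real controller you would pipe the live command instead:
#   IMAGE_ID=$(nova image-list | awk '/ cirros-0.3.2-x86_64 / {print $2}')
IMAGE_ID=$(printf '%s\n' "$sample" | awk '/ cirros-0.3.2-x86_64 / {print $2}')
echo "$IMAGE_ID"
```

The same pattern works for pulling [VM_IP] out of nova list.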


Next we set up Cinder, the block-storage service used by the VMs.
  3. Install Cinder Central Service on Controller node
# apt-get install -y cinder-api cinder-scheduler
# nano /etc/cinder/cinder.conf (add)
[DEFAULT]
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = [Controller_IP]
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = RABBIT_PASS
glance_host = [Controller_IP]

[database]
connection = mysql://cinder:CINDER_DBPASS@[Controller_IP]/cinder

[keystone_authtoken]
auth_uri = http://[Controller_IP]:5000
auth_host = [Controller_IP]
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS

# mysql -u root -p
> CREATE DATABASE cinder;
> GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
> GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';

# su -s /bin/sh -c "cinder-manage db sync" cinder

# keystone user-create --name=cinder --pass=CINDER_PASS --email=cinder@example.com
# keystone user-role-add --user=cinder --tenant=service --role=admin

# keystone service-create --name=cinder --type=volume --description="Cinder Volume Service"
# keystone endpoint-create --service-id=$(keystone service-list | awk '/ volume / {print $2}') --publicurl=http://[Controller_IP]:8776/v1/%\(tenant_id\)s --internalurl=http://[Controller_IP]:8776/v1/%\(tenant_id\)s --adminurl=http://[Controller_IP]:8776/v1/%\(tenant_id\)s

# keystone service-create --name=cinderv2 --type=volumev2 --description="Cinder Volume Service V2"
# keystone endpoint-create --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl=http://[Controller_IP]:8776/v2/%\(tenant_id\)s --internalurl=http://[Controller_IP]:8776/v2/%\(tenant_id\)s --adminurl=http://[Controller_IP]:8776/v2/%\(tenant_id\)s

# service cinder-scheduler restart
# service cinder-api restart

  4. Install Cinder LVM Volume Service on Controller node
# apt-get install -y lvm2 cinder-volume

# dd if=/dev/zero of=/root/cinder-volumes.img bs=1M seek=100000 count=0
# losetup /dev/loop0 /root/cinder-volumes.img (attach the loop device)
(this can be added to /etc/rc.local)
# losetup -a
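Loop devices do not survive a reboot, which is why the note above suggests /etc/rc.local. A sketch of the fragment, placed before the final exit 0 line:

```shell
# /etc/rc.local fragment: re-attach the Cinder backing file at boot,
# before cinder-volume starts looking for the cinder-volumes VG.
losetup /dev/loop0 /root/cinder-volumes.img
```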

# pvcreate /dev/loop0
# vgcreate cinder-volumes /dev/loop0
# vgdisplay

# nano /etc/lvm/lvm.conf
devices {
filter = [ "a/loop0/","r/.*/"]
}

# service cinder-volume restart
# service tgt restart

  5. Installing the OpenStack Dashboard
# apt-get install -y apache2 memcached libapache2-mod-wsgi openstack-dashboard
# apt-get remove -y --purge openstack-dashboard-ubuntu-theme

# nano /etc/openstack-dashboard/local_settings.py
OPENSTACK_HOST = "[Controller_IP]"

ALLOWED_HOSTS = ['localhost', '[Controller_IP]','[Controller_Hostname]']

# service apache2 restart

# service memcached restart
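As a quick smoke test that Apache is actually serving the dashboard, you can request the Horizon URL from any machine that can reach the controller. A sketch; expect an HTTP 200 (or a redirect toward the login page):

```shell
# Print only the HTTP status code of the Horizon front page.
curl -s -o /dev/null -w '%{http_code}\n' http://[Controller_IP]/horizon
```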

Tomorrow we move into the final day: working with the interface!!

Day 29 Ending 

From the lowest-level infrastructure to software development, the all-encompassing cloud (OpenStack Day 19)

Since the Nova part is more complex, its tutorial will be spread over several days.



First, we need a Controller node to manage the virtual machines.

So today, let's install the Controller node.
  1. Install Compute controller services on controller node
# apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient nova-compute-kvm python-guestfs nova-network
(if you use qemu, install nova-compute-qemu instead and adjust nova.conf accordingly)
When configuring libguestfs0: Create or update supermin appliance now? ()

# rm /var/lib/nova/nova.sqlite
# dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)

# modprobe kvm_intel
# nano /etc/libvirt/qemu.conf (modify)
user = "root"
group = "root"

cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet","/dev/net/tun"
]

# service libvirt-bin restart

  2. Configure Compute controller services on controller node
# nano /etc/nova/nova.conf (empty the file, then add the following)
[DEFAULT]
#force_config_drive=true
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
rpc_backend=rabbit
rabbit_host=[Controller_IP]
rabbit_password=RABBIT_PASS
bindir=/usr/bin

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=[Controller_IP]:9292

# NOVA
nova_url=http://[Controller_IP]:8774/v1.1/
connection_type=libvirt
compute_driver=libvirt.LibvirtDriver
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
libvirt_use_virtio_for_bridges=True
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=[Controller_IP]
s3_host=[Controller_IP]
enabled_apis=ec2,osapi_compute,metadata
allow_resize_to_same_host=True
resume_guests_state_on_host_boot=True
start_guests_on_host_boot=False

# Networking
network_api_class = nova.network.api.API
security_group_api = nova
network_manager=nova.network.manager.FlatDHCPManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
dhcpbridge=/usr/bin/nova-dhcpbridge
dhcpbridge_flagfile=/etc/nova/nova.conf
force_dhcp_release=True
network_size=254
allow_same_net_traffic=False
send_arp_for_ha=True
share_dhcp_address=True
flat_network_bridge=br0
flat_interface=eth0
public_interface=eth1
fixed_range=192.168.1.0/24

# NOVNC CONSOLE
my_ip=[Controller_IP]
novncproxy_base_url=http://[Controller_IP]:6080/vnc_auto.html
vncserver_listen=[Controller_IP]
vncserver_proxyclient_address=[Controller_IP]

# Cinder
volumes_path=/var/lib/nova/volumes
volume_api_class=nova.volume.cinder.API
volume_driver=nova.volume.driver.ISCSIDriver
volume_group=cinder-volumes
volume_name_template=volume-%s
iscsi_helper=tgtadm

auth_strategy=keystone

[keystone_authtoken]
auth_host=[Controller_IP]
auth_port=35357
auth_protocol=http
auth_uri=http://[Controller_IP]:5000
admin_tenant_name=service
admin_user=nova
admin_password=NOVA_PASS
signing_dirname=/tmp/keystone-signing-nova

[database]
connection=mysql://nova:NOVA_DBPASS@[Controller_IP]/nova

[libvirt]
libvirt_type=kvm

Create the database tables for Nova

# mysql -u root -p
> CREATE DATABASE nova;
> GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
> GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

# su -s /bin/sh -c "nova-manage db sync" nova

# keystone user-create --name=nova --pass=NOVA_PASS --email=nova@example.com
# keystone user-role-add --user=nova --tenant=service --role=admin

# keystone service-create --name=nova --type=compute --description="Nova Compute service"
# keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://[Controller_IP]:8774/v2/%\(tenant_id\)s --internalurl=http://[Controller_IP]:8774/v2/%\(tenant_id\)s --adminurl=http://[Controller_IP]:8774/v2/%\(tenant_id\)s

# nano ~/restart.sh
#!/bin/bash

service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
service nova-compute restart
service nova-network restart
# chmod +x ~/restart.sh
# ~/restart.sh


# nova-manage service list
In theory, you should see six smiley faces smiling back at you.

That means all of the services are currently up and running.
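If you prefer a scripted check to counting faces, the State column of nova-manage service list can be grepped. A sketch; the canned sample below stands in for live output:

```shell
# nova-manage service list marks a healthy service with ":-)" in its State
# column (and "XXX" when it is down). A canned sample replaces live output here.
sample='Binary           Host        Zone      Status   State Updated_At
nova-cert        controller  internal  enabled  :-)   2014-10-27 07:00:00
nova-consoleauth controller  internal  enabled  :-)   2014-10-27 07:00:00
nova-scheduler   controller  internal  enabled  :-)   2014-10-27 07:00:00
nova-conductor   controller  internal  enabled  :-)   2014-10-27 07:00:00
nova-compute     controller  internal  enabled  :-)   2014-10-27 07:00:00
nova-network     controller  internal  enabled  :-)   2014-10-27 07:00:00'

# On a real controller: nova-manage service list | grep -c ':-)'
printf '%s\n' "$sample" | grep -c ':-)'
```

A count of 6 matches the six nova services started by restart.sh (nova-api and nova-novncproxy do not appear in this listing).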


Day 28 Ending !

Saturday, October 25, 2014

From the lowest-level infrastructure to software development, the all-encompassing cloud (OpenStack Day 18)




Today's main goal is to finish installing the Glance service and to register an image that the Nova service will later use to launch virtual machines.


Step 1.      Installing the Image Service
# apt-get install -y glance python-glanceclient

Step 2.      Edit the Glance configuration files and paste ini middleware files
# nano /etc/glance/glance-api.conf  (modify)
[DEFAULT]
rpc_backend = rabbit
rabbit_host = [Controller_IP]
rabbit_password = RABBIT_PASS

[database]
connection = mysql://glance:GLANCE_DBPASS@[Controller_IP]/glance

[keystone_authtoken]
auth_uri = http://[Controller_IP]:5000
auth_host = [Controller_IP]
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]
flavor = keystone

# nano /etc/glance/glance-registry.conf  (modify)
[database]
connection = mysql://glance:GLANCE_DBPASS@[Controller_IP]/glance

[keystone_authtoken]
auth_uri = http://[Controller_IP]:5000
auth_host = [Controller_IP]
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]
flavor = keystone

# mysql -u root -p
> CREATE DATABASE glance;
> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
> GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';

# su -s /bin/sh -c "glance-manage db_sync" glance

# keystone user-create --name=glance --pass=GLANCE_PASS --email=glance@example.com
# keystone user-role-add --user=glance --tenant=service --role=admin

# keystone service-create --name=glance --type=image --description="Glance Image Service"
# keystone endpoint-create --service-id=$(keystone service-list | awk '/ image / {print $2}') --publicurl=http://[Controller_IP]:9292 --internalurl=http://[Controller_IP]:9292 --adminurl=http://[Controller_IP]:9292

# service glance-registry restart
# service glance-api restart

Step 3.      Adding and Verifying the Image Service Installation
# glance image-create --name="cirros-0.3.2-x86_64" --disk-format=qcow2 --container-format=bare --is-public=true --copy-from http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

# glance image-list
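Because --copy-from downloads the image in the background, the image may sit in the queued or saving state for a while before it becomes active. A rough polling sketch (needs the live deployment; the 5-second interval is arbitrary):

```shell
# Wait until the cirros image reaches the "active" status.
until glance image-list | grep 'cirros-0.3.2-x86_64' | grep -qi active; do
    sleep 5
done
echo "image is active"
```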


Day 27 Ending