
Recent OpenStack Pitfalls

This entry is day 21 of OpenStack Advent Calendar 2013 JP : ATND.

 

It has been a while since OpenStack Havana was released.

Here is a record of the things that recently tripped me up with the combination of Ubuntu 13.10 and OpenStack Havana.

 

The three recent pitfalls with OpenStack Havana were the following:

  1. libvirt audit messages are not written
  2. Cinder volumes cannot be attached from an instance
  3. cinder-manage db sync does not work in Cinder


1. libvirt audit messages are not written

libvirt apparently records an audit log when an instance starts, but this was failing with an error and nothing was written.

When an instance starts, logs like the following appear in /var/log/syslog. They show that CAP_AUDIT_WRITE is being denied for libvirt.

Dec 19 20:23:09 com dnsmasq[2871]: cleared cache
Dec 19 20:23:09 com dnsmasq-dhcp[2871]: read /opt/stack/data/neutron/dhcp/f70c9564-fbf3-454d-a76b-c9412d1b5bd5/host
Dec 19 20:23:09 com dnsmasq-dhcp[2871]: read /opt/stack/data/neutron/dhcp/f70c9564-fbf3-454d-a76b-c9412d1b5bd5/opts
Dec 19 20:23:11 com kernel: [ 7130.352305] nbd9: unknown partition table
Dec 19 20:23:11 com kernel: [ 7130.632991] kjournald starting. Commit interval 5 seconds
Dec 19 20:23:11 com kernel: [ 7130.679540] EXT3-fs (nbd9): using internal journal
Dec 19 20:23:11 com kernel: [ 7130.679545] EXT3-fs (nbd9): mounted filesystem with ordered data mode
Dec 19 20:23:12 com kernel: [ 7131.839038] block nbd9: NBD_DISCONNECT
Dec 19 20:23:12 com kernel: [ 7131.839127] block nbd9: Unexpected reply (ffff880027643c88)
Dec 19 20:23:12 com kernel: [ 7131.839213] block nbd9: shutting down socket
Dec 19 20:23:12 com kernel: [ 7131.839216] block nbd9: queue cleared
Dec 19 20:23:13 com ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Port tap9bf902d0-da tag=1
Dec 19 20:23:16 com kernel: [ 7135.588203] IPv6: ADDRCONF(NETDEV_UP): qvb1e8292ff-11: link is not ready
Dec 19 20:23:17 com kernel: [ 7135.962264] device qvb1e8292ff-11 entered promiscuous mode
Dec 19 20:23:17 com kernel: [ 7136.316057] IPv6: ADDRCONF(NETDEV_CHANGE): qvb1e8292ff-11: link becomes ready
Dec 19 20:23:17 com kernel: [ 7136.666083] device qvo1e8292ff-11 entered promiscuous mode
Dec 19 20:23:18 com kernel: [ 7137.577404] qbr1e8292ff-11: port 1(qvb1e8292ff-11) entered forwarding state
Dec 19 20:23:18 com kernel: [ 7137.577411] qbr1e8292ff-11: port 1(qvb1e8292ff-11) entered forwarding state
Dec 19 20:23:19 com ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --may-exist add-port br-int qvo1e8292ff-11 -- set Interface qvo1e8292ff-11 external-ids:iface-id=1e8292ff-119a-4427-b894-427eb49262a6 external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:4a:01:06 external-ids:vm-uuid=89478a93-5756-4a20-b037-d035e5ba5ea3
Dec 19 20:23:23 com kernel: [ 7141.999940] type=1400 audit(1387452203.047:44): apparmor="STATUS" operation="profile_load" parent=875 profile="unconfined" name="libvirt-89478a93-5756-4a20-b037-d035e5ba5ea3" pid=876 comm="apparmor_parser"
Dec 19 20:23:23 com kernel: [ 7142.008426] type=1400 audit(1387452203.059:45): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/libvirtd" pid=28042 comm="libvirtd" pid=28042 comm="libvirtd" capability=29 capname="audit_write"
Dec 19 20:23:23 com kernel: [ 7142.018978] device tap1e8292ff-11 entered promiscuous mode
Dec 19 20:23:23 com kernel: [ 7142.019155] qbr1e8292ff-11: port 2(tap1e8292ff-11) entered forwarding state
Dec 19 20:23:23 com kernel: [ 7142.019159] qbr1e8292ff-11: port 2(tap1e8292ff-11) entered forwarding state
Dec 19 20:23:23 com kernel: [ 7142.021369] type=1400 audit(1387452203.071:46): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/libvirtd" pid=28042 comm="libvirtd" pid=28042 comm="libvirtd" capability=29 capname="audit_write"
Dec 19 20:23:28 com ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Port qvo1e8292ff-11 tag=1

 

This line in particular:

Dec 19 20:23:23 com kernel: [ 7142.008426] type=1400 audit(1387452203.059:45): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/libvirtd" pid=28042 comm="libvirtd" pid=28042 comm="libvirtd" capability=29  capname="audit_write"


Several errors are also written to /var/log/libvirt/libvirtd.log.

2013-12-19 11:23:20.409+0000: 28038: info : qemuDomainDefineXML:6217 : Creating domain 'instance-00000005'
2013-12-19 11:23:23.059+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 vm-ctx=libvirt-89478a93-5756-4a20-b037-d035e5ba5ea3 img-ctx=libvirt-89478a93-5756-4a20-b037-d035e5ba5ea3 model=apparmor: Operation not permitted
2013-12-19 11:23:23.059+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 vm-ctx=110:117 img-ctx=110:117 model=dac: Operation not permitted
2013-12-19 11:23:23.059+0000: 28042: info : virNetDevProbeVnetHdr:122 : Enabling IFF_VNET_HDR
2013-12-19 11:23:23.072+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu resrc=net reason=open vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 net=fa:16:3e:4a:01:06 path="/dev/net/tun" rdev=0A:C8: Operation not permitted
2013-12-19 11:23:24.309+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu resrc=cgroup reason=deny vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 cgroup="/sys/fs/cgroup/devices/machine/instance-00000005.libvirt-qemu/" class=all: Operation not permitted
2013-12-19 11:23:24.309+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu resrc=cgroup reason=allow vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 cgroup="/sys/fs/cgroup/devices/machine/instance-00000005.libvirt-qemu/" class=path path="/opt/stack/data/nova/instances/89478a93-5756-4a20-b037-d035e5ba5ea3/disk" rdev=? acl=rw: Operation not permitted
2013-12-19 11:23:24.309+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu resrc=cgroup reason=allow vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 cgroup="/sys/fs/cgroup/devices/machine/instance-00000005.libvirt-qemu/" class=path path="/opt/stack/data/nova/instances/_base/8b0cad167ce11bdb9cdaa1152318c9d3a805c3e3" rdev=? acl=rw: Operation not permitted
2013-12-19 11:23:24.309+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu resrc=cgroup reason=allow vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 cgroup="/sys/fs/cgroup/devices/machine/instance-00000005.libvirt-qemu/" class=major category=pty maj=88 acl=rw: Operation not permitted
2013-12-19 11:23:24.309+0000: 28042: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/opt/stack/data/nova/instances/89478a93-5756-4a20-b037-d035e5ba5ea3/disk' to '110:117'
2013-12-19 11:23:24.309+0000: 28042: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/opt/stack/data/nova/instances/_base/8b0cad167ce11bdb9cdaa1152318c9d3a805c3e3' to '110:117'
2013-12-19 11:23:24.309+0000: 28042: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/opt/stack/data/nova/instances/89478a93-5756-4a20-b037-d035e5ba5ea3/console.log' to '110:117'
2013-12-19 11:23:24.309+0000: 28042: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/opt/stack/data/nova/instances/89478a93-5756-4a20-b037-d035e5ba5ea3/kernel' to '110:117'
2013-12-19 11:23:24.309+0000: 28042: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/opt/stack/data/nova/instances/89478a93-5756-4a20-b037-d035e5ba5ea3/ramdisk' to '110:117'
2013-12-19 11:23:24.796+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu resrc=disk reason=start vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 old-disk="?" new-disk="/opt/stack/data/nova/instances/89478a93-5756-4a20-b037-d035e5ba5ea3/disk": Operation not permitted
2013-12-19 11:23:24.796+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu resrc=net reason=start vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 old-net=? new-net=fa:16:3e:4a:01:06: Operation not permitted
2013-12-19 11:23:24.796+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu resrc=mem reason=start vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 old-mem=0 new-mem=65536: Operation not permitted
2013-12-19 11:23:24.796+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu resrc=vcpu reason=start vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 old-vcpu=0 new-vcpu=1: Operation not permitted
2013-12-19 11:23:24.796+0000: 28042: warning : virAuditSend:135 : Failed to send audit message virt=qemu op=start reason=booted vm="instance-00000005" uuid=89478a93-5756-4a20-b037-d035e5ba5ea3 vm-pid=1203: Operation not permitted

 

Yes: the culprit is AppArmor.

Roughly speaking, AppArmor is a feature that provides path-based MAC (Mandatory Access Control).

(SELinux, by contrast, provides label-based MAC.)

AppArmor works together with libvirt (sVirt) to isolate each started instance so that it cannot perform illegitimate operations against other instances or host-side resources.

 

There are three ways to deal with this problem (the first two are sketched right after this list):

  • Disable AppArmor itself
  • Disable only the libvirt profile in AppArmor
  • Add the missing rule to the AppArmor profile
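
For reference, a minimal sketch of the first two options using the standard Ubuntu AppArmor tooling (option 2 assumes the apparmor-utils package; its manual equivalent appears in section 2 below):

# service apparmor teardown        ← option 1: unload every AppArmor profile (drastic)
# aa-disable /usr/sbin/libvirtd    ← option 2: disable just the libvirtd profile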

 

This time I chose to add the rule to the AppArmor profile.

Open /etc/apparmor.d/usr.sbin.libvirtd and add the capability to allow:

  capability setpcap,
  capability mknod,
  capability fsetid,
  capability ipc_lock,
  capability audit_write,   ← add this line

 

Once that is written, reload the profile:

# apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd

If you start a new instance in this state, the denial errors shown above are no longer emitted. (...By the way, where is the audit output supposed to go?)
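
(Partially answering my own question: as far as I understand, libvirt sends these records to the kernel audit subsystem over a netlink socket, so if auditd is running they should end up in /var/log/audit/audit.log. A quick way to look, assuming the auditd package is installed:)

# ausearch -m VIRT_CONTROL,VIRT_RESOURCE,VIRT_MACHINE_ID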

 

/var/log/syslog

Dec 19 20:35:01 com dnsmasq[2871]: cleared cache
Dec 19 20:35:01 com dnsmasq-dhcp[2871]: read /opt/stack/data/neutron/dhcp/f70c9564-fbf3-454d-a76b-c9412d1b5bd5/host
Dec 19 20:35:01 com dnsmasq-dhcp[2871]: read /opt/stack/data/neutron/dhcp/f70c9564-fbf3-454d-a76b-c9412d1b5bd5/opts
Dec 19 20:35:03 com kernel: [ 7842.704619]  nbd15: unknown partition table
Dec 19 20:35:04 com kernel: [ 7843.244897] kjournald starting.  Commit interval 5 seconds
Dec 19 20:35:04 com kernel: [ 7843.278379] EXT3-fs (nbd15): using internal journal
Dec 19 20:35:04 com kernel: [ 7843.278383] EXT3-fs (nbd15): mounted filesystem with ordered data mode
Dec 19 20:35:05 com kernel: [ 7844.899239] block nbd15: NBD_DISCONNECT
Dec 19 20:35:05 com kernel: [ 7844.899320] block nbd15: Unexpected reply (ffff880004043c88)
Dec 19 20:35:05 com kernel: [ 7844.899403] block nbd15: shutting down socket
Dec 19 20:35:05 com kernel: [ 7844.899406] block nbd15: queue cleared
Dec 19 20:35:06 com ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Port tap9bf902d0-da tag=1
Dec 19 20:35:10 com kernel: [ 7849.888102] IPv6: ADDRCONF(NETDEV_UP): qvb2a273d08-5b: link is not ready
Dec 19 20:35:11 com kernel: [ 7850.681189] device qvb2a273d08-5b entered promiscuous mode
Dec 19 20:35:12 com kernel: [ 7851.227540] IPv6: ADDRCONF(NETDEV_CHANGE): qvb2a273d08-5b: link becomes ready
Dec 19 20:35:12 com kernel: [ 7851.730020] device qvo2a273d08-5b entered promiscuous mode
Dec 19 20:35:13 com kernel: [ 7852.652621] qbr2a273d08-5b: port 1(qvb2a273d08-5b) entered forwarding state
Dec 19 20:35:13 com kernel: [ 7852.652627] qbr2a273d08-5b: port 1(qvb2a273d08-5b) entered forwarding state
Dec 19 20:35:14 com ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl -- --may-exist add-port br-int qvo2a273d08-5b -- set Interface qvo2a273d08-5b external-ids:iface-id=2a273d08-5bd5-461e-8908-18a8ea987fa8 external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:4f:5b:01 external-ids:vm-uuid=83386dfb-f2ee-4de5-8441-c704f1cf62bd
Dec 19 20:35:19 com kernel: [ 7858.734903] type=1400 audit(1387452919.779:67): apparmor="STATUS" operation="profile_load" parent=6079 profile="unconfined" name="libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd" pid=6080 comm="apparmor_parser"
Dec 19 20:35:19 com kernel: [ 7858.764622] device tap2a273d08-5b entered promiscuous mode
Dec 19 20:35:19 com kernel: [ 7858.764719] qbr2a273d08-5b: port 2(tap2a273d08-5b) entered forwarding state
Dec 19 20:35:19 com kernel: [ 7858.764723] qbr2a273d08-5b: port 2(tap2a273d08-5b) entered forwarding state
Dec 19 20:35:26 com ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Port qvo2a273d08-5b tag=1

 

/var/log/libvirt/libvirtd.log

2013-12-19 11:35:15.624+0000: 28042: info : qemuDomainDefineXML:6217 : Creating domain 'instance-00000007'
2013-12-19 11:35:19.799+0000: 28038: warning : virAuditSend:135 : Failed to send audit message virt=qemu vm="instance-00000007" uuid=83386dfb-f2ee-4de5-8441-c704f1cf62bd vm-ctx=libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd img-ctx=libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd model=apparmor: Operation not permitted
2013-12-19 11:35:19.803+0000: 28038: warning : virAuditSend:135 : Failed to send audit message virt=qemu vm="instance-00000007" uuid=83386dfb-f2ee-4de5-8441-c704f1cf62bd vm-ctx=110:117 img-ctx=110:117 model=dac: Operation not permitted
2013-12-19 11:35:19.806+0000: 28038: info : virNetDevProbeVnetHdr:122 : Enabling IFF_VNET_HDR
2013-12-19 11:35:21.492+0000: 28038: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/opt/stack/data/nova/instances/83386dfb-f2ee-4de5-8441-c704f1cf62bd/disk' to '110:117'
2013-12-19 11:35:21.504+0000: 28038: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/opt/stack/data/nova/instances/_base/8b0cad167ce11bdb9cdaa1152318c9d3a805c3e3' to '110:117'
2013-12-19 11:35:21.504+0000: 28038: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/opt/stack/data/nova/instances/83386dfb-f2ee-4de5-8441-c704f1cf62bd/console.log' to '110:117'
2013-12-19 11:35:21.505+0000: 28038: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/opt/stack/data/nova/instances/83386dfb-f2ee-4de5-8441-c704f1cf62bd/kernel' to '110:117'
2013-12-19 11:35:21.508+0000: 28038: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/opt/stack/data/nova/instances/83386dfb-f2ee-4de5-8441-c704f1cf62bd/ramdisk' to '110:117'

 

Bonus

An instance's AppArmor profile is generated automatically when the instance is created.

 

You can check which profile is applied to an instance with the ps command.

Any process whose leftmost column shows something other than unconfined is under AppArmor's control.

$ ps axZ | grep libvirt
unconfined                       1941 pts/22   S+     0:00 tailf libvirtd.log l
unconfined                       3267 pts/15   S+     0:00 sg libvirtd /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd 6308 ? Sl   0:21 /usr/bin/qemu-system-x86_64 -name instance-00000007 -S -machine pc-i440fx-1.5,accel=tcg,usb=off -m 64 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 83386dfb-f2ee-4de5-8441-c704f1cf62bd -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=2013.2.2,serial=86ef6c5c-b4d8-4b40-a26f-bfb506604396,uuid=83386dfb-f2ee-4de5-8441-c704f1cf62bd -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000007.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -kernel /opt/stack/data/nova/instances/83386dfb-f2ee-4de5-8441-c704f1cf62bd/kernel -initrd /opt/stack/data/nova/instances/83386dfb-f2ee-4de5-8441-c704f1cf62bd/ramdisk -append root=/dev/vda console=tty0 console=ttyS0 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/83386dfb-f2ee-4de5-8441-c704f1cf62bd/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=24,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:4f:5b:01,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/opt/stack/data/nova/instances/83386dfb-f2ee-4de5-8441-c704f1cf62bd/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
unconfined                       7680 pts/23   S+     0:00 grep --color=auto libvirt
/usr/sbin/libvirtd              28036 ?        Sl     0:00 /usr/sbin/libvirtd -d
unconfined                      28086 ?        S      0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf
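
Another way to list the loaded profiles is aa-status (run as root); it should show /usr/sbin/libvirtd plus one libvirt-<UUID> entry per running instance, whose files live under /etc/apparmor.d/libvirt/:

# aa-status | grep libvirt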

 

2. Cinder volumes cannot be attached from an instance

This, too, turned out to be an AppArmor problem.

I had heard about this volume-attach problem from @hagix9, but I had completely forgotten about it and walked right into it.

 

When an instance uses a Cinder volume, the host attaches the target volume over iSCSI. The host then sees the attached volume as /dev/sdX, but the instance, being under AppArmor's control, cannot use that device, so the attach fails.
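
Incidentally, you can inspect the host-side iSCSI session with iscsiadm (from the open-iscsi package):

# iscsiadm -m session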

 

/var/log/syslog

Dec 19 20:53:44 com iscsid: Connection1:0 to [target: iqn.2010-10.org.openstack:volume-3265594b-6d7a-4363-ab5f-de7a89b3a833, portal: 192.168.56.123,3260] through [iface: default] is operational now
Dec 19 20:53:48 com kernel: [ 8967.481524] type=1400 audit(1387454028.531:68): apparmor="STATUS" operation="profile_replace" parent=11080 profile="unconfined" name="libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd" pid=11082 comm="apparmor_parser"
Dec 19 20:53:48 com kernel: [ 8967.637505] type=1400 audit(1387454028.683:69): apparmor="DENIED" operation="open" parent=1 profile="libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd" name="/dev/sdb" pid=6308 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=110 ouid=110
Dec 19 20:53:48 com kernel: [ 8967.637600] type=1400 audit(1387454028.683:70): apparmor="DENIED" operation="open" parent=1 profile="libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd" name="/dev/sdb" pid=6308 comm="qemu-system-x86" requested_mask="r" denied_mask="r" fsuid=110 ouid=110
Dec 19 20:53:48 com kernel: [ 8967.638186] type=1400 audit(1387454028.683:71): apparmor="DENIED" operation="open" parent=1 profile="libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd" name="/dev/sdb" pid=6308 comm="qemu-system-x86" requested_mask="rw" denied_mask="rw" fsuid=110 ouid=110
Dec 19 20:53:50 com rsyslogd-2177: imuxsock lost 65 messages from pid 4451 due to rate-limiting
Dec 19 20:53:51 com kernel: [ 8970.839063] type=1400 audit(1387454031.887:72): apparmor="STATUS" operation="profile_replace" parent=11096 profile="unconfined" name="libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd" pid=11097 comm="apparmor_parser"
Dec 19 20:53:52 com kernel: [ 8971.866572] sd 3:0:0:1: [sdb] Synchronizing SCSI cache
Dec 19 20:53:53 com kernel: [ 8972.155885]  connection1:0: detected conn error (1020)
Dec 19 20:53:53 com iscsid: Connection1:0 to [target: iqn.2010-10.org.openstack:volume-3265594b-6d7a-4363-ab5f-de7a89b3a833, portal: 192.168.56.123,3260] through [iface: default] is shutdown.

 

/var/log/libvirt/libvirtd.log

2013-12-19 11:53:48.539+0000: 28041: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/dev/disk/by-path/ip-192.168.56.123:3260-iscsi-iqn.2010-10.org.openstack:volume-3265594b-6d7a-4363-ab5f-de7a89b3a833-lun-1' to '110:117'
2013-12-19 11:53:48.714+0000: 28041: error : qemuMonitorTextAddDrive:2604 : operation failed: open disk image file failed
2013-12-19 11:53:51.900+0000: 28041: info : virSecurityDACRestoreSecurityFileLabel:299 : Restoring DAC user and group on '/dev/disk/by-path/ip-192.168.56.123:3260-iscsi-iqn.2010-10.org.openstack:volume-3265594b-6d7a-4363-ab5f-de7a89b3a833-lun-1'
2013-12-19 11:53:51.903+0000: 28041: info : virSecurityDACSetOwnership:255 : Setting DAC user and group on '/dev/sdb' to '0:0'

 

/var/log/nova/nova-compute.log (errors only)

2013-12-19 20:53:53.881 DEBUG nova.virt.libvirt.volume [req-8aa9dff4-cd31-47ab-9f1a-bf49c0336fd4 demo demo] iscsiadm ('--op', 'delete'): stdout= stderr= from (pid=3272) _run_iscsiadm /opt/stack/nova/nova/virt/libvirt/volume.py:220
2013-12-19 20:53:53.883 ERROR nova.compute.manager [req-8aa9dff4-cd31-47ab-9f1a-bf49c0336fd4 demo demo] [instance: 83386dfb-f2ee-4de5-8441-c704f1cf62bd] Failed to attach volume 3265594b-6d7a-4363-ab5f-de7a89b3a833 at /dev/vdb
2013-12-19 20:53:53.883 TRACE nova.compute.manager [instance: 83386dfb-f2ee-4de5-8441-c704f1cf62bd] Traceback (most recent call last):
2013-12-19 20:53:53.883 TRACE nova.compute.manager [instance: 83386dfb-f2ee-4de5-8441-c704f1cf62bd]   File "/opt/stack/nova/nova/compute/manager.py", line 3681, in _attach_volume
2013-12-19 20:53:53.883 TRACE nova.compute.manager [instance: 83386dfb-f2ee-4de5-8441-c704f1cf62bd]     encryption=encryption)
2013-12-19 20:53:53.883 TRACE nova.compute.manager [instance: 83386dfb-f2ee-4de5-8441-c704f1cf62bd]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1100, in attach_volume
2013-12-19 20:53:53.883 TRACE nova.compute.manager [instance: 83386dfb-f2ee-4de5-8441-c704f1cf62bd]     raise exception.DeviceIsBusy(device=disk_dev)
2013-12-19 20:53:53.883 TRACE nova.compute.manager [instance: 83386dfb-f2ee-4de5-8441-c704f1cf62bd] DeviceIsBusy: The supplied device (vdb) is busy.
2013-12-19 20:53:53.883 TRACE nova.compute.manager [instance: 83386dfb-f2ee-4de5-8441-c704f1cf62bd]

 

2013-12-19 20:53:55.463 ERROR nova.openstack.common.rpc.amqp [req-8aa9dff4-cd31-47ab-9f1a-bf49c0336fd4 demo demo] Exception during message handling
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     **args)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/exception.py", line 90, in wrapped
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     payload)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/exception.py", line 73, in wrapped
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     return f(self, context, *args, **kw)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 243, in decorated_function
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     pass
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 229, in decorated_function
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 271, in decorated_function
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     e, sys.exc_info())
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 258, in decorated_function
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 3649, in attach_volume
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     context, instance, mountpoint)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 3644, in attach_volume
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     mountpoint, instance)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 3691, in _attach_volume
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     connector)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 3681, in _attach_volume
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     encryption=encryption)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1100, in attach_volume
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp     raise exception.DeviceIsBusy(device=disk_dev)
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp DeviceIsBusy: The supplied device (vdb) is busy.
2013-12-19 20:53:55.463 TRACE nova.openstack.common.rpc.amqp

 

I hit this problem when trying Ubuntu 13.10 + Havana (libvirt 1.1.1). On Ubuntu 12.04 + Folsom (libvirt 0.9.8), however, it did not occur.

Digging in, I found that on Ubuntu 12.04 the device in use was dynamically added to and removed from /etc/apparmor.d/libvirt/libvirt-UUID.files on volume attach/detach.

On Ubuntu 13.10 you can also get volumes to attach by adding the rule to /etc/apparmor.d/libvirt/libvirt-UUID.files yourself (sketched below for the record), but making that change by hand every time is not workable, so another approach is needed.
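
For the record, the manual workaround looks roughly like this, reusing the instance UUID and /dev/sdb from the logs above (standard AppArmor file-rule syntax; adjust the UUID and device to your environment):

# echo '  "/dev/sdb" rwk,' >> /etc/apparmor.d/libvirt/libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd.files
# apparmor_parser -r /etc/apparmor.d/libvirt/libvirt-83386dfb-f2ee-4de5-8441-c704f1cf62bd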

 

I chased the root cause, but sadly ran out of time and got nowhere... My hunch is that virt-aa-helper is the suspect.

Still, leaving it broken was not an option, so I gave up and took libvirt out from under AppArmor's control. Reluctantly: this does make volume attach work.

# ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/usr.lib.libvirt.virt-aa-helper
# ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/usr.sbin.libvirtd
# service apparmor restart
 * Reloading AppArmor profiles
Skipping profile in /etc/apparmor.d/disable: usr.lib.libvirt.virt-aa-helper
Skipping profile in /etc/apparmor.d/disable: usr.sbin.libvirtd
Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd
# service libvirt-bin restart
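
As a quick sanity check after the restart (reusing the ps trick from the bonus section above), the qemu process of a freshly booted instance should now show unconfined in the leftmost column:

$ ps axZ | grep qemu-system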

 

Note: instances started before the profiles were disabled still have their profiles applied, so attaching a volume to them will still fail. Create a new instance and try attaching the volume to that.

 

This problem was recently filed as a bug, but nobody seems to have picked it up yet. (Or is it perhaps being handled on the Ubuntu side rather than in OpenStack?)

Bug #1259860 “Apparmor doesn't allow volume to attach in 13.10 U...” : Bugs : OpenStack Compute (nova)

 

3. cinder-manage db sync does not work in Cinder

Once you see it, it is nothing at all, but it still tripped me up.

Running cinder-manage db sync did not initialize the configured database on MySQL; it only produced the log below.

2013-12-19 21:26:53.774 DEBUG cinder.utils [-] backend <module 'cinder.db.sqlalchemy.migration' from '/opt/stack/cinder/cinder/db/sqlalchemy/migration.pyc'> from (pid=19023) __get_backend /opt/stack/cinder/cinder/utils.py:429
2013-12-19 21:26:53.797 DEBUG migrate.versioning.repository [-] Loading repository /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/repository.py:76
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/001_cinder_init.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/001_cinder_init.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/002_quota_class.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/002_quota_class.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/003_glance_metadata.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/003_glance_metadata.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/004_volume_type_to_uuid.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/004_volume_type_to_uuid.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/005_add_source_volume_column.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/005_add_source_volume_column.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.824 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/005_sqlite_downgrade.sql... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.825 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/005_sqlite_downgrade.sql loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.825 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/006_snapshots_add_provider_location.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.825 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/006_snapshots_add_provider_location.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.825 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/007_add_volume_snapshot_fk.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.825 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/007_add_volume_snapshot_fk.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.826 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/007_sqlite_downgrade.sql... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.826 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/007_sqlite_downgrade.sql loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.826 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/008_add_backup.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.827 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/008_add_backup.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.827 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/009_add_snapshot_metadata_table.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.827 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/009_add_snapshot_metadata_table.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.828 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/010_add_transfers_table.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.828 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/010_add_transfers_table.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.828 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/011_sqlite_downgrade.sql... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.829 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/011_sqlite_downgrade.sql loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.829 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/011_add_bootable_column.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.835 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/011_add_bootable_column.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.835 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/012_add_attach_host_column.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.838 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/012_add_attach_host_column.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.839 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/012_sqlite_downgrade.sql... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.839 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/012_sqlite_downgrade.sql loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.839 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/013_add_provider_geometry_column.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.840 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/013_add_provider_geometry_column.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.846 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/013_sqlite_downgrade.sql... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.848 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/013_sqlite_downgrade.sql loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.849 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/014_add_name_id.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.849 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/014_add_name_id.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.849 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/014_sqlite_downgrade.sql... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.849 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/014_sqlite_downgrade.sql loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.850 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/015_drop_migrations_table.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.852 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/015_drop_migrations_table.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.859 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/016_drop_sm_tables.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.864 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/016_drop_sm_tables.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.871 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/017_add_encryption_information.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.872 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/017_add_encryption_information.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.875 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/018_add_qos_specs.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.879 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/018_add_qos_specs.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.879 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/019_add_migration_status.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.879 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/019_add_migration_status.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.883 DEBUG migrate.versioning.script.base [-] Loading script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/020_add_volume_admin_metadata_table.py... from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:27
2013-12-19 21:26:53.884 DEBUG migrate.versioning.script.base [-] Script /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo/versions/020_add_volume_admin_metadata_table.py loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/script/base.py:30
2013-12-19 21:26:53.888 DEBUG migrate.versioning.repository [-] Repository /opt/stack/cinder/cinder/db/sqlalchemy/migrate_repo loaded successfully from (pid=19023) __init__ /usr/lib/python2.7/dist-packages/migrate/versioning/repository.py:82
2013-12-19 21:26:53.896 DEBUG migrate.versioning.repository [-] Config: OrderedDict([('db_settings', OrderedDict([('__name__', 'db_settings'), ('repository_id', 'cinder'), ('version_table', 'migrate_version'), ('required_dbs', '[]')]))])

 

The cause was where the connection option was defined.

 

Before setting up Cinder, I make a point of reading through all of cinder.conf.sample.

It is a wonderful file that lists every option defined in the Cinder source code, along with its default value and description. (Nova has nova.conf.sample.)

The file is generated automatically by a generator script.
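
(If you want to regenerate it yourself, it comes from the oslo-incubator config generator bundled in the source tree. The script path and flags below are from memory and may differ between releases, so treat this as a sketch:)

$ cd /opt/stack/cinder
$ ./tools/config/generate_sample.sh -b . -p cinder -o etc/cinder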

 

One of those options is connection, and cinder.conf.sample presents it as follows:

[DEFAULT]
connection=sqlite:////cinder/openstack/common/db/$sqlite_db

 

In reality it has to sit under [database], as shown below, and that is what got me:

[database]
connection=sqlite:////cinder/openstack/common/db/$sqlite_db

 

At first I worked around it with the setting below, but after @ritchey98 pointed it out I changed it to the form above. (sql_connection is a deprecated option.)

[DEFAULT]
sql_connection=sqlite:////cinder/openstack/common/db/$sqlite_db

 

Incidentally, in the real environment I point it at MySQL rather than SQLite:

[database]
connection = mysql://cinder:password@localhost/cinder?charset=utf8
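
Once the option is in the right section, db sync actually populates the schema. A quick way to confirm (the latest migration should be version 20, judging from the scripts loaded above):

$ cinder-manage db version
$ mysql -u cinder -p -e 'SHOW TABLES;' cinder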

 

When I realized what it was, all the strength went out of me...

 


Those were my three recent pitfalls. If anything else comes up, I will add it here.

 

Tomorrow, day 22, brings @tmak_tw's 「2013年、OpenStackはエンタープライズの大地に立ったのか」 ("Did OpenStack stand on enterprise ground in 2013?"). Look forward to it!