Version of 21 December 2022 at 16:33

One region, two availability zones

This OpenStack lab is composed of one region (fr1) and two availability zones (az1 and az2).

In the first availability zone:

  • 3 compute nodes (aka hypervisors) for Nova, within a Nova Cell called cell1
  • 1 Ceph cluster for Cinder and Glance
  • 3 bare metal nodes for Ironic

In the second availability zone:

  • 3 compute nodes, within a Nova Cell called cell2
  • 1 Ceph cluster (Cinder only)
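As a rough sketch of how such a layout might be declared, each availability zone can be backed by a Nova host aggregate with its zone set, and each cell registered with nova-manage. The aggregate, host, and connection URL names below are placeholders, not the lab's actual values:

```shell
# Map hosts to an AZ through a host aggregate (names are illustrative).
openstack aggregate create --zone az1 az1-aggregate
openstack aggregate add host az1-aggregate compute1

# Register a cell; transport and database URLs are placeholders.
nova-manage cell_v2 create_cell --name cell1 \
  --transport-url rabbit://user:pass@rabbit-az1/nova \
  --database_connection mysql+pymysql://nova:pass@db-az1/nova_cell1

# Verify the cell layout.
nova-manage cell_v2 list_cells
```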


Lab architecture


Networks

The different networks are:

  • fr1-az1: provider network in AZ1 (flat, i.e. without VLAN tagging; shared, i.e. available to every project; and external, i.e. managed outside OpenStack)
  • fr1-az2: provider network in AZ2
  • ironic: external VLAN for bare metal nodes (only available in AZ1)
  • octavia: external VLAN for Octavia management (only available in AZ1)
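A flat, shared, external provider network like fr1-az1 could be created along these lines; the physical network name and subnet range below are assumptions, not values taken from this lab:

```shell
# Flat provider network: no VLAN tag, shared across projects, external routing.
openstack network create fr1-az1 \
  --provider-network-type flat \
  --provider-physical-network physnet-az1 \
  --share --external \
  --availability-zone-hint az1

# Attach a subnet (192.0.2.0/24 is a documentation range used as a placeholder).
openstack subnet create fr1-az1-subnet \
  --network fr1-az1 \
  --subnet-range 192.0.2.0/24 \
  --gateway 192.0.2.1
```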


Networks


Resources

For the needs of the lab, the actual hardware used is:

  • 4 VMs for OpenStack controllers on a workstation
  • 3 VMs acting as hyperconverged (compute + Ceph) nodes in AZ1 on Hypervisor 1 (Supermicro X10SL7-F with Xeon E3-1265L v3, 32 GB RAM ECC DDR3)
  • 3 VMs acting as bare metal nodes in AZ1 on Hypervisor 2 (HP Elitedesk 800 g2 mini with Core i5-6500t, 32 GB RAM DDR4)
  • 3 VMs acting as hyperconverged (compute + Ceph) nodes in AZ2 on Hypervisor 3 (Supermicro X9SCM-F with Xeon E3-1265L v2, 32 GB RAM ECC DDR3)


The bare metal nodes are "simulated" thanks to Virtual BMC: Ironic indeed uses the IPMI protocol to power nodes on and off and to change their boot order.
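A minimal sketch of this setup: Virtual BMC exposes a libvirt VM as an IPMI endpoint, which Ironic (or ipmitool, for a manual check) can then drive. The domain name, port, and credentials below are illustrative placeholders:

```shell
# Expose the libvirt domain "bm-node1" as an IPMI BMC on port 6230.
vbmc add bm-node1 --port 6230 --username admin --password secret
vbmc start bm-node1

# Manual checks over IPMI, the same protocol Ironic uses.
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P secret power status
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P secret chassis bootdev pxe
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P secret power on
```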

VM distribution