One region, two availability zones

This OpenStack lab is composed of one region (fr1) and two availability zones (az1 and az2). The lists below summarise each zone; a short SDK sketch after them shows how the zones can be queried and targeted.

In the first availability zone:

  • 3 compute nodes
  • 3 bare metal nodes (managed by Ironic)
  • 1 Ceph cluster

In the second availability zone:

  • 3 compute nodes
  • 1 Ceph cluster (Cinder only)
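
A minimal sketch with openstacksdk (the Python OpenStack client library) of checking this layout and pinning an instance to a zone; the cloud name fr1 matches this lab's region, while the image, flavor and server names are placeholders, not values documented on this page:

 import openstack

 # "fr1" is assumed to be a clouds.yaml entry pointing at this lab.
 conn = openstack.connect(cloud="fr1")

 # The lab exposes two compute availability zones: az1 and az2.
 for az in conn.compute.availability_zones():
     print("zone:", az.name)

 # Pin a test instance to az1 on the AZ1 provider network (see Networks below).
 server = conn.create_server(
     name="demo-az1",
     image="debian-11",        # placeholder image name
     flavor="m1.small",        # placeholder flavor name
     network="fr1-az1",        # AZ1 provider network
     availability_zone="az1",
     wait=True,
 )
 print(server.status)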


[Figure: Lab architecture]


Networks

The different networks are (see the sketch after this list for how the fr1-az1 network could be declared):

  • fr1-az1: provider network in AZ1 (flat, i.e. without VLAN tagging; shared, i.e. available to every project; and external, i.e. managed outside OpenStack)
  • fr1-az2: provider network in AZ2
  • ironic: external VLAN for the bare metal nodes (only available in AZ1)
  • octavia: external VLAN for Octavia management (only available in AZ1)
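
A minimal sketch, with openstacksdk, of how the fr1-az1 provider network could be declared; the physical network label and the subnet addressing are assumptions, not values documented on this page:

 import openstack

 conn = openstack.connect(cloud="fr1")  # assumed clouds.yaml entry

 # Flat (no VLAN tag), shared and external provider network for AZ1.
 net = conn.network.create_network(
     name="fr1-az1",
     provider_network_type="flat",
     provider_physical_network="physnet-az1",  # assumed physnet label
     is_shared=True,
     is_router_external=True,
 )

 # Placeholder addressing; the real lab subnet is not documented here.
 conn.network.create_subnet(
     name="fr1-az1-subnet",
     network_id=net.id,
     ip_version=4,
     cidr="192.0.2.0/24",
     gateway_ip="192.0.2.1",
 )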


[Figure: Networks]


Resources

For the needs of the lab, the actual hardware used is:

  • 4 VMs for OpenStack controllers on a workstation
  • 3 VMs acting as hyperconverged (compute + Ceph) nodes in AZ1 on Hypervisor 1 (Supermicro X10SL7-F with Xeon E3-1265L v3, 32 GB ECC DDR3 RAM)
  • 3 VMs acting as bare metal nodes in AZ1 on Hypervisor 2 (HP EliteDesk 800 G2 Mini with Core i5-6500T, 32 GB DDR4 RAM)
  • 3 VMs acting as hyperconverged (compute + Ceph) nodes in AZ2 on Hypervisor 3 (Supermicro X9SCM-F with Xeon E3-1265L v2, 32 GB ECC DDR3 RAM)


The bare metal nodes are "simulated" thanks to Virtual BMC (https://github.com/openstack/virtualbmc), which exposes an IPMI endpoint for each of these VMs: Ironic indeed uses the IPMI protocol to power nodes on and off and to change their boot order. A sketch of how such a node could be enrolled is given below.
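
A minimal enrolment sketch with openstacksdk, assuming one Virtual BMC endpoint per simulated node; the node name, IPMI address, port and credentials are placeholders:

 import openstack

 conn = openstack.connect(cloud="fr1")  # assumed clouds.yaml entry

 # Enroll one simulated node against the IPMI endpoint exposed by Virtual BMC
 # for the corresponding VM on Hypervisor 2 (all values are placeholders).
 node = conn.baremetal.create_node(
     name="baremetal-az1-1",
     driver="ipmi",
     driver_info={
         "ipmi_address": "192.0.2.10",
         "ipmi_port": 6230,
         "ipmi_username": "admin",
         "ipmi_password": "password",
     },
 )

 # Ironic then controls the node's power state over IPMI.
 conn.baremetal.set_node_power_state(node, "power on")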

[Figure: VM distribution]