Lab architecture

From TeriaHowto
Version of 10 March 2023, 13:24

One region, two availability zones

This OpenStack lab is composed of one region (fr1) and two availability zones (az1 and az2).

In the first availability zone:

  • 3 compute nodes within a Nova Cell called cell1
  • 3 bare metal nodes managed by Ironic
  • 1 Ceph cluster

In the second availability zone:

  • 3 compute nodes within a Nova Cell called cell2
  • 1 Ceph cluster (Cinder only)
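In OpenStack, a Nova availability zone is usually exposed through a host aggregate tagged with an availability-zone name. As a hedged sketch only, the two zones above could be declared roughly as follows (the aggregate and host names are placeholders, not this lab's real names):

```shell
# Create one host aggregate per availability zone; --zone sets the AZ name
# that users see. Aggregate names (agg-az1, agg-az2) are placeholders.
openstack aggregate create --zone az1 agg-az1
openstack aggregate create --zone az2 agg-az2

# Attach each zone's compute nodes to the matching aggregate.
# Host names (compute-az1-1, compute-az2-1) are placeholders.
openstack aggregate add host agg-az1 compute-az1-1
openstack aggregate add host agg-az2 compute-az2-1

# An instance can then be pinned to a specific zone at boot time.
openstack server create --availability-zone az2 \
  --image debian-12 --flavor m1.small --network fr1-az2 vm-in-az2
```

These commands require a deployed cloud and admin credentials; they are shown here only to illustrate how the zone layout maps onto Nova concepts.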


[Figure: Lab architecture]


Networks

The different networks are:

  • fr1-az1: provider network in AZ1 (flat, i.e. without VLAN; shared, i.e. available to every project; and external, i.e. managed outside OpenStack)
  • fr1-az2: provider network in AZ2
  • ironic: external VLAN for bare metal nodes (only available in AZ1)
  • octavia: external VLAN for Octavia management (only available in AZ1)
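A flat, shared, external provider network such as fr1-az1 could be declared with the OpenStack CLI as sketched below; the physical network label and the subnet range are assumptions for illustration, not values taken from this lab:

```shell
# Flat provider network: no VLAN tag; --share makes it usable by every
# project; --external marks it as routed outside OpenStack.
# physnet-az1 is a placeholder for the Neutron physical network label.
openstack network create fr1-az1 \
  --provider-network-type flat \
  --provider-physical-network physnet-az1 \
  --share --external

# 192.0.2.0/24 is a documentation range used here as a placeholder.
openstack subnet create fr1-az1-subnet \
  --network fr1-az1 \
  --subnet-range 192.0.2.0/24 \
  --gateway 192.0.2.1
```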


[Figure: Networks]


Resources

  • Availability Zone 1:
    • HP Elitedesk 800 g5 mini (Intel Core i5-9500, 64 GB RAM DDR4, 500 GB NVMe)
      • 3 VMs acting as bare metal nodes (1 vCPU, 4 GB RAM, 20 GB disk for each VM)
      • 2 VMs for OpenStack controllers (2 vCPUs, 16 GB RAM, 40 GB disk for each VM)
      • 1 VM for OpenStack cell controller (1 vCPU, 2 GB RAM, 20 GB disk)
      • 1 VM for OpenStack network controller (1 vCPU, 2 GB RAM, 20 GB disk)
    • Supermicro X9SCM-F (Intel Xeon E3-1265L v2, 32 GB RAM ECC DDR3)
      • 3 VMs acting as hyperconverged nodes (compute + Ceph RBD storage). Each VM has 10 GB of RAM, 2 vCPUs, one virtual disk for the OS and one virtual disk for a Ceph OSD
  • Availability Zone 2:
    • HP Elitedesk 800 g2 mini (Intel Core i5-6500T, 32 GB RAM DDR4, 500 GB NVMe)
      • 1 VM for OpenStack cell controller (1 vCPU, 2 GB RAM, 20 GB disk)
      • 1 VM for OpenStack network controller (1 vCPU, 2 GB RAM, 20 GB disk)
    • Supermicro X10SL7-F (Intel Xeon E3-1265L v3, 32 GB RAM ECC DDR3)
      • 3 VMs acting as hyperconverged nodes (compute + Ceph RBD storage). Each VM has 10 GB of RAM, 2 vCPUs, one virtual disk for the OS and one virtual disk for a Ceph OSD


The bare metal nodes are "simulated" thanks to Virtual BMC and Sushy-tools. Ironic uses, among other protocols, IPMI or Redfish to power nodes on and off and to change their boot order.
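For the IPMI case, Virtual BMC exposes a libvirt domain as an IPMI endpoint that Ironic can drive like a real BMC. A minimal sketch (the domain name, port, and credentials are placeholders, not this lab's real values):

```shell
# Register a libvirt domain with Virtual BMC and start its IPMI listener.
# baremetal-1, port 6230, and the credentials are placeholder values.
vbmc add baremetal-1 --port 6230 --username admin --password secret
vbmc start baremetal-1

# Ironic (or any IPMI client) can then control power state over that port:
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P secret power status
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P secret power on
```

Sushy-tools plays the analogous role for Redfish, emulating a Redfish BMC in front of the libvirt domains.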

[Figure: VM distribution]