Lab architecture

== One region, two availability zones ==

This Openstack Lab is composed of one region (fr1) and two availability zones (az1 and az2); a quick way to check this layout from the API is sketched after the diagram below.

In the first availability zone:
* 3 compute nodes (aka hypervisors) for [https://docs.openstack.org/nova/latest/ Nova] within a [https://docs.openstack.org/nova/latest/admin/cells.html Nova Cell] called ''cell1''
* 1 Ceph cluster for [https://docs.openstack.org/cinder/latest/ Cinder] and [https://docs.openstack.org/glance/latest/ Glance]
* 3 bare metal nodes for [https://docs.openstack.org/ironic/latest/ Ironic]


In the second availability zone:
* 3 compute nodes within a Nova Cell called ''cell2''
* 1 Ceph cluster (Cinder only)
<br />
[[Fichier:Lab architecture.png|center|thumb|900px|Lab architecture]]
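
The layout above can be checked against a running deployment; below is a minimal sketch using openstacksdk. The cloud name ''fr1'' is an assumption and has to match an entry in <code>clouds.yaml</code>.

<syntaxhighlight lang="python">
# Minimal sketch, assuming a clouds.yaml entry named "fr1" (the region name
# used by this lab): list the availability zones Nova exposes and the
# compute hosts behind each of them.
import openstack

conn = openstack.connect(cloud="fr1")

for az in conn.compute.availability_zones(details=True):
    print(f"AZ {az.name}: {az.state}")
    # With details=True each zone also carries the hosts (compute nodes)
    # serving it; hosts may be hidden for non-admin users.
    for host in (az.hosts or {}):
        print(f"  host: {host}")
</syntaxhighlight>

The same information is shown by the <code>openstack availability zone list</code> command.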

== Networks ==

The different networks are:
* fr1-az1: provider network in AZ1 (flat, i.e. without VLAN; shared, i.e. available to everybody; and external, i.e. managed outside Openstack); see the creation sketch after this list
* fr1-az2: provider network in AZ2
* ironic: external VLAN for bare metal nodes (only available in AZ1)
* octavia: external VLAN for Octavia management (only available in AZ1)
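
As referenced in the list above, here is a minimal sketch, with openstacksdk and the same assumed <code>clouds.yaml</code> entry ''fr1'', of how a flat, shared, external provider network such as ''fr1-az1'' could be created. The physical network name and the addressing are placeholders, not values taken from the lab.

<syntaxhighlight lang="python">
# Minimal sketch (assumptions: clouds.yaml entry "fr1", a Neutron physnet
# mapping named "physnet-az1", placeholder documentation addressing).
import openstack

conn = openstack.connect(cloud="fr1")

net = conn.network.create_network(
    name="fr1-az1",
    provider_network_type="flat",              # flat, i.e. no VLAN tag
    provider_physical_network="physnet-az1",   # assumed Neutron physnet name
    is_shared=True,                            # shared, i.e. usable by every project
    is_router_external=True,                   # external, i.e. managed outside Openstack
    availability_zone_hints=["az1"],
)

conn.network.create_subnet(
    network_id=net.id,
    ip_version=4,
    cidr="192.0.2.0/24",                       # placeholder range
    gateway_ip="192.0.2.1",
    allocation_pools=[{"start": "192.0.2.10", "end": "192.0.2.100"}],
)
</syntaxhighlight>
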
== Resources ==


For the needs of the lab, the actual hardware used is as follows (a quick API cross-check is sketched after the list):
* Availability Zone 1:
** HP Elitedesk 800 g5 mini (Intel Core i5-9500, 64 GB RAM DDR4, 500 GB NVMe)
*** 3 VMs acting as bare metal nodes (1 vCPU, 4 GB RAM, 20 GB disk for each VM)
*** 2 VMs for Openstack controllers (2 vCPUs, 16 GB RAM, 40 GB disk for each VM)
*** 1 VM for Openstack cell controller (1 vCPU, 2 GB RAM, 20 GB disk)
*** 1 VM for Openstack network controller (1 vCPU, 2 GB RAM, 20 GB disk)
** Supermicro X9SCM-F (Intel Xeon E3-1265L v2, 32 GB RAM ECC DDR3)
*** 3 VMs acting as hyperconverged nodes (compute + Ceph RBD storage). Each VM has 10 GB of RAM, 2 vCPUs, one virtual disk for the OS and one virtual disk for a [https://docs.ceph.com/en/latest/man/8/ceph-osd/ Ceph OSD]
 
* Availability Zone 2:
** HP Elitedesk 800 g2 mini (Intel Core i5-6500T, 32 GB RAM DDR4, 500 GB NVMe)
*** 1 VM for Openstack cell controller (1 vCPU, 2 GB RAM, 20 GB disk)
*** 1 VM for Openstack network controller (1 vCPU, 2 GB RAM, 20 GB disk)
** Supermicro X10SL7-F (Intel Xeon E3-1265L v3, 32 GB RAM ECC DDR3)
*** 3 VMs acting as hyperconverged nodes (compute + Ceph RBD storage). Each VM has 10 GB of RAM, 2 vCPUs, one virtual disk for the OS and one virtual disk for a [https://docs.ceph.com/en/latest/man/8/ceph-osd/ Ceph OSD]
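
As mentioned above, the hyperconverged nodes can be cross-checked from Nova's side. A minimal sketch with openstacksdk, again assuming a <code>clouds.yaml</code> entry named ''fr1'':

<syntaxhighlight lang="python">
# Minimal sketch: list the hypervisors Nova knows about (the six
# hyperconverged VMs, three per availability zone).
import openstack

conn = openstack.connect(cloud="fr1")

for hv in conn.compute.hypervisors(details=True):
    # Capacity fields are hidden by recent compute API microversions,
    # so only name/state/status are printed here.
    print(hv.name, hv.state, hv.status)
</syntaxhighlight>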
 
<br />
The bare metal nodes are "simulated" thanks to [https://docs.openstack.org/virtualbmc/latest/index.html Virtual BMC] and [https://docs.openstack.org/sushy-tools/latest/index.html Sushy-tools]. Ironic uses, among other things, the IPMI or Redfish protocol to power nodes on and off and to change their boot order.
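
A minimal sketch, again with openstacksdk and the assumed <code>clouds.yaml</code> entry ''fr1'', to inspect those nodes as Ironic sees them; the driver field shows whether a node is driven over IPMI (Virtual BMC) or Redfish (Sushy-tools).

<syntaxhighlight lang="python">
# Minimal sketch: list the Ironic nodes with their driver, power state and
# provision state.
import openstack

conn = openstack.connect(cloud="fr1")

for node in conn.baremetal.nodes(details=True):
    print(node.name, node.driver, node.power_state, node.provision_state)
</syntaxhighlight>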
<br />
[[Fichier:VM distribution.png|center|thumb|900px|VM distribution]]
