Lab architecture
One region, two availability zones
This OpenStack lab is composed of one region (fr1) and two availability zones (az1 and az2).
In the first availability zone:
- 3 compute nodes (aka hypervisors) for Nova, within a Nova cell called cell1
- 1 Ceph cluster for Cinder and Glance
- 3 bare metal nodes for Ironic
In the second availability zone:
- 3 compute nodes within a Nova cell called cell2
- 1 Ceph cluster (Cinder only)
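As a quick sanity check, the zones and the compute hosts they contain can be listed with the OpenStack SDK. The snippet below is a minimal sketch; the clouds.yaml entry name fr1-lab is an assumption, not something defined by this lab.

 import openstack
 
 # List the availability zones and the compute hosts registered in each of them.
 # The cloud name "fr1-lab" (a clouds.yaml entry) is assumed for this sketch.
 conn = openstack.connect(cloud="fr1-lab", region_name="fr1")
 
 for az in conn.compute.availability_zones(details=True):
     print(az.name, az.state)
     for host, services in (az.hosts or {}).items():
         print("  host:", host, "->", ", ".join(services))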
Networks
The different networks are:
- fr1-az1: provider network in AZ1 (flat, i.e. without VLAN tagging; shared, i.e. available to all projects; and external, i.e. managed outside OpenStack)
- fr1-az2: provider network in AZ2
- ironic: external VLAN for the bare metal nodes (available in AZ1 only)
- octavia: external VLAN for the Octavia management network (available in AZ1 only)
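As an illustration, a provider network with the same properties as fr1-az1 could be declared through the OpenStack SDK roughly as follows. The physical network label physnet-az1 and the CIDR are placeholders for this sketch, not the lab's actual values.

 import openstack
 
 # Sketch: create a flat, shared, external provider network pinned to az1.
 # "physnet-az1" and the CIDR below are assumptions, not the lab's real values.
 conn = openstack.connect(cloud="fr1-lab", region_name="fr1")
 
 net = conn.network.create_network(
     name="fr1-az1",
     provider_network_type="flat",            # no VLAN tagging
     provider_physical_network="physnet-az1",
     is_shared=True,                           # usable by every project
     is_router_external=True,                  # managed outside OpenStack
     availability_zone_hints=["az1"],          # requires the network AZ extension
 )
 conn.network.create_subnet(
     network_id=net.id,
     name="fr1-az1-subnet",
     ip_version=4,
     cidr="192.168.10.0/24",
     is_dhcp_enabled=False,
 )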
Resources
For the needs of the lab, the following hardware is used:
- 4 VMs for the OpenStack controllers on a workstation (10 GB of RAM and 3 vCPUs for controller1 and controller2, 2 GB of RAM and 1 vCPU for ctrl-cell1 and ctrl-cell2)
- 3 VMs acting as hyperconverged (compute + Ceph) nodes in AZ1 on Hypervisor 1 (Supermicro X10SL7-F with Xeon E3-1265L v3, 32 GB RAM ECC DDR3). Each VM has 10 GB of RAM, 2 vCPUs, one virtual disk for the OS and one for a Ceph OSD
- 3 VMs acting as bare metal nodes in AZ1 on Hypervisor 2 (HP EliteDesk 800 G2 Mini with Core i5-6500T, 32 GB RAM DDR4)
- 3 VMs acting as hyperconverged (compute + Ceph) nodes in AZ2 on Hypervisor 3 (Supermicro X9SCM-F with Xeon E3-1265L v2, 32 GB RAM ECC DDR3)
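To see what these VMs actually register as compute resources in Nova, the hypervisor list can be queried with the SDK (same assumed fr1-lab cloud entry as above). Note that on recent compute API microversions the per-hypervisor inventories live in Placement, so some fields may be empty.

 import openstack
 
 # Quick check of the compute nodes registered in Nova and their resources.
 # Depending on the compute API microversion, vCPU/RAM figures may only be
 # available from Placement rather than from this endpoint.
 conn = openstack.connect(cloud="fr1-lab", region_name="fr1")
 
 for hv in conn.compute.hypervisors(details=True):
     print(hv.name, hv.state, "vcpus:", hv.vcpus, "ram_mb:", hv.memory_size)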
The bare metal nodes are "simulated" thanks to Virtual BMC. Among other protocols, Ironic uses IPMI to power the nodes on and off and to change their boot order.
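For illustration, enrolling one of these virtual nodes in Ironic with the ipmi driver could look like the sketch below. The node name, Virtual BMC address, port and credentials are placeholders, not the lab's actual values.

 import openstack
 
 # Sketch: enroll one "virtual" bare metal node whose BMC is provided by
 # Virtual BMC running on Hypervisor 2. All values below are placeholders.
 conn = openstack.connect(cloud="fr1-lab", region_name="fr1")
 
 node = conn.baremetal.create_node(
     name="baremetal-az1-1",
     driver="ipmi",
     driver_info={
         "ipmi_address": "192.168.1.20",   # IP where vbmc listens
         "ipmi_port": 6230,                # port configured in vbmc for this node
         "ipmi_username": "admin",
         "ipmi_password": "secret",
     },
 )
 # Ironic can then drive the power state through Virtual BMC over IPMI.
 conn.baremetal.set_node_power_state(node, "power on")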