Bifrost
" The mission of Bifrost is to provide an easy path to deploy ironic in a stand-alone fashion ".
One use case of Bifrost could be to provision bare-metal nodes for a new Openstack cluster.
Installation
There are different ways to deploy Bifrost (cf. https://docs.openstack.org/bifrost/latest/install/index.html) but the easiest (I think) is through a dedicated (pre-built) container.
docker pull quay.io/openstack.kolla/bifrost-deploy:zed-rocky-9
docker run -it --net=host -v /dev:/dev -d \
--privileged --name bifrost_deploy \
quay.io/openstack.kolla/bifrost-deploy:zed-rocky-9
docker exec -it bifrost_deploy bash
Within the container:
mkdir -p /etc/bifrost
cat > /etc/bifrost/bifrost.yml << EOF
ansible_python_interpreter: /var/lib/kolla/venv/bin/python
enabled_hardware_types: ipmi,redfish
enabled_deploy_interfaces: direct,ramdisk,anaconda
cleaning: false
network_interface: ens3
mysql_username: root
mysql_password:
create_image_via_dib: false
dib_image_type: vm
create_ipa_image: false
dnsmasq_router: <@IP_router>
dnsmasq_dns_servers: <@IP_nameserver>
dnsmasq_ntp_servers: <@IP_ntp_server>
use_firewalld: false
default_boot_mode: uefi
dhcp_pool_start: <@IP_dhcp_pool_start>
dhcp_pool_end: <@IP_dhcp_pool_end>
dhcp_lease_time: 12h
dhcp_static_mask: <netmask>
EOF
cd /bifrost/playbooks
ansible-playbook -vvvv \
-i /bifrost/playbooks/inventory/target \
/bifrost/playbooks/install.yaml \
-e @/etc/bifrost/bifrost.yml \
-e skip_package_install=true
A few points of attention:
- network_interface is the network interface of the host running the container
- create_ipa_image is set to false in order to use a pre-built IPA (Ironic Python Agent) kernel / initramfs
- use_firewalld is set to false here because, by default, firewalld blocks SSH access to the host
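Once the install playbook finishes, a quick smoke test can be run from within the container (a minimal sketch, using the bifrost cloud entry created by the installation):
export OS_CLOUD=bifrost
baremetal driver list
baremetal conductor list
Both commands should return without error, and the driver list should contain the enabled_hardware_types configured above (ipmi, redfish).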
Enroll node(s)
To enroll one or several nodes, an inventory is used.
cd /bifrost/playbooks
export OS_CLOUD=bifrost
export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.json
ansible-playbook -vvvv -i inventory/ enroll-dynamic.yaml
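Once enrollment completes, the node(s) should appear in the Ironic inventory; with cleaning disabled as configured above, they normally end up in the available state:
baremetal node list -f value -c Name -c "Provisioning State"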
Some examples of /tmp/baremetal.json are given below.
The IPMI way
With a cloud image (AlmaLinux 8.7) and cloud-init
Create a JSON file (e.g. /tmp/baremetal.json):
{
"baremetal1": {
"name": "baremetal1",
"driver": "ipmi",
"driver_info": {
"ipmi_address": "<@IP_IPMI_BMC>",
"ipmi_port": "<PORT_IPMI_BMC>",
"ipmi_username": "<USER_IPMI_BMC>",
"ipmi_password": "<PASSWORD_IPMI_BMC>",
},
"ipv4_address": "<@IP_node>",
"ipv4_subnet_mask": "<netmask_node>",
"ipv4_gateway": "<@IP_router>",
"ipv4_nameserver": "<@IP_nameserver>",
"inventory_dhcp": true,
"nics": [
{
"mac": "<@MAC>"
}
],
"properties": {
"cpu_arch": "x86_64"
},
"instance_info": {
"image_source": "https://repo.almalinux.org/almalinux/8/cloud/x86_64/images/AlmaLinux-8-GenericCloud-8.7-20221111.x86_64.qcow2",
"image_checksum": "b2b8c7fd3b6869362f3f8ed47549c804",
"configdrive": {
"meta_data": {
"public_keys": {"0": "<SSH_PUBLIC_KEY_CONTENT>"},
"hostname": "baremetal1.domain.ld"
},
"user_data": "#cloud-config\npackage_update: true\npackage_upgrade: true\npackages:\n - git\n - httpd\n"
}
}
}
}
To generate user_data, this example could help:
cat > /tmp/cloud << EOF
#cloud-config
package_update: true
package_upgrade: true
packages:
- git
- httpd
EOF
jq -Rs '.' /tmp/cloud
rm -f /tmp/cloud
ipv4_address, ipv4_subnet_mask, ipv4_gateway, ipv4_nameserver and inventory_dhcp are only useful if a static IP configuration is required.
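Since the file referenced by BIFROST_INVENTORY_SOURCE is parsed as JSON, it is worth validating it before enrolling (a trailing comma is a classic pitfall); jq, already used above, can do it:
jq empty /tmp/baremetal.json && echo "JSON OK"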
With anaconda (and kickstart)
Create a JSON file (e.g. /tmp/baremetal.json):
{
"baremetal1": {
"name": "baremetal1",
"driver": "ipmi",
"driver_info": {
"ipmi_address": "<@IP_IPMI_BMC>",
"ipmi_port": "<PORT_IPMI_BMC>",
"ipmi_username": "<USER_IPMI_BMC>",
"ipmi_password": "<PASSWORD_IPMI_BMC>",
},
"ipv4_address": "<@IP_node>",
"ipv4_subnet_mask": "<netmask_node>",
"ipv4_gateway": "<@IP_router>",
"ipv4_nameserver": "<@IP_nameserver>",
"inventory_dhcp": true,
"nics": [
{
"mac": "<@MAC>"
}
],
"properties": {
"cpu_arch": "x86_64"
},
"instance_info": {
"image_source": "http://mirror.rackspeed.de/almalinux/8/BaseOS/x86_64/os/",
"kernel": "http://mirror.rackspeed.de/almalinux/8/BaseOS/x86_64/os/images/pxeboot/vmlinuz",
"ramdisk": "http://mirror.rackspeed.de/almalinux/8/BaseOS/x86_64/os/images/pxeboot/initrd.img",
"ks_template": "<kickstart_URL>"
}
}
}
ks_template is a URL pointing to a kickstart file, which must include the mandatory sections (cf. https://opendev.org/openstack/ironic/src/branch/master/ironic/drivers/modules/ks.cfg.template).
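If no web server is at hand to host the kickstart file, one quick option is Python's built-in HTTP server on the Bifrost host (a minimal sketch; the /srv/ks directory, the node.ks file name and port 8080 are arbitrary choices):
mkdir -p /srv/ks
cp node.ks /srv/ks/
cd /srv/ks && python3 -m http.server 8080
ks_template would then be http://<@IP_bifrost_host>:8080/node.ks.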
The Redfish way
Create a JSON file (e.g. /tmp/baremetal.json):
{
"baremetal1": {
"name": "baremetal1",
"driver": "redfish",
"driver_info": {
"redfish_address": "http(s)://<@IP>:<PORT>",
"redfish_system_id": "/redfish/v1/Systems/<UUID>",
"redfish_username": "<USERNAME>",
"redfish_password": "<PASSWORD>"
},
"ipv4_address": "<@IP_node>",
"ipv4_subnet_mask": "<netmask_node>",
"ipv4_gateway": "<@IP_router>",
"ipv4_nameserver": "<@IP_nameserver>",
"inventory_dhcp": true,
"nics": [
{
"mac": "<@MAC>"
}
],
"properties": {
"cpu_arch": "x86_64"
},
"instance_info": {
"image_source": "https://repo.almalinux.org/almalinux/8/cloud/x86_64/images/AlmaLinux-8-GenericCloud-8.7-20221111.x86_64.qcow2",
"image_checksum": "b2b8c7fd3b6869362f3f8ed47549c804",
"configdrive": {
"meta_data": {
"public_keys": {"0": "<SSH_PUBLIC_KEY_CONTENT>"},
"hostname": "baremetal1.domain.ld"
},
"user_data": "#cloud-config\npackage_update: true\npackage_upgrade: true\npackages:\n - git\n - httpd\n"
}
}
}
}
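The value for redfish_system_id can usually be discovered by querying the Systems collection of the BMC (a sketch; -k is only needed with a self-signed certificate):
curl -sk -u "<USERNAME>:<PASSWORD>" https://<@IP>:<PORT>/redfish/v1/Systems | jq '.Members'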
Deploy node(s)
The direct way
The direct way means that a cloud image is downloaded and written by the IPA (Ironic Python Agent) to the first available disk of the node.
cd /bifrost/playbooks
export OS_CLOUD=bifrost
export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.json
ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml
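While the playbook runs, the deployment can be followed from another shell in the container; the node should go through deploying and end up active (a minimal sketch):
export OS_CLOUD=bifrost
baremetal node show <NODE> -f value -c provision_state -c last_error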
If software RAID should be used, it's more complicated:
- Set the RAID configuration (more information is available in the CERN tech blog and in the official documentation)
baremetal node set --raid-interface agent <NODE>
baremetal node set <NODE> --target-raid-config '{ "logical_disks": [ { "raid_level": "1", "size_gb": "MAX", "controller": "software", "is_root_volume": true } ]}'
- Clean up and build the software RAID configuration of the node:
baremetal node manage <NODE>
baremetal node clean <NODE> --clean-steps '[{"interface": "raid", "step": "delete_configuration"}, {"interface": "deploy", "step": "erase_devices_metadata"}, {"interface": "raid", "step": "create_configuration"}]'
baremetal node provide <NODE>
Be careful: software RAID installation does not work with all generic cloud images. The AlmaLinux generic cloud image, for example, does not support software RAID.
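After the cleaning steps have run, the RAID configuration actually applied by the agent can be compared with the target one (a sketch; raid_config is populated during create_configuration):
baremetal node show <NODE> -f json -c raid_config -c target_raid_config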
The anaconda way
The anaconda way is an option for highly customized deployments, thanks to a custom kickstart file. It works only with Red Hat-based Linux distributions.
cd /bifrost/playbooks
export OS_CLOUD=bifrost
export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.json
baremetal node set <NODE_NAME> --deploy-interface anaconda
ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml \
-e network_interface=<BIFROST_HOST_NETWORK_INTERFACE> -e ssh_public_key_path=<SSH_PUBLIC_KEY_PATH>
network_interface and ssh_public_key_path are required by the playbook in the anaconda case in order to build and provide the configdrive (which may or may not be used here, depending on the kickstart content!).
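If the kickstart needs to be changed after enrollment, ks_template can be updated directly in instance_info instead of re-enrolling the node (a sketch):
baremetal node set <NODE> --instance-info ks_template=<kickstart_URL>
baremetal node show <NODE> -f json -c instance_info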