User manual
The initial OCP OPF firmware distribution software targets the low-cost, entry-level Banana Pi R4 router (more info (ext)). It configures the router as a managed switch and does not provide specific routing capabilities by default. To be managed, the switch issues a DHCP request during its boot process and gathers all switch ports into a single switch entity. Virtually any port can be used for uplink connectivity, except the 10G SFP LAN port, which is dedicated to the cluster heartbeat and the DRBD synchronization process when cluster mode is activated.
We recommend using the 10G SFP WAN port as the uplink, either through SFP+ connectivity or an RJ45 adapter, leaving the remaining RJ45 ports for BMC boots.
The Banana Pi R4 router supports an NVMe M.2 drive. We recommend installing one with a minimum size of 128GB. The drive is used to host the OpenBMC images, ROM images, and all firmware images and revisions distributed by the switch to the clients.
Although it is possible to use only a microSD card for the O/S and the storage area of the device, the NVMe setup is recommended for better performance. The iSCSI targets are created in the available r/w storage area under the /var/lib/iscsi_disks mount point. If no NVMe drive is detected, all clients read and write from the microSD card, leading to poor I/O performance and increased wear of the device.
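To verify which device backs the storage area, you can use standard Linux tooling from the switch console (a quick check, assuming the iSCSI storage area is mounted under /var/lib/iscsi_disks as described above):
lsblk                        # an nvme0n1 entry should appear when an NVMe drive is detected
df -h /var/lib/iscsi_disks   # shows which block device backs the iSCSI storage area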
Download a pre-compiled image, bpi-r4_jammy_6.8.0.img.gz, to a system with a microSD card reader. Insert a microSD card with a minimum size of 16GB into the reader (the current image is around 7GB), then issue the following commands after identifying the device name of your SD card reader (rdisk2 in our example).
gunzip bpi-r4_jammy_6.8.0.img.gz
dd bs=4M status=progress conv=notrunc,fsync if=bpi-r4_jammy_6.8.0.img of=/dev/rdisk2
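If you are unsure of the device name, list the attached disks before writing (the exact command depends on your host operating system):
lsblk            # Linux
diskutil list    # macOS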
The first boot requires initial configuration: the serial port of the switch must be connected to a terminal in order to provide basic setup parameters. The questions asked cover cluster mode and whether an HTTP proxy is required on the network to which the switch is connected.
In the case of a cluster configuration, the 10G SFP LAN ports of both switches must be connected to each other to provide the heartbeat. After the setup questions are answered, the system continues booting and auto-configures the storage devices. You can connect to the console as administrator with the default credentials (root/bananapi) to monitor filesystem creation and service setup.
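As an illustration, the serial console can be reached with a USB-to-TTL adapter and a terminal emulator (a sketch, assuming the usual 115200 baud console of the Banana Pi R4 and an adapter showing up as /dev/ttyUSB0):
screen /dev/ttyUSB0 115200
minicom -D /dev/ttyUSB0 -b 115200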
The device is automatically mounted and the first-boot systemd services are launched. These include:
- Retrieval of the ROM images from the Internet
- Extraction of the ROM images and preparation for distribution to clients
The duration of this phase is highly dependent on the Internet connection and its speed.
All services are set up once the coredhcp service is in the running state. The lanbr0 IP address is the public management IP address of the switch, and the web UI/API answers on that address when all services are up. No encryption is available as of today (this is part of ongoing development), so please use the HTTP protocol only.
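From the console, you can check that the services are up and find the management address (a quick sketch; coredhcp and lanbr0 are the names used above):
systemctl status coredhcp    # should report active (running)
ip -4 addr show lanbr0       # management IP address answering the web UI/API over HTTP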
When fully started, the UI splash screen will look like this:
And the switch is ready to be used.
The OpenBMC tarball to be provided to the switch must contain 3 files:
- VERSION, which contains the firmware version of the tarball (for example, 5.10.17+git0+3c7d832a99-r0-proliant-g11-20241205225546).
- boot.mtd, which contains a signed FIT image served through the TFTP boot infrastructure.
- iscsi.tgt, which contains an r/w ext4-formatted flat filesystem used as the iSCSI target input.
When successfully unpacked by the switch, these files are stored in the following location as the reference point:
- /var/lib/iscsi_disks/images, where a directory per image is created, named with the version number extracted from the tarball.
The tarball must be compressed using the gzip format and is post-processed by the API. A new tarball can be injected by drag and drop; the drop zone is the chip pictogram available on the splash screen of the web interface.
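As an illustration, a compliant tarball could be built as follows (the output file name is arbitrary; only the three file names inside matter):
tar czf openbmc-image.tar.gz VERSION boot.mtd iscsi.tgt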
When the whole process is successful, the newly uploaded OpenBMC image revision appears in the WebUI.
There is no default image. You can select one by clicking in the default column on the image version you wish to promote.
To remove an image, ensure that the target image is not selected as the default, then click on its ID number in the WebUI.
Default ROMs for HPE servers are downloaded from the Internet during the initial boot of the switch. The process is performed by 2 scripts available in /usr/bin/rom, which generate output in /var/lib/iscsi_disks/roms. For each manufacturer there is a set of scripts: one dedicated to downloading the initial ROM file and one dedicated to post-processing these files to extract the data to be transferred to the BMC. The intent of the ROM distribution solution is to provide a ready-to-flash or ready-to-use image which can be transferred to the host, for example over the eSPI protocol. ROM distribution files usually contain metadata which is removed during that process.
For HPE servers, /usr/bin/rom/hpe contains both scripts, which can be re-executed after the initial setup to download potential ROM updates. They are not run periodically by the system; the user is in charge of running them based on their requirements.
These scripts can be used as examples to support other manufacturers.
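As an illustration only, re-running the HPE scripts could look like the following (the script names below are hypothetical; list the directory first to get the actual names shipped with the image):
ls /usr/bin/rom/hpe              # actual script names shipped with the image
/usr/bin/rom/hpe/download.sh     # hypothetical name: downloads the ROM files
/usr/bin/rom/hpe/postprocess.sh  # hypothetical name: extracts the data for the BMC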
Clients must support a U-Boot environment which can boot from the network with this basic set of commands: dhcp, tftp, and optionally wget. The dhcp command is used to retrieve a default IP address on the management VLAN, while the tftp command is used to load the FIT image onto the client.
You need to set 2 variables in the U-Boot client environment:
- loadaddr, the RAM address where the FIT image is loaded and started. That address is board dependent; on HPE Gen11 machines it must be set to 0x50000000: setenv loadaddr 0x50000000
- vlan, the VLAN number, which must be set to 100: setenv vlan 100
Then you can issue a dhcp command from U-Boot.
Initial calls take time, which is why you may see multiple broadcasts. The switch configures the OpenBMC environment for the new client and copies the relevant FIT image and iSCSI target into a dedicated area on its local storage device, so that upcoming boot operations succeed even if the OpenBMC image associated with that client is deleted from the image repository.
You can then issue a bootm command, which starts the kernel from memory.
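Put together, the client-side sequence from the U-Boot prompt could look like this sketch (the exact tftp arguments depend on the boot file name provided by the switch over DHCP):
setenv loadaddr 0x50000000
setenv vlan 100
dhcp
tftp ${loadaddr}
bootm ${loadaddr}
Each command maps to the steps described above: set the load address and management VLAN, obtain an IP address over DHCP, load the FIT image at loadaddr, and start it from memory.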
The client appears as new in the UI until it is booted. While in the new state (red state), it can be removed to be reconfigured.
Once booted, the web UI reports the client as connected and up.
Each client can be identified by a human-readable label. The label can be edited at any time in the web UI by clicking on the current one and typing in the new one.
While in the new state or shut down (red state), a client can be removed to be reconfigured, or fully deleted, by clicking on the client's ID number.