Introduction to fru-tracker: Event-Driven Hardware Inventory

fru-tracker: A Service Generated by Fabrica

fru-tracker is a new OpenCHAMI inventory API developed with help from Fabrica. Rather than relying on standard CRUD operations where clients manually manage individual hardware components, fru-tracker uses an event-driven reconciliation model to manage hardware state. The service tracks hardware inventory—such as nodes, CPUs, and DIMMs—and maintains an accurate representation of the physical state over time as discovery snapshots are ingested.

Running the Service

As of version 0.2.1, the service can be run locally as a container backed by SQLite. The command below is just one example of running the standalone image; the image can also be deployed to match individual site needs, for example via Kubernetes, Podman systemd quadlets, or other orchestration methods.

To start the service locally with Docker:

docker run -p 8080:8080 -v $(pwd)/data:/data ghcr.io/openchami/fru-tracker:0.2.1 serve --database-url="file:/data/fru-tracker.db?cache=shared&_fk=1"

This command binds port 8080 and mounts a local data directory to persist the SQLite database.

Using the Redfish Collector

The fru-tracker service is passive. It does not actively poll hardware; instead, it relies on an external collector to push state data to it. The repository includes a reference Redfish collector written in Go that targets a BMC.

When executed against a BMC IP, the collector authenticates and walks the Redfish /Systems tree. It extracts hardware data, mapping the Redfish properties into an array of DeviceSpec structs.
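The walk can be sketched as follows. This is a minimal illustration rather than the reference collector's actual code: `parseMembers` and `walkSystems` are hypothetical names, and a real collector would also descend into each system's Processors and Memory collections and handle BMC authentication.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// redfishCollection mirrors the minimal shape of a Redfish collection
// response: an array of members, each identified by an @odata.id URI.
type redfishCollection struct {
	Members []struct {
		OdataID string `json:"@odata.id"`
	} `json:"Members"`
}

// parseMembers extracts the @odata.id of every member from a collection
// payload such as the one returned by /redfish/v1/Systems.
func parseMembers(body []byte) ([]string, error) {
	var c redfishCollection
	if err := json.Unmarshal(body, &c); err != nil {
		return nil, err
	}
	uris := make([]string, 0, len(c.Members))
	for _, m := range c.Members {
		uris = append(uris, m.OdataID)
	}
	return uris, nil
}

// walkSystems fetches the Systems collection from a BMC and returns the
// member URIs; each would then be fetched in turn to read its CPUs and DIMMs.
func walkSystems(client *http.Client, baseURL string) ([]string, error) {
	resp, err := client.Get(baseURL + "/redfish/v1/Systems")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	return parseMembers(body)
}

func main() {
	uris, _ := parseMembers([]byte(`{"Members":[{"@odata.id":"/redfish/v1/Systems/1"}]}`))
	fmt.Println(uris) // prints [/redfish/v1/Systems/1]
}
```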

The collector then uses the Fabrica Go client SDK to package this data into a DiscoverySnapshot resource and posts it to the API.

go run ./cmd/collector/main.go --ip <BMC_IP_ADDRESS>

The console output reports the number of devices found (e.g., 1 Node, 2 CPUs, 4 DIMMs) and confirms that the snapshot was created on the server.

Building Your Own Collector

Because fru-tracker is agnostic to the collection method, you can build custom collectors tailored to your specific environment using any protocol (e.g., Redfish, SSH, dmidecode, etc.).

The core requirement is formatting your discovered inventory into an array of DeviceSpec objects and wrapping them inside a DiscoverySnapshot.

One of the most powerful features of the DeviceSpec model is the properties field. This is an arbitrary key-value map that allows you to store any custom attributes discovered by your collector (as long as the keys are lowercase alphanumeric, using underscores for spaces and dots for namespaces).
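As a sketch, a collector could normalize raw attribute names into that key format with a small helper like the one below. `sanitizeKey` is a hypothetical name for illustration, not part of the fru-tracker API:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// sanitizeKey normalizes an arbitrary attribute name into the key format
// described above: lowercase alphanumeric, underscores in place of spaces,
// dots preserved as namespace separators.
func sanitizeKey(name string) string {
	var b strings.Builder
	for _, r := range strings.ToLower(name) {
		switch {
		case unicode.IsLower(r) || unicode.IsDigit(r) || r == '.' || r == '_':
			b.WriteRune(r)
		case r == ' ' || r == '-':
			b.WriteRune('_')
			// any other character is dropped
		}
	}
	return b.String()
}

func main() {
	fmt.Println(sanitizeKey("Redfish URI"))      // prints redfish_uri
	fmt.Println(sanitizeKey("vendor.Asset-Tag")) // prints vendor.asset_tag
}
```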

Here is a snippet from the reference Go collector showing how raw Redfish properties are mapped into the DeviceSpec, utilizing the properties map to store the unique redfish_uri keys needed by the reconciler:

// mapCommonProperties maps Redfish fields to the API's DeviceSpec struct.
func mapCommonProperties(rfProps CommonRedfishProperties, deviceType, redfishURI, parentURI, parentSerial string) *v1.DeviceSpec {
	partNum := rfProps.PartNumber
	if partNum == "" {
		partNum = rfProps.Model
	}
    
	// Encode the URIs as JSON values for the RawMessage map. json.Marshal
	// on a plain string cannot fail, so the error is discarded.
	uriBytes, _ := json.Marshal(redfishURI)
	parentURIBytes, _ := json.Marshal(parentURI)
    
	// Store arbitrary or protocol-specific data in the properties map
	props := map[string]json.RawMessage{
		"redfish_uri":        uriBytes,
		"redfish_parent_uri": parentURIBytes,
	}

	return &v1.DeviceSpec{
		DeviceType:         deviceType,
		Manufacturer:       rfProps.Manufacturer,
		PartNumber:         partNum,
		SerialNumber:       rfProps.SerialNumber,
		Properties:         props,
		ParentSerialNumber: parentSerial,
	}
}

Once you have an array of these specifications, you use the generated Fabrica SDK to post them as a single snapshot payload:

	// Create the request using the SDK's generated request type
	createReq := fabricaclient.CreateDiscoverySnapshotRequest{
		Metadata: fabrica.Metadata{
			Name: fmt.Sprintf("snapshot-%s-%d", bmcIP, time.Now().Unix()),
		},
		Spec: v1.DiscoverySnapshotSpec{
			RawData: json.RawMessage(snapshotData),
		},
	}

	// Use the SDK to create the snapshot resource
	createdSnapshot, err := sdkClient.CreateDiscoverySnapshot(ctx, createReq)

By following this pattern, you can ingest any hardware metric or identifier relevant to your data center.

Reconciliation and State Tracking

Once the DiscoverySnapshot is posted, the API accepts the payload and the underlying event system automatically triggers the DiscoverySnapshotReconciler.

This reconciler executes a two-pass process in the background:

  1. Ingestion: The reconciler parses the raw JSON payload and performs a “get-or-create” operation for each device. It uses the redfish_uri within the properties map as a unique primary key to ensure components without serial numbers are tracked accurately.
  2. Relationship Linking: The reconciler evaluates the parentSerialNumber (provided by the collector) for child components like CPUs and DIMMs. It locates the corresponding parent Node in the database and updates the child’s parentID field with the Node’s specific database UUID.
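The get-or-create step in the first pass can be illustrated with a simplified in-memory sketch. The real reconciler operates against the database, and the type and method names here are assumptions for illustration only:

```go
package main

import "fmt"

// device is a simplified stand-in for a stored Device record.
type device struct {
	ID         int
	RedfishURI string
}

// store is an in-memory stand-in for the service's SQLite-backed store,
// indexed by the redfish_uri property the reconciler uses as a key.
type store struct {
	nextID int
	byURI  map[string]*device
}

// getOrCreate returns the existing device for a redfish_uri, or creates
// one if none is tracked yet, so components without serial numbers still
// resolve to a single stable record across snapshots.
func (s *store) getOrCreate(uri string) *device {
	if d, ok := s.byURI[uri]; ok {
		return d
	}
	s.nextID++
	d := &device{ID: s.nextID, RedfishURI: uri}
	s.byURI[uri] = d
	return d
}

func main() {
	s := &store{byURI: map[string]*device{}}
	a := s.getOrCreate("/redfish/v1/Systems/1/Memory/DIMM0")
	b := s.getOrCreate("/redfish/v1/Systems/1/Memory/DIMM0")
	fmt.Println(a.ID == b.ID) // prints true: the same record is reused
}
```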

Instead of requiring clients to compute diffs between raw hardware snapshots, this architecture natively tracks state over time. When a subsequent DiscoverySnapshot is pushed reflecting a physical modification—such as a DIMM replacement—the reconciler compares the new payload against the existing database records. It automatically updates the specific Device records to match the current physical state.
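That convergence step can be sketched as a field-by-field comparison. This is again a simplified illustration with hypothetical names, not the reconciler's actual implementation:

```go
package main

import "fmt"

// storedDevice is a simplified stand-in for a Device record in the database.
type storedDevice struct {
	SerialNumber string
	PartNumber   string
}

// applySpec copies the fields of an incoming DeviceSpec-like value onto a
// stored record and reports whether anything changed, mirroring how the
// reconciler updates records to match the latest snapshot.
func applySpec(d *storedDevice, serial, partNumber string) bool {
	changed := false
	if d.SerialNumber != serial {
		d.SerialNumber = serial
		changed = true
	}
	if d.PartNumber != partNumber {
		d.PartNumber = partNumber
		changed = true
	}
	return changed
}

func main() {
	// A DIMM replacement shows up as a new serial number in the same slot.
	d := &storedDevice{SerialNumber: "OLD123", PartNumber: "PN-0001"}
	fmt.Println(applySpec(d, "NEW456", "PN-0001")) // prints true
	fmt.Println(applySpec(d, "NEW456", "PN-0001")) // prints false: already converged
}
```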