Speed Change Related Issue #1
- Repeated speed changes can result in the link not coming up at the intended target speed.
- A follow-on attempt should bring the link back. In extremely rare scenarios, a full reboot might be required.
Speed Change Related Issue #2
- In extremely rare cases, repeated link rate changes might also result in the following:
- PCIe access becoming unresponsive.
- While traffic is active in the system and PM D3 is also enabled along with rate changes, the host might receive a completion timeout for the read when the pre-read performed before the PM D3 sequence targets the EP ECAM space.
- In the case of PM D3, AMD recommends using any valid EP address except the ECAM space for the pre-read before initiating the PM D3 sequence.
In all other cases, waiting approximately 20 msec after the link rate change and before attempting any PCIe access can help.
However, in scenarios where the transaction still does not complete, a full reboot (power cycle and image reprogramming) is required.
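The pre-read recommendation above can be sketched as a simple guard: before initiating the PM D3 sequence, verify that the pre-read address falls outside the EP's ECAM window. The ECAM base and size values used in the test are illustrative assumptions, not values from this document.

```c
#include <stdbool.h>
#include <stdint.h>

/* Returns true when addr lies inside the ECAM window [base, base + size).
 * Pre-reads issued before a PM D3 sequence should avoid such addresses. */
static bool addr_in_ecam(uint64_t addr, uint64_t ecam_base, uint64_t ecam_size)
{
    return addr >= ecam_base && addr < ecam_base + ecam_size;
}

/* Any valid EP address is acceptable for the pre-read except ECAM space. */
static bool valid_preread_target(uint64_t addr, uint64_t ecam_base, uint64_t ecam_size)
{
    return !addr_in_ecam(addr, ecam_base, ecam_size);
}
```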
Speed Change Related Issue #3
- In RP configuration with a core clock of 1 GHz, when the PCIe link rate changes from Gen1/Gen2 to Gen3/Gen4/Gen5, the link can fail to reach the intended speed or, in rare cases, go down.
- When the rate change is performed from Gen1/Gen2 to Gen3, Gen4, or Gen5 speeds, an additional write of 1 to the Perform Equalization bit in the Link Control 3 register in the Root Complex PCIe configuration space is required.
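Per the PCIe specification, Perform Equalization is bit 0 of the Link Control 3 register, which lives in the Secondary PCI Express Extended Capability. A minimal sketch of composing that write is shown below; the config-space read/write primitives themselves are platform specific and are assumed to exist elsewhere.

```c
#include <stdint.h>

/* Perform Equalization is bit 0 of Link Control 3 (Secondary PCIe
 * Extended Capability). This helper only computes the register value;
 * writing it back to the Root Complex config space is platform specific. */
#define LINK_CTRL3_PERFORM_EQ (1u << 0)

static uint32_t set_perform_equalization(uint32_t link_ctrl3)
{
    return link_ctrl3 | LINK_CTRL3_PERFORM_EQ;
}
```

After requesting the rate change to Gen3/Gen4/Gen5, read Link Control 3, pass it through this helper, and write the result back using your platform's configuration-write primitive.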
Speed Change Related Issue #4
- In rare cases where DMA traffic is active and repeated speed changes are performed, the MSI-X interrupt might not be generated.
- Remove the queue and re-add it after the speed change is complete.
Link Autonomous Bandwidth Status (LABS) Bit
- As a Root Complex performing link width/rate changes, the link width change works as expected. However, the PCIe protocol requires the LABS bit to be set, and it is not set after the link width/rate change. Note: This is an informational bit and does not impact actual functionality.
- Ensure the software/application ignores the LABS bit, as it is an informational bit and does not impact functionality.
Note: For any application, AMD recommends making sure the link is quiesced and no transactions are pending before performing any link rate changes.
QDMA data transfer ordering
While the PCIe Bridge master follows PCIe ordering rules, there is no ordering enforcement between the PCIe AXI Bridge Master path and the internal DMA registers or DMA data paths. In some cases, this can cause a race condition between AXI Bridge Master and DMA register transfers. The following is a workaround:
- Assign a separate BAR to access the QDMA queue space registers and set the steering to route it to the NoC. You can then loop AXI Master transfers back onto the AXI Slave interface. Set the BAR size to 256K.
- Do not make this BAR a DMA BAR; instead, make it a separate AXI BAR that maps the QDMA base registers, and set the steering to route it to the NoC. A DMA BAR terminates the access internally, so ordering is not maintained. To work around this, the access must go out on the AXI Master interface and then be looped back to the AXI Slave interface.
- Address offsets of the queue space registers are listed in the AXI Slave register space section. For controller 0, the DMA registers are at 0x6_1000_0000; for controller 1, they are at 0x7_1000_0000.
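The two per-controller register bases above can be captured in a small lookup helper, a sketch of which follows. Register offsets within each base come from the AXI Slave register space section and are not reproduced here.

```c
#include <stdint.h>

/* DMA register base addresses per the AXI Slave register space section:
 * controller 0 at 0x6_1000_0000, controller 1 at 0x7_1000_0000. */
static uint64_t qdma_dma_reg_base(int controller)
{
    switch (controller) {
    case 0:  return 0x610000000ULL;
    case 1:  return 0x710000000ULL;
    default: return 0; /* invalid controller index */
    }
}
```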
SR-IOV is not supported in Bridge mode.
Relaxed Ordering in Bridge Setup
- With any read request from the Slave Bridge, the request TLP will not have the Relaxed Ordering bit set.
- Only an MPS of up to 512 bytes is supported in DMA and Bridge modes.
Secondary Bus Reset (SBR)
- If an SBR is issued on H10 devices, 10 ms of additional delay is required after SBR de-assertion.
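In the PCIe Type 1 configuration header, Secondary Bus Reset is bit 6 of the Bridge Control register (offset 0x3E). A sketch of the assert/de-assert value computation follows; the configuration read/write and the 10 ms delay primitive are platform specific and omitted.

```c
#include <stdint.h>

/* Secondary Bus Reset is bit 6 of the Bridge Control register in the
 * Type 1 header. Assert it, de-assert it, then wait the additional
 * 10 ms delay called out above before accessing the secondary bus. */
#define BRIDGE_CTL_SBR (1u << 6)

static uint16_t bridge_ctl_assert_sbr(uint16_t bridge_ctl)
{
    return bridge_ctl | BRIDGE_CTL_SBR;
}

static uint16_t bridge_ctl_deassert_sbr(uint16_t bridge_ctl)
{
    return bridge_ctl & (uint16_t)~BRIDGE_CTL_SBR;
}
```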
Master Bridge AER Errors
- If a packet is dropped in the PCIe domain for any reason, an AER error is logged. However, if a packet is dropped in the AXI-MM domain due to a decode error or slave error, no AER error is logged.
Slave Bridge Transaction Ordering
- Ordering between write, read, and sideband transactions is not strictly enforced at the AXI Slave Bridge.
- If strict ordering is required, wait for the appropriate AXI response before issuing the dependent transaction.
Power Management - ASPM L1/L0s/PM D3
- Enabling ASPM L0s/ASPM L1 could result in correctable errors being reported on the link by both link partners, such as replay timer timeout, replay timer rollover, and receiver error.
- A PCIe Endpoint device might also log errors when a Configuration PM D3 transition request arrives while traffic is not quiesced.
- A PCIe Root Port device does not support ASPM L1 or L0s.
- It is recommended that the application disable correctable error reporting, or ignore correctable errors reported, when the link transitions to ASPM L0s/ASPM L1.
- For the transition to D3Hot, software must ensure that the link is quiesced. To ensure Memory Write packets have finished, issue a Memory Read request to the same location. When the completion packet is received, the link is quiesced and the PM D3 request can be issued.
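The quiesce step above amounts to a read-back flush: posted Memory Writes are known to have completed once a Memory Read to the same location returns. A minimal sketch follows, with ordinary memory standing in for a device BAR location in the test; the volatile pointer and value are illustrative.

```c
#include <stdint.h>

/* Read-back flush: issue the final write, then read the same location.
 * The read completion must follow the earlier posted writes, so its
 * arrival confirms the writes have finished; the PM D3 request can
 * then be issued safely. */
static uint32_t flush_writes_with_readback(volatile uint32_t *bar_loc,
                                           uint32_t last_value)
{
    *bar_loc = last_value; /* final posted Memory Write */
    return *bar_loc;       /* Memory Read to the same location */
}
```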
Concurrent MSI-X Capability and MSI Capability
- CPM5 cannot be configured at compile time with both MSI-X Internal capability and MSI capability enabled.
- For XDMA and AXI4 Bridge modes, MSI-X Internal capability is used, therefore no workaround is available. The choice to enable either MSI-X or MSI capability must be made when configuring CPM5 IP.
- This limitation does not apply to QDMA mode because MSI interrupts are not supported in that mode.