AMD, Xilinx Claim World Record for Machine Learning Inference

bit_user

Titan
Ambassador

The original blog post specifies a batch size of 1. This caught my attention, as it's not usually necessary to use such a small batch size. I have to wonder if their performance lead quickly evaporates as batch size goes up.
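For anyone curious why batch size matters so much here, a toy sketch below shows the usual effect: per-sample throughput tends to climb as batch size grows, because the hardware gets better utilization per pass. This uses a plain NumPy matrix multiply as a hypothetical stand-in for a real model; actual accelerator numbers will differ, but the shape of the curve is typically similar.

```python
import time
import numpy as np

# Hypothetical stand-in "model": one dense layer (1024 -> 1024).
rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 1024)).astype(np.float32)

def throughput(batch_size, iters=50):
    """Samples (rows) processed per second at a given batch size."""
    x = rng.standard_normal((batch_size, 1024)).astype(np.float32)
    start = time.perf_counter()
    for _ in range(iters):
        _ = x @ weights  # one "inference" pass over the whole batch
    elapsed = time.perf_counter() - start
    return batch_size * iters / elapsed

for bs in (1, 8, 64):
    print(f"batch {bs:>2}: {throughput(bs):,.0f} samples/s")
```

On most machines the samples/s figure rises noticeably from batch 1 to batch 64, which is why a record set at batch 1 says more about latency than about peak throughput.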