I've long felt it was more rewarding to design an algorithm to solve a Rubik's Cube than to solve it myself. I guess this is sort of like that.
Bots can obviously spoil or enhance a game; it depends on who is using them, how, and on the game itself.
I'd argue that there's a lot more downside to bots than upside, if the game is well-designed to begin with. IMO, it's not good design for games to require a lot of grind to level up or acquire resources. That's just game designers being lazy, or perhaps creating incentives for in-game purchases.
As a developer, I can see how having bots that can play your game could be useful for QA purposes, but only if the bots were quick & easy to train. Even then, they could never fully replace play testers.
On that last point (i.e. being "quick & easy to train"), we sort of come full circle to my first point about it being rewarding to design an algorithm to win a game. Except, in this case, what strikes me as interesting is the two-stage training strategy they used. The graphic in the article shows how they trained one model to label a much larger dataset, which was then used to train a second model. The basic concept isn't super novel, but I don't know enough to comment on which algorithms they used or how novel their application is in this context.
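For anyone unfamiliar with the pattern, here's a minimal sketch of that "label, then train" idea, often called pseudo-labeling. Everything here (the data, the choice of models, the confidence threshold) is made up for illustration; I have no idea what the article's actual pipeline looks like:

    # Sketch of two-stage training via pseudo-labeling. All data and
    # model choices are hypothetical, not the article's actual setup.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Stage 0: a small hand-labeled dataset (synthetic stand-in).
    X_small = rng.normal(size=(200, 8))
    y_small = (X_small[:, 0] + X_small[:, 1] > 0).astype(int)

    # Stage 1: train a cheap "labeler" model on the small set.
    labeler = LogisticRegression().fit(X_small, y_small)

    # A much larger pool of unlabeled examples (e.g. recorded game states).
    X_large = rng.normal(size=(20000, 8))

    # Use the labeler to generate pseudo-labels, keeping only
    # high-confidence predictions to limit label noise.
    proba = labeler.predict_proba(X_large)
    confident = proba.max(axis=1) > 0.9
    X_pseudo = X_large[confident]
    y_pseudo = proba[confident].argmax(axis=1)

    # Stage 2: train the "real" model on the pseudo-labeled data.
    model = RandomForestClassifier(n_estimators=100).fit(X_pseudo, y_pseudo)
    print(f"kept {confident.sum()} of {len(X_large)} pseudo-labeled examples")

The confidence filter is the usual guard in this setup: it trades away some of the large dataset to keep the labeler's mistakes from compounding in the second model.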