Presumably, since you have to flash your eyeballs every time you transact, there are lots of opportunities for apps or even fake terminals to intercept the data without the user knowing.
It's no easier than intercepting a password. The problem is that you have only one set of eyeballs, whereas a password can be changed. Also, if this ends up being a universal authentication system, as they seem to be planning, then it raises the stakes for such exploits vs. grabbing just a couple passwords for random websites.
That's a whole lot easier than the kidnapping route.
The key question is exactly what is sent to the authentication provider. Either they need to distribute the deep learning model to every app and device capable of authenticating, so that it can do the feature extraction locally, or they just upload the high-res image to a cloud-based authentication service. If it's the latter, then your PC or phone wouldn't be able to intercept the actual key.
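The two architectures above could be sketched roughly like this. Everything here is hypothetical: the "feature extractor" is a stand-in (a real system would run a trained iris-recognition model), and the function names are made up for illustration:

```python
import hashlib


def extract_features(image_bytes: bytes) -> bytes:
    # Stand-in for a real on-device iris feature extractor (e.g. a
    # deep learning model producing a template). Here we just hash
    # the raw bytes to get some derived value.
    return hashlib.sha256(image_bytes).digest()


def authenticate_on_device(image_bytes: bytes) -> bytes:
    # Option A: feature extraction runs locally, so only the derived
    # template ever leaves the device -- the raw image stays put.
    template = extract_features(image_bytes)
    return template  # this is what gets sent to the service


def authenticate_in_cloud(image_bytes: bytes) -> bytes:
    # Option B: the raw high-res image is uploaded and the service
    # does the extraction, so the device never holds the actual key.
    return image_bytes  # this is what gets sent to the service
```

In option A the "key" (the template) passes through the device, so local malware could grab it; in option B the device only ever handles the raw image, which is the point the comment above is making.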
Even having a snapshot of someone's iris would be bad, since you could just upload that to authenticate them. However, it'd be interesting if they went so far as to store a signature computed over each image submitted for authentication, so they could detect if you were trying to reuse an old image.
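A minimal sketch of that replay check, assuming (hypothetically) the service just stores a cryptographic hash of every image it has ever been sent and rejects exact resubmissions:

```python
import hashlib


class ReplayDetector:
    """Hypothetical server-side check: remember a signature of each
    submitted image and reject bit-identical resubmissions."""

    def __init__(self) -> None:
        self.seen: set[bytes] = set()

    def check(self, image_bytes: bytes) -> bool:
        sig = hashlib.sha256(image_bytes).digest()
        if sig in self.seen:
            return False  # exact replay of a previously seen capture
        self.seen.add(sig)
        return True
```

Note the limitation: an exact-match hash only catches bit-identical reuse. Two live captures of the same iris will never match bit-for-bit, so a real system would need a perceptual or feature-space similarity check to flag a replayed photo that's been re-encoded or cropped.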