One example could be a doorbell that's able to recognize your face or voice locally, without needing to transmit that data to the internet for remote inferencing. This would also address one of the main concerns about today's AI implementations: without our data leaving the device, we can rest easy knowing that our conversations stay within our homes' walls.
Oh, they'll still be networked. That's the whole point of smart devices - the doorbell cam recognizes who's at the door and sends a notification to your phone (via the cloud). And when you enroll new faces for it to recognize, that data will go to the cloud so that multiple devices can share it (among other reasons).
Pushing inferencing out to the edge is mostly a cost-saving measure, so that device makers don't have to shoulder expensive fees for processing all of the devices' video in the cloud. The benefit to consumers is that you don't need as much bandwidth (although we're told 5G will solve that), and your device can still work when you're in a dead spot or have other connectivity issues.
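To put rough numbers on that bandwidth difference, here's a quick back-of-the-envelope comparison between streaming video to the cloud for inference and running inference on-device, sending only small event notifications. All the figures (bitrate, event size, trigger count) are illustrative assumptions, not measurements from any real product:

```python
# Rough bandwidth comparison: cloud inference (stream all video upstream)
# vs. edge inference (only send small event notifications).
# All numbers below are illustrative assumptions.

VIDEO_BITRATE_KBPS = 1000   # assume a ~1 Mbps continuous video stream
EVENT_SIZE_KB = 50          # assume one "person at door" event = thumbnail + metadata
EVENTS_PER_DAY = 20         # assume the doorbell triggers ~20 times a day

SECONDS_PER_DAY = 24 * 60 * 60

def cloud_inference_mb_per_day():
    """Video streamed to the cloud all day for remote inferencing."""
    return VIDEO_BITRATE_KBPS * SECONDS_PER_DAY / 8 / 1024  # kilobits -> MB

def edge_inference_mb_per_day():
    """Inference runs locally; only event notifications leave the device."""
    return EVENT_SIZE_KB * EVENTS_PER_DAY / 1024  # KB -> MB

if __name__ == "__main__":
    print(f"cloud: {cloud_inference_mb_per_day():.0f} MB/day")
    print(f"edge:  {edge_inference_mb_per_day():.2f} MB/day")
```

Under these assumptions the difference is several orders of magnitude (roughly 10 GB/day vs. about 1 MB/day), which is why device makers would rather pay once for an edge inference chip than continuously for cloud video processing.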
Although this can be pro-privacy, I don't believe it will be. Device makers have too much to gain by scraping, collecting, mining, and selling people's data. Consumers also benefit from connectivity and from automatically receiving updated deep learning models that better fit their device's usage.