I think Tango is mostly a stepping stone for proper AR glasses.
It's difficult for me to see benefits that justify the added cost of depth sensors in the average phone. Even in high-end phones, I think consumers would opt to forgo them to reduce weight, size, cost, and battery drain.
Part of it, I think. I don't have any experience with Hololens, nor have I used Tango's Unity interface. But, with those caveats, what Tango will do is:
1. Let you capture a depth image and an RGB image.
2. Learn an area, and give you the device's absolute position & orientation within it (termed the tablet's "pose").
3. Capture & reconstruct scene geometry (a feature exposed only recently).
The technique behind 2 & 3 is called SLAM (Simultaneous Localization And Mapping), and it's not particularly new. What Tango does is package it up in a nice API and hardware platform, so that developers can build apps atop it and independent device makers can build devices those apps will run on. The API is largely agnostic about which combination of sensors is used, so the depth sensing could be structured light, time-of-flight (ToF), stereo, or (typically) some combination.
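To make item 1 concrete: a depth image plus the camera's intrinsics is all you need to recover a 3D point cloud, via standard pinhole back-projection. This is a generic sketch of the math, not Tango's actual API; the intrinsic values and the tiny "depth image" are made up for illustration.

```python
# Back-project a depth image into a 3D point cloud with the pinhole model:
#   X = (u - cx) * d / fx,  Y = (v - cy) * d / fy,  Z = d
# Generic illustration only -- not the Tango API; intrinsics are invented.

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: dict mapping (u, v) pixel -> depth in meters (0 = no reading)."""
    points = []
    for (u, v), d in depth.items():
        if d <= 0:
            continue  # skip pixels where the sensor returned no depth
        x = (u - cx) * d / fx
        y = (v - cy) * d / fy
        points.append((x, y, d))
    return points

# A 2x2 toy "depth image" with hypothetical intrinsics.
depth = {(0, 0): 2.0, (1, 0): 2.0, (0, 1): 0.0, (1, 1): 4.0}
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(pts)  # three points; the zero-depth pixel is dropped
```

A real pipeline would also register these points against the RGB image and fuse clouds from many poses, which is essentially what the mesh-reconstruction feature in item 3 does.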
This is a natural first step towards building a generalized AR platform. I am waiting for some AR glasses to support it, whether it's Google Glass 2, Magic Leap, Sulon, or somebody else.
In fact, Intel has a Tango phone (which may never see the light of day) and has demo'd a GearVR-style face mount, to use the phone as pass-through (non-transparent) AR glasses. I think Google even has a version of Cardboard for use with their tablet.
BTW, one difference between Tango and Hololens might be how easily you can insert objects into the scene. From what I've seen, Tango doesn't (yet) make it easy to place a 3D object in the physical world and have it automatically rendered whenever that location is visible. Maybe in the Unity API, but not (yet) in the C or Java APIs.
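To illustrate why the pose matters for inserting objects: once a virtual object is anchored at a world coordinate, the renderer must transform that point into the device's frame every frame, using the current pose. Here's a minimal quaternion-math sketch of that step; none of this is Tango API, and the pose values are invented.

```python
# Transform a world-anchored point into the device frame using a pose
# (translation + unit quaternion) -- the per-frame step a renderer needs
# to keep a virtual object pinned to a physical location.
# Illustrative only; not the Tango API, and the pose values are made up.

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    # Rotate vector v by unit quaternion q:  q * v * q^-1
    w, x, y, z = quat_mul(quat_mul(q, (0.0,) + tuple(v)), quat_conj(q))
    return (x, y, z)

def world_to_device(pose_t, pose_q, p_world):
    # The pose maps device frame -> world, so invert it for world -> device.
    d = tuple(pw - tw for pw, tw in zip(p_world, pose_t))
    return rotate(quat_conj(pose_q), d)

# Object anchored 3 m in front of the world origin; device has moved 1 m
# along x with identity orientation (hypothetical numbers).
anchor = (0.0, 0.0, 3.0)
pose_t = (1.0, 0.0, 0.0)
pose_q = (1.0, 0.0, 0.0, 0.0)  # identity rotation
print(world_to_device(pose_t, pose_q, anchor))  # → (-1.0, 0.0, 3.0)
```

A platform that "makes insertion easy" essentially does this bookkeeping for you, re-running the transform against the latest pose so the object stays put as the device moves.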