HTC Vive Going Wireless In 2017 With TPCAST Wireless Tether Upgrade Kit


MrBonk

Commendable
Oct 13, 2016
7
0
1,510
Leaves questions, though:
1. Latency
2. Is signal interruption a possibility?
3. Image quality loss. (Is it just transmitting the uncompressed signal as-is, or is it encoding it lossily on the fly?)
 

Jeff Fx

Reputable
Jan 2, 2015
328
0
4,780


There's a little more info on Reddit. https://www.reddit.com/r/Vive/comments/5cdy6p/some_informations_about_vive_wireless_adapter/
 

bit_user

Polypheme
Ambassador
From the article:
TPCAST said it would offer multiple different capacity battery packs that provide 2-5 hours of tether-free VR.

I dislike the fact that it adds weight to the HMD. I'd much rather have it in a backpack or belt pack.
 

9th

Distinguished
Nov 11, 2010
18
0
18,510
I hope they're not using Samsung batteries. Having something potentially explosive strapped to the back of your head sounds like a good way to commit suicide. I would rather have it attached to my hip, maybe encased in a blast-proof container. Less weight added to your head as well.
 


Actually, adding some weight at the back of the head is not so bad. It helps balance out the front-heavy VR headset, meaning more of the weight can be held by the strap over the head without clamping the headset against the face.
 

arneberg

Distinguished
Jul 1, 2014
10
1
18,510
It adds up to 15 ms of latency, so the original 11 ms plus 15 ms, and you get lag. Plus only 1.5 hours of battery time. This is 5 years too early, unless something drastic happens with wireless systems.
 

bit_user

Polypheme
Ambassador
Source?

I actually think it's possible, but it would require a purpose-built wireless protocol. The mistake everyone seems to be making is to use existing protocols that weren't designed for such ultra-low latency. Then, both the video and control subsystems will need to be adjusted to interpolate/extrapolate, in the event of errors.

I don't see a bolt-on wireless solution being effective. The HMD needs to be engineered around it, from the ground up.
 

arneberg

Distinguished
Jul 1, 2014
10
1
18,510


From the TPCAST website for the product.
 

caustin582

Distinguished
Oct 30, 2009
95
3
18,635
Wow, can't wait for this to become available (or something similar from a different company). My number one annoyance when using my Vive is having to mind the cord. It pretty much kills the experience in faster paced games that require a lot of movement. I don't mind the extra weight of the battery, but I hope the wireless transmission doesn't add too much extra lag.

I wonder why more companies aren't working on something like this, considering how almost everyone who owns a Vive seems to want a wireless solution.
 


Lots of other companies are working on this. Just none of them have released products.

It's possible TPCAST has beaten everyone else to the punch, achieving usable latency. But it's also possible that they've just lowered the standards and are releasing a product with too much latency to really provide a good VR experience.

I'm anxious to see reviews of this. It could be a huge success or a huge facepalm.
 

bit_user

Polypheme
Ambassador
According to their own specs, the latency is pretty bad. 15 ms is no better than Intel's prototype (which used some existing wireless display protocol whose name I don't remember).

For good latency, you need to achieve significantly higher data rates than necessary for normal video display @ this resolution. To display 2160 x 1200 x 24 bpp @ 90 Hz, you need 5.6 Gbps. However, at that data rate, it would take over 11 ms to transmit a frame* from the PC to the HMD (which must complete before it can be displayed). If you wanted to get that latency down to about 1.1 ms, then you'd need to increase the data rate to 56 Gbps.

That's why it's not so simple, and why MIT is fussing about with millimeter waves. I think it's also pretty safe to say that some simple form of video compression will be instrumental. I also expect ATW will migrate into the HMD itself.

* Presumably, they could halve this by alternately transmitting & displaying frames for the left & right eyes.
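
For anyone who wants to sanity-check that arithmetic, here's a quick back-of-the-envelope sketch (my own calculation from the figures above, not TPCAST's numbers):

```python
# Uncompressed video bandwidth needed for the Vive's panels (sketch).
WIDTH, HEIGHT = 2160, 1200   # combined resolution across both eyes
BPP = 24                     # bits per pixel, uncompressed RGB
REFRESH_HZ = 90

bits_per_frame = WIDTH * HEIGHT * BPP        # ~62.2 Mbit per frame
min_rate_bps = bits_per_frame * REFRESH_HZ   # ~5.6 Gbps just to keep up

def frame_transmit_ms(link_rate_gbps: float) -> float:
    """Time to push one uncompressed frame over a link of the given rate."""
    return bits_per_frame / (link_rate_gbps * 1e9) * 1e3

print(f"minimum sustained rate: {min_rate_bps / 1e9:.1f} Gbps")
print(f"frame time @ 5.6 Gbps: {frame_transmit_ms(5.6):.1f} ms")  # ~11.1 ms
print(f"frame time @ 56 Gbps: {frame_transmit_ms(56):.1f} ms")    # ~1.1 ms
```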
 


Compression is not the answer, since that adds latency too. Foveated rendering would help, as the total rendered resolution would be reduced.

Alternating between left and right eye would probably not be a comfortable solution either.
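
To put a rough number on the foveated-rendering savings, here's a toy estimate (my own assumptions about region size and peripheral resolution, not FOVE's):

```python
# Toy foveated-rendering estimate: full resolution in a central region,
# half the linear resolution (1/4 the pixels) everywhere else.
W, H = 2160, 1200            # combined panel resolution
FOVEA_FRAC = 0.5             # central 50% of each axis kept at full detail

full = W * H
fovea = (W * FOVEA_FRAC) * (H * FOVEA_FRAC)
periphery = (full - fovea) / 4           # half resolution per axis
print(f"pixel-shading speedup: ~{full / (fovea + periphery):.1f}x")
```

With these assumptions it works out to a bit over a 2x reduction in shaded pixels, in line with the figures quoted for eye-tracked headsets.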
 

bit_user

Polypheme
Ambassador
Well, I say that having seen the insides of x264 (a popular, open-source H.264 encoder) and modified it. There are techniques which don't offer the same benefits as H.264, but are much cheaper & lower-latency. If you're only looking to get a factor of 2 or 3, it's doable with minimal quality loss and only a few scanlines' worth of extra latency.

As for alternating L/R, this is how LCD shutter glasses work. But they traditionally use a far lower framerate. If we're talking about 90 Hz, it seems to me that should be viable. Ideally, you'd render the eyes separately, and send each one as it finishes. I realize this involves a bit more work, especially with Nvidia having made a big fuss about how they can now simultaneously project geometry onto multiple image planes. But the work to actually shade the pixels doesn't really change, whether you render both eyes simultaneously or alternately.

Even if the eyes are rendered simultaneously, the ATW really should happen in the HMD, right before the image is displayed to the respective eye. This would be beneficial no matter whether you're using a wire or not. The ATW can actually happen as the data is received by the HMD, using the extrapolated pose at the time that display is expected to happen.

BTW, I thought of mentioning foveated rendering, but figured that'll probably happen regardless. I think FOVE said it results in approximately a 2x speedup. Of course, the potential exists for greater improvements, but depends on even more accurate eye tracking & modeling.

In summary: the potential for low-latency wireless exists, but it's not trivial and needs to be engineered as an end-to-end system. I expect to see a good effort from at least one of the big players, in 2017.
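
To make the in-HMD ATW timing concrete, here's a minimal sketch of the idea (toy names and a constant-angular-velocity assumption of my own, not any real SDK's API):

```python
import numpy as np

# Toy sketch: extrapolate head yaw to the predicted display time, then
# estimate the horizontal pixel shift the timewarp must apply.
def extrapolate_yaw(yaw_rad: float, yaw_rate_rad_s: float, dt_s: float) -> float:
    """Predict yaw at scan-out, assuming constant angular velocity."""
    return yaw_rad + yaw_rate_rad_s * dt_s

def timewarp_shift_px(yaw_err_rad: float, fov_rad: float, width_px: int) -> float:
    """Approximate horizontal shift, for small yaw errors, in pixels."""
    return yaw_err_rad / fov_rad * width_px

# Example: the frame's pose is 14 ms stale by the time it reaches the HMD;
# the head is turning at 100 deg/s; 100-degree FOV, 1080 px per eye.
stale_s = 0.014
yaw_rate = np.deg2rad(100)
rendered_yaw = 0.0                                   # pose used for rendering
actual_yaw = extrapolate_yaw(rendered_yaw, yaw_rate, stale_s)
shift = timewarp_shift_px(actual_yaw - rendered_yaw, np.deg2rad(100), 1080)
print(f"correction: ~{shift:.0f} px")
```

The correction itself is cheap; what matters is that it uses sensor data sampled as late as possible, which is why the HMD side of the link is the right place for it.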
 

bit_user

Polypheme
Ambassador
So, I did just end my last post with the statement "it's not trivial and needs to be engineered as an end-to-end system."

That said, I expect the APIs provided by popular engines, if not also Steam and Oculus, allow for ATW to migrate from the PC to the HMD. Obviously, the HMD hardware would need to be substantially beefed up with an ASIC designed for this purpose. I'm pretty sure I read some statement from one of them, saying they were looking at it.

That's the obvious place for it, though. It would push HMD prices in the wrong direction, but it'd offload PCs from doing it, lowering the minimum HW spec. It's also better suited to an ASIC than a GPU. And since you want to do it at the last possible microsecond, using the very latest sensor data, the HMD is really where it belongs.
 
ATW and ASW don't require much performance, so taking them away from the PC doesn't really change the hardware specs required. Putting them in the headset adds cost for duplicate hardware and leaves current devices out in the cold. Not sure that's a worthwhile tradeoff, along with all the other difficulties it involves.
 

bit_user

Polypheme
Ambassador
I actually thought of a really good example: texture compression. I'm not saying to use it, as is, but it's a good example of a simple, low-latency compression technique that delivers pretty impressive improvements.

Again, not sure why you're even talking about old devices. There should be no problem with software supporting them, and we're talking about new hardware.

The HMDs need some amount of hard-wired logic. I imagine this might be integrated into their display controller. And the main point is that if you're interested in combating latency, it's an easy & big win. That's why I expect it'll happen.
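
To illustrate why that style of compression is latency-friendly, here's a toy fixed-rate block codec in the spirit of texture compression (a deliberately simplified sketch, not actual BC1/DXT):

```python
import numpy as np

# Each 4x4 block compresses independently to two 8-bit endpoints plus
# sixteen 2-bit indices (48 bits vs. 128 uncompressed, ~2.7:1), so the
# encoder needs only a few scanlines of buffering, never whole frames.
def encode_block(block: np.ndarray):
    """Quantize a 4x4 grayscale block to 4 levels between its min and max."""
    lo, hi = int(block.min()), int(block.max())
    palette = np.linspace(lo, hi, 4)
    idx = np.abs(block[..., None] - palette).argmin(axis=-1)
    return lo, hi, idx.astype(np.uint8)

def decode_block(lo: int, hi: int, idx: np.ndarray) -> np.ndarray:
    palette = np.linspace(lo, hi, 4)
    return palette[idx]

block = np.random.randint(100, 140, (4, 4))       # smooth-ish block
error = np.abs(block - decode_block(*encode_block(block)))
print(f"max reconstruction error: {error.max():.0f}")
```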
 
Yeah, texture compression is something that already happens in the regular rendering loop. What I mean is compressing the video output, which normally adds significant latency (significant for VR; not noticeable for video, or probably even for normal monitor gaming). If there are aspects of video compression that can be adopted for VR without adding latency, that's obviously a good idea. But the interframe compression will probably not work; it'll be more like compressing a sequence of separate images. That will weaken the compression considerably... but something's better than nothing.

We're talking about upgrade kits for old hardware as well here. And a requirement for not just new VR devices but also new GPUs etc. would be a big problem for adoption.
 

bit_user

Polypheme
Ambassador
Just using it as an example of a compression technique which doesn't add latency.

Why not? Any portion of the image could reference anything before it, either within the same frame (so, to the left and above, if normal raster order) or the entire previous frame. However, there's no way they'll do motion estimation, due to the added latency & computational load. So, I think we're talking about just the neighboring blocks in the current & previous frame. In this way, you can do even better than texture compression (in which adjacent blocks are independent, to facilitate random access), however it must also meet a higher quality standard than TC.

No, not I. I started this whole exchange by claiming that wireless-done-right can't be just an add-on kit.

I imagine there'd be a wireless adapter box/antenna you attach to your DisplayPort connector. Perhaps future GPUs will integrate some of this functionality, and you'll only need to connect an antenna to it.
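
A minimal sketch of that neighbor-referencing idea, with no motion search (toy code and my own naming throughout):

```python
import numpy as np

# Predict each pixel from its left and above neighbors plus the co-located
# pixel in the previous frame, then keep only the residuals. For smooth,
# slowly changing content the residuals stay small and entropy-code well,
# and the scheme needs only scanline-level buffering.
def residuals(cur: np.ndarray, prev: np.ndarray) -> np.ndarray:
    left = np.roll(cur, 1, axis=1); left[:, 0] = 0
    above = np.roll(cur, 1, axis=0); above[0, :] = 0
    pred = (left.astype(np.int32) + above + prev) // 3
    return cur.astype(np.int32) - pred

prev = np.tile(np.arange(8, dtype=np.int32) * 3, (8, 1))  # gentle gradient
cur = prev + 2                                            # slight brightening
print("mean |residual|:", np.abs(residuals(cur, prev)).mean())
```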
 