alextheblue :
Sounds good, as long as it doesn't have the "cripple non-Intel CPUs" function that only allows Intel CPUs to use the newer instruction sets (AVX, AVX2, SSE4, etc.)
Or like PhysX, where you intentionally cripple software mode for ALL CPUs by using deprecated x87 code! But no, Havok seems to be fairly platform-agnostic. Their middleware has been in use on both PCs and consoles for ages, which these days means AMD, Intel, and PowerPC. I'd still like to see some benchmarks just to be sure.
PhysX has been used on consoles for ages with no ill effect either.
It is in many games on consoles.
https://en.wikipedia.org/wiki/PhysX
"At GDC 2015 Nvidia made PhysX free with source code available on GitHub"
Should get easier to optimize for now that the source code is out (maybe...). It's been used in over 500 games, so it's a lot more popular than most think. I don't think it's as crippling as you claim either, or nobody would be using it on a console. Of course I'm not talking about PC here - it would be silly for Nvidia to compete with their own GPUs, just as it would have been silly for AMD to give away access to Mantle.

Mantle's only problem is that the company trying to put it out was too broke to push a new API and had far too little GPU share. It would have been a great idea in 2005, not in 2015 in AMD's condition. If Nvidia had done this, AMD would have been sunk. In fact, you could almost say GameWorks (which came in 2014) was a response to MANTLE...ROFL. I mean, if you do something to speed your stuff up, I guess the response might be that the other guy does something to PRETTY his side up, thereby creating differentiation.
So TrueAudio and Mantle probably cripple NV/Intel hardware too, correct? Oh wait, both of those are proprietary and won't run on anybody else's stuff - not even on a large portion of AMD's own stuff...LOL. Both sides do this, get over it. I, for one, have ZERO problems with them adding stuff to games that makes whatever hardware I bought a better deal than it was before they added it. It is not company X's job to help company Y look special, no matter who we're talking about. If you pay the expense of R&D, keep it to yourself if that makes you feel good. IT IS YOURS and YOURS alone to do with as you please. Making your products SPECIAL so they stand out more than the guy next to you is how you make...wait for it...
PROFITS.
Which AMD hasn't really made since ~2000, when they made a billion in profits (net) and had a leading CPU. Hopefully Zen will finally get them back there. They've lost almost $7B in the last 15 years ($6B+ in the last 12). Maybe AMD should start shipping more "special sauce" stuff.
Note Havok doesn't give out ALL of the source code, just parts, and it also has been in 500+ games. My guess is Intel has had this ready all along and was waiting for an AMD CPU move...LOL. Note the new version here is CPU-based only, and as such will likely give INTEL an advantage on PCs in some way, shape, or form, but be unhindered on consoles. Like NV, they'll want to spread the API in places that don't DIRECTLY compete with their desktop stuff.

I also think Intel is prepping the software for a major jump in perf (if Zen does its job, think Core/Core2-sized jumps), and you have to find a way to EAT that extra perf up, or why would we buy faster CPUs? Intel already has that problem right now, so they need something to suck up the power in games. Here it is. DX12/Vulkan will ALSO be making crap CPUs better, so again they need to figure out how to (artificially?) slow down your CPU so you want a faster one...LOL. They weren't sitting there doing nothing while AMD dropped out of the race for the last ~3-4 years (AMD has been behind longer than that, but gave up around May 2012). Intel continued to develop stuff, they just don't give it to us (much like NV). Business 101: give only what you need to, until you have to. Andy Grove didn't write "Only the Paranoid Survive" for nothing.
SIPS
I'm not saying I won't like the new effects, just stating how/why I think this is happening all of a sudden after Zen, and probably also Vulkan/DX12 pushing more onto the GPU. Gotta use up the CPU cycles somehow.
One more note: are you really crippling others if you're just using your current-gen stuff to max capability? E.g., HairWorks taxes tessellation greatly, which as it turns out doesn't hurt NV's top Maxwell cards much, but tanks the rest of their own cards & AMD's. Granted, the developer should have given games with issues (Witcher 3) an easy way to turn that down (though you can do it in config files, or via AMD's drivers overriding the app, as AMD has shown), but is there anything wrong with tapping out your best stuff in any way possible? Should you NOT turn on ALL of your latest card's features just because some other guy (or even your own older gen) can't hack it? You build that power in to USE IT, correct?

That is a differentiating feature which, compounded with others, might make me want your card more. I don't quite get why people don't see it for what it is: Nvidia making the most of their current card. I do think Witcher 3 will put out a patch (1.05 went live today) or whatever that easily allows this modification in the game instead of via AMD's drivers or config-file editing (which works for both sides). But whatever...you should get the point. Maybe it's AMD's job, as I don't see a fix for perf in the 1.05 notes.
I want EVERY feature on that can be turned on, especially if I've paid $500+ for a card! I don't usually even play a game until 1) a few months of patches are out (or all the DLC is done & there's a GOTY edition or something), and 2) I have a card that can MAX the crap out of everything in the game I want to play (usually, that is - some you just have to fire up anyway...LOL). Is it Nvidia's fault the dev didn't make it EASY to modify tessellation in the Witcher 3 to 32x/16x/8x, etc.? Who knows, but it's their JOB to show you ways to MAXIMIZE every feature of the card you plunked your cash down for...no doubt about that! AMD can just as easily put out a driver that has a profile (Raptr fix, whatever) for the game to auto-override with a usable setting. From the side-by-side pics I've seen on one page (so you can tell easily), there isn't much difference between 64x & 16x tessellation (and 16x works fine for AMD), and not a ton of difference at 8x, though I can see it then. I don't see anything wrong with Intel maximizing their CPUs' usage, as long as I can turn off feature X so I'm not affected if I don't own Intel (e.g., the way you can toggle HairWorks easily).