AMD CPU speculation... and expert conjecture

AMD confirms 20nm in 2015
http://www.fudzilla.com/home/item/35271-amd-confirms-20nm-in-2015
Rory also answered a question from John Pitzer of Credit Suisse, who inquired about the 20nm rollout. Rory told him that there is a lot of life in 28nm and that we will see some mix of both 28nm and 20nm chips for AMD. AMD will probably offer 32nm, 28nm and 20nm chips at the same time, ....
so tsmc's 20nm struggle is confirmed. are glofo's 20nm issues subtly confirmed too? i wonder.... that brings me to:

TSMC to lose out to competitors in 14/16nm process segment in 2015, says chairman
http://www.digitimes.com/news/a20140716PD216.html
i am sad for the gpus but it still makes me a bit happy. :p

AMD SVP John Byrne named turnaround exec of the year
http://www.fudzilla.com/home/item/35278-amd-svp-john-byrne-named-turnaround-exec-of-the-year
and x-men: dofp did great business too, so good news all around. :whistle:
 

blackkstar

These aren't done in real time, and they're old, but they give you a taste of Bullet Physics with OpenCL:

https://www.youtube.com/watch?v=143k1fqPukk https://www.youtube.com/watch?v=FIPu9_OGFgc

Bullet is pretty great and it sort of puts PhysX to shame. And it's FOSS.

https://en.wikipedia.org/wiki/Bullet_%28software%29

Bullet was used in GTA4, though, and we didn't get OpenCL. Hopefully we get some OpenCL goodness, with fracturing and destruction, in GTA5.

AMD might get a really good software ecosystem going if they can get Bullet to run well on AMD cards, and they have Mantle too. Mantle is going to let rendering scale across many cards without relying on AFR, and then we could end up with OpenCL physics scaling across other cards as well.

Maybe, and just maybe, the ability of software to take advantage of AMD hardware on this scale will let AMD release a solid HEDT platform with room for a lot of PCIe/HT slots. I could envision a system where 3 AMD graphics cards are worth it. One for main rendering, one for global illumination, and another for OpenCL physics.

I am excited about a setup like this. It means I can upgrade my main GPU and keep my 7970, with the waterblock on it, for other calculations. Not having to rely on AFR, or on all the GPUs being similar, is going to be a massive game changer for HEDT systems.

Having the AMD hardware and software ecosystem revolve around being able to keep your old card and have it keep doing something useful would be a really good thing for AMD. I just hope we see these technologies show up, even as optional features, in more products than PhysX has. But Mantle is already off to a much stronger start than PhysX or GameWorks.
 

colinp

Modern Intel chips are no slouches at OpenCL either. So if you have a mixed setup with an Intel "APU" and an AMD or Nvidia GPU, how do you control which device is best used for your physics and which for your gfx? Open question, by the way. Can you select which OpenCL device is used?
 

jdwii

http://www.dsogaming.com/interviews/rebellion-talks-sniper-elite-3-tech-tessellation-mantle-dx12-multicore-cpus-obscurance-fields-shader/

Nice read "Rebellion Talks Sniper Elite 3 Tech – Tessellation, Mantle, DX12, Multicore CPUs, Obscurance Fields Shader"
 

etayorius



I like PhysX, but Bullet is superior in every way and I love it. Havok is cool too.

We need more frigging physics in games.

 

blackkstar



Honestly, it should be possible. I don't think you'd get as much out of it as having a discrete card for it, but it'd be a lot better than nothing at all.

It depends on how tightly knit AMD makes things. If they have some sort of lower-latency bus than PCIe connecting the APU, dGPU (and a dCPU?), the AMD solution will have a bigger advantage, especially if the Intel one is still using a more traditional PCIe setup.
 


Cloth actually is a pain for modelling, since if you aren't REALLY careful, the minute you get a stiff breeze, you have the cloth constantly bouncing off the player model, which then gets processed by the physics engine, and eats your GPU time to death processing very small interactions. That's why layers are still painted on, and yes, why textures go right through body parts.

I mean, sheesh, we still have hardcoded damage values for bullets in FPS games. Why not have a physics engine determine if a bullet went through that wooden barricade, calculate the trajectory change and velocity loss, and then figure out if the bullet pierced the body armor and, if so, how much damage it did? Nope: bullet to the chest, 5HP worth of damage.
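Just to make that concrete, here's a toy version of the calculation. Every constant and name in it is invented for illustration (not pulled from any real engine or middleware), but it shows how little math is actually involved per bullet:

// Toy sketch of the "simulate the bullet instead of hardcoding 5 HP" idea.
// All constants and names here are invented for illustration; a real engine
// would ray-cast against real geometry and pull material data from its
// physics middleware.
#include <cmath>
#include <cstdio>

struct Material {
    const char* name;
    double stoppingEnergyPerMeter; // joules absorbed per meter of thickness (made up)
};

struct Bullet {
    double massKg;
    double velocityMps;
    double kineticEnergy() const { return 0.5 * massKg * velocityMps * velocityMps; }
};

// Returns the bullet state after punching through (or stopping inside) a barrier.
Bullet penetrate(Bullet b, const Material& m, double thicknessM) {
    double remaining = b.kineticEnergy() - m.stoppingEnergyPerMeter * thicknessM;
    if (remaining <= 0.0) return {b.massKg, 0.0};              // stopped inside the barrier
    return {b.massKg, std::sqrt(2.0 * remaining / b.massKg)};  // slower, but still flying
}

int main() {
    Material plywood{"plywood", 900.0};        // invented value
    Material bodyArmor{"body armor", 1500.0};  // invented value

    Bullet rifleRound{0.004, 900.0};           // 4 g at 900 m/s
    Bullet afterWall  = penetrate(rifleRound, plywood, 0.05);   // 5 cm barricade
    Bullet afterArmor = penetrate(afterWall, bodyArmor, 0.01);  // armor plate

    double damage = afterArmor.kineticEnergy() / 20.0;          // arbitrary energy-to-HP scale
    std::printf("exit velocity: %.0f m/s, damage dealt: %.1f HP\n",
                afterArmor.velocityMps, damage);
    return 0;
}

The per-bullet arithmetic is trivial next to what GPUs already crunch per frame; the expensive part is the collision queries, not the damage math.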

Heck, think of how this would change grenades. You could actually process the individual pieces of shrapnel, and yes, a teammate could dive on one to mitigate the blast. Nope, just a cone of effect and hardcoded damage values. We've come so far in 25 years...



MSFT actually kills us here, due to the "one display driver at a time" rule. So unless you have AMD/Intel/NVIDIA play nice within their drivers to allow cross device communication (and we all know the odds of that happening), this isn't likely to happen cross vendor. You should be able to do it within a specific vendor via drivers, but a lot of the groundwork isn't there yet. NVIDIA technically allows this for CUDA, but I don't know if you can do it for OpenCL yet.

So within a vendor, yes, via drivers. Cross vendor, no.
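As for the "can you select which OpenCL device is used" question from earlier: at the API level, yes. A plain OpenCL host program can enumerate whatever platforms and devices the installed runtimes expose and create its context on whichever one it wants, e.g. a second GPU reserved for physics. Whether a given vendor's runtime actually exposes another vendor's GPU is the driver mess described above. A minimal sketch (OpenCL 1.x C API, most error handling omitted):

// Sketch: enumerate every OpenCL device the installed runtimes expose and
// create a context on one of them (say, a GPU you want to dedicate to physics).
// Assumes an OpenCL SDK / ICD loader is installed.
#include <CL/cl.h>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    if (num_platforms == 0) return 1;
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    // Collect (platform, device) pairs across all vendors' runtimes.
    std::vector<std::pair<cl_platform_id, cl_device_id>> found;
    for (cl_platform_id p : platforms) {
        cl_uint n = 0;
        if (clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &n) != CL_SUCCESS) continue;
        std::vector<cl_device_id> devs(n);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, n, devs.data(), nullptr);
        for (cl_device_id d : devs) {
            char name[256] = {0}, vendor[256] = {0};
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(d, CL_DEVICE_VENDOR, sizeof(vendor), vendor, nullptr);
            std::printf("device %zu: %s (%s)\n", found.size(), name, vendor);
            found.push_back({p, d});
        }
    }
    if (found.empty()) return 1;

    // Pick whichever device you want physics on -- here, simply the last one listed.
    cl_platform_id plat = found.back().first;
    cl_device_id dev = found.back().second;
    cl_context_properties props[] = {CL_CONTEXT_PLATFORM, (cl_context_properties)plat, 0};
    cl_int err = CL_SUCCESS;
    cl_context ctx = clCreateContext(props, 1, &dev, nullptr, nullptr, &err);
    if (err == CL_SUCCESS) clReleaseContext(ctx);  // physics kernels would be built and run here
    return err == CL_SUCCESS ? 0 : 1;
}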
 
Well, you could do it by treating it as a coprocessor device from the OS's point of view. It would require a specific device driver and an API layer, but it's possible. There would be a penalty from having to use a system call to submit each batched workload to the device.
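To illustrate the batching point: the trick is to queue work in user space and pay the expensive kernel transition once per flush instead of once per job. This is only a sketch of the idea; submit_batch() below is a stand-in stub, not a real driver call:

#include <cstdio>
#include <vector>

struct PhysicsJob { int objectId; float dt; };

static int g_submissions = 0;

// Stand-in for a hypothetical driver entry point; a real one would be an
// ioctl or similar. Here it just counts how often we "cross" into the kernel.
void submit_batch(const std::vector<PhysicsJob>& jobs) {
    ++g_submissions;
    (void)jobs;
}

class CoprocessorQueue {
public:
    void enqueue(const PhysicsJob& job) { pending_.push_back(job); }
    void flush() {  // one "syscall" per flush, not one per job
        if (!pending_.empty()) { submit_batch(pending_); pending_.clear(); }
    }
private:
    std::vector<PhysicsJob> pending_;
};

int main() {
    CoprocessorQueue q;
    for (int i = 0; i < 1000; ++i) q.enqueue({i, 1.0f / 60.0f});
    q.flush();
    std::printf("%d submission(s) for 1000 jobs\n", g_submissions);
    return 0;
}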
 


That's an icky way to handle it though, and things really get icky once you have more than one installed. The way I envision it eventually happening is that individual GPUs (or whatever they end up being called going forward) will have specific tasks they are allowed to perform. For example, you could set GPUs 1 and 2 for rendering (DX/OGL APIs) and GPUs 3 and 4 for physics (OpenCL/CUDA/PhysX/Bullet/etc. APIs) within the OS. Right now, such a setup is possible, but there's no easy (or feasible) way to accomplish it.

In any case, I really can't see this being implemented anywhere other than in the vendors' drivers. I really can't see the OS being able to handle something like this in a way that works.
 

colinp

What happens technically with switchable graphics, e.g. in an Intel+nvidia laptop? Is one driver unloaded before the second one is loaded?
 


Special case that's handled within the Intel/NVIDIA drivers. Not sure exactly how it was implemented, but I know it needs a special SW package.

Hybrid SLI is only supported on OEM notebooks with specially developed software released for specific notebook configurations. Loading generic drivers on a Hybrid SLI notebook will cause hybrid features to disappear.

Per the Hybrid-SLI technical document.

The AMD-AMD case is simpler, since you can drive both an APU and dGPU from the same driver package.
 

Cazalan



DRAM has had a good run and will probably still be the primary bulk RAM for another decade. It is fundamentally less efficient than various persistent R/W storage technologies, since it uses capacitors that have to be refreshed periodically.

The stacked DRAM technologies (HBM/HMC/WideIO2) are extending traditional DRAM even further, but even those will have limits due to the minimum sizes of the arrays. Capacitors can only be shrunk so far before they have to be refreshed almost continuously.
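The usual back-of-the-envelope relation (generic symbols, not tied to any particular process) is

t_{\text{retention}} \approx \frac{C \cdot \Delta V_{\min}}{I_{\text{leak}}}

where C is the cell capacitance, \Delta V_{\min} is the smallest voltage swing the sense amp can still read, and I_{\text{leak}} is the leakage current. Shrink C without shrinking leakage and retention time drops, which means more frequent refresh and a bigger slice of the power budget going to refresh.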

ReRAM looks the most promising to me, but the manufacturing isn't as far along as the MRAM or STT-RAM types. HP has been pushing ReRAM the hardest, but there are 20 or so companies with patents on that type of RAM, including basically every memory vendor and hard drive vendor.

You can imagine one day when memories are as efficient as E-ink displays. Only using power when the screen needs changing. Laptops with a battery life of a month or more.
 

juanrga



Intel has studied eight memory technologies (SRAM, DRAM, Flash, PCRAM, STTRAM, FeRAM, MRAM, RRAM) across a seven-dimensional space (capacity, performance, energy, endurance, volatility, scalability, maturity) and concluded that the best options for future products (up to the year 2020) are SRAM/DRAM for performance and Flash/PCRAM for capacity. Nvidia, AMD, and Fujitsu reached the same conclusion. Here "Flash" means 3D NAND.

Beyond 2020, we would expect system memory (DRAM) and storage (SSD) to be replaced by 'universal' memory: PCRAM and STTRAM look like the more promising approaches.
 
You can imagine one day when memories are as efficient as E-ink displays. Only using power when the screen needs changing. Laptops with a battery life of a month or more.

Nope. The power required to refresh the memory is minuscule compared to what's needed to power the CPU and, most importantly, the display. Seriously, displays are easily responsible for ~50% of the energy usage of a laptop, and that won't change. Your eyes receive light and translate it into images; in order for you to receive that light, something else must first emit it, which is what your display is doing. LCD displays require a backlight that constantly emits white light, which is then selectively blocked by the LCD elements. LED/OLED displays emit light at a per-element level, but each element has to emit a lot of light since there is no backlight behind it, and this is what limits their life span.
 