Breaking: Nvidia Computex Press Conference

Status
Not open for further replies.
Is it just me, or is anyone else underwhelmed by the comparison of an "old laptop" and a "new laptop"?
Couldn't we say exactly the same thing about mobile phones, and about nearly every other piece of equipment ever made....
 
May 7, 2018
15
0
20
I think he answered "a long time from now" for the next-gen GPUs to dissuade more questions like that from being asked.
 

bit_user

Polypheme
Ambassador

up to 3x the performance of the original PS4 (not PS4 Pro). Not all Max-Q models are that fast.
 

bit_user

Polypheme
Ambassador

No, I think he was specifically talking about laptops with the same GPU model. Of course, his "old model" was certainly one of the bulkier ones.
 

bit_user

Polypheme
Ambassador
Jensen is outlining the DGX-2 supercomputer. "The World's Largest GPU." Two petaflops of performance in one node, 512GB frame buffer.
I'll believe it when typical games actually support it. Until then, just calling it a single GPU doesn't mean it is one.
 

bit_user

Polypheme
Ambassador
Jetson Xavier: six processors, 12nm FinFET, the single largest project in Nvidia's history.
And yet less rendering horsepower than a GTX 1050.

Only 512 shaders. The specs have been out for a while. Unlike their desktop GPUs, they have to announce their embedded chips well in advance.

So this will not power the next-gen Nintendo Switch, or anything like it. It's primarily an AI chip.
 
Jun 4, 2018
1
0
10
It seems like I was the only one who said that nVidia won't launch a new series this year...
Everybody was against me (not necessarily in an aggressive way, they just didn't side with me), no matter whether it was VideoCardz, WCCF, Reddit or my national forum (FXP); people just didn't 100% agree with me. Some did say that maybe I'm right but that we should rather wait (well, I do understand them of course - I'm not Jensen and I can't truly know the truth). But in the end - I was right... as always.
I have a tendency (a battle tendency... sorry, die-hard JoJo's fan) to argue, so...
 

Dosflores

Reputable
Jul 8, 2014
147
0
4,710


If it's not called Tegra, it's not meant for rendering tasks. Jetson Xavier is an exciting product for its intended use cases. Cars and robots shouldn't need to render lots of things.

By the way, I don't think Nintendo is in desperate need for a next-gen Switch. It's just one year and three months old. The standard PS4 is nearly five years old, and it isn't showing signs of not being able to run great games anymore (just take a look at God of War). If new Geforce architectures can be delayed for months, new console generations can be delayed for years.
 

bit_user

Polypheme
Ambassador

"At the very end"? According to my calendar, the year is not even half over (just 5/12ths).

Of course he wants people to think it won't be for a while! They just ramped up production and now have lots of Pascal GPUs they need to sell.
 

bit_user

Polypheme
Ambassador

Technically, TX2 is not Tegra-branded, yet it's definitely faster at rendering than Tegra TX1.

The fact is, they've dropped their Tegra branding... and have shown no signs of interest in following it up with anything comparable. Because they need to announce SoCs well in advance of their launch, it's unlikely a surprise is waiting just around the corner. They'll let us know it's coming, if it ever does.


Sure. I didn't say anything about those. I just wanted to warn people off thinking it was going to be some awesome graphics chip.


Just using it as a point of reference, since it's probably the best known Tegra TX1 device.
 

dimar

Distinguished
Mar 30, 2009
1,041
63
19,360
It's so f.. strange how the monitors I like the most have FreeSync tech, while the GPU I like and have is an Nvidia 1080 Ti. This is so ridiculous. Nvidia needs to somehow be forced to support VESA Adaptive-Sync. I actually send them an email once a year to request it. Hopefully others do too.
 
May 29, 2018
1
0
10
They are going to skip to 10nm for the next GPU. They will not allow AMD to even come close to one-upping them, regardless of the fact that upcoming 10nm Vega is rumored to be on par with the 2 year old 1080.
 

DDWill

Distinguished
Aug 3, 2009
28
0
18,530
I had a very strong feeling there would be no next-gen GeForce announcement at Computex, and I doubt there will be one at E3 either.

I think at this stage of Jensen Huang's career, he now has much larger ambitions than just accelerating gaming, HPC and pro apps, and would like his legacy to be spearheading advancements and acceleration in AI, autonomous vehicles and machines, medical breakthroughs and scientific discoveries, and who wouldn't in his position.

So I guess Nvidia is growing up, and unfortunately that means gamers are now at the back of the queue, or at least it feels like it… if you have watched any of Jensen Huang's keynotes over the last year, especially the last three this year, then you would have to agree with me. His eyes are now gleaming with the bigger picture.

Although Nvidia should always remember that it was decades of gamers and their hard earned cash that helped fund Nvidia to get where they are today…

That being said, I am 100% sure that there will be next-gen GeForce cards on the way soon-ish, of course there will, and that they are nearing the end of their development.

There have been a few other factors besides Nvidia reorganising its priorities, like mining and a lack of competition at the high end from AMD, so for Nvidia there has been no real incentive to accelerate a new launch while Pascal sales are still going so well.

I don't believe the talk about Ampere or Turing, as there is no official roadmap in sight from Nvidia mentioning these architectures that I have seen, and it would not be cost-effective to develop a new architecture alongside Volta when Nvidia has not yet profited from consumers with Volta. From what I understand, Volta has had the biggest R&D budget to date from Nvidia, so those expenditures need to be recouped from all possible sources, from HPC to pro to mainstream. So I am 100% sure the next GTX/RTX consumer cards will be Volta-based.

I have a new theory to add to the rumour mill as to why Nvidia will be delaying its next gen Geforce cards for a looooong tiiiiiiimmmme…. Along with the reasons already mentioned above…

Nvidia's NVLink2 combined with NVSwitch is way ahead of the planned PCIe 4.0/5.0 in terms of bandwidth, so the delay in consumer Voltas may be because the next generation of GTX/RTX cards will require a new type of motherboard, similar to what has been seen with the new DGX-2 stations, but I am guessing with only 2 to 4 GPU mounts for consumer and workstation respectively. Hmmmm?

I would imagine if this were the case, obviously everyone would need a new motherboard to take full advantage of the increased speed and information sharing between GPUs, but Nvidia would still launch PCIe versions with severely reduced bandwidth to keep everyone happy, and I guess the new motherboards would have to accommodate a couple of PCIe x16 slots for a smoother initial transition.

In fact, I am sure this thought has already struck fear into the designers of PCIe 4.0/5.0, who could potentially never see their hard work reach the mainstream. To be honest, though, they dragged their heels too long: PCIe 4.0 should have come and gone by now, but there's not a motherboard in sight, and Nvidia's ambitions couldn't wait any longer, along with the rest of us.

I would say this is actually inevitable. Maybe not with the next generation of GeForce, but I think this will happen sooner rather than later, if someone else doesn't soon come up with an interface to best Nvidia's NVLink2/NVSwitch technology (which is available now!). Plus it's my feeling that the PC motherboard has been in need of a complete redesign for a long time now…

Maybe I will be eating my words, but I think in a few years it will be safe to say R.I.P. PCIe.

Of course it's just a theory, no insider information here, sorry, no broken NDAs! But if it were true, it's exciting times for anyone planning to build a new super system in the next year. Anyone who has just built a new system, or already has an older system and is hoping to have the highest frame rates and bragging rights when the next gen comes along, could be sorely disappointed and have to settle for a lower-bandwidth, good-oldie PCIe version. Not that that's a terrible thing: now that the yields are good, a cheaper but full-fat PCIe Titan V or 1180 Ti should make any gamer happy until their next build…

this is just a bit of speculation and fun, so don't take it too seriously :)
 

bit_user

Polypheme
Ambassador

First, let's look at the bandwidth claim.

NVLink 2.0 will get you 25 GB/sec in each direction, whereas a PCIe 4.0 x16 slot will get you 32 GB/sec. PCIe 5.0 is slated to double this, yet again.
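
If you want to sanity-check those numbers, here's a rough back-of-envelope in Python (this is just a sketch assuming the published per-lane transfer rates and 128b/130b encoding for PCIe 3.0 and later; the 25 GB/s NVLink 2.0 figure is per link, per direction):

Code:
# Rough per-direction bandwidth of a PCIe x16 slot, in GB/s.
# PCIe 3.0 and later use 128b/130b encoding; rates are GT/s per lane.
def pcie_x16_gb_s(gt_per_lane, lanes=16, encoding=128 / 130):
    return gt_per_lane * encoding / 8 * lanes

for gen, rate in [("PCIe 3.0", 8), ("PCIe 4.0", 16), ("PCIe 5.0", 32)]:
    print(f"{gen} x16: {pcie_x16_gb_s(rate):.1f} GB/s per direction")

# NVLink 2.0 is usually quoted at 25 GB/s per direction *per link*;
# a GV100-class GPU exposes up to 6 links, so aggregate GPU<->GPU
# bandwidth can be much higher than a single link.
print("NVLink 2.0, single link: 25 GB/s per direction")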

Next, let's look at the market need. Clearly, in servers and HPC, PCIe 3.0 is lagging badly. However, I think it would be instructive for you to search out some benchmarks showing the impact of x4 vs. x8 vs. x16 connectivity on game performance with modern games & GPUs. Everything I've seen indicates only a couple %, at most. If you can find otherwise, I'll be happy to see it.

Finally, let's look at the competitive picture. Now, who makes the CPUs and motherboard chipsets that would need to implement NVLink for consumers? Might that be AMD and Intel? And didn't Intel just announce plans to get into the dGPU business? So, why would they adopt a proprietary Nvidia standard that would advantage their competitor (and probably cost them license fees), rather than stay with the open and vastly more widely-accepted standard of PCIe, especially when the 4.0 draft is finalized and delivers the goods?

In fact, the only platform implementing NVLink for CPU <-> GPU connectivity is so far IBM's POWER. And they currently hold just a small niche market share in every market where high-performance GPUs are being used. It's clear they had more to gain by riding Nvidia's coat tails, and not really anything to lose. However, it should be noted that they're also the first to implement PCIe 4.0, which you can already get on shipping POWER CPUs and systems.


Yeah, with PCIe 4.0 finally arriving and PCIe 5.0 just around the corner, why you would assert this is mystifying.

You know that PCIe underlies AMD's Infinity Fabric, right? And NVMe, Thunderbolt, SATAe, and probably a few other standards I'm not thinking of. Why would all these industries walk away from all that, just to adopt a proprietary standard from a company that's not even a good team player (unlike AMD, which collaborates and makes everything an open standard almost to a fault)?

I do applaud your creative thinking. Next time, I suggest first doing a bit of homework and trying to find some data to back it up. Either you'll have a more convincing argument of a winning idea, or you'll at least learn something in the process. It's a win-win.
 

DDWill

Distinguished
Aug 3, 2009
28
0
18,530


You got it right, creative thinking, just a bit of fun to add to the rumour mill, and you are obviously taking my post way more seriously than it was ever intended. If you watched Nvidia's CEO's last keynote a few days back, NVSwitch goes way beyond NVLink, and stacking is definitely the future if we want to see any major performance gains per generation and go beyond Moore's law.

My apologies, though: I used the word Connect, not Switch, in my first post (now edited), which added to the confusion.

It's happened in the past: there was a time when people had to choose between SLI and CrossFire motherboards, when the two platforms were first launched. So although you completely dismissed the possibility of Nvidia also creating a new motherboard interface with NVSwitch for the consumer market, they are already doing it for the pro market, so why not? Like I said in my post, just a theory, but not entirely impossible....


PCIe 4.0/5.0 are long overdue, which pushed Nvidia to make their own new interface in the first place, and there is still no sign of PCIe 4.0/5.0 compatibility or integration anywhere in the specs of CPUs planned for this year and the next.

Everything about Nvidia's next-generation gaming cards is currently just speculation, whether that's forum-member rumours or tech-site articles; my post is just speculation like all the rest....


NVLink2 and NVSwitch show that Nvidia is not standing still waiting for long-overdue interfaces to finally come out of the shadows, and I can't see how anyone could fail to be impressed by NVSwitch, or fail to think that the technology could soon be seen in consumer motherboards, not just the pro boards as it is now. Very likely, in fact, I think!
 

bit_user

Polypheme
Ambassador

You have a way with facts, eh?

They did not use NVLink for the "pro" market. Their Quadro cards still use PCIe to talk with the CPU.

The footnote here is that the most expensive ones ($9,000) now use NVLink2 over-the-top (SLI-style) for two cards to communicate with each other.

https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/documents/quadro-volta-gv100-us-nv-623049-r10-hr.pdf

Their own DGX Station doesn't even use NVLink to the CPU - it's still limited to GPU <-> GPU communication.

https://www.nvidia.com/en-us/data-center/dgx-station/

But that's basically where the "pro" market ends. Above that you have datacenter solutions. You can't call those graphics cards, because they aren't. They're clearly not designed for desktop PCs, in any way:

[Image: Nvidia Tesla V100 NVLink (SXM) module]



They weren't overdue, they were unnecessary. Then, NVMe SSDs and deep learning suddenly created a demand, which got the PCI-SIG hopping and now we have PCIe 4.0, with 5.0 due out shortly.


We don't know that, but I'm expecting to see 4.0 appear in 7 nm Vega and 7 nm EPYC. 5.0 isn't finished, but you only need 4.0 to leapfrog NVLink 2.


That doesn't mean every idea has an equal chance of being true. Moreover, you didn't only speculate about their GPUs, you speculated about a massive, yet somehow still secret industry-wide overhaul in a critical piece of PC architecture that reaches all the way into the CPUs, themselves.

If you want to throw out random ideas, that's up to you. If you want your ideas embraced, then I'd suggest putting more work into them.


Why is this suddenly a referendum on NVSwitch? I never said a bad thing about it.

Did you know that modern CPUs and chipsets have PCIe switches in them? As faster PCIe revs reach consumers, we'll enjoy similar benefits.
 

DDWill

Distinguished
Aug 3, 2009
28
0
18,530



Again, you seem to be missing that I have said from the start that this is only a theory, and I definitely stated my prediction in the future tense, although you seem to think I am referring to the present?


You keep on about NVLink, when my emphasis is on NVSwitch and that it could, and in my honest opinion most likely will, be seen 'IN THE FUTURE' on consumer motherboards. NVLink 2/3/4, again speaking in the future tense, why not? Yes, it's too expensive now, but in a year or two, if the price is quartered, I am sure some enthusiasts and small businesses will happily pay it to get the performance gains it brings.

Like all new technology with an oversized R&D budget, it will usually end up in the enterprise and pro sectors first; then, when the yields are strong enough and production costs go down, it will eventually come to the mass market.

These days this usually happens within a two-year period, due to the competition in all markets being so high. So I would not be surprised at all to see a scaled-down version of NVSwitch come to the enthusiast, and eventually mainstream, markets in the near future, meaning the next one or two years.

They are using NVLink and NVLink2 for the pro market: the old GP100 and the new GV100 can be linked to a second card with NVLink and NVLink2 respectively.

I don't remember ever saying that NVLink or SLI is for communicating with the CPU?? I am fully aware of the technology and its purposes. I have been building my own high-end systems since the mid-90s, I am an electronics and computer technical engineer, and I catch up on technology news on a daily basis. I am fully aware that modern CPUs have PCIe interfaces, which is why I mentioned that there is no sign of an integrated PCIe 4/5 interface in the specs of any CPU coming out in the next year.


From my first post I have said this is a theory, speculation (it may or may not happen), and I am speaking in the future tense, although you seem to think I am expecting very expensive hardware just launched in the enterprise/pro market to be available at cheaper prices for the consumer market now? And my bandwidth claims were regarding NVSwitch, not NVLink, which currently allows the 16 GPUs in the DGX-2 to speak to each other simultaneously, not just share memory, and the bandwidth claims are astonishing!

You are talking about PCIe 4/5, which has not come out of the shadows yet and is not listed anywhere on future CPU specs, and comparing it to a technology already available and in use in the enterprise and pro markets, one that will eventually come down in cost to the consumer market.

Obviously Intel is in on the NVSwitch design: the new DGX-2 motherboard has a pair of Xeon Platinums on it, so Intel is already catering for NVSwitch designs, yet still not readying its upcoming Xeons, i9s, i7s, i5s or i3s, due later this year / next year, with PCIe 4/5 interfaces. Seems like Nvidia already has its foot in the door..


Please watch Jensen Huang's keynote at Computex a few days back, especially the parts on the DGX-2 and NVSwitch. If you are not impressed, so be it, but my point right from the start is that there is a possibility, once costs come down, that we will be seeing this tech in the consumer market, and that, because it's already being utilised now, not still in the design and testing stages like PCIe 4/5, yet to be approved on any upcoming CPU, wise scholar.. NVSwitch may be the more appetising option.

Yes, PCIe on motherboards is needed to communicate with the CPU, but the enterprise and pro markets are steering heavily toward GPGPU for the big tasks now, so the CPU may just become nothing more than a caretaker in the future, while the GPUs chat amongst each other and do all the hard work. Yes, both are needed now, and both will be needed in the future, but currently it's looking as though GPGPU will be the dominant force and do all the talking, freeing up the CPU just to keep the house in order.

I don't think there is any point continuing this debate. I've been clear from the start this is speculation, that I am talking in future tense, and that this is all just crystal ball talk.


I will come back in 2 years just to say I told you so lol.


Have a great weekend :)


Edit: sorry for my overly long post, I do tend to repeat a lot, and don't really have time for editing. I am not sure, though, why my first post seemed to cause offense? I think the prediction is highly plausible.

Is it really impossible to imagine that in a few years, motherboards will have an integrated GPU using Nvidia's new fabric, which supersedes PCIe for communication with the CPU and storage devices....




 

bit_user

Polypheme
Ambassador

The latest IBM POWER CPUs actually do support that. But that just goes to show what a strategic feature it is.



As I mentioned, POWER CPUs support PCIe 4.0 today. They have for some months, in fact.

According to this, EPYC 7 nm will be sampling with PCIe 4.0 support, by the end of the year:

https://wccftech.com/amd-zen-2-design-complete-7nm-epyc-2018/


It takes a bank of NVSwitches to accomplish that, burning a substantial amount of power in the process. DGX-2 only makes sense for cases where people need fast communication between > 8 GPUs. Otherwise, they're better off buying two DGX-1's.

NVswitch is not fundamentally different than PCIe switches, BTW.


No, because NVSwitch is only used for GPU <-> GPU communication. It has nothing to do with the CPUs. Just because it sits on the same motherboard doesn't mean Intel has to know anything about it.


How do you know that?


No, I watched exactly one product launch (Pascal GPUs) by that windbag and that's quite enough for me. I did read all of the specs and coverage of the DGX-2 and NVswitch, when it was announced several months ago. The important part is the details, not his hyperbolic presentations.


The problem with that is that the GPUs need to be fed. So, they need fast access to main memory and storage. AMD has taken the approach of equipping their EPYC with 128 lanes of PCIe, enabling not only fast GPU <-> GPU communication, but also fast GPU <-> CPU/RAM/Storage.


Sure, come back in 2 years.

I still don't see why AMD or Intel would adopt NVLink, instead of continuing with PCIe. Aside from all the other arguments in favor of PCIe that I've mentioned, it also has backward compatibility stretching back 15 years. Why throw all that away, do something hugely disruptive to their ecosystem, and give Nvidia the strategic high ground?
 

bit_user

Polypheme
Ambassador
If we're speculating about GPU delays, the most plausible reason I can see is probably related to ramping up GDDR6 production.

Especially if they want to keep costs down, on the GTX 1180, they'd probably like to stick with a 256-bit interface. That pretty much means GDDR6, production of which only started earlier this year.
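
To put rough numbers on why a 256-bit GDDR6 interface is attractive, here's a minimal sketch in Python (assuming the 14 Gb/s launch speed grade for GDDR6; the "1180" configuration is, of course, hypothetical):

Code:
# Memory bandwidth ~= bus width (bits) / 8 * per-pin data rate (Gb/s).
def mem_bw_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print("GTX 1080, 256-bit GDDR5X @ 10 Gb/s:", mem_bw_gb_s(256, 10), "GB/s")          # 320.0
print("Hypothetical 1180, 256-bit GDDR6 @ 14 Gb/s:", mem_bw_gb_s(256, 14), "GB/s")  # 448.0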
 

DDWill

Distinguished
Aug 3, 2009
28
0
18,530


The latest IBM POWER CPUs actually do support that. But that just goes to show what a strategic feature it is.


Although IBM may be adopting this, and although I am talking about enterprise technology being used later in the consumer markets once production costs come down (which always happens), I don't think the majority of users here can relate to using IBM POWER servers; they are much more interested in seeing compatibility with mainstream and enthusiast platforms, and for some (3ds artists etc.) with Xeon / Threadripper / EPYC CPUs.

As I mentioned, POWER CPUs support PCIe 4.0 today. They have for some months, in fact.

According to this, EPYC 7 nm will be sampling with PCIe 4.0 support, by the end of the year:

Same answer as above.... plus sampling is just another way of saying testing stages, and not actually ready for launch...

It takes a bank of NVSwitches to accomplish that, burning a substantial amount of power in the process. DGX-2 only makes sense for cases where people need fast communication between > 8 GPUs. Otherwise, they're better off buying two DGX-1's.

If you look at benchmarks like the V-Ray benchmark, at the top of the leaderboard you'll see people using 8x 1080 Tis on the same motherboard and CPU as someone using just 4x 1080 Tis, but getting worse results and throwing away a huge amount of cash, because they didn't account for the huge amount of bottlenecking.

NVSwitch solves this: the NVSwitch fabric in the DGX-2 basically turns 16 GPUs into one huge GPU with 0.5 TB of GPU RAM and over 80,000 CUDA cores, all communicating with low latency as one GPU. For most tasks, especially something like production rendering, the majority of scenes won't even need to access system RAM once the scene is transferred to the GPUs; the GPUs will do all the work while the CPU(s) just manage the system and storage, which was the point of NVLink/NVSwitch in the first place: to take away most of the involvement of the CPU, speeding up the process and cutting out the middle man..

Of course the majority cannot afford a DGX-2, and would need to sell off a few houses to get one, but if I won one in a lottery I certainly wouldn't be grumbling about it, although I might when I see the electric bill... A scaled-down version, once it reaches consumers, would probably connect 4 to 6 GPUs at most, maybe up to 8 on server boards, where it would have definite advantages over existing PCIe 3. The main point of NVLink/NVSwitch is that it takes away the need to access the CPU as regularly, and with the huge onboard memory coming with pro Volta cards now (which will be seen in consumer cards eventually), most people, even 3D artists, won't need access to system RAM once everything is loaded onto the GPUs for final production renders. Even the new Quadro GV100 uses a pair of NVLink2 connectors, doubling the bandwidth and creating a 64 GB, 10,240 CUDA core monster, which would definitely satisfy my needs for 3D production work, but ouch, yes, very pricey.
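
For what it's worth, the aggregation math above checks out against the published Tesla V100 figures; here's a minimal sanity check in Python:

Code:
# Sanity check of the DGX-2 aggregation figures (published Tesla V100 specs).
GPUS = 16
V100_CUDA_CORES = 5120   # per GPU
V100_HBM2_GB = 32        # 32 GB variant used in the DGX-2

print("Total CUDA cores:", GPUS * V100_CUDA_CORES)     # 81920
print("Total GPU memory:", GPUS * V100_HBM2_GB, "GB")  # 512 GB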


No, because NVSwitch is only used for GPU <-> GPU communication. It has nothing to do with the CPUs. Just because it sits on the same motherboard doesn't mean Intel has to know anything about it.

Are you seriously saying that Nvidia and Intel didn't work together on this, and that Nvidia didn't need any kind of permission or guidance in using Intel's proprietary technology? Of course they did. Nvidia has to work closely with its partners all the time, whether that's software, drivers or hardware integration.

How do you know that?

Because I regularly read the latest information on upcoming CPUs, mainly for the enthusiast and pro markets, as my main use is production rendering for industrial design and arch viz, and I like to evaluate the best time to upgrade to a new system: performance over the last generation, whether it's worth the cost of a new build, or whether to wait. Whitepapers are regularly available even for products not yet launched (within a small timeframe window), and specs regularly appear on tech sites showing the capabilities of the next CPU refresh. The next batch of i9s and Xeons still only supports PCIe 3.

The problem with that is that the GPUs need to be fed. So, they need fast access to main memory and storage. AMD has taken the approach of equipping their EPYC with 128 lanes of PCIe, enabling not only fast GPU <-> GPU communication, but also fast GPU <-> CPU/RAM/Storage.


I think you are presuming from your AMD-sided arguments so far that I am some kind of Nvidia fanboy, but I consider these red-vs-green arguments a little immature. My next build may be Xeon or i9, or maybe Threadripper / EPYC; it all depends on what has the best performance for my budget at the time and the software I use. Unfortunately, on the GPU side that currently means Nvidia's CUDA technology for V-Ray RT / Iray rendering. The motherboard is still undecided, as I probably won't build a new system until next summer, so there's plenty of time to evaluate. Although I may upgrade my GPUs in the next few months, and due to the industry I work for, I would prefer Volta with RTX compatibility.

I am not a huge fan of Nvidia. I think their enthusiast and pro card prices are now seriously out of control, but they do dominate the industry I work in, so unfortunately I have to tolerate the very bitter taste in my mouth every time I have to upgrade my GPUs. However, my aging 4x Titans can now be replaced by just one 250 W card for GPU production rendering, the Titan V, so for me that's a tempting investment even at their price point. Maybe two, and double my speeds; more than two and I would be running into the bottleneck problem on my aging system, with two Titan Vs nearly maxing out the aging (launched in November 2010) PCIe 3 bandwidth.

You say the PCIe designers haven't been dragging their heels, but it was obvious both GPU and storage needs were nearing the limits of PCIe 3, yet they are still not ready, and it has taken so long that PCIe 4 probably won't see the light of day, since PCIe 5 has already superseded it. Neither has made it onto a single consumer motherboard or consumer CPU integration list, nearly 8 years since the launch of PCIe 3, so it's no wonder Nvidia took the initiative and designed their own interface, ready and in use now, which I believe will eventually see the light of day in the consumer markets...


As I said above, with the low-latency sharing of both processing and memory between multiple GPUs, creating one huge VRAM pool, most data models will not even need system RAM once all data is moved to the GPUs for final computation. One of the main reasons Threadripper and EPYC are still on my radar is the 64/128 lanes, but doing everything on the GPUs speeds things up enormously, so a pair of NVLinks for two cards, or NVSwitch tech for multiple cards, makes this a reality for the first time, thanks to the core counts and huge VRAM on board now..

Ask any CG artist what the most annoying part of a project is, and it's usually waiting to see the end results. With everything on board the GPUs, that's now a lot faster, almost instantaneous, compared to CPUs, which are now a lot better than in the past but nowhere near the speed of GPUs for light tracing. Software like Arnold is impressive but still nowhere near the speed of multiple GPUs. Yes, GPUs generally burn more watts, but for a lot less time per frame, making them more efficient.

I still don't see why AMD or Intel would adopt NVLink, instead of continuing with PCIe. Aside from all the other arguments in favor of PCIe that I've mentioned, it also has backward compatibility stretching back 15 years. Why throw all that away, do something hugely disruptive to their ecosystem, and give Nvidia the strategic high ground?


It's called progress, and I think backward compatibility, although a good thing, can also be a bad thing in that it holds back... again, progress, and makes each new launch of software, firmware and hardware a nightmare to program for and to iron out all the bugs so that everyone's happy. On my aging system I am glad for backward compatibility, but the side of me that wants to see progress happen a lot faster to speed up my business says some of the old needs to be thrown away to make way for new technologies that could be implemented now.

Again I ask, why is it so hard to believe that one day soon an onboard GPU will replace PCIe with Nvidia's NV fabric, superseding PCIe? Or that the next gen of GTX/RTX cards may use NVLink2: add two, double the bandwidth, and create a monster GPU. I think most enthusiasts would love to see this, that is if the cost of NVLink2 comes down a great deal. But I think anyone hoping for prices to get lower will be sorely disappointed, as R&D on these multibillion-transistor chips is only going to escalate. I find it ridiculous in some ways: yes, they throw in billions of dollars to design and test these things, which needs to be recouped, but at the end of the day you are left with a piece of sand, metal and plastic with a scrap value of $5 and the retail value of a new small car, which has a much higher scrap value??

Anyway, time will tell..





 

bit_user

Polypheme
Ambassador

They probably aren't using EPYC and NVMe storage, then.


You could say the same thing about AMD's Infinity Fabric (which is built atop PCIe).


That's exactly what I'm saying. What proprietary technology of Intel's did Nvidia use by slapping their NVswitches on a board that also had Xeons? Intel doesn't and can't know everything about every chip that anyone puts on a board that also has an Intel CPU.

So, what kind of engineering is it that you do?



That was my way of asking for a source. If you can't provide me a link to a server roadmap or other document indicating they're still using PCIe 3.0, then I don't believe you.


PCIe 4.0 is progress. Again, why not just use that?

As for backward compatibility, I'm not aware of PCIe causing any such headaches.
 

bit_user

Polypheme
Ambassador

Here is something you'd probably enjoy:

https://www.anandtech.com/show/11551/amds-future-in-servers-new-7000-series-cpus-launched-and-epyc-analysis/2

Keep in mind, this is only first-gen EPYC, which is built on PCIe 3.0. Hopefully, PCIe 4.0 will result in a doubling of these numbers.

Another thing to keep in mind is that Infinity Fabric spans the entire range of AMD products - from Vega to Ryzen APUs and CPUs - not just Threadripper and EPYC.
 