Nvidia GPUs Can Outperform Google Brain

Was there ever any doubt? Workloads have been parallelized wherever it was practical for the past 30+ years on mainframes and mini-computers.

The reason desktop computing is so heavily reliant on single-thread performance is that most user-interactive code does not multi-thread easily and often does not scale well or meaningfully when it does.
 
Depends on the workload. In HPC the workloads are often doing the same operation on lots of data, which is exactly what GPGPU systems are good at, since they are SIMD (single instruction, multiple data) systems. I am a researcher in CFD (computational fluid dynamics), and it is a field that is moving away from solving Navier-Stokes on a finite volume solver (which can be sped up with CUDA to a degree) toward lattice Boltzmann techniques, as they solve much quicker on GPGPU systems using CUDA (or the similar open standards, but a lot currently use CUDA).
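
To make the SIMD point concrete, here is a minimal CUDA sketch of the collision (BGK relaxation) step of a lattice Boltzmann solver, assuming the distribution array f and its local equilibrium f_eq are already on the device; the names and the relaxation time tau are illustrative and not taken from any particular solver. Every lattice entry gets exactly the same arithmetic, so one thread per entry maps straight onto the GPU.

    // BGK collision: the same relaxation applied to every lattice entry,
    // i.e. the SIMD-style work a GPU handles well. Illustrative names only.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void bgk_collide(float *f, const float *f_eq, float tau, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            f[i] -= (f[i] - f_eq[i]) / tau;    // relax toward local equilibrium
    }

    int main()
    {
        const int n = 1 << 20;                 // one million lattice entries
        float *f, *f_eq;
        cudaMallocManaged(&f, n * sizeof(float));
        cudaMallocManaged(&f_eq, n * sizeof(float));
        for (int i = 0; i < n; ++i) { f[i] = 1.0f; f_eq[i] = 0.5f; }

        bgk_collide<<<(n + 255) / 256, 256>>>(f, f_eq, /*tau=*/0.6f, n);
        cudaDeviceSynchronize();

        printf("f[0] after collision: %f\n", f[0]);
        cudaFree(f);
        cudaFree(f_eq);
        return 0;
    }

The streaming step and boundary handling are where a real solver gets more involved, but the bulk of the arithmetic stays this embarrassingly parallel, which is why the method maps so well to CUDA.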
 
Well, it's neat, but considering we have brains of our own, until it assists ours in better ways more often, pretty much IDGAF, to be blunt about it. We need more real-world practicality for this to matter much from a consumer standpoint.
 
Machine Learning is a branch of artificial intelligence that becomes smarter as more data is presented; it actually learns, giving the impression that the PC is thinking.
Please get someone who actually knows what they're talking about to write these articles. Seriously, you can tell a hack wrote the article because all the facts are wrong. Machine learning is a classification method that, given a set of training data, classifies input data. Hence what Google Brain did. Or, from Wikipedia:
The core of machine learning deals with representation and generalization.
Huang said that it's now possible using three GPU-accelerated servers: 12 GPUs in total, 18,432 CUDA processor cores (Google Brain has around 16,000 cores)
That's not what he said. He said you could fit the Google Brain in three Titan Zs; he stops short of claiming it has the same abilities. Besides, don't you read the technical details of the articles you write? On more than one occasion it's been pointed out that processing power =/= number of cores. Thinking, as this article implies, actually requires a great deal more than just machine learning.
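
To put the "given a set of training data, classifies input data" description in concrete terms, here is a hedged toy sketch (it has nothing to do with Google Brain's actual system, and all names and numbers are made up for illustration): a nearest-centroid classifier, where the class centroids are fit from a handful of labeled points on the CPU and a CUDA kernel then labels each query point in parallel.

    // Toy nearest-centroid classifier: "train" by averaging labeled points
    // per class on the host, then classify query points in parallel on the GPU.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void classify(const float2 *queries, const float2 *centroids,
                             int num_classes, int *labels, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float best = 1e30f;
        int best_c = 0;
        for (int c = 0; c < num_classes; ++c) {
            float dx = queries[i].x - centroids[c].x;
            float dy = queries[i].y - centroids[c].y;
            float d = dx * dx + dy * dy;            // squared distance
            if (d < best) { best = d; best_c = c; }
        }
        labels[i] = best_c;
    }

    int main()
    {
        // Toy training set: class 0 clusters near (0,0), class 1 near (5,5).
        float2 train[4]   = {{0, 0}, {1, 0}, {5, 5}, {6, 4}};
        int    train_y[4] = {0, 0, 1, 1};

        // "Learning": compute the mean of the training points for each class.
        float2 centroids[2] = {{0, 0}, {0, 0}};
        int counts[2] = {0, 0};
        for (int i = 0; i < 4; ++i) {
            centroids[train_y[i]].x += train[i].x;
            centroids[train_y[i]].y += train[i].y;
            counts[train_y[i]]++;
        }
        for (int c = 0; c < 2; ++c) {
            centroids[c].x /= counts[c];
            centroids[c].y /= counts[c];
        }

        float2 queries_h[2] = {{0.5f, 0.5f}, {5.5f, 5.0f}};
        float2 *queries_d, *centroids_d;
        int *labels_d, labels_h[2];
        cudaMalloc(&queries_d, sizeof(queries_h));
        cudaMalloc(&centroids_d, sizeof(centroids));
        cudaMalloc(&labels_d, sizeof(labels_h));
        cudaMemcpy(queries_d, queries_h, sizeof(queries_h), cudaMemcpyHostToDevice);
        cudaMemcpy(centroids_d, centroids, sizeof(centroids), cudaMemcpyHostToDevice);

        classify<<<1, 32>>>(queries_d, centroids_d, 2, labels_d, 2);
        cudaMemcpy(labels_h, labels_d, sizeof(labels_h), cudaMemcpyDeviceToHost);
        printf("query 0 -> class %d, query 1 -> class %d\n", labels_h[0], labels_h[1]);

        cudaFree(queries_d);
        cudaFree(centroids_d);
        cudaFree(labels_d);
        return 0;
    }

The "learning" here is just averaging the training points per class; the point is only to show the train-then-classify shape the comment describes, not to suggest this is what large-scale systems do.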
 
^^^ Parrish has been put to the sword, lol. I would say cut the guy some slack, but Tom's Hardware is meant to be one of the biggest tech sites in the world.
 
It's Kepler. MSI's overseas announcements indicate a higher-end model (probably with the 2880x1620 display it was shown with at CeBIT) with an 870M, which is Kepler-only. Speculation is they went with the Kepler 860M so they could use the same motherboard with both models.
This is pretty cutting-edge hardware. It's a quad-core Haswell, which just by itself means about a $1000 laptop. On top of that it's got a gaming-worthy GPU. It's AFAIK the lightest quad-core 15" laptop out there despite having a metal chassis (less than 2 kg). Unlike ultrabooks, its memory and drives are all upgradeable. It actually has what looks like two M.2 SSD slots, not mSATA, so conceivably you could set it up with a ~2 GB/sec RAID-0, and you can also put in a 2.5" drive. The panel is PLS (IPS-equivalent), with the lone Russian review so far saying it covers 95%-100% of sRGB, and the keyboard has gotten rave reviews.

If you're in the market for a $400 laptop, obviously it isn't being marketed to you. But if you're like me and are willing to pay for a truly portable workstation that will let me edit my photos as well as play 3D games, it's very tempting. It's cheaper and lighter than both the MacBook Pro and the Dell XPS 15, with the same or better features and much better upgradability.
But editing photos is waaay better on a higher-res screen such as the MacBook's Retina display, the QHD display on the XPS, or the QHD+ (3200x1800) on the Samsung ATIV 9 Plus, for around the same price or even cheaper. Those MSI laptops only sport an FHD display, and yes, laptop GPUs are all overpriced. If you can afford a gaming laptop, most likely you have a great gaming rig on your desk as well. I'll just play on my desktop's bigger screen rather than tiring my hands on a laptop palm rest and straining my eyes and neck on a smaller laptop screen in an unergonomic position (you have to choose between resting your hands comfortably but hurting your neck in the long run, or sparing your neck but putting your hands in a weird position). If you're saying "then hook up a keyboard or hook it up to a bigger screen," I'd just play on my desktop; a gaming laptop is overpriced hardware. Gaming on the go? I'll go for tablets; gaming on a 15" or bigger screen in a car, bus, or plane is not comfortable, except on a train when the trip is long. Gaming anywhere else, e.g. a cafe or a friend's place? I'll build a gaming HTPC with a portable monitor, since you'll need to connect the charger anyway to get maximum performance from a gaming laptop; otherwise two hours is your max gaming time.

Well, if you're smart like me, you'll buy a laptop with a better screen and decent specs for editing photos (at least if you mean professional level; if not, then go ahead) and game anywhere as my comment describes. This applies if you don't want to waste your money buying this overpriced hardware. Otherwise, buy it anyway.
If you were smart you wouldn't have any professional relationship with editing photos, lol; you'd have a better job.
 
I have the GS70 from last year. I absolutely love it. I use it primarily for work 10+ hours a day, and when I am travelling for work I can easily game on it. It has worked flawlessly. I have dual 128GB SSDs in RAID 0 and a 750GB D: drive. It boots in about 6 seconds. The screen is great and 1080p is fine. When I am at home, I have two 27" 2560x1440 monitors plugged in, as well as a 22" 1080p and the primary screen; that's four monitors that I can extend the desktop across.

One thing I noticed is that the new version only has one mDP, which is a disappointment. The processor is the same, as well as the amount of memory and the Killer wireless (which works great; extremely strong signal).
 
Was there ever any doubt? Workloads have been parallelized wherever it was practical for the past 30+ years on mainframes and mini-computers. The reason desktop computing is so heavily reliant on single-thread performance is that most user-interactive code does not multi-thread easily and often does not scale well or meaningfully when it does.
Actually, it's not totally true. There are limits to parallelization, especially with CUDA. The problem with neurons is that they are asynchronous and they are not really similar to transistors in the way they work. Thus, despite NVIDIA's claims, the efficiency will not be that great when simulating a brain with GPUs... The number of cores matters, but it is not the only consideration when it comes to simulation.
 

The way neural networks get modeled in computer science/engineering is only very loosely based on their biological inspiration. Each node/neuron has an input and output table, internal weighting tables for each of those inputs/outputs, and potentially different transfer functions, and all of these get processed in discrete steps until the system converges on a solution or learning state.

The whole process is a lot like finite element analysis, and the code can theoretically scale to about as many cores as there are elements in the simulation model, since the way each element responds to its environment is derived from the system's previous step state.
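
A rough sketch of that idea, with made-up sizes and weights rather than any particular framework's code: each node's next activation is a weighted sum of the previous step's activations pushed through a transfer function, and because step t+1 depends only on step t, one CUDA thread can update each node independently.

    // Discrete-step update of a toy fully-connected network: one thread per
    // node, each reading only the previous step's state. Illustrative only.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void step(const float *w, const float *prev, float *next, int n)
    {
        int j = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per node
        if (j >= n) return;
        float sum = 0.0f;
        for (int i = 0; i < n; ++i)
            sum += w[j * n + i] * prev[i];               // weighted inputs
        next[j] = tanhf(sum);                            // transfer function
    }

    int main()
    {
        const int n = 1024;                              // number of nodes
        float *w, *a, *b;
        cudaMallocManaged(&w, n * n * sizeof(float));
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        for (int i = 0; i < n * n; ++i) w[i] = 0.001f;   // toy weights
        for (int i = 0; i < n; ++i)     a[i] = 1.0f;     // initial state

        for (int t = 0; t < 10; ++t) {                   // discrete update steps
            step<<<(n + 255) / 256, 256>>>(w, a, b, n);
            cudaDeviceSynchronize();
            float *tmp = a; a = b; b = tmp;              // next state becomes current
        }
        printf("node 0 activation after 10 steps: %f\n", a[0]);

        cudaFree(w);
        cudaFree(a);
        cudaFree(b);
        return 0;
    }

Swapping the two state buffers between kernel launches is what keeps each step a pure function of the previous one, which is exactly the property that lets the update scale out to about as many cores as there are nodes.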
 