Nvidia Makes Breakthrough In Reducing AI Training Time

Status
Not open for further replies.

bit_user

Polypheme
Ambassador

That's how a lot of research organizations announce their findings, these days. The blogs are generally more accessible than the actual research papers, which are aimed squarely at academic researchers and industry practitioners. However, if you want to see the details, they pretty much always have a link to the papers.

If you go back and look, many of the articles on this site that report on findings announced by researchers at Google, Nvidia, Facebook, Amazon, Microsoft, etc. actually link to blog entries as their source.
 


Is it really that much of a breakthrough? Nvidia is beating the drum about AI, but all they've really done is use video feeds for autonomous cars or throw an algorithm at a neural network. There are so many players in the field; for example, did you know that BlackBerry is deeply involved in car network communication?

But whatever I'm saying, the mighty Nvidia has just discovered how to make cats with an algorithm...
 

bit_user

Polypheme
Ambassador

Hmmm... First you criticize sourcing from a blog. Now you're questioning the novelty of the discoveries? What really is your issue here?

I think it's legit. Look at the images. In one case, they developed a network capable of changing the weather in a photo or changing the species of cat. And the results look absolutely convincing. I can think of a lot of less newsworthy items covered on this site.

They also mentioned a case where a novel type of neural network was developed that can perform image segmentation (among other tasks) with industry-leading accuracy at (they claim) 100x the performance. That sure sounds like it's advancing the state of the art. In fairness, this is a bit esoteric for the typical Tom's reader.


I'm not saying you need to care about neural networks or AI, but IMO criticizing what you're clearly not even trying to understand only reflects poorly on you.

Yeah, we're sorry your EVGA graphics card blew up. Seriously, I wouldn't wish that on anyone. But... I don't really see it as a good reason to dump on everything Nvidia-related.

Where I would voice a complaint about the article is the way it headlines an improvement in training time. It's not clear to me whether the author is assuming that the GAN technique reduces training time (I'm not sure it does - just reduces the need for labelled data) or what.
 
Guys, it's a very COMPLICATED task to do AI like this. In fact, you probably shouldn't even make criticisms based on only a cursory understanding of the facts, let alone from reading this article alone...

Heck, did you even really READ the article? It said there was up to a "100 times" improvement from the training they got by setting up an unsupervised analysis...

In fact, "true AI" (not scripted, limited reactions) requires the system be setup in some way to be unsupervised even if it's just for training purposes, but you guys probably knew that already.
 


C'mon, I'm sure you're just a little bit curious what kind of lion a picture of a toaster would make.

If it can turn a baby kitten into a lion then I'm sure it can at least do something funny with a toaster.

 

bit_user

Polypheme
Ambassador

Thanks, I looked at the papers.

BTW, as impressive as the cherry-picked results are, other results are a fair bit more amusing:

https://photos.google.com/share/AF1QipNCYGAA1lDqXIzEHlE7_s7jfN7LnR3-qMoUXF9coH-0FaDwEAEZjwGPbTedzA5V3w?key=emoyWmw3eWNrdjFZd1lYc2F5QWNtNDUwNDk1dWZR


The 100 times figure is not a claim I saw in the paper itself, and it seems to refer to inferencing performance - not training.
 

It's worth pointing out that there are mistakes even in the "best case scenario" images featured here though. Note the grass appearing on the van's roof in the summer image, or the cars on the left turning into bushes in the rainy image, for example.
 


If that were the case, then the input picture would be unnecessary.

You would simply run the "show me a lion" algorithm and, regardless of the picture, it would show you a lion.

From the pictures above it appears they take a photo and apply "special effects" to make the scene look noticeably different.

Take a photo taken on a sunny day and make it look like it's raining.

It would be impossible to do that with every single picture that has ever been taken without the use of AI to modify the scene.
(Or a very good Photoshop artist)

 

bit_user

Polypheme
Ambassador

Without getting into the details of adversarial networks, you could say the neural network is trained on a database of images. In the process, it creates an implicit representation of much of that information. This information gets encoded in the weights that control how information is transformed between nodes in different layers of the network. These weights can grow to several hundred MB or more, depending on the network architecture.

Then, you show the "rainy" network an input image from a sunny day and it shows you what it thinks that scene would look like on a rainy day. Same idea for turning a cat into a lion. The network implicitly performs this transformation as the data flows through it - it's not explicitly applying Photoshop filters or anything like that.
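If it helps to see the shape of the idea, here's a toy sketch in PyTorch - emphatically not NVIDIA's actual architecture, and the weight file name is purely illustrative - showing that the "rainy-day knowledge" lives entirely in learned layer weights, and that applying it is just a forward pass over the input photo:

```python
import torch
import torch.nn as nn

# Toy image-to-image generator: an encoder/decoder whose learned weights
# implicitly encode what the training photos looked like. Real networks of
# this kind are far larger (weights can run to hundreds of MB).
class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

generator = TinyGenerator()
# In practice you would load weights trained on a large image database, e.g.
#   generator.load_state_dict(torch.load("sunny2rainy.pt"))
# "sunny2rainy.pt" is a hypothetical file name, not a real checkpoint.

sunny_image = torch.rand(1, 3, 256, 256)   # stand-in for a real photo tensor
with torch.no_grad():
    rainy_image = generator(sunny_image)   # the learned transform, end to end
print(rainy_image.shape)                   # torch.Size([1, 3, 256, 256])
```

The point is just that there's no explicit "rain filter" anywhere in the code - the transformation is whatever the weights learned to do during training.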
 