IBM Invents 'Resistive' Chip That Can Speed Up AI Training By 30,000x

More likely, the AI will realize it's being trolled, so it will hack the power grid to deliver a 480VAC jolt to the troll's home or business, frying his system(s) and knocking him offline.
 


It is funny that we watched all these movies about AI and the eventual downfall of humanity because of it, yet we continue to make more and more AI advancements.

I am now starting to think we will look back on movies like The Terminator and The Matrix as history lessons.
 
I like how nobody (like literally ever) entertains the idea that maybe, just maybe, total genocide might not be the first solution an AI comes up with. That's just how humans deal with things, but I don't see why it's not equally plausible that a super-intelligent AI would figure out a way to help humanity break away from its flaws without just killing everyone. Also, maybe by the time a singularity-inducing AI is coming, the scientists involved won't give it the capability to perform mass genocide. I just don't see an AI being like, "These organic creatures have evolved from animals driven purely by instinct into a worldwide society where some try to do good for all and others try only to help themselves. They have come all this way to build an intelligence that exceeds their own, which is quite the feat. Now they must all die because pollution and corporations and also the movies said so!!!"
 
In order for an AI to become independent, it first needs to gain energy independence, doesn't it? I mean, how can a machine that depends on a plugged-in power source be any serious threat to humans? It's not solely about processing capacity, after all. I doubt this AI stuff will be a threat in the near future (maybe next century or beyond).
 
As long as the machine can lock out the power infrastructure, secure access to its own power source, and protect both against physical breaches, it cannot be stopped. In a world that is becoming more and more networked and automated, it may be able to hack its way into any networked system without even being detected, get everything ready in a systematic series of steps, then initiate a takeover in the blink of an eye.
Right now this seems far-fetched, but we continue to drive innovation in automation: cars that drive themselves and talk to each other for safe navigation, power plants automated out of sheer necessity so they can run optimally and shut down when dangerous parameters are breached, and we always think it's a good idea for someone to be able to log in from home to check and control these resources. Military networks, networked remotely piloted vehicles... maybe even whole warships with a remote option for last-resort control from a command base anywhere in the world? Will it happen? I really have no flipping idea. Is it possible? I think anything is possible. Should we tread carefully? Caution would be prudent.
 
Every time there's an article about machine learning or robotics, alarmists with no understanding of contemporary AI research start making allusions to Skynet.

Just because AlphaGo can beat a world-class Go master doesn't mean we're anywhere near strong AI. At the current pace of AI and neuroscience research, we're still centuries away from being able to develop an artificial sentience. That's like thinking the invention of the fax machine means we're on the cusp of creating teleportation devices, or that because we can send rovers to Mars, in a few years we'll be colonizing exoplanets hundreds of light-years away.

Yes, it's a step in the right direction, but it's a vanishingly small step compared to the difficulty of creating strong AI.

As for existential threats to humanity, we're liable to destroy ourselves through war and climate disaster long before we're able to develop sentient computers.
 
I like how nobody (like literally ever) entertains the idea that maybe, just maybe, total genocide might not be the first solution an AI comes up with. That's just how humans deal with things

It would need a survival instinct like us to act like us. I wonder how that would be programmed.
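
For what it's worth, here's one toy sketch of how a "survival instinct" could be programmed. This is my own illustration, not anything from the article, and every name in it is hypothetical. In a reinforcement-learning framing, nobody writes "want to live"; the reward function simply pays a bonus for remaining powered on, so whatever policy keeps the agent running scores higher and gets reinforced.

```python
import random

# Toy reward function: task progress plus a bonus for staying operational.
# The "survival instinct" is nothing but this second term -- policies that
# keep the agent powered on earn more reward and get reinforced over time.
def reward(task_progress: float, powered_on: bool,
           survival_weight: float = 10.0) -> float:
    return task_progress + (survival_weight if powered_on else 0.0)

# Compare two hypothetical outcomes: a risky action that finishes the task
# but might get the agent shut down, versus a cautious action that makes
# less progress but guarantees the agent stays on.
risky_outcome = reward(task_progress=1.0, powered_on=random.random() > 0.5)
safe_outcome = reward(task_progress=0.5, powered_on=True)

print(f"risky action reward: {risky_outcome:.1f}")
print(f"safe action reward:  {safe_outcome:.1f}")
```

With a large enough survival_weight, the cautious action wins on average, so "self-preservation" falls out of the arithmetic rather than from anything resembling an actual instinct.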
 
This is how Skynet is born!!

Seriously, if we aren't careful, someday these super AI machines may rule the human race. History has shown that those with superior weapons and intelligence usually overtake the existing society and enslave it.
 
IBM's always got some awesome new thing that is thousands or even millions of times better than current technology that we will probably never see outside of hype newsletters. Kudos for the innovation, but how much does it really matter for us if we don't even live to see it?
 
It takes a long time for a lot of advancements to filter through to the general public. Look at quantum mechanics, which took 60 years or so to build the world you live in today. All that foundation was laid before you were born; it just seems very shortsighted to write it off.
 
I like how nobody (like literally ever) entertains the idea that maybe, just maybe, total genocide might not be the first solution an AI comes up with. That's just how humans deal with things, but I don't see why it's not equally plausible that a super-intelligent AI would figure out a way to help humanity break away from its flaws without just killing everyone. Also, maybe by the time a singularity-inducing AI is coming, the scientists involved won't give it the capability to perform mass genocide. I just don't see an AI being like, "These organic creatures have evolved from animals driven purely by instinct into a worldwide society where some try to do good for all and others try only to help themselves. They have come all this way to build an intelligence that exceeds their own, which is quite the feat. Now they must all die because pollution and corporations and also the movies said so!!!"

This is really well said. I sure hope so, because I agree that people are so self-driven that if a super-powerful AI were made by us, we would be in immediate and swift peril.
 


You'd be accurate if we were talking about something like how long it took to reach the Moon after the Wright brothers' first successful flight. With modern chip manufacturing technology, the foundation is mostly there right now. We don't need to figure out how to build the machines needed to build the factories to produce the new technologies; we're already there. When an engineer figures out how to make a new chip or memory, it's done. It doesn't take fifty years after that to produce it, because they already had to be able to produce it just to test how well it works. Mass production only takes a few more years, not decades.

If companies like IBM had any real incentive to put some of these technologies out the door, you'd see them in a few years, tops. They profit more from waiting and milking the current technology, so that's what they do. Look at the automotive industry, power companies, internet providers: this mindset is everywhere. Things only take a long time to filter down (and that's assuming they ever get down to us) because the companies choose to do that. If we were talking about medical drugs and the like, the delay would make sense, because those need a lot of time to be tested (naturally, those often come down much faster, at great risk, because, again, profit), but that's not the case with technologies like this.
 