News IBM demonstrates useful quantum computing with 133-qubit Heron, announces entry into quantum-centric supercomputing era

Status
Not open for further replies.

Admin

Administrator
Staff member
IBM's Quantum Summit 2023 showcased an energized company that feels like it's opening the doors to a quantum-centric supercomputing era. That vision is built on the company's new Quantum Processing Unit, Heron, which demonstrates scalable quantum utility at a 133-qubit count and already offers capabilities beyond what any feasible classical system could deliver. Breakthroughs and a revised understanding of its own trajectory have led IBM to present its quantum vision in two different roadmaps, prioritizing scalability and useful, minimum-viable quality over monolithic, hard-to-validate, high-complexity products.

IBM demonstrates useful quantum computing with 133-qubit Heron, announces entry into quantum-centric supercomputing era : Read more
 
  • Like
Reactions: George³

bit_user

Polypheme
Ambassador
I'll come back and read this in full later. I just wanted to suggest that longer articles like this one would benefit from a call-out box (not sure if that's the correct term) that gives a bullet-list summary of the article's key points. Not saying this is one of them, but there are some articles where I have just enough interest to see what they're about, yet I'm not invested enough to read a piece of this length or longer.

Perhaps not unlike how the reviews put the rating and pros + cons right up front. In the best-case scenario, there might be one or more items in that list which intrigue me and motivate me to read at least a portion of the article in full.
 
  • Like
Reactions: domih and Sluggotg

waltc3

Reputable
Aug 4, 2019
424
228
5,060
The two biggest marketing buzzwords in use today are "AI" in the #1 spot, followed by "Quantum Computing" in the #2 slot. The so-called "breakthroughs" we see announced periodically exist primarily to secure continuing investment in the companies making the announcements. Yet final products are still a decade or more away, much as with controlled fusion reactions, etc. Don't hold your breath waiting, is my advice...;)
 
  • Like
Reactions: Sluggotg

DougMcC

Commendable
Sep 16, 2021
135
94
1,660
The two biggest marketing buzzwords in use today are "AI" in the #1 spot, followed by "Quantum Computing" in the #2 slot. The so-called "breakthroughs" we see announced periodically exist primarily to secure continuing investment in the companies making the announcements. Yet final products are still a decade or more away, much as with controlled fusion reactions, etc. Don't hold your breath waiting, is my advice...;)
The company I work for has had an AI solution in production for almost 6 months, producing major cost savings. AI may not live up to every promise it has ever made, but it is already making major change happen.
 
  • Like
Reactions: bit_user and domih

waltc3

Reputable
Aug 4, 2019
424
228
5,060
OK, explain to me the difference between a good production program for your company making cost savings and your company's internal version of "AI". All "AI" is, is computer programming--garbage in, garbage out, common, ordinary computer programming. That's it. "AI" is incapable of doing something it was not programmed to do. We'll see how the "AI" situation looks a year from now, after the novelty and the hype have worn off. That's the thing I object to--this crazy idea that AI can do things far in advance of what it is programmed to do--AI is not sentient, doesn't think, has no IQ, and does only what it is programmed to do. The rest of the AI hype is pure fiction--and it's being spieled right now as hyperbolic marketing. It's so false, it often seems superstitious.
 

domih

Reputable
Jan 31, 2020
189
171
4,760
OK, explain to me the difference between a good production program for your company making cost savings and your company's internal version of "AI". All "AI" is, is computer programming--garbage in, garbage out, common, ordinary computer programming. That's it. "AI" is incapable of doing something it was not programmed to do. We'll see how the "AI" situation looks a year from now, after the novelty and the hype have worn off. That's the thing I object to--this crazy idea that AI can do things far in advance of what it is programmed to do--AI is not sentient, doesn't think, has no IQ, and does only what it is programmed to do. The rest of the AI hype is pure fiction--and it's being spieled right now as hyperbolic marketing. It's so false, it often seems superstitious.
True. ChatGPT kicked off a giant hype wave 1+ year ago, with a lot of people not understanding what it actually does and journalists dealing in hyperbole to glue readers' eyeballs to their websites.

However, AI is an umbrella term that covers a lot of activities.

All "AI" is, is computer programming--garbage in, garbage out, common, ordinary computer programming.

A giant no here. Neural networks at the center of deep-learning algorithms are NOT ordinary computer programming. They are not imperative programming. They are not object-oriented programming. They are not functional programming. Search the web and/or Wikipedia for "neural networks and deep learning"; you'll find plenty of detailed introductory information (e.g. the sigmoid function, backpropagation, CNNs, RNNs, FNNs, DNNs, ANNs, transformers, etc.). There are quite solid mathematical frameworks behind deep learning.

That's it. "AI" is incapable of doing something it was not programmed to do.

Again, a giant no here. Search the web for "AI solving new math problems".

In a more prosaic example, if you feed a neural network a large number of images of cats, it WILL be capable of recognizing a cat it has never seen before. The NN was "programmed" to recognize cats, but NOT to recognize that particular, never-before-seen cat, yet it does recognize it with pretty high probability. Imperative programming cannot do that.
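To make the point concrete, here is a minimal, illustrative sketch of that generalization effect, with synthetic 2-D feature vectors standing in for cat images; every name and number below is invented for the example, and a single logistic neuron stands in for a real deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two clusters of training examples: "cat" features near (1, 1),
# "not cat" features near (-1, -1).
X = np.vstack([rng.normal(1.0, 0.5, (50, 2)),
               rng.normal(-1.0, 0.5, (50, 2))])
y = np.array([1.0] * 50 + [0.0] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(2), 0.0
for _ in range(500):                 # gradient descent on cross-entropy loss
    p = sigmoid(X @ w + b)           # forward pass: current predictions
    err = p - y                      # per-example error terms
    w -= 0.1 * X.T @ err / len(y)    # backward pass: adjust the weights
    b -= 0.1 * err.mean()

unseen = np.array([0.8, 1.3])        # a "cat" never present in the training set
print(sigmoid(unseen @ w + b))       # close to 1.0: recognized anyway
```

Nobody wrote an if/else for the point (0.8, 1.3); the training loop induced a decision rule that covers it.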

---

I use AI every day. Examples:

a. I use AI to generate the documentation of my code. It saves a lot of keyboard data entry. It is usually accurate for simple functions; for more complex algorithms it fails, BUT it gives me a draft template I can correct. Writing the documentation is not the only use of AI for coding; in the future, I'll also probably use it to write class skeletons, again saving a lot of typing. I tried using AI for generating unit tests, and it's OK, BUT it limits itself to testing the successful execution path(s). So I still have to write the tests that exercise the failures, in other words attempting to cover all the execution paths (see the sketch after this list). My code coverage is currently 90%; I aim at 95+%. Why? Because the type of customers using our products want this level of code coverage, otherwise they reject the proposal. Think large corporations or government agencies looking for cybersecurity solutions.

b. I'm also developing an IT server integrating radiology AI providers with the IT systems used in hospitals. Each time I run tests, radiology images (e.g. x-rays) get analyzed by one of several 3rd-party AI providers. AI radiology is increasingly accepted and requested by practitioners because AI may detect an anomaly that a human radiologist might not see. In the case of cancer-like illnesses, detecting earlier can be life-saving for patients: when a cancer-like anomaly is detected too late (e.g. already at the metastasis phase), the potential for death is much higher, whereas treating a cancer-like illness that has just started is usually successful. That's the reason health-imaging AI is skyrocketing. The human practitioners still make the final analysis and reports; imaging AI is an additional tool that does not replace the humans.

c. If you use social networks (I don't) or a cell phone (I do), there are AI algorithms to recognize faces in images, generate suggestions based on your browsing, etc. Again, AI is an umbrella term that covers many sub-fields: deep learning, machine learning, image recognition, LLMs, generative AI, etc.
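Returning to point (a), here is a hypothetical illustration of the unit-test coverage gap (the function and test names are made up for this example): the first test is the kind an AI assistant tends to generate, exercising only the success path, while the hand-written second test covers the failure branches, which is what pushes coverage toward 95+%.

```python
import pytest

def parse_port(value: str) -> int:
    """Parse a TCP port number, rejecting anything out of range."""
    port = int(value)                  # raises ValueError on non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def test_parse_port_success():         # AI-generated style: success path only
    assert parse_port("8080") == 8080

def test_parse_port_failures():        # hand-written: covers the error branches
    with pytest.raises(ValueError):
        parse_port("not-a-number")     # non-numeric input
    with pytest.raises(ValueError):
        parse_port("70000")            # numeric, but out of range
```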
 
Last edited:

DougMcC

Commendable
Sep 16, 2021
135
94
1,660
OK, explain to me the difference between a good production program for your company making cost savings and your company's internal version of "AI". All "AI" is, is computer programming--garbage in, garbage out, common, ordinary computer programming. That's it. "AI" is incapable of doing something it was not programmed to do. We'll see how the "AI" situation looks a year from now, after the novelty and the hype have worn off. That's the thing I object to--this crazy idea that AI can do things far in advance of what it is programmed to do--AI is not sentient, doesn't think, has no IQ, and does only what it is programmed to do. The rest of the AI hype is pure fiction--and it's being spieled right now as hyperbolic marketing. It's so false, it often seems superstitious.

I'll give this my best shot.
Using imperative programming, we can define that if a customer case involves parameters X, Y, and Z, it should be routed to Developer Dave for solutioning. Maintaining such a mapping from parameters (in potentially many more than 3 dimensions) to developers is hard work and high-maintenance. So in practice we don't do this; we rely on relatively expert humans to do their best.

That is, until modern AI. Now a modern AI analyzes commits to our codebase, connections between components, our documentation, and our internal wiki, and develops a model (which I can't say I fully understand) of who has expertise on which topics. It routes cases to developers more accurately than the humans did, at a fraction of the cost.
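As a hedged sketch of that contrast (all names, rules, and training data below are invented), the imperative version must enumerate every parameter combination by hand, while a model fitted on historical (case description, assignee) pairs generalizes to wordings it has never seen:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Imperative approach: an explicit, hand-maintained mapping from case
# parameters to a developer. Every new combination needs a new rule.
RULES = {
    ("billing", "timeout", "eu"): "dave",
    ("auth", "crash", "us"): "priya",
}

def route_by_rules(params: tuple) -> str:
    return RULES.get(params, "triage-queue")  # unknown combos fall through

# Learned approach: fit a text classifier on past cases and let it infer
# the routing for descriptions it has never seen.
history = [
    ("payment job times out for EU tenant", "dave"),
    ("login service crashes on US cluster", "priya"),
    ("invoice batch hangs past its deadline", "dave"),
    ("session token crash after the upgrade", "priya"),
]
texts, assignees = zip(*history)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, assignees)

# A never-seen description still gets routed (here, most likely to "dave").
print(model.predict(["checkout requests timing out in Europe"])[0])
```

A production system would train on far richer signals (commits, component graphs, docs, the wiki, as described above), but the mechanism is the same.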
 
  • Like
Reactions: bit_user

ttquantia

Prominent
Mar 10, 2023
5
4
515
The company I work for has had an AI solution in production for almost 6 months, producing major cost savings. AI may not live up to every promise it has ever made, but it is already making major change happen.
Yes, sure, "AI", which is really just a little bit more clever statistics than what has been widely available before, is useful for many things.
But, it is pretty much useless, or of marginal usefulness, for 99 per cent of things people currently do with computers.
And, listening to TV, media, and newspapers, AI is supposed to change everything. It will not. For the next couple of years we will continue seeing new things being done with AI which are of questionable utility, and many will slowly disappear once it becomes too obvious that they just do not work. Example: replacing programmers with LLM-based methods.

Same with Quantum Computing. It is potentially useful for a very limited and narrow class of computational problems, maybe, and most likely 10 or 20 years from now. Nothing even close to justifying the current hype and investments.
 

bit_user

Polypheme
Ambassador
the thing I object to--this crazy idea that AI can do things far in advance of what it is programmed to do
Emergent capabilities are apparently a real phenomenon in LLMs.

AI is not sentient,
Nobody (serious) has said it is. It turns out that sentience is not required in order to perform a broad range of cognitive tasks.

doesn't think,
Define thinking.

has no IQ,
IQ is a metric. You can have an LLM take an IQ test and get a score. By definition, it does have an IQ.
 
  • Like
Reactions: DougMcC and domih

bit_user

Polypheme
Ambassador
Yes, sure, "AI", which is really just a little bit more clever statistics than what has been widely available before, is useful for many things.
But, it is pretty much useless, or of marginal usefulness, for 99 per cent of things people currently do with computers.
And, listening to TV, media, and newspapers, AI is supposed to change everything. It will not. For the next couple of years we will continue seeing new things being done with AI which are of questionable utility, and many will slowly disappear once it becomes too obvious that they just do not work.
This is almost exactly backwards. AI gets better as the amount of compute power and memory increases, and as people develop better approaches and generally gain experience with it.

Furthermore, the digitization of society is exactly what makes AI so powerful. The availability of vast pools of data has made it very easy to train AI, and the preponderance of web APIs has made it easy to integrate AI into existing systems.

The AI revolution is only getting started. Yes, there was a hype bubble around LLMs, but the industry is continuing to develop and refine the technology, and LLMs are only one type of AI method being developed and deployed.

Same with Quantum Computing. It is potentially useful for a very limited and narrow class of computational problems,
Oh, but some of those are incredibly high-value problems! Lots of drug-related and materials-science problems fall into the category of things you can tackle only with quantum computing.

Not only that, but the implications for things like data encryption are pretty staggering.
 
  • Like
Reactions: domih

DougMcC

Commendable
Sep 16, 2021
135
94
1,660
Yes, sure, "AI", which is really just a little bit more clever statistics than what has been widely available before, is useful for many things.
But, it is pretty much useless, or of marginal usefulness, for 99 per cent of things people currently do with computers.
And, listening to TV, media, and newspapers, AI is supposed to change everything. It will not. For the next couple of years we will continue seeing new things being done with AI which are of questionable utility, and many will slowly disappear once it becomes too obvious that they just do not work. Example: replacing programmers with LLM-based methods.

Same with Quantum Computing. It is potentially useful for a very limited and narrow class of computational problems, maybe, and most likely 10 or 20 years from now. Nothing even close to justifying the current hype and investments.
Whether or not you are right to disparage it as 'clever statistics', and regardless of how wrong people are to describe it as 'AI', the reality is that this software technique is solving a previously unsolved class of problems, today.

And sure, there is hype, because people see what it can do today, and extrapolate to other, similar problems, and imagine that refinements in this technique will make those problems attackable. And it is more likely that they are right than that they are wrong. It is hard to guess just how many such problems will be approachable with AI within 20 years, which means within one generation it is VERY likely that a GREAT many changes will happen, due to this specific innovation.

Quantum computing is in an even better position. The problems it can solve are completely known, and exceedingly valuable. All that is needed is the hardware.
 
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
Whether or not you are right to disparage it as 'clever statistics',
I know you're not defending this point, but I think such claims should be contested.

Computing statistics involves only a single iteration of forward-propagation. The magic of deep learning happens in the backward-propagation pass + iteration. That's how neural networks gain the ability to detect patterns in the data. As networks become bigger and more complex, they can detect more complex patterns by building atop simpler ones. This abstraction continues until they have the ability to model rules & concepts.
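Here is a minimal numpy sketch of that loop (illustrative only, not taken from any particular framework). XOR is a pattern that no single forward pass of a linear fit can capture, but repeating forward propagation, error computation, and backward propagation lets a small hidden layer build it out of simpler pieces:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])        # XOR of the two inputs

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)    # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)    # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward propagation: a single pass, the "statistics" part.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward propagation: push the error back through the layers and iterate.
    dp = p - y                                    # cross-entropy gradient
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1.0 - h**2)               # tanh derivative
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.05 * grad                      # gradient-descent step

print(p.round(2).ravel())  # approaches [0, 1, 1, 0]: the nonlinear rule emerges
```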

One example is how image generators "know" what a cat looks like, without anyone ever breaking the concept down and explaining the different elements of a cat to them. You can't depict a cat in novel situations, doing novel things, unless you know some basic rules about how cats look and behave, but nobody ever spelled out those rules for the AI. It deduced them by seeing enough examples.

the reality is that this software technique is solving a previously unsolved class of problems, today.
Exactly. You can't deny that it's solving problems which have defied conventional techniques. However, the average person on the street probably doesn't see anything in their life changing markedly. They have no idea what sorts of problems researchers and engineers are using it to solve behind the scenes.

And sure, there is hype, because people see what it can do today, and extrapolate to other, similar problems, and imagine that refinements in this technique will make those problems attackable. And it is more likely that they are right than that they are wrong. It is hard to guess just how many such problems will be approachable with AI within 20 years,
What's notable is how even the foremost AI researchers have been caught off-guard by the progress made in the past couple years. That's why some of them are sounding alarms that we might be closer to building something beyond our control than we thought.
 
Dec 20, 2023
1
0
10
Emergent capabilities are apparently a real phenomenon in LLMs.


Nobody (serious) has said it is. It turns out that sentience is not required in order to perform a broad range of cognitive tasks.


Define thinking.


IQ is a metric. You can have an LLM take an IQ test and get a score. By definition, it does have an IQ.

You know what he is saying. You are simply being pedantic. Those "emergent capabilities" also can produce completely bizarre responses and "answers" too. Don't cherry-pick. Define thinking? I can't, but I can say what it isn't: It isn't taking ones and zeros and comparing them billions of times in mere seconds and outputting an answer that a human compares to his truth and clicks a "yea" or "nay", thus guiding this AI into closer responses. Without the guiding, you get garbage. Just remember when they let the first chatbots out there: the trolls ruined their "consciences" within mere days. This is STILL a problem, especially with GPTs. Yeah, and the IQ answer...smh.
 

bit_user

Polypheme
Ambassador
You know what he is saying. You are simply being pedantic.
Sorry if it seems that way, but I don't accept that someone can say "it's not thinking" without having a functional definition of "thinking". How can you know that X is not Y without ever defining Y?

With a definition, we can devise tests and actually try to see if what these models do fits some functional definition of "thinking". Without a definition, that assertion is merely an article of faith.

Those "emergent capabilities" also can produce completely bizarre responses and "answers" too. Don't cherry pick.
I never said current AI models are all good. I just don't like seeing the spread of misinformation about their capabilities or potential. Whether you're embracing AI or wary of it, it's in your interest to have good information about it.

Define thinking? I can't, but I can say what it isn't: It isn't taking ones and zeros and comparing them billions of times in mere seconds
Wet brains basically function by propagating waves of electrical spikes between neurons. How is that so different from ones & zeros in a computer?

outputting an answer that a human compares to his truth and clicks a "yea" or "nay", thus guiding this AI into closer responses.
How do you train a child or an animal? Reinforcement learning is a powerful tool.

Also, you're overly reductive in your description. What trains AI is computing error terms on the aggregate of its answers, and feeding those backwards. It's a little like when a kid in school takes a test. By seeing which answers were incorrect and working back to the point in their thinking where they got off track, they can correct their mental model and hopefully learn how to solve the problems correctly.

Without the guiding, you get garbage.
Yes, whether it's a wet brain or a digital approximation of one, you need to train it on what knowledge or skills you want it to have.
 