News Apple debuts M4 processor in new iPad Pros with 38 TOPS on neural engine

On the inventors not understanding it: it is happening regardless. All these modern models are just being fed raw data. They train themselves, and they are getting increasingly better at getting results by training themselves. There are several models that can accurately and explicitly describe almost everything in a general picture that was never input to them before. One can vividly describe the color, texture, style, and size of a jacket that was not in its training data, based on its self-learning. Critically, even though the creators don't understand how this is happening, the models are nevertheless getting significantly better with each release.
When the engineers who build these models claim that they "don't understand how the model works" they're being academically dishonest. What they really mean is that they don't necessarily know the particular path through the generative model that the particular query took, or exactly how the training set gets stored into the model in any particular training run. This is a far cry from saying "look, we built the damn thing but we have no idea how it has this emergent functionality!" In fact, they know exactly how it works, because they wrote the algorithm, and the machine is doing exactly what the algorithm tells it to do. This is the very essence of designing non-deterministic algorithms. However, it is extremely academically dishonest to pretend that a non-deterministic algorithm has emergent capability. The engineers at Google are notorious for this kind of ridiculous behavior, but OpenAI isn't immune either.
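To put a concrete face on that point: here is a minimal sketch of temperature sampling, the standard next-token selection step in generative models (the vocabulary and logits below are invented for illustration). The algorithm is fully specified; the single random draw at the end is the only source of run-to-run variation, which is exactly the kind of designed-in non-determinism described above.

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=0.8):
    """Pick the next token: deterministic math, plus one random draw."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                         # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return rng.choice(len(probs), p=probs)         # the only non-determinism

# Toy vocabulary and made-up model output, purely for illustration.
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 1.0, 0.5, 0.1]
print(vocab[sample_next_token(logits)])  # varies per run, by design
```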
 

JamesJones44

Yeah, we disagree on timeframe and endgame. I am looking at the same thing you are; I don't know why it's not painfully obvious, but if you can look at this and see something different, oh well. I just think you are putting yourself in a position for your reality to be jarred. So be it.
From what standpoint are you trying to make this argument? I use and help train ML models every day, including NNs, LLMs, GANs, etc. If you understood how they work, you would quickly see that the assertions being made by you and some others don't align with the technology. The "jar" is likely to come when the realization occurs that the current technology can't do what was believed.

Personally, based on your responses, I don't think you truly understand the technology. I would encourage you to read up on various models, learn how to use them, create some content yourself, learn what they CAN do and what they CAN'T do, and then form an opinion. I feel like a lot of your statements flow from marketing jargon used to sell product and not from technical understanding.
 
I think what's going to be jarring to AI bros is just how little the average person is going to care about AI once the novelty wears off, and how little they will actually want AI in their lives. Even if everything prophesied by the pro-AI/AGI crowd comes true, many will simply reject using it. Essentially, if the product has a market, it will continue to have a market after AI breakthroughs. If the product doesn't have a market, it will continue to not have a market after AI breakthroughs.

The benefit of "AI" for the average joe is in the Human Machine Interface (HMI) aspect of living our lives. Currently it's touch-screen or keyboard/mouse interaction, and the entire field of UI/UX developed around making that interaction as seamless as possible. Being able to extract useful action-verb statements from regular human language is going to have an effect on day-to-day living. Right now it's primarily typing, chat-bot style; it will move to a verbal system where you just say something and the machine uses the language models to extract your intentions and then executes on those intentions. Outside of that, it'll likely accelerate the introduction of personal virtual assistants, which can be very useful for professionals with busy calendars but who aren't important enough to warrant hiring a human.
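As a rough sketch of what that "extract intentions, then execute" loop could look like (a toy in Python; a real system would use a language model for the extraction step, and every name and rule below is invented for illustration):

```python
import re

# Toy stand-in for the intent-extraction step a language model would do.
INTENT_RULES = {
    "set_timer":    re.compile(r"\btimer for (\d+) (seconds|minutes)\b"),
    "send_message": re.compile(r"\btell (\w+) (.+)"),
}

def extract_intent(utterance: str):
    """Map free-form speech/text to (intent, slots), or None if no match."""
    for intent, pattern in INTENT_RULES.items():
        match = pattern.search(utterance.lower())
        if match:
            return intent, match.groups()
    return None

def execute(intent, slots):
    # Placeholder action; a real assistant would call device APIs here.
    print(f"executing {intent} with {slots}")

parsed = extract_intent("Hey, set a timer for 10 minutes please")
if parsed:
    execute(*parsed)   # -> executing set_timer with ('10', 'minutes')
```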
 
Personally, I often think of this playing out like Jarvis did in the Iron Man movies, without all of the weird personified idiosyncrasies. Personal AI assistants are, imo, one of the things that can become a useful product born from all this AI research with the technology we have now. The thing that really holds humans back from embracing the tech at our fingertips is the speed of the interface of communication between us and the technology. If an AI assistant can increase the I/O speed of our use of information, then it's going to be successful, imo.
 
Jarvis did in the Iron Man movies, without all of the weird personified idiosyncrasies.

"Jarvis" was a real actor combining a script with his own emotional language.

'AI' is just a mathematical model used to store relationships between different constructs, words, or tokens. There is no emotion, no feeling, no thought, no intelligence; nothing but "B follows from A, and C has an 80% chance to follow B." There is no personality because there is no person; it would take a human introducing such a thing by deliberately programming it in.
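To make that "B follows A with some probability" picture concrete, here is a toy bigram model in Python (the training sentence is invented). Real LLMs learn these relationships as neural-network weights rather than a lookup table, but the next-token-probability framing is the same:

```python
import random
from collections import Counter, defaultdict

# Made-up training text, purely for illustration.
text = "the cat sat on the mat the cat ran".split()

# Store nothing but counts of which word follows which.
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def next_word(word):
    counts = follows[word]
    total = sum(counts.values())
    # Draw the next word in proportion to how often it followed `word`.
    return random.choices(list(counts), [c / total for c in counts.values()])[0]

print(follows["the"])    # Counter({'cat': 2, 'mat': 1})
print(next_word("the"))  # 'cat' about 2/3 of the time, 'mat' about 1/3
```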

Only around 20% of human communication is verbal, the rest is in the emotional expression by our voices, face and body gestures.
 
"Jarvis" was a real actor combining a script with his own emotional language.
That's why I said like Jarvis but without the personifications and idiosyncrasies. I am specifically referring to the functions of Jarvis, nothing else.

'AI' is just a mathematical model used to store relationships between different constructs, words, or tokens. There is no emotion, no feeling, no thought, no intelligence; nothing but "B follows from A, and C has an 80% chance to follow B." There is no personality because there is no person; it would take a human introducing such a thing by deliberately programming it in.
As I said before, it is my opinion that your characterizations of how AI works are incorrect, but no need to rehash unless you want to. Of course there is no human element to AI and its inner workings, except maybe the bias of the information used to build its training dataset.

Only around 20% of human communication is verbal, the rest is in the emotional expression by our voices, face and body gestures.
I think I disagree. Human communication is largely verbal, and the context and even implied intent come from nonverbal expression in the form of body language, tone of voice, timing, or specific delivery of the language. Neither would exist without the other, so to ascribe a percentage to what is more or less "communication" as a whole, verbal or nonverbal, seems like a bit of a stretch to me. As a supporting argument, there are languages that have no nonverbal components and deliver complete information, like programming languages. I don't have a strong attachment to this view because I am not exactly an expert on the subject; this is just how I intuitively and anecdotally feel about it.