News Apple debuts M4 processor in new iPad Pros with 38 TOPS on neural engine

You are wrong, but clearly I can’t convince you here. Irrefutable developments are incoming. Barring some tragedy, I’ll still be on this site.
Okay. I just think it's an absurd belief from a purely philosophical perspective that someone, anyone, could create a machine that can somehow do a task that the original inventor of the machine himself cannot understand. If you just think about that for a while, I think you'll see how little sense it makes. Computers are advanced machines that utilize physics to perform mathematics. Most "believers" in AGI (I use that term because believing AGI is possible requires faith) think that the brain is a simple computer, therefore computers can be intelligent. My prediction is that the whole AI race is going to prove how wrong both of these assumptions are.
 
How do Tesla's driving models drive a car down a busy freeway with no hands? By having the appropriate electronics and mechanical devices. And access to the internet. Yes, AI models are going to be given tools.
Did you see the one where a Tesla in "AutoPilot" mode didn't recognize the median in a roundabout and drove right through it (this was recent BTW)? The reason for this is that today's "AI" doesn't do what many people think it does: it doesn't think, it does not have cognitive abilities, it simply acts on data it has been trained with. When it encounters an unknown it can't "think" about the right action; the action with the best weight is executed with zero thought of what the outcome might be.

In technical terms, the cause was that the ML model wasn't trained on data for that type of roundabout and the mapping system it uses didn't have the updated intersection configuration. The result was the car driving over and through the median: it didn't have the data, and the algorithm picking the action simply chose the highest weighted one, which was "the map says the road is straight, so go straight".
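To make that concrete, here is a hypothetical sketch (illustrative only, not Tesla's actual software) of the failure mode being described: a planner that simply executes the highest-weighted action it knows, with no sanity check on the outcome.

Code:
# Hypothetical action selection: scores come from training data and a (stale) map.
action_scores = {
    "follow_map_straight": 0.71,   # the outdated map says the road is straight
    "follow_lane_markings": 0.64,
    "slow_down_and_replan": 0.22,  # never learned as a high-value fallback
}

# The algorithm just picks the highest-weighted action...
chosen = max(action_scores, key=action_scores.get)
print(chosen)  # "follow_map_straight"
# ...and executes it, even if that means driving over the median.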

This is also why 10 years later, Musk's promise of "Robo Taxis" is still not possible. ML/"AI" can do a lot, but it's far from being able to be applied like you've described many times.

You can listen to interviews and talks from Sam Altman, Zuckerberg, Meta’s Yann LeCun, OpenAI’s Ilya Sutskever, Noam Brown, and Shane Legg. So many more.
You can quote these people all you like, heck, throw Musk in there too. It doesn't change the fact that these guys have been A) promising these things forever and B) drumming up hype for their products. Apple is a perfect example of B). What Apple announced today was fairly underwhelming, yet merely dangling the possibility of "AI" got everyone saying oooooh and awww.

Now they have several hundred billion dollars in investments.

What is going on with AI is very reminiscent of blockchain. I saw game companies diving into blockchain with zero clear use for it. Simply because people throw money at something doesn't mean it will make some great leap; see blockchain as the latest example (maybe throw VR in there too). ML training of neural networks has been around in near its current form since the '90s and was heralded as the gateway to ruling the world with computers, yet neural networks today are still pretty much the same as they were 30 years ago capability-wise. Easier to train, but still nowhere near the promises. GAN models are that all over again: here we are 10 years down the road and they're supposedly going to take over the world, yet GANs today are pretty much the same as they were 10 years ago, with all of the same limitations. The big change is the amount of data, but I don't think data alone is going to get "AI" to where you want it to be. IMO we are likely another 20 or 30+ years away from what you have described in several posts. If data alone could win the day with GAN models, Google would have already won. That's why there is still heavy research being done on cognitive models and trying to iron out their limitations. If they can do that, then maybe we get to where you describe, but until then I don't see GANs being able to deliver on these promises.
 
I hate Apple. I hate their OSes. I love using my own software, or at least my own choice of software, like the Brave browser, and running desktop software on a tablet (like Fusion 360, IDEs, etc.). But man, RIP to Intel and AMD. Apple silicon is crushing it, and Windows 11 is now worse than macOS. Even if Qualcomm comes through with all their promises, that's still a chip that's one generation behind Apple AND is married to Windows and Microsoft. It's infuriating.

I have AMD, Intel, and Qualcomm Windows devices, and my work machine is an M1 MacBook. I've used all of them. Hats off to Apple; for everything but gaming it's not even a choice.
This is why I'm rooting for MediaTek. Competition is good.
 
I’m having a hard time making links, but Google this Ars Technica article: "The real research behind the wild rumors about OpenAI’s Q* project"

This is already far beyond just responding to questions, far beyond copying and pasting. More importantly, the achievements since 2014 and AlexNet are huge and the pace of innovation is accelerating by remarkable amounts. So much was done by unpaid students using mid-tier graphics cards. Now they have several hundred billion dollars in investments.
I just wasted 10 minutes reading the article. There is no "there" there.

I'll ask again: can you provide something quantifiable, rather than a word salad?
 
The number of tech site commenters who don’t see this coming really surprises me.
It's largely due to selection bias. This site's demographic skews older. Also, most of the folks who have expertise in AI are too busy participating in the AI boom, leaving something of a knowledge vacuum around here.

Worse, AI suffers from something of a "black box" problem, where it's not always obvious when neural network-based technologies are in use. So, most people have no idea how much our world is already being shaped by it.

Finally, the implications of AI are scary and it's a lot more comfortable for people to downplay it and take comfort in the idea that the future will continue to closely resemble what they know. Acknowledging a powerful, new technology you don't really understand can also leave one feeling enfeebled and powerless. It's a form of status threat.

I'm not saying I fully endorse all of your positions, BTW. I do think it's easy to be so amazed by how much progress AI has made that the remaining gaps seem a lot less significant than they really are.

I never realized how many John Henrys there were on this site.
Thank you! I was reaching for that same analogy, but the name escaped me. I got him mixed up with Paul Bunyan!

You are wrong, but clearly I can’t convince you here. Irrefutable developments are incoming.
Doubters gonna doubt. The amount of evidence it would take is quite large and rare is the individual who invests time and energy into invalidating their own beliefs and preconceptions. I mean, there are people in modern society who still cling to the belief that the Earth is flat, in this day and age! If ever there was something you'd think we could all agree on...
 
It doesn't really work like that. An ML/LLM is not going to invent new science to, say, cure cancer simply by being fed data points. It does not have that capability; it does not have cognitive abilities. It can mimic reasoning, but it can't think.
You really have to ask yourself: at what point does "mimicking reasoning" become so good that it's indistinguishable from actual reasoning?

Also, can you define "thinking"? If not, then how can you really say it's not?

ML today requires a human to train it by programming algorithms and then telling it to train on x data.
When a child is born, they don't know any of the skills we use as adults, in the professional workforce. A child must be trained, in order to gain the necessary skills and knowledge. Is AI really so different?

While that is big, it's not replacing a college student or college research anytime soon.
It can pass the Bar exam.
 
No, it's just copy-pasting. AI models use a multidimensional array to represent the mathematical relationship between all elements of that array. ChatGPT, for example, uses an array of 57 thousand dimensions; bigger models use bigger arrays. Each element of that array is a unique word, phrase or construct,
Okay, first of all, the context is a vector, not a multi-dimensional array!

In general, your characterization is too reductive, as usual. Sure, it learns patterns in the data, but those patterns aren't restricted in any sense. No one tells it what sorts of patterns or structures to look for, which means it's capable of modelling and applying abstract concepts, like we do.

the training model then "trains" on data by incrementing the value whenever one identified component follows another, and so forth. When you get done you are left with a mathematical construct that represents the relationship between all unique objects trained on. Then during processing the algorithm can reference that model to predict the probability of any construct following another construct. An example would be that after processing A then B, it can reference that array for [A,B] and then choose the value with the highest probability of following it.
It can't do what it does merely with pair-wise probabilities. Not even joint probability sequences of greater length.

One way you know it's not just a statistical model is that statistics can be computed in one shot. Deep learning models require iteration, in order to form and solidify robust features.
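For reference, here is roughly what a pure pair-wise ("highest probability of following") predictor looks like: a minimal bigram sketch in Python. It's meant only to illustrate the mechanism described above, and why a lookup table of follower counts falls far short of what a trained transformer does.

Code:
from collections import defaultdict, Counter

# Count which token follows which (pair-wise statistics only).
corpus = "the cat sat on the mat and the cat slept".split()
followers = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    followers[a][b] += 1

def generate(start, n=8):
    out = [start]
    for _ in range(n):
        nxt = followers.get(out[-1])
        if not nxt:
            break
        # Always take the most probable follower, as in the description above.
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# e.g. "the cat sat on the cat sat on the": pair-wise counts alone just
# loop over patterns they have already seen.
print(generate("the"))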

There is absolutely nothing special about "AI", there is no intelligence,
One might look at the nervous system in a simple worm and come to the same conclusion. "All I see is a tangled mess of cells. Where's the spirituality?"

The plagiarism absolutely occurs because the model only references objects by unique values, which are then indexed and put together.
That's not accurate. You can't cite a good source on that, because it's not true.

IMO, much of the plagiarism problem is an example of over-fitting. They made the model too big relative to the training data, which essentially leads to memorization rather than generalization.
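A crude way to see the over-fitting point, using plain numpy curve fitting as an analogy rather than anything LLM-specific: give a model as many parameters as training examples and it reproduces the training set exactly, even when the "data" is pure noise. That's memorization, not a generalizable pattern.

Code:
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = rng.normal(size=10)            # pure noise: there is no real pattern to learn

fit = np.polyfit(x, y, deg=9)      # 10 coefficients for 10 points: capacity matches the data

train_error = np.abs(np.polyval(fit, x) - y).max()
print(train_error)                 # ~0: the training set is reproduced verbatim (memorization)

x_between = (x[:-1] + x[1:]) / 2   # points between the training samples
print(np.polyval(fit, x_between))  # predictions here reflect no underlying structure at all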

Well it can only respond with something that another person has already figured out. It is completely incapable of creating a new solution, there is no intelligence or anything remotely resembling problem solving going on.
It can generate novel solutions, but if that's not how it's trained, then it won't prefer to do so.
 
This is something the AI bros just do not get. "Generative AI" is just pasting stuff it's already run across and indexed as a value. That "finger" would be object #24457 and have a 50% probability of following object #38043. We have five fingers, so the probability of one finger following another is rather high, and that's the reason you see generative AI put six or seven fingers on a hand before another object has a higher probability of appearing. You ask for a "blond head" and it just determines there is a 67% probability you need object #12830, then an 84% probability that object #4390 follows object #12830, and so forth until it's finished.
No, that's not how image generators work. They don't have an explicit notion of objects, like you claim. You should try to find a good explanation of stable diffusion, written by someone who has expertise in the field, and not a fellow skeptic. Or maybe just go straight to the academic literature, if you prefer.
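For anyone curious, the public tooling makes the actual mechanism easy to see: generation starts from random noise in a latent space and is iteratively denoised toward the text prompt; at no point is an indexed "object" pasted in. A minimal sketch using the Hugging Face diffusers library (assuming the stabilityai/stable-diffusion-2-1 checkpoint is available and a CUDA GPU is present):

Code:
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Each of the 30 steps below refines a tensor of Gaussian noise so that it
# better matches the text embedding: iterative denoising, not lookup.
image = pipe("a portrait of a blond head", num_inference_steps=30).images[0]
image.save("out.png")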
 
Okay. I just think it's an absurd belief from a purely philosophical perspective that someone, anyone, could create a machine that can somehow do a task that the original inventor of the machine himself cannot understand.
Not sure how old you are, but when I was a teen, subjects like fractals, chaos theory, and emergent complexity (prime example: Conway's Game of Life) were all the rage.

[Animation: Gosper's glider gun in Conway's Game of Life]
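For anyone who hasn't played with it, the Game of Life is easy to reproduce; here's a minimal sketch. Three local rules and nothing else, yet patterns like gliders travel across the grid, behaviour nobody wrote into the code explicitly.

Code:
from collections import Counter

# Conway's Game of Life: a cell is alive next step if it has exactly 3 live
# neighbours, or if it is alive now and has exactly 2.
def step(live):
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neighbour_counts.items() if n == 3 or (n == 2 and c in live)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a glider
for generation in range(12):
    grid = "\n".join(
        "".join("#" if (x, y) in cells else "." for x in range(12)) for y in range(10)
    )
    print(f"generation {generation}\n{grid}\n")
    cells = step(cells)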

 
Did you see the one where a Tesla in "AutoPilot" mode didn't recognize the median in a roundabout and drove right through it
IMO, Tesla is a pretty bad example. Honestly, given how primitive their early self-driving hardware was, I'm amazed there weren't way more accidents. And I mean principally the sensors.

This is also why 10 years later, Musk's promise of "Robo Taxis" is still not possible.
Oh, they do exist. I heard about some cities deploying them already, though I forget which. There's a remote operator who can step in, if needed, but they very rarely are.
 
I saw the Apple event yesterday and they claimed that the M4 has the world's most powerful NPU in a mobile chip.

How can this info be accurate when AMD announced their Zen 5 and XDNA 2 performance metrics before M4?

Or maybe they are claiming this cuz they will launch M4 to the market before Zen 5?
 
AI right now is like those kids in school who can learn the entire textbook by rote, vomit out answers to standard questions in an exam, and get top marks. They can even learn 3 or 4 textbooks on a topic and adapt to differently worded questions well enough. But the moment you substitute the standard blue spoon with an orange one or a spork, they lose their composure. It's even worse if the text they referred to had an error.
 
The disconnect is that this statement is not true. You can't be a non-developer and have generative AI create a real application for you, for example. Sure, it can output a template that functions, but you can do that in 5 minutes without being a developer anyway, because there are thousands of websites that have working templates.

Well, that's all "generative AI" is doing in the first place. It's already crawled and indexed those templates with object IDs, and whenever a text prompt vaguely sounds like it's asking for those, it just pulls them from storage and presents them to the user. All without citing the original creator of the work, aka plagiarism.

It is utterly impossible for "generative AI" to create something new; it can only predict the next object in a requested sequence based on previously encountered content. It's just mixing older content in different sequences based on text prompts. To normal humans it looks new, but only because those humans have not copied and indexed the contents of the internet, including copyrighted material.
 
B) drumming up hype for their products.

This, so darn much. Right now "AI EVERYTHING" is a gold rush: anyone and everyone is trying to find some way to put the letters "AI" on their products to cash in. Our CIO had to talk to our executive team and calm them down because tech enthusiasts, who have zero idea how the technology works, have been trying to convince them to invest millions of USD into various "AI" products. We recently had a vendor call with a Dell "AI bro" who used the same word-salad, buzzword-bingo terms thrown around in this thread to try to sell us on new products.

It's the dot-com, net-centric and blockchain bubbles all over again. A good 80~90% of these "AI All The Things" products are going to fail, people are going to burn giant piles of cash out of FOMO and the only real winners are going to be the charlatans selling the snake oil cure-all.
 
IMO, Tesla is a pretty bad example. Honestly, given how primitive their early self-driving hardware was, I'm amazed there weren't way more accidents. And I mean principally the sensors.
The specific instance I referenced happened this year with a recent model and upgraded hardware. There was also an instance recently where it drove into flood waters. These aren't 10 year old models and 10 year old hardware.

Oh, they do exist. I heard about some cities deploying them already, though I forget which. There's a remote operator who can step in, if needed, but they very rarely are.

Waymo is allowed too, but they operate on a purely different principle, so that must prove the statement, right? Hardly. Heck, the previous quoted comment just called Tesla a bad example, for crying out loud.

When a child is born, they don't know any of the skills we use as adults, in the professional workforce. A child must be trained, in order to gain the necessary skills and knowledge. Is AI really so different?

This is attempting to take a narrow view of learning in order to construct an argument that supports the thinking. The issue is that it ignores specific aspects of the overall subject. A child can decide whether it wants to learn that content or not; ML models cannot. A child can simply decide to learn law so they can pass the Bar exam; ML must be directed. A similarly angled argument would be: one can train a monkey to recognize colors; does this mean monkeys have human-level intelligence?

It can pass the Bar exam.
This is a pretty bad example. Honestly, a 17-year-old out of high school was able to pass the Bar exam, and there have been many others of similar age and experience who were able to pass it. One doesn't even need a college degree or a training course to take and pass the Bar exam.

Also, a large part of the Bar exam is just applying learned knowledge. A simple neural network model with a simple language interpreter could easily pass the multiple-choice section of the test, because neural networks are extremely good at picking the best content based on trained data and known paths (heck, you could do this part of the test with clever programming and well-formed SQL queries). The essay section would require some generation model, but it's still simply answering a question from data it has. I don't consider this a remotely difficult task for even basic ML models.

You really have to ask yourself: at what point does "mimicking reasoning" become so good that it's indistinguishable from actual reasoning?

Also, can you define "thinking"? If not, then how can you really say it's not?
This statement conflates two different things. Being able to pick the right choice from a list of known entries and being able to come up with a unique entry are two distinctly different things. Simply put, "thinking" is cognitive ability.

A simple illustration of the differences:

Take an LLM and a GAN model. Train them with the wrong order of operations for a mathematical equation, where + has higher precedence than multiplication. Feed it the equation 3x + 3 × 2 = 9 (3 × 2 is 6, 3 × 1 is 3, and 6 + 3 = 9, so x = 1). It will happily tell you x = 1 and that 3 × 1 + 3 × 2 is 24, and be done. It wouldn't be able to recognize, without additional training, that 24 is not equal to 9. It also wouldn't be able to figure out what it did wrong; it would need to be trained to try different paths, and even then it couldn't validate that result against other equations to know whether it just got lucky or actually figured out what was wrong with its dataset. If a model had cognitive abilities, it could recognize that 24 is not equal to 9 and that somehow its knowledge, or how it applied that knowledge, was incorrect. It could then deduce that changing the order of operations produces the correct answer. It could even check that assumption against more equations to conclude: yes, this way of solving the equation is correct, and the source it learned the original data from shouldn't be trusted.
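For what it's worth, the arithmetic in that example is easy to check in a few lines (a plain Python sketch of the two precedence rules, nothing to do with how an LLM evaluates anything):

Code:
def eval_correct(x):
    # standard precedence: multiplication before addition
    return 3 * x + 3 * 2           # 3x + 6

def eval_wrong_precedence(x):
    # the "broken" rule from the example: addition binds tighter than multiplication
    return 3 * (x + 3) * 2

x = 1
print(eval_correct(x), eval_correct(x) == 9)                    # 9 True  -> consistent with the equation
print(eval_wrong_precedence(x), eval_wrong_precedence(x) == 9)  # 24 False -> the check a "thinking" solver would make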

Finally, to put this argument to bed:

ChatGPT:

You: do you have cognitive abilities

ChatGPT: I don't possess cognitive abilities in the way humans do. While I can process and generate text based on patterns in data, I don't have consciousness or subjective experiences. My responses are generated based on patterns in the data I was trained on and the input I receive from users like you.

Gemini:

You: do you have cognitive abilities

Gemini: I can perform many tasks that are associated with cognition, such as learning and problem-solving. However, I don't have cognitive abilities in the same way that a human does. My abilities are based on the vast amounts of data I'm trained on and the algorithms I use to process information.

Trying to create a counterargument by nitpicking words to fit an argument won't change the facts (cognitive ability, by the way). GAN models, the closest thing the current ML world has to creating original content, do so by generating "fake" content in order to pass it off as "real". While this can be useful in many use cases, it doesn't use cognitive abilities to come up with that content. It simply looks it up, generates some set of content derived from several sources, and tries to pass it off as "real". This is similar to someone having unlimited attempts to guess a password using a dictionary attack: some combination will work, but it didn't reverse-engineer a specific person's password, it simply took its dictionary, applied an algorithm to combine words/symbols/numbers and passed the result off as "real".
 
I saw the Apple event yesterday and they claimed that the M4 has the world's most powerful NPU in a mobile chip.

How can this info be accurate when AMD announced their Zen 5 and XDNA 2 performance metrics before M4?

Or maybe they are claiming this cuz they will launch M4 to the market before Zen 5?
I think the argument is that they launched first. However, assuming Qualcomm's, AMD's and Intel's NPUs match what has been announced, the lead will be short-lived. I'm also assuming Apple will release an "M4 Ultra/Extreme/whatever over-the-top marketing name Apple comes up with" that will probably match the 100+ TOPS later in the year, just based on comments from their execs. Typical corporate shell game to pump sales.
 
Not to disagree, but just pointing out that MediaTek and Samsung simply use cores licensed from ARM. So, in a way, you're actually just rooting for ARM IP.
It's not ideal, but it's a short term sacrifice to lower the cost of entry into the ARM arena to increase adoption rates. The added benefit is it will spur on greater developer support for ARM.
 
I just wasted 10 minutes reading the article. There is no "there" there.

I'll ask again: can you provide something quantifiable, rather than a word salad?
No, we are stuck in disagreement, I say there is a “there” there. Meanwhile, in a few short years, hundreds of millions of people will be spending tens of billions of dollars on “salad.” And human life, relationships and economics will transform for reasons laid out there.
 
Did you see the one where a Tesla in "AutoPilot" mode didn't recognize the median in a roundabout and drove right through it (this was recent BTW)? The reason for this is that today's "AI" doesn't do what many people think it does: it doesn't think, it does not have cognitive abilities, it simply acts on data it has been trained with. When it encounters an unknown it can't "think" about the right action; the action with the best weight is executed with zero thought of what the outcome might be.

In technical terms, the cause was that the ML model wasn't trained on data for that type of roundabout and the mapping system it uses didn't have the updated intersection configuration. The result was the car driving over and through the median: it didn't have the data, and the algorithm picking the action simply chose the highest weighted one, which was "the map says the road is straight, so go straight".

This is also why 10 years later, Musk's promise of "Robo Taxis" is still not possible. ML/"AI" can do a lot, but it's far from being able to be applied like you've described many times.


You can quote these people all you like, heck, throw Musk in there too. It doesn't change the fact that these guys have been A) promising these things forever and B) drumming up hype for their products. Apple is a perfect example of B). What Apple announced today was fairly underwhelming, yet merely dangling the possibility of "AI" got everyone saying oooooh and awww.



What is going on with AI is very reminiscent of blockchain. I saw game companies diving into blockchain with zero clear use for it. Simply because people throw money at something doesn't mean it will make some great leap; see blockchain as the latest example (maybe throw VR in there too). ML training of neural networks has been around in near its current form since the '90s and was heralded as the gateway to ruling the world with computers, yet neural networks today are still pretty much the same as they were 30 years ago capability-wise. Easier to train, but still nowhere near the promises. GAN models are that all over again: here we are 10 years down the road and they're supposedly going to take over the world, yet GANs today are pretty much the same as they were 10 years ago, with all of the same limitations. The big change is the amount of data, but I don't think data alone is going to get "AI" to where you want it to be. IMO we are likely another 20 or 30+ years away from what you have described in several posts. If data alone could win the day with GAN models, Google would have already won. That's why there is still heavy research being done on cognitive models and trying to iron out their limitations. If they can do that, then maybe we get to where you describe, but until then I don't see GANs being able to deliver on these promises.
Yeah, we disagree on timeframe and endgame. I am looking at the same thing you are, and I don't know why it's not painfully obvious, but if you can look at this and see something different, oh well. I just think you are putting yourself in a position for your reality to be jarred. So be it.
 
It's not ideal, but it's a short term sacrifice to lower the cost of entry into the ARM arena to increase adoption rates.
I wasn't taking a position on whether it was good or bad, but rather just pointing out that we currently have 3 main IP designers of ARM cores: ARM, Apple, and Qualcomm/Nuvia. Whether ARM-designed cores arrive via Mediatek, Samsung, or someone else is less important to me.

The added benefit is it will spur on greater developer support for ARM.
I'm all for more options, especially if Qualcomm can even partially deliver on its claims. My interest in Apple is mostly academic, since I'd never buy one of their machines. However, I might consider Qualcomm, Mediatek, or Samsung, particularly if they provided meaningful Linux support (and Qualcomm is finally getting there, with its previous generation).
 
Yeah, we disagree on timeframe and endgame. I am looking at the same thing you are, and I don't know why it's not painfully obvious, but if you can look at this and see something different, oh well. I just think you are putting yourself in a position for your reality to be jarred. So be it.
I think what's going to be jarring to AI bros is just how little the average person is going to care about AI once the novelty wears off and how little they will actually want AI in their lives. Even if everything prophesied by the pro-AI/AGI crowd comes true, many will simply reject using it. Essentially, if the product has a market, it will continue to have a market after AI breakthroughs. If the product doesn't have a market, it will continue to not have a market after AI breakthroughs.
 
No, AI is going to transform how people use computers in key ways. In five years, most people will insist on AI accelerators. What you have today is AI in its infancy. Llama 3 is like the Commodore PET / Apple I era. Hobbyists feeling their way around.

Polished AI that will do complex, tedious tasks will arrive inside of 3 years and will transform all aspects of computer and device work. I’m astonished by the number of tech enthusiasts who are not seeing the biggest technological transformation of their lifetime.

It should be clear that by 2030, there will be no need to learn Excel, AutoCAD, DaVinci Resolve, VS Code, Photoshop, Powerpoint or myriad other apps. People take classes and get certificates to show they can do highly complex, technical work using professional applications. Mature AI apps will allow people who know nothing about today’s applications to get better results.

We are going to be able to just tell the devices what we want and have the results provided without thinking or even knowing about cells, formulas, filters, grids, macros, formatting, layers, plugins, coding, and the like.
I've been in the IT industry for forty years now. And from time to time I re-read old predictions e.g. from BYTE magazine, which I used to positively devour every month when I started out.

Some are total laughs, others right on, some were vastly exceeded. Old science fiction is also a lot of fun.

Over the last twenty years an important part of my job has been to both predict the future of IT and then prepare my employer for it.

And let me just say: I don't share your enthusiasm.
And that in five years I'd like you to eat your words.

Excel triggered me, to be honest, because spreadsheets remain extremely underused and underrated in terms of capabilities, whilst others damn the people who run their businesses on them and then fail when they run into their limits.

It starts with the spreadsheet formula language, which nearly nobody seems to really appreciate for what it is: a functional programming language. In computer science they told us that while these functional algebras are wonderful in theory, they are unusable in practice, because people just think imperatively.

Well, programmers may think imperatively, but evidently the early VisiCalc, Multiplan, etc. adopters had no issues using that natural functional calculus, perhaps because nobody told them it was in fact a functional programming language considered too complicated for programming.

I've long thought that many HPC challenges are much more naturally expressed in a functional calculus that sits atop giant multidimensional vectors of database tables and is much better computed in a naturally distributed and parallel manner via query languages, which are also mostly functional. I guess some of that today is called Jupyter, some of that found its way into Greenplum, so I wasn't totally off 30 years ago.
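To make the "spreadsheet formulas are a functional language" point concrete, here is a toy sketch in Python (purely illustrative, not any real spreadsheet engine): cells are pure functions of other cells, and the evaluation order falls out of the dependencies rather than from imperative statements.

Code:
# Each "cell" is a pure function of other cells, looked up by name.
cells = {
    "price":    lambda get: 120.0,
    "quantity": lambda get: 3,
    "subtotal": lambda get: get("price") * get("quantity"),
    "tax":      lambda get: get("subtotal") * 0.25,
    "total":    lambda get: get("subtotal") + get("tax"),
}

_cache = {}

def get(name):
    # Demand-driven, memoized evaluation: only what a formula references is computed,
    # much like a spreadsheet recalculating dependent cells.
    if name not in _cache:
        _cache[name] = cells[name](get)
    return _cache[name]

print(get("total"))   # 450.0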

But AIs doing HPC well enough to build nuclear fusion devices? Sounds a lot like getting rid of old age.

Or just like the IoT predictions from ten to five years ago.

Most everyone who thinks Excel is complicated today may use an AI to do whatever they do with Excel better.
But that's a person scratching 0.0000001% of what the spreadsheet paradigm could enable them to solve.
It doesn't cover HPC models for weather, nuclear physics, etc., which could perhaps be better expressed in a spreadsheet-like functional calculus than in Fortran or C++.

I'd even go as far as saying that there is a good chance even current AIs would be better at AutoCAD than I am today.
But that's because I'm a total idiot with AutoCAD.

So in that sense some of what you say may be true, because it's nearly tautological.
The other parts I consider utter bollocks, and I'm already sorry I thought that aloud.

Mostly I'm not sure who I'd like to be the better predictor of what will happen.
 
Okay. I just think it's an absurd belief from a purely philosophical perspective that someone, anyone, could create a machine that can somehow do a task that the original inventor of the machine himself cannot understand. If you just think about that for a while, I think you'll see how little sense it makes. Computers are advanced machines that utilize physics to perform mathematics. Most "believers" in AGI (I use that term because believing AGI is possible requires faith) think that the brain is a simple computer, therefore computers can be intelligent. My prediction is that the whole AI race is going to prove how wrong both of these assumptions are.

1. On inventors not understanding: it is happening. All these modern models are just being fed raw data. They train themselves, and they are getting increasingly better at getting results by training themselves. There are several models that can accurately and explicitly describe most everything in a general picture that has never been input to them before. A model can vividly describe the color, texture, style, and size of a jacket that was not in its training data, based on its self-learning. Critically, even though the creators don't understand how this is happening, the models are nevertheless getting significantly better at it with each release.

2. I don't believe that human brains are simple computers, and I agree that humans will always be able to do things that machines can't do. Where I disagree is whether anything we're doing in present-day society and civilization, up to this point, even remotely taps the specialness of human consciousness. I believe that everything we are doing in current society can be done by machines. And I think ultimately that will free humans up to tap a potential that we have so far only barely tasted.

This is exactly why I think there is a revolution coming: because I think 99.99% of what humans do today is a waste of our capacity and resourcefulness. I believe people will aggressively turn to these models in a few years precisely because most people realize deep down that they are living lives that really waste their true capacity. AI models taking over all of this work will be appealing to so many people, myself included.
 