The populace at large needs to understand that artificial intelligence isn't actually "intelligent".
Just so we know, what are your qualifications for saying this? Have you ever designed, trained, or even applied an AI model? Do you have any formal education or informal experience with machine learning?
but the output again follows pre-programmed algorithms. How good or otherwise the "AI" is depends entirely on how good at programming the humans who developed the analytical and output algorithms were.
People indeed design the topology of the networks (although that's becoming increasingly automated), but substantial aspects of the algorithm are learned from the data used to train it. Humans select the training data and need to take care to avoid common errors like bias and over-fitting, but the actual conditions linking the outputs with inputs are learned from the data - not programmed in by hand.
In most cases when I see "AI" in a title, it turns out it's just there to catch attention (in the sense of "see, magic happening"). I think the abbreviation "AI" has become the same as "eco", "green", etc. and has nothing to do with the actual meaning of "artificial intelligence".
Indeed, the term is commonly misapplied. It's become a synonym for a form of machine learning using deep neural networks. That's somewhat unfortunate, as even microscopic worms have central nervous systems, yet nobody would consider them "intelligent".
My sister works in “AI” and she was offended when I said that AI is still decades away and that what she calls AI is nothing more than brute-force processing to find the optimal solution.
That's a misuse of the term "brute force", but I think I know what she means. A "brute force search" means literally trying every single parameter combination. As AI models typically comprise millions of parameters, each represented by 8 to 16 bits (and sometimes more), that's quite impossible.
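To put a rough number on "quite impossible", here's a back-of-the-envelope sketch. The million-parameter size and 8-bit precision are just illustrative assumptions, not figures for any particular model:

```python
# Rough count of how many weight combinations a literal brute-force
# search would have to try (illustrative numbers, not any real model).
n_params = 1_000_000      # assume a modest million-parameter network
bits_per_param = 8        # assume each weight is quantized to 8 bits

values_per_param = 2 ** bits_per_param                     # 256 possible values per weight
combinations_log10 = n_params * bits_per_param * 0.30103   # log10(2) ~ 0.30103

print(f"Distinct settings per weight: {values_per_param}")
print(f"Total combinations: roughly 10^{combinations_log10:,.0f}")
# Roughly 10^2,408,240 combinations -- far beyond anything searchable.
```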
The way AI training works is to feed lots of examples through a weighted network that's either seeded with random values or initialized from a previous training run. You then compute the amount of error at the outputs, using some loss function that represents how "good" the answer is. The error terms are fed backwards through the network (backpropagation) to nudge each weight towards a more optimal value, using a numerical optimization method. The simplest of these is gradient descent.
Anyway, you do that repeatedly, and the network starts to find patterns in the data which enable it to make the correct decisions more and more of the time. At each iteration, you can test the accuracy of the current parameters to see how well it's doing and whether it's converging.
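Here's a minimal sketch of that loop: a toy two-layer network trained with plain gradient descent on made-up data. The layer sizes, learning rate, and loss are arbitrary choices for illustration, not anything specific to a real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points with 2 features, binary labels (a made-up separable problem).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Network seeded with random values, as described above.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.5  # learning rate (arbitrary)

for step in range(500):
    # Forward pass: feed the examples through the weighted network.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

    # Loss function measuring how "good" the answers are (cross-entropy).
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # Backward pass: feed the error terms back through the network.
    d_out = (p - y) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Gradient-descent update: nudge each weight towards a better value.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if step % 100 == 0:
        acc = np.mean((p > 0.5) == y)  # check whether it's converging
        print(f"step {step}: loss {loss:.3f}, accuracy {acc:.2f}")
```

The printed accuracy climbing over iterations is the "finding patterns" part: nothing in the code spells out the decision rule, it falls out of repeatedly correcting the weights against the data.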
Lots of real-world problems don't have a closed-form solution. Any solution a human programmer or engineer would devise necessarily involves ad hoc metrics, heuristics, and parameters. To the extent your solution has parameters, you want to optimize them according to your data. This starts you down the path of numerical optimization and machine learning. Deep learning is usually where you end up. Contrary to popular belief, effectively
applying deep learning often requires not only a good understanding of the technique, but also insight into the problem domain and an understanding of which "hard problems" would actually benefit from the approach.
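As a small illustration of that first step down the path: a hand-written heuristic with one free parameter, fitted against data instead of guessed. The "sensor reading" scenario, the threshold rule, and the numbers are all made up for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "sensor" readings: the second group tends to produce larger values.
readings = np.concatenate([rng.normal(2.0, 1.0, 500), rng.normal(4.0, 1.0, 500)])
labels = np.concatenate([np.zeros(500), np.ones(500)])

# An ad hoc heuristic with one free parameter: flag a reading that exceeds a threshold.
def error_rate(threshold):
    predictions = (readings > threshold).astype(float)
    return np.mean(predictions != labels)

# Rather than guessing the threshold, fit it to the data by scanning candidates.
candidates = np.linspace(0.0, 6.0, 601)
errors = np.array([error_rate(t) for t in candidates])
best = candidates[np.argmin(errors)]
print(f"best threshold: {best:.2f}, error rate: {errors.min():.3f}")
```

Once your heuristic has several interacting parameters, a scan like this stops scaling and you reach for proper numerical optimization - and eventually for models that learn the rule itself, not just its constants.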
Back to the point about "brute force" - there's a certain aspect of "simply shoving lots of data through it" that might seem rather clunky. I take it that's what she means. That's not the only method, but it's the most basic and common. Training always involves a lot more computation than inference, i.e. applying a trained network to real data.