This is interesting.
Well, first off, you'd have to define what you actually mean by "intelligence" in order to create an artificial one.
So, what is it? Is it the ability to learn how to recognize, interact with, and deal appropriately with any situation? Yes, I'd say that it's the "program rewriting itself" that is actually key to learning, which I'd say should be the first goal...
The problem is that, in existing "intelligences", the rewriting routines also change, i.e. are themselves rewritten. That amounts to each being developing its own perception of the situation it is in - so the same situation can have different impacts on different sentient beings.
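To make that concrete, here's a toy sketch of a learner whose update rule is itself updated as it goes. Everything here (the names, the rules, the constants) is a hypothetical illustration, not a real design:

```python
# Toy sketch: the "rewriting routine" (the update rule) is itself rewritten.

estimate = 0.0        # the learner's current "knowledge"
learning_rate = 0.5   # a parameter of the rewriting routine

def learn(observation):
    global estimate, learning_rate
    error = observation - estimate
    estimate += learning_rate * error  # first-order rewrite: the knowledge
    # Second-order rewrite: the rule that rewrites the knowledge changes too,
    # so two learners fed the same situations can end up perceiving them
    # differently, depending on their histories.
    if abs(error) < 0.1:
        learning_rate *= 0.9                            # stable world: adapt cautiously
    else:
        learning_rate = min(1.0, learning_rate * 1.1)   # surprises: adapt faster

for obs in [1.0, 1.0, 1.0, 5.0]:
    learn(obs)
    print(f"estimate={estimate:.3f}  learning_rate={learning_rate:.3f}")
```

Run the same observations through two copies with different starting learning rates and they diverge - the same "situation" leaves different marks on each.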
The problem is also that the learning process has to be self-sufficient. Controlled learning is irrelevant and is actually not truly "AI", the way I see it.
I completely agree so far.
But, if the learning process isn't thoroughly stable, there's no guarantee that your AI will actually "evolve" in the classical sense; instead there's an increasing probability of it falling into some sort of bizarre evolutionary standstill. That'd be a program glitch, of course. The problem is that you'd actually need a routine just to check that things are working properly, i.e. that your AI hasn't gone "insane" (not in that classical movie style, of course; I mean an evolutionary breakdown, like an endless loop).
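That checking routine might be as simple as a watchdog that notices when the system's state has stopped changing. A minimal sketch, assuming (hypothetically) that the AI exposes a get_state() hook returning a snapshot of its internal state:

```python
import time

def watchdog(get_state, interval=5.0, max_stalls=3):
    """Return once an 'evolutionary breakdown' is detected: the state
    has stopped changing across several consecutive checks."""
    stalls, last = 0, None
    while True:
        snapshot = get_state()
        stalls = stalls + 1 if snapshot == last else 0
        if stalls >= max_stalls:
            return snapshot  # likely stuck in an endless loop; time to intervene
        last = snapshot
        time.sleep(interval)
```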
I also completely agree. There is a high chance that any AI would need constant guidance as it learns so that it does not fall into a deranged state of learning, such as spending the next ten years researching the reason why fingerprints are all different or getting stuck in an infinite loop just evaluating something as stupid as "The next statement is the truth. The previous statement is a lie."
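That second trap is trivially easy to reproduce, by the way. Here's a toy sketch of a naive truth-evaluator fed exactly that circular pair of statements (the encoding below is made up purely for illustration):

```python
statements = {
    "A": ("is_true", "B"),   # "The next statement is the truth."
    "B": ("is_false", "A"),  # "The previous statement is a lie."
}

def evaluate(name, depth=0, limit=100):
    # Without this depth limit the recursion never terminates: A's truth
    # value depends on B's, which depends right back on A's.
    if depth > limit:
        raise RecursionError(f"gave up on {name!r}: circular reference")
    relation, target = statements[name]
    value = evaluate(target, depth + 1, limit)
    return value if relation == "is_true" else not value

try:
    evaluate("A")
except RecursionError as exc:
    print(exc)  # the sanity check fires instead of looping forever
```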
Really, this is one of the reasons why humans have parents. Who knows just how deranged someone would get without any outside influence. **ROFL**
That aside, it's also entirely possible that when left to their own devices, AIs, just like people, can turn out fine. Call it luck, call it the law of probability, call it divine influence, call it whatever you want. The point is that it's possible even though it seems unlikely.
But given the machinery that is able to run what could be called a sentient program, how do we come up with such a thing? Someone has to program that too. What does it have that makes it rewrite itself? How does it know how to rewrite itself, or what to rewrite? The rewriting process has to be driven by the stored interaction data. How can we ensure that the program will have the necessary stability and enough heuristic ability to warrant being called "AI"?
Interpreted (non-compiled) languages, as well as real-time compilers, are making this kind of self-learning software look more and more feasible. Yes, someone has to ultimately program the basic tools that the AI would need in order to further refine and develop itself. It would have to be either intentionally written or an accidental result of software with AI-like properties becoming sentient. How can we ensure that? I doubt that we can. I'd say that it's more a matter of luck and trial-and-error, at least at first.
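The basic mechanism is already cheap in an interpreted language. A toy sketch of a program rewriting one of its own routines from stored interaction data - the names and the log format are hypothetical:

```python
interaction_log = [("hello", "hi"), ("bye", "see you")]

def build_responder(log):
    # Generate fresh source code from the stored interactions, then compile
    # it at runtime -- exactly the sort of thing interpreted languages and
    # real-time compilers make cheap.
    body = "".join(
        f"    if message == {inp!r}:\n        return {out!r}\n"
        for inp, out in log
    )
    source = f"def respond(message):\n{body}    return 'no idea yet'\n"
    namespace = {}
    exec(compile(source, "<rewritten>", "exec"), namespace)
    return namespace["respond"]

respond = build_responder(interaction_log)
print(respond("hello"))   # hi

# A new interaction triggers a rewrite: the routine's code itself changes,
# not just some stored parameters.
interaction_log.append(("thanks", "welcome"))
respond = build_responder(interaction_log)
print(respond("thanks"))  # welcome
```

Of course, this only rewrites one routine from a fixed template; the hard part is a program that decides for itself what to rewrite, and how.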
So, what are we talking about here?
I have no idea. It's just too big an issue for me to handle right now. I'm hungry, and I'm off to lunch.
Lunch sounds good right now. Mmm. Food.

Time to eat.
"<i>Yeah, if you treat them like equals, it'll only encourage them to think they <b>ARE</b> your equals.</i>" - Thief from <A HREF="http://www.nuklearpower.com/daily.php?date=030603" target="_new">8-Bit Theater</A>