Saving Machines From Themselves: The Ethics of Deep Self-Modification. Peter Suber discusses self-modification by artificial intelligence systems.
It is at least possible, then, and even seems likely, that machines will have the tool of deep and precise self-modification long before they have the understanding to use it effectively to achieve the ends they desire. For example, a machine capable of reading and revising its own code could probably figure out in a reasonable time how to enlarge its memory or lengthen its attention span. But what if it wanted to learn foreign languages more quickly or make funnier jokes? It's difficult to imagine that it could discover helpful code revisions, let alone necessary ones, without abundant trial and error. But trial and error in revising one's own code is about as hazardous as trial and error in brain surgery. If machines don't have precise knowledge to accompany their precise tools, or if they simply have incentives to experiment, then their experiments in self-modification will be fraught with the risks of self-mutilation and death.
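
To make the hazard concrete, here is a minimal, purely illustrative Python sketch (my own, not anything from Suber's essay) of trial-and-error self-revision. The machine's "code" is a short function stored as source text, a mutation is a random one-character edit, and a candidate revision is kept only if it still compiles, still runs, and still passes a behavioral check. All the names here (respond, mutate, try_self_modification) are invented for the example, and the exec-based check is a stand-in for real sandboxing.

import random

# Toy "genome": the machine's own behavior, kept as source text it can rewrite.
SOURCE = "def respond(x):\n    return x * 2\n"

def mutate(source: str) -> str:
    """One random single-character edit: the 'trial' in trial and error."""
    i = random.randrange(len(source))
    return source[:i] + random.choice("0123456789+-*/x ") + source[i + 1:]

def load(source: str):
    """Compile `source` and return the respond() it defines (may raise)."""
    namespace = {}
    exec(compile(source, "<self>", "exec"), namespace)
    return namespace["respond"]

def try_self_modification(source: str, trials: int = 200) -> str:
    """Guarded trial and error: accept a revision only if it survives a
    behavioral check. Dropping the try/except gives the unguarded version,
    where a single bad edit kills the program outright."""
    for _ in range(trials):
        candidate = mutate(source)
        try:
            respond = load(candidate)                # SyntaxError, KeyError, ...
            if respond(3) == 6 and respond(0) == 0:  # invariant: behavior preserved
                source = candidate                   # accept the revision
        except Exception:
            pass                                     # reject: self-mutilation averted
    return source

if __name__ == "__main__":
    random.seed(0)
    revised = try_self_modification(SOURCE)
    print(revised)
    print("still works:", load(revised)("ha"))  # behavior preserved: 'haha'

Run as written, nearly every random edit is rejected as self-mutilation; the rare survivors preserve observable behavior. The guard works only because the invariant is easy to state, which is the passage's point: for ends like funnier jokes, the machine has no such check, and unguarded revision is the brain surgery Suber warns about.
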
[via wood s lot]