Coo, free accounts can do LJ polls now! Following on from
this post, a poll.
I often find that discussion of the idea of superintelligence quickly devolves into a less interesting discussion about how people feel about those who talk about superintelligence. I'm interested to know what you think will happen.
Read carefully: three important caveats
Comments 59
The first one is largely true, but not in the sense I think you mean - biological systems are uglier and messier. This doesn't stop us understanding them, but one of the reasons I have difficulty with discussions of creating human(+?) level intelligences is that they tend to be treated as if this isn't the case.
The second and third may well be true. We don't know. As far as I can tell, we currently have no way of knowing based on the evidence we have, so having a position is premature.
None of the above? Well, there are certainly other reasons to disbelieve any specific predictions I've come across.
The first claim is probably partially true (at least to the extent that human minds are probably very different from any kind of physical system/computer that we're used to working with), but the second claim seems more dubious.
Given that the second claim is pretty essential to it being a 'way the singularity could fail to happen', I select that option :o)
"A few orders of magnitude" is still rather large - an AI that's 100x as intelligent as the average human (whatever that means, precisely) may not be enough for us to achieve techno-Nirvana, but it would still have a huge transformative effect on society.
Ticking yes to this is slightly misleading. I suspect that there is something about minds we really do not yet know, and that this is the reason our fundamental understanding of consciousness is not greatly different from what it was thousands of years ago (we know a lot more details, but little about the really important question: "what process generates consciousness?"). If this is discovered, then minds may again become capable of being approached with engineering-like thinking. Of course I may be wholly wrong, and something more Hofstadter-like may be enough of an explanation. At the moment, though, I suspect that an engineering explanation of the mind is going to be like a pre-atomic-theory explanation of emission spectra.
My choices, with rationales:
"The idea of one mind being greatly more efficient than another isn't meaningful."

I wouldn't have phrased it this way, but this is the closest option to my educated opinion on the meta-result of the past 60 years of research into computing and cognition. The way I would put it is: we have no reason to think it is appropriate to model whatever-it-is-the-brain-does as a Turing machine. I emphasize model because there's plenty of reason to think that a universal Turing machine could in principle simulate a brain…but there's also plenty of reason to think that we wouldn't learn anything from such a simulation. It follows from this assertion that everything we know about making computers more efficient is not necessarily applicable to making minds more efficient, and indeed that there may not be any meaningful "more efficient" for minds.

"Human minds are within a few orders of magnitude of the most efficient minds possible in principle in our corner of the Universe."

Supposing that I'm wrong about the above, the ( ... )