redharen wrote:Knew you'd come through on that one, Fansy -- thanks for sharing your thoughts. To be fair to the writer of the sidebar, though (the original main article was linked on the page -- not sure if you clicked over to it or not), the issue at hand in the magazine was, essentially, the transference of a human mind (personality, memories, etc.) to its machine equivalent, thereby eliminating the need for a human body.
I'd heard you and Flipflop make comments regarding this before (some serious and some, I'm sure, tongue-in-cheek). While I can definitely see the advent of artificial intelligence that doesn't necessarily replicate the structure of the human mind, it does seem that the next step -- uploading the information in a brain to a machine/computer brain -- would require an understanding of neuroscience that we're still nowhere close to. But Kurzweil seems optimistic that something like this could happen in his lifetime.
What I'd like to ask him is why he's optimistic that, given the advent of AI, such machines would be interested (for lack of a better term) in working alongside humans at all. Seems that incorporating humans into the process would just slow things down and invite flaws.
I guess the main issue/question at hand is whether AI would be an end for humans, or a new beginning, as Kurzweil et al. seem to think.
Fansy wrote:These issues fracture exponentially into still more. I think books could be written about even small steps of this problem -- that is, on the logistics, the ethical considerations, the biological science involved, the science of the digitization model to be employed, etc.
Fansy wrote:For people like me, I probably view this inexorability as a new beginning. But what does that mean? I don't see human ideals or consciousness as sacred to the point of devising ways to protect them from future machine dominance and assimilation. I imagine Kurzweil likes to frame it optimistically because he does not wish to put it in such stark terms. Even among futurists, perhaps especially among them, you have an irrational sentimentality for humanity, as we understand it now, informing presupposition and reasoning. I mean to say he doesn't want to appear a crazy nihilist in front of his peers (which I appear to be to some of mine).
redharen wrote:I get this sense from him (from the article, anyway, which is only a glimpse), and from the Wired crowd in general, that there's this optimistic view that technology will always be "good," meaning (in most cases) that it will always make people happy and give them what they want. But when dealing with AI, which by necessity would have some kind of will of its own -- even if that will is simply to replicate and survive as a "species" -- who's to say that such a will would be in line with our own? As you mention, we don't even know what we want.
Is your perceived nihilism due to the fact that you think all this will happen after your lifetime, and therefore you wouldn't personally be losing out along with the rest of humanity? Or do you think it will happen during your lifetime, and you have some other reason for welcoming the change? It's one thing to see it in terms of humanity surviving somehow (i.e. being given the option to assimilate), but what if there is no option, and humanity simply ceases?
redharen wrote:I guess I'm wondering why you would pull for one side over the other at all, or think one side more deserving than the other to win. If we're truly going to detach ourselves from any sentimentality, then notions of dignity, or even the positive values we attach to those who are most "fit" to survive, go out the window. Isn't the admiration of the robots' lack of sentimentality in itself a bit illogical? Seems like the most logical course is to not care.
Out of curiosity, if it came down to a war, would you fight, or would you wait for an apparent winner to emerge and then try to ally with it? Me, I'd have to pick a side from the start and stick with it, just because. And I'd fight for the humans. The devil you know, you know?