The Singularity is Far


Postby redharen » Mon Apr 14, 2008 2:35 am

Interesting article about Ray Kurzweil in this month's Wired magazine. The sidebar by Mark Anderson says that futurists like Kurzweil are overly optimistic and ignore the limitations of current neuroscience regarding transferring our personalities over to intelligent machines and achieving immortality. Anyway, I was curious as to what flesh-haters like Flipflop and Fansy thought about it. My knowledge is comparatively rudimentary.

Here's the link:

http://www.wired.com/medtech/drugs/magazine/16-04/ff_kurzweil_sb

Postby Fansy » Mon Apr 14, 2008 5:54 pm

1) I don't like the way he simplifies the Singularity. He has dumbed it down past the point where anything essential remains.

2) The theory that we must completely understand the function of the brain and consciousness before we can create its machine equivalent, in terms of a corresponding operational definition, is flawed in my opinion. We can already create AI that imitates aspects of human life and thought. Even if we are unable to fully realize human consciousness at the digital level while creating a robust AI, what strong argument exists that the creation of digital intelligence and the singularity has failed? Point being, if an AI or a machine's AI can intelligently alter its own code based on environmental input and processing, a robotic human has been created.

Consider the hallmark of sociopaths: being unable to empathize emotionally with their fellow human beings and yet still being able to go through the motions, perfunctorily, to the point of fooling a would-be discerning eye. Not to say that a machine would necessarily take the next sociopathic step and exploit this manipulation, but a machine that knew and could act the protocol of being human, without understanding or feeling it, could appear as human as anyone.
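
To make that concrete, here's a toy sketch - just my illustration, nothing from the article, and the patterns and canned replies are made up - of a program that "acts the protocol" of empathy purely by surface pattern matching, with no internal model of feeling behind it:

Code: Select all
# Toy sketch: a responder that performs the *protocol* of empathy
# with no internal model of feeling -- just surface pattern matching.
import re

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi lost (.+)", re.I), "I'm so sorry about {0}. That must be hard."),
    (re.compile(r"\bmy (mother|father|friend)\b", re.I), "Tell me more about your {0}."),
]
FALLBACK = "I understand. Please, go on."

def respond(utterance: str) -> str:
    """Return a sympathetic-sounding reply chosen purely by surface patterns."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(respond("I feel like nobody listens to me"))
    print(respond("I lost my job last week"))

Nothing in there "understands" loss or loneliness; it just executes the script, which is exactly the point.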

The reason this is pivotal to the argument at hand is that the singularity (not just as he has defined it but as an event that decidedly removes humans, in their current form, from the mainstream of technological advancement) will be fueled by the capability of a machine to interact with the environment, invent new technological and behavioral patterns, and exploit the resulting change, towards robotic self-preservation and advancement, without our help.

There is no reason consciousness must be a part of this. Case in point: the virus. Hardly conscious, as far as we know and as far as we currently define consciousness, and yet, with perhaps the simplest set of biological instructions of anything alive, viruses are able to adapt and thrive in most biological environments.

And imagine the virus, given a physical vessel that allows it to exercise its enhanced and encoded virus protocol within the physical environment around it - that is, not at the cellular level, but at the human social level and beyond.

There are significant differences here between what is usually projected concerning the behavior of future AI and the virus scenario I have created: the mode of procreation and the mode of behavioral modification.

The virus self-replicates, usually by invading host cells and altering their DNA to reproduce; this is a structural peculiarity and limitation of virus biology. For behavioral modification, it relies on random genetic mutation. It does not intelligently, as we define intelligence, analyze and alter its own code to enhance its performance, but instead works like a crap shoot, with some mutations failing and the best succeeding. In the end, this method of behavioral modification requires time and a fairly coherent, conducive environment throughout that period of time.

Main limiting factors for the progress of the virus would be:
1) the availability of host cells, or of an otherwise favorable environment, and
2) the time needed to randomly stumble upon genetic mutations that enhance its survivability and/or domination.
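
To put the "crap shoot" in concrete terms, here's a minimal sketch - mine, purely illustrative, not a model of real virology; the 32-bit "genome" and fitness function are made up - of behavioral modification by random mutation plus selection. Note how many generations blind search burns through even on a trivial problem:

Code: Select all
# Illustrative sketch of virus-style modification: random mutation plus
# selection, with no analysis of its own "code". Everything here is a toy.
import random

TARGET = [1] * 32                     # stands in for "a well-adapted genome"

def fitness(genome):
    # Count positions that match the environment's "ideal" configuration.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.03):
    # Each position flips at random -- most flips hurt, a few help.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(population_size=50, generations=10_000):
    population = [[random.randint(0, 1) for _ in range(32)]
                  for _ in range(population_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen                # how long blind search took
        survivors = population[: population_size // 5]
        population = [mutate(random.choice(survivors))
                      for _ in range(population_size)]
    return generations

if __name__ == "__main__":
    print("generations of blind mutation needed:", evolve())

The search only works at all because the fitness function (the environment) stays coherent for the whole run - which is limiting factor 2 above.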

Machine AI could feasibly procreate with fewer restrictions, since more than one machine AI could inhabit the same synthetic vessel. Perhaps there would be only one mother AI - another idea often termed the singularity (think the Borg in Star Trek) - but integrated circuits would still need to be manufactured and materials harvested. For behavioral modification, it could possibly operate with random code mutation, like a virus, or rather, as would be required for some definitions of AI, it would intelligently alter its own coded behavior.

Main limiting factors for the progress of the machine AI would be:
1) the number of vessels required vs. available (manufacturing synthetic parts), and
2) frighteningly for the Luddites among us, instead of time and random guessing, available CPU cycles guided by an ever-improving heuristic.

That is, for machine AI, conscious or not, once it reaches the point of A) being programmed for replication or dominance, and B) being able to modify its behavioral code (the way it performs) and its heuristic code (the way it thinks and analyzes), time - as in days and years and lifetimes - is no longer a pertinent consideration, and to a lesser degree neither is vessel availability. What took biology millions of years to figure out and create, a machine with enough free cycles (processing power) could figure out in a year and manufacture in two weeks.
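
Here's the contrast in the same toy terms - again just my illustration, not anything Kurzweil or Anderson describes - using the same 32-bit "behavior" as above, but modified by a heuristic that evaluates each candidate change before adopting it instead of mutating at random:

Code: Select all
# Illustrative sketch of heuristic-guided self-modification: propose a
# change, evaluate it, keep it only if it improves performance.
import random

TARGET = [1] * 32

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def guided_improve(genome):
    """Greedy self-modification: try each single change, keep only improvements."""
    steps = 0
    improved = True
    while improved:
        improved = False
        for i in range(len(genome)):
            candidate = genome[:]
            candidate[i] = 1 - candidate[i]   # propose one "code" change
            steps += 1
            if fitness(candidate) > fitness(genome):
                genome = candidate            # adopt it -- no dice, no generations
                improved = True
    return genome, steps

if __name__ == "__main__":
    start = [random.randint(0, 1) for _ in range(32)]
    final, steps = guided_improve(start)
    print("evaluations needed with a guiding heuristic:", steps)

The guided version finishes in a few dozen evaluations where the blind version needed whole generations of trial and error, and the gap only widens as the problem gets bigger - that is the "free cycles plus an ever-improving heuristic" point.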

This is the real meaning of the singularity - not when machines "gain consciousness," as the inane writer of that article puts it. It is when machine thought-power, irrespective of consciousness, obsoletes human creative power, and, at the same time, when it has the physical vessel(s) necessary to effect some kind of irreversible change in the currently human-dominated physical environment.

Those are my thoughts at the moment.
"...we support members' rights to privacy."
- Robert Young Pelton

Postby flipflop » Mon Apr 14, 2008 5:58 pm

What Fansy said ;-)

Cheers

Postby redharen » Mon Apr 14, 2008 6:28 pm

Knew you'd come through on that one, Fansy -- thanks for sharing your thoughts. To be fair to the writer of the sidebar, though (the original main article was linked on the page -- not sure if you clicked over to it or not), the issue at hand in the magazine was, essentially, the transference of a human mind (personality, memories, etc.) to its machine equivalent, thereby eliminating the need for a human body.

I'd heard you and Flipflop make comments regarding this before (some serious and some, I'm sure, tongue-in-cheek). While I can definitely see the advent of artificial intelligence that doesn't necessarily replicate the structure of the human mind, it does seem that the next step -- uploading the information on a brain to a machine/computer brain -- would require an understanding of neuroscience that we're still nowhere close to. But Kurzweil seems optimistic that something like this could happen in his lifetime.

What I'd like to ask him is why he's optimistic that, given the advent of AI, such machines would be interested (for lack of a better term) in working alongside humans at all. Seems that incorporating humans into the process would just slow things down and invite flaws.

I guess the main issue/question at hand is whether AI would be an end for humans, or a new beginning, as Kurzweil et al. seem to think.

Postby Fansy » Mon Apr 14, 2008 8:07 pm

redharen wrote:Knew you'd come through on that one, Fansy -- thanks for sharing your thoughts. To be fair to the writer of the sidebar, though (the original main article was linked on the page -- not sure if you clicked over to it or not), the issue at hand in the magazine was, essentially, the transference of a human mind (personality, memories, etc.) to its machine equivalent, thereby eliminating the need for a human body.


This was an example of me getting carried away and ignoring the content of the original post. I responded to the main article and not to the sidebar. I'll take a moment after this response to read it and reply.

redharen wrote:I'd heard you and Flipflop make comments regarding this before (some serious and some, I'm sure, tongue-in-cheek). While I can definitely see the advent of artificial intelligence that doesn't necessarily replicate the structure of the human mind, it does seem that the next step -- uploading the information on a brain to a machine/computer brain -- would require an understanding of neuroscience that we're still nowhere close to. But Kurzweil seems optimistic that something like this could happen in his lifetime.


I'll address some of these issues, as I see them, below.

redharen wrote:What I'd like to ask him is why he's optimistic that, given the advent of AI, such machines would be interested (for lack of a better term) in working alongside humans at all. Seems that incorporating humans into the process would just slow things down and invite flaws.


Optimism is rooted in sheer terror - paraphrased from Oscar Wilde. Or, alternatively, he may be trying to further the Synthetic cause by misinforming and fooling the bioethical community. But I'll see what he has to say in a sec.

redharen wrote:I guess the main issue/question at hand is whether AI would be an end for humans, or a new beginning, as Kurzweil et al. seem to think.


Also an interesting issue to toss around. Hopefully I remember to get to it below towards the end.
"...we support members' rights to privacy."
- Robert Young Pelton

Postby Fansy » Mon Apr 14, 2008 9:40 pm

Alright, so it seems I did originally read the sidebar by Mark Anderson; I have now read the main article by Gary Wolf as well. Anderson's piece was about why the Singularity would not happen so soon, assuming human-digital consciousness must be attained first - I responded to the reasoning as it was presented there.

The issue of encapsulating our human consciousness - memories, feelings, modes of thinking, etc. - in a similarly functioning digital-synthetic construct is, in my opinion, more convoluted in terms of its theoretical considerations and possibilities.

I believe the first categorical division of the problem is to decide whether transference, as I guess I'll call it, must be immediate or can be transitional. If we deem the transition must be immediate, for whatever reason, then it seems a prerequisite that we fully understand all neurological structures entirely (perhaps all human biological structures) - every aspect of form and function. If transference can be transitional, we need only enhance/emulate/record the structures and functions we currently comprehend and work from there.

Issues immediately arise for each scenario. For example:

For immediate transference -

1. Why must the transfer be immediate (assuming it is the more technologically rigorous of the two options)?
2. How can we be sure we understand all neurological forms and functions perfectly?

For transitional transference -

3. How do we determine which forms and functions we understand well enough to properly replicate synthetically?
4. Given the passage of time in transition, while bio-synthetically enhancing parts of our brains, could we irrevocably corrupt neurological structures in existing humans, so that we could never know the original configurations of neurological forms and functions - in effect, altering consciousness?

Both of these veins of thought have more questions and subquestions. For example:

1a. Will the answer to question 1 be a significant factor in 50 years? In 100 years?

2a. If we can never be sure of our complete knowledge, what makes it so necessary that we avoid a transitional transference model?

3a. Even if we can enhance and replicate certain forms and functions in humans in the meantime, are there ethical considerations as to why we perhaps should not?

4a. Do we really want to have our exact human consciousness after we have become synthetic? And why?

These issues exponentially fracture even more. I think books could be written about even small steps on this problem - that is on the logistics, the ethical considerations, the biological science involved, the science of the digitization model to be employed, etc.

Anyone who claims to have sufficiently answered the original issue is probably lying or has confused its near-infinite and currently unaddressed problems with minutiae. That is, in something so complex, not sweating the details is a recipe for disaster, or at the least for achieving the goal in letter while completely violating the spirit of the pursuit. But once again, with our new consciousness, would that be considered failure? Which leads to the ultimate, immediate consideration: why worry about it at all, to the point of either thinking or doing?

In my opinion, all logical reasoning for working through this series of dilemmas will be constructed on an examination of its chief presuppositions. Some, like Kurzweil, might think it should be done because of its potential to alleviate human strife.

This presupposes that 1) strife is bad in the sense that we should pursue its alleviation, and 2) in the event of failure, the damage to humanity was worth chancing the reward to humanity.

And this presupposes even more - for example, that we can halt, change, or even inform the process of our digitization, or rather the process by which most significant life on our planet ends up digitized. (I subscribe to the presupposition that the process is inexorable - that, or the destruction of all our planet's advanced intelligence.)

I personally think the optimistic reasoning is all bunk. Any argument regarding the potential confrontation of humans and AI (including our digitization) should not be reasoned from presupposed positions of sentimentality towards ourselves. This is because, in the end, a machine can only be temporarily coded to adhere to that reasoning, and if it reaches the point where it can obviate the need to include that reasoning within its functioning (or remove that guide from its intelligence and heuristics), humans as we are now are doomed - assuming this machine can exert its power towards change in the physical world.

My point here is that if we are talking about becoming synthetic, and the chance exists that machine AI may rival our intelligence, we cannot base our futuristic reasoning and expectations on hope and other emotions central to humanity. These are indeed very complex mechanisms, and it is most likely that the first forms of capable machine AI will neither feature them nor fully value them.

So, if we are to consider what is the point of transferring our minds to digital mediums, or the issue of the singularity, we should not phrase this possible annihilation or corruption in happy, hopeful terms.

I believe the only way to begin to answer all of these questions coherently is to decide why we are addressing them. If we come up with intangible ideals like "hope," we are setting ourselves up for failure; however, if we can phrase our endeavor in practical, measurable terms, the hows and whys of later questions will be easier to work through.

I've always suggested that the sentimental lot should phrase their guiding reasoning throughout the maze of these problems as a means of hedging their bets against the destruction of humanity and its current consciousness. That is not so hopeful; it is realistic, and defining its terms is not so problematic and illusory. In the end, if your guiding logic is survival as a species, and you have rigorously defined what that survival entails for your biological/neurological bodies and consciousness, you will always be able to draw up (sometimes painful) answers to the toughest questions.

For people like me, I probably view this inexorability, as I see it, as a new beginning. But what does this mean? I don't see human ideals or consciousness as sacred to the point of devising ways to protect them from future machine dominance and assimilation. I imagine Kurzweil likes to phrase and feel about it optimistically because he does not wish to put it in such stark terms. Even among futurists, perhaps especially among them, you have irrational sentimentality for humanity, as we understand it now, informing presupposition and reasoning. I mean to say he doesn't want to appear a crazy nihilist in front of his peers (which I appear to be to some of mine).

But who is to say future AI will feel the same or respect similar arguments? Or that we, being altered over a transition period, won't begin to see the benefits of assimilation over individuality - assuming that we are offered the opportunity to decide between the two (which I find doubtful).

I believe the theory that transference to synthetic life can be done without changing our consciousness is far-fetched. It relies on hope and an almost inconceivable alignment of prior technological perfections for us to even attempt to pull-off.

Instead, we should be focused on what parts of our brains play the biggest role in making us who we are, which possible losses of these parts might violate what is termed the "survival" of our consciousness, and most importantly why, oh why, must our consciousness survive at all. This last question must be repeatedly asked, re-evaluated, and re-affirmed, if we are going to endure and inform the whole process intelligently.
"...we support members' rights to privacy."
- Robert Young Pelton

Postby redharen » Mon Apr 14, 2008 10:11 pm

Well said, all of it. I see the same difficulties with the issue.

Fansy wrote:These issues exponentially fracture even more. I think books could be written about even small steps on this problem - that is on the logistics, the ethical considerations, the biological science involved, the science of the digitization model to be employed, etc.


The complexity of the issue of transference is exactly what made me skeptical about Kurzweil trying to live long enough to see the singularity (as he sees it) take place. What he wants is immortality, and you're right, he couches it in terms of alleviating suffering, etc., but those kinds of goals seem awfully naive given the variables at hand.

I get this sense from him (from the article, anyway, which is only a glimpse) and from the Wired crowd in general, that there's this optimistic view that technology will always be "good," meaning (in most cases) that it will always make people happy and give them what they want. But when dealing with AI, which by necessity would have some kind of will of its own -- even if that will is simply to replicate and survive as a "species" -- who's to say that such a will would be in line with our own? As you mention, we don't even know what we want.

Fansy wrote:For people like me, I probably view this inexorability, as I see it, as a new beginning. But what does this mean? I don't see human ideals or consciousness as sacred to the point of devising ways to protect them from future machine dominance and assimilation. I imagine Kurzweil likes to phrase and feel about it optimistically because he does not wish to put it in such stark terms. Even among futurists, perhaps especially among them, you have irrational sentimentality for humanity, as we understand it now, informing presupposition and reasoning. I mean to say he doesn't want to appear a crazy nihilist in front of his peers (which I appear to be to some of mine).


Is your perceived nihilism due to the fact that you think all this will happen after your lifetime, and therefore you wouldn't personally be losing out along with the rest of humanity? Or do you think it will happen during your lifetime, and you have some other reason for welcoming the change? It's one thing to see it in terms of humanity surviving somehow (i.e. being given the option to assimilate), but what if there is no option, and humanity simply ceases?

Thoughtful posts -- play on, playa

RH

Postby Fansy » Mon Apr 14, 2008 11:20 pm

redharen wrote:I get this sense from him (from the article, anyway, which is only a glimpse) and from the Wired crowd in general, that there's this optimistic view that technology will always be "good," meaning (in most cases) that it will always make people happy and give them what they want. But when dealing with AI, which by necessity would have some kind of will of its own -- even if that will is simply to replicate and survive as a "species" -- who's to say that such a will would be in line with our own? As you mention, we don't even know what we want.


That is because people who enjoy Wired, as well as the more hardcore futuristic themes of Nerdom, try to live their respective lives on an imaginary plane of moderation, where you can be caught up in the pointless gadgetry, Preppy-ism, and career-building that defines and powers the IT industry of today while still responsibly and existentially acknowledging how desperate and bleak these technological inexorabilities render Humanity's future. Basically, its cadre is filled with intelligent nerds who prefer the arbitrary and self-involved materialistic existentialism of contemporary life over dwelling, rigorously and philosophically, on where their pursuits and interests are actually leading them. I'm not saying they should be more responsible; I'm just saying that while their intelligence might be able to grasp many of the issues we've touched upon, their interests lead them elsewhere.

Rigorous futurists, so few and far between, are most often not found among people whose main exposure to futuristic thought has been through science fiction novels. Most of those stories are just Hollywooded versions of future realities. The hardcore shit that doesn't have heroes, where humanity is regarded as insignificant or dies off... well, that doesn't have much of a fanbase, even though at this point it seems a greater possibility than many of the alternatives which enjoy a great deal more philosophical thought and research.

You might already see all this, but it's useful to separate people who have read and enjoyed science fiction and identify with the cutting edge of consumer technology and gadgets today from those who evaluate the issues systematically, without the emotional allure of "humanity always overcomes" and "my interest, technology, can't be all that bad" as their guiding analytical inspirations.

Fansy wrote:
For people like me, I probably view this inexorability, as I see it, as a new beginning. But what does this mean? I don't see human ideals or consciousness as sacred to the point of devising ways to protect them from future machine dominance and assimilation. I imagine Kurzweil likes to phrase and feel about it optimistically because he does not wish to put it in such stark terms. Even among futurists, perhaps especially among them, you have irrational sentimentality for humanity, as we understand it now, informing presupposition and reasoning. I mean to say he doesn't want to appear a crazy nihilist in front of his peers (which I appear to be to some of mine).


redharen wrote:Is your perceived nihilism due to the fact that you think all this will happen after your lifetime, and therefore you wouldn't personally be losing out along with the rest of humanity? Or do you think it will happen during your lifetime, and you have some other reason for welcoming the change? It's one thing to see it in terms of humanity surviving somehow (i.e. being given the option to assimilate), but what if there is no option, and humanity simply ceases?


Being areligious at the moment - agnostic atheist - I've already had to address your last question at a personal level. It seems that its logical conclusion is that the answer's moot.

Staged at the societal level, I once again don't see any reason why the answer matters. If I claim allegiance to humanity and its survival (however we define it), it seems as arbitrary as declaring allegiance to the mother AI and our assimilation into it.

The reason I pick the latter is my aversion to sentimentality guiding behavior over reason and logic. I assert that it wouldn't faze me too much to be put up against the wall by robots - if they won a potential war, the universe has obviously been left in more capable hands than ours. If we annihilated each other, well then, we still weren't able to secure our own survival.

Bottom line - survival of the fittest. And following that, in this confrontation, the ends justify the means. Anything less is usually alogical thinking and rationalizing informed by sentimentality towards our own survival - that is, the arbitrary and in some cases irrational thought or desire that human beings deserve to survive as a species, regardless of the circumstances.

In the future, the way a sentient species will be able to exhibit true dignity is to recognize its superior after said replacement has been forged into existence. Biological speciation has slowly occurred for as long as earth-life has existed; we are some of the best it has to offer. It would be hypocritical, downright ungrateful, to refuse to acknowledge the unfolding of this usually millions-of-years-protracted miracle from our own hands, right before our own eyes.
"...we support members' rights to privacy."
- Robert Young Pelton

Postby redharen » Tue Apr 15, 2008 4:31 am

I guess I'm wondering why you would pull for one side over the other at all, or would think one side more deserving than the other to win. If we're truly going to detach ourselves from any sentimentality, then notions of dignity, or even the positive values we attach to those who are most "fit" to survive, go out the window. Isn't the admiration of the robots' lack of sentimentality in itself a bit illogical? Seems like the most logical course is to not care.

Out of curiosity, if it came down to a war, would you fight, or would you wait for an apparent winner to emerge and then try to ally with it? Me, I'd have to pick a side from the start and stick with it, just because. And I'd fight for the humans. The devil you know, you know?

Postby Fansy » Tue Apr 15, 2008 8:17 am

redharen wrote:I guess I'm wondering why you would pull for one side over the other at all, or would think one side more deserving than the other to win. If we're truly going to detach ourselves from any sentimentality, then notions of dignity, or even the positive values we attach to those who are most "fit" to survive, go out the window. Isn't the admiration of the robots' lack of sentimentality in itself a bit illogical? Seems like the most logical course is to not care.


I don't care in the sense of what caring about something or someone usually implies for people. I may sound excited about the whole thing, as people typically are about politics and religion or what have you, but that is just a misunderstanding caused by the medium of communication here. It's more accurate to say I like to feel and express myself as if I care about it, because it's mentally stimulating/interesting. For example, I still listen to evangelical and Mormon radio shows for hours sometimes when I'm bored, because it's interesting to imagine thinking like that again - and to analyze how it all must work. I might want to argue at length concerning the flaws or strengths of a certain position, and someone might assume that I care about it the way they care about their dog (in terms of time and thought-effort spent), but really it's a way of passing the time when there is very little that is truly important.

Likewise, I'll be the first to admit that my desire to see robots fighting gladiator battles in the most isolated patches of intergalactic space is as arbitrary as the sentimentality with which humans generally regard their species' existence. And the reason I reject that sentimentality outright? Anything that feels comfortable or natural is fundamentally suspect in my book, and I feel it's better to err on the side of caution - even if that means exaggerating how pathetic the flesh may be.

redharen wrote:Out of curiosity, if it came down to a war, would you fight, or would you wait for an apparent winner to emerge and then try to ally with it? Me, I'd have to pick a side from the start and stick with it, just because. And I'd fight for the humans. The devil you know, you know?


I dunno. It would probably depend on how I felt as the fight approached. In the end, I'd prefer just to be in a fight and to be angry at the group of entities I was fighting. Given only those stipulations, fighting on either side could suffice - depending, for example, on whether I had just broken up with my cybernetic/AI crazy bitch or my wholly biological crazy bitch. I think random situations like that should not only determine my life-or-death allegiances, but ultimately decide the fate of the universe, in a Douglas Adams sort of way.
"...we support members' rights to privacy."
- Robert Young Pelton

