Futurist Ray Kurzweil Pulls Out All the Stops (and Pills) to Live to Witness the Singularity (wired.com)
41 points by edw519 on March 27, 2008 | 94 comments


>Kurzweil predicts that by the early 2030s, most of our fallible internal organs will have been replaced by tiny robots. We'll have "eliminated the heart, lungs, red and white blood cells, platelets, pancreas, thyroid and all the hormone-producing organs, kidneys, bladder, liver, lower esophagus, stomach, small intestines, large intestines, and bowel. What we have left at this point is the skeleton, skin, sex organs, sensory organs, mouth and upper esophagus, and brain."

What developed country’s government would allow people to experiment with such things? Risk-aversion is a real brake on the singularity, and it tends to increase with wealth.


That's assuming that breakthrough research will continue to come out of the United States and Europe, as opposed to China, where human rights purposefully lag without repercussion from other nations.

The major advances over the next fifty years will likely come from China for that very reason: the countries that are less risk-averse will make the greatest technological strides as the frontiers of technology get more and more morally murky.

Is it ok to remove someone's organs and replace them with the mechanical equivalent, knowing that they may very well die as a result? Is it ok to do it with their permission? Without their permission?

France just had an issue where a woman had a rare form of cancer that caused her to look like the girl from Saw, with her eyeball hanging out, on top of the whole 'dying of cancer and in excruciating terminal pain' thing.

She was in so much pain, so miserable, and so without hope that she decided she would opt for euthanasia. The courts denied her petition, and she quickly committed suicide on her own.

If we have issues like that with cancer patients, just imagine the issues when we start dealing more with genetic engineering, nano-technology, human cloning, and cybernetic enhancement.

Here in the US, these technologies will, for better or for worse, be held up by the government and our courts as we collectively attempt to wrap our minds around the implications.

In other countries, research will move along, full speed ahead.


I think it is already happening; there are artificial hearts, for example.


Yes, I think Kurzweil's response would be that these things will be replaced in people with a condition that would be fatal if the original organs were left intact. Over time, people would become more and more "cybernetic" as their natural parts failed and technology came up with "improved" replacements.


Experimental procedures are less regulated than standard practices. "Kurzweil and Grossman justify it not so much with scientific citations — though they have a few — but with a tinkerer's shrug. 'Life is not a randomized, double-blind, placebo-controlled study,' Grossman explains. 'We don't have that luxury. We are operating with incomplete information. The best we can do is experiment with ourselves.'"


I think this is a much better argument against it than any of the more common technologically pessimistic ones.


I loved Kurzweil's book. Unfortunately in all the talk about Moore's Law and exponential growth, Murphy's Law was forgotten (perhaps in a self-referencing manifestation of the same law).

Example: My HMO has figured out that, if a patient has a serious chronic ailment (e.g., diabetes, heart disease, cancer, etc.) then it is cheaper for them should the patient die:

I. Treat patient:

      cost of treatment           $100

      cost of future treatments   $???
Highly-variable costs borne largely by the HMO.

II. Let patient die:

      cost of treatment           $100

      cost of future treatments   $  0
Cost to HMO limited to $100. Remaining costs borne by the deceased's life insurance company.

The perfect HMO patient is one who never visits his doctor and then dies quickly.
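The two branches above can be sketched as a toy model. The $100 figure is from the comment; the chronic-care cost stream and survival years are invented placeholders:

```python
# Toy model of the two branches above. All figures beyond the $100
# initial treatment are illustrative, not actual HMO data.

def cost_to_hmo(initial_treatment, yearly_chronic_cost, years_survived):
    """Total cost the HMO bears for one patient."""
    return initial_treatment + yearly_chronic_cost * years_survived

# Branch I: treat the patient, who then lives 20 more years with a
# chronic condition costing a hypothetical $5,000/year to manage.
treat = cost_to_hmo(100, 5000, 20)

# Branch II: the patient dies; no future treatment costs accrue,
# and the remaining costs shift to the life insurer.
let_die = cost_to_hmo(100, 5000, 0)

print(treat, let_die)  # 100100 100
```

The open-ended `years_survived` term is exactly the "$???" in the table: it is the only variable the HMO cannot cap, which is the asymmetry the comment is pointing at.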

Should the HMO increase profitability by speeding their chronically-ill patients to a painless death? Stockholders cry "Yes!"; patients whimper a fearful "No." Needless to say some cognitive dissonance must arise should one's income be so directly tied to one's ability to (not) keep one's patients alive.

Economic and social conflicts within the medical system are a greater hindrance to the singularity than are technical hurdles.


What is an HMO? In any case, if an insurance company defaulted to "let patient die", it would probably lose a lot of customers; shareholders would not be pleased.


From the New Oxford American Dictionary:

health maintenance organization (abbr.: HMO)

a health insurance organization to which subscribers pay a predetermined fee in return for a range of medical services from physicians and healthcare workers registered with the organization.


This is exactly why health and life insurance is a losing game. Your health insurance company loses money when it fulfills its purported purpose. Life insurance puts you on the wrong side of the bet: you are betting that your life will be short.

I highly doubt that Kurzweil has an HMO.


I've got a better plan for my future: I'm going to grow old and die.

(Unless, of course, fate intervenes and I don't grow old. But I intend to keep planning to grow old right up until the last minute!)

It's a classic plan, and well tested. Lots of prior art. Lots of examples to learn from, and a lot of infrastructure and literature.

You folks can find fault with my plan if you want. But here's the thing: You're going to follow the same plan, whether you're willing to admit it or not. [1]

Further reading: http://kk.org/ct2/2007/09/my-life-countdown-1.php

[1] I'd offer to bet, because my odds are really, really good... but tontines are illegal for a good reason. ;)


Really? So if you lived in a post-Singularity world, in which clinical immortality was an unquestioned human right, you would periodically get your face wrinkled, inject yourself with bone-brittling nanobots, induce dementia, and then wither away?

If you tried to inflict that fate on anyone else, it would be a terrible crime. If you wanted that for yourself, it would be evidence of mental illness. If you consider it anything but the status quo or an inevitability, it's indefensible to desire it.


So if you lived in a world... in which clinical immortality was an unquestioned human right...

ROTFL.

I live in a world where my wife and I had to fight our old insurance company for basic preventive health care. (She tried to have a mammogram in the same year that I had a simple 20-minute physical. Turns out that violated the fine print in our policy, and we had to pay out of pocket.) I'm glad to say that our uninsured, thirtysomething friend just managed to save enough money for his cancer surgery, but others of our friends aren't so lucky. And I live in Massachusetts, where the state has forced insurance companies to sell relatively affordable policies to individual contractors like me. If I lived in a different state, I'd probably be paying over $1000 a month for the privilege of having my claims denied.

So, sure, I'd love for clinical immortality to be possible, and love it even more if it were an "unquestioned human right". But here in the USA I don't even have the "unquestioned right" to effective, existing medical technologies, and that's a more pressing concern.

Meanwhile, would I want to age in a post-Singularity world? Who the hell knows? I don't think you can be immortal and still be recognizably human -- the entire design of humans is predicated on mortality, just as it's predicated on gravity and an oxygen atmosphere and the presence of edible plants and animals. And who knows what the post-human aliens will think?

To the extent that post-singular beings remain human, I think they'll exhibit many of the same self-destructive tendencies that humans do now. I used to be a cancer researcher, and the sight of people lighting cigarettes still makes me angry inside. It feels like they're setting fire to my hard work. But what can you do? To discount the future is part of human nature.

Recommended reading:

On the alienness of the immortal human: William Gibson's Count Zero.

On the psychology of the immortal human: Doctorow's Down and Out in the Magic Kingdom is good. The much shorter, ha-ha-only-serious version is the tale of Wowbagger, the Infinitely Prolonged in Douglas Adams' Life, the Universe, and Everything. (Douglas Adams is one of those guys whose every throwaway joke encapsulates an entire series of philosophical novels.)

On the "unquestioned human right" to immortality: Roger Zelazny's Lord of Light. You may have to read most of it before it becomes clear what I'm talking about, but keep reading. It's an awesome work.


ROTFL is not the right reaction if you follow it up with, basically, "Your hypothetical futuristic scenario is not like my real-life right-now scenario!" I said "If" for a reason.

I don't think immortality is or should be an unquestioned right, but I suspect that within a decade or two of it being a realistic possibility, letting people die will be about as popular as slavery.


it's no more a mental illness than being a luddite


It's possible to wish that things were different, or to want to have a closer connection to the results of one's labor by avoiding automation and seceding from impersonal organizations. But I don't think Luddites advocate smallpox or other debilitating, disfiguring, fatal diseases. Aging is a disfiguring, debilitating, fatal disease, and so as far as the scope of this discussion is concerned, the difference between it and smallpox is that it's not contagious and it's taken longer to cure.


i agree about what aging is, but people aren't trying to advocate diseases like that, they are usually just skeptical or ignorant about technology and what is possible, and bad at changing their minds to keep up with progress.


It would be fine for them to argue that it's implausible, just as it would almost make sense to say "I don't see how smallpox could be cured. So I intend to rot to death." But the argument I was responding to was, basically, "I accept that it could be possible to cure smallpox, but the idea of not dying that way is repulsive to me."


OK, fish, I'll bite. I read your reference, but he never came out and said what he ACTUALLY DID during those 6 months. Is there a "rest of the story" or is it inferred?

Nice link. Really got me thinking. Maybe the subject of a future thread...


He mentioned that you can find it by finding the online archive of This American Life and playing an early episode... perhaps even Episode 1.

Don't let me make you feel guilty for being too lazy to do this... because I haven't bothered to do it myself yet. :)

I doubt that it matters too much what he did. What one chooses to do is intensely personal. I think that "getting you thinking" really is the whole point.


God bless him; those pills are exactly what will kill him.

The singularity is coming, but Moore's law won't have much to do with it. I remember seeing a few-days-old kitten and for the first time in its little life, it saw a dog. It hissed.

We already have the equivalent computing power of a kitten's brain. What we don't have is the software that can recognize a dog without having ever seen one before.


What we don't have is the software that can recognize a dog without having ever seen one before.

Sure we do.

...................

Anyway, there is a problem with the notion of super-intelligent robots. Without getting too much into it, suffice to say that robots will be forced to subscribe to the same survival-of-the-fittest constraints that humans have. When you observe the human species, it might seem strange that there are so many people who enjoy rap music and nascar, and so few who enjoy calculating derivatives in their heads. But nature (read: survival of the fittest) has only portioned out a comparatively small number of superintelligent people in the world, for a reason. It will presumably do the same for robots.

That is, when robots are manufacturing themselves, a moderately intelligent "thug" robot, or "drone" robot, or "worker" robot, or "leech" robot will have certain advantages over a "sit and design better competitors for myself" robot.


Sure we do.

Really? And we have software that can recognize a cat from a dog (as that kitten can do)? And my face from yours even if I cover my ears (as the kitten will be able to do once it imprints)? I'd like to see that software!

The truth is Kurzweil and others are fooling themselves if they think this is a hardware issue.


> The truth is Kurzweil and others are fooling themselves if they think this is a hardware issue.

I agree that the state of that software is not there yet from what I've seen, but their workaround for the software limitations is to say we will model a working brain rather than inventing the software. If this is possible, it becomes a hardware issue.


>can recognize a cat from a dog (as that kitten can do)

The kitten did not recognize the dog as a dog. Just as a threat. Anything alive, moving, new and bigger is automatically labeled a threat until some clues (chemical = familiar smell, elapse of time without attack) teach the kitten otherwise. So there you have it.


How can you say the pills are dangerous without knowing what they are?


I'm channeling a pharmacist friend here. Either a pill has no effect (such as Vitamin C) and just gives you expensive urine, or it has an effect. And if it has an effect, it has side effects. No pharmacist in the world knows what would happen if you mix that many pills, whatever they are, and so, unless most of them are herbal non-drugs, he's taking a big chance, which is exactly what he's trying to avoid. Thus the irony.


Vitamin C has no effect?! That's not what mommy told me :(

Any references?


Linus Pauling died.


He's not the best example as he died at age 93.


The problem is we're a long way off being able to prescribe these things knowing with certainty that they'll work to our advantage. I'm saying that as a one-time biochemist and long-time pill pusher myself. Take selenium and prostate cancer: the picture is cloudy. A researcher in the field whose earlier work offered support for Se supplementation now wonders if it's such a good idea. Whatever Kurzweil's taking, I hope it works for him.


Anyone seen one of his talks or ppts? Anyone else get those creepy cult feelings? Anyone?


Yes. I've laid out my objections to his timetables here before: http://news.ycombinator.com/item?id=142175

I like what his books do in terms of sparking the imagination, but I take his claims with the same level of skepticism as I do an infomercial. Another thing is the repetition; if you've read Age of Intelligent Machines you've read Age of Spiritual Machines as well as The Singularity is Near.

Some of the predictions of Spiritual Machines have fallen short already: Most people don't use speech recognition to enter data into computers. Granted, he chose 2009 as that target date, but I feel safe that won't be the main method of entry in a few months.


Speech recognition with Naturally Speaking, as of a year ago when my buddy broke his arm and tried it, was getting there, but still not usable for him.

Speech is not as nice as typing for entering data. Typing seems [citation needed] faster and like less work. Also, in an office, the voices absorb mental bandwidth from everyone in earshot. So I think in this case Kurzweil may or may not be wrong about the quality of the technology in a year, but he's definitely wrong about speech being a preferred method for interfacing with a computer.


Solved? I think this qualifies within his predictions. http://www.technologyreview.com/blog/editors/22037/


I'm guessing that, given that characterizing voiced speech is still inaccurate, the more difficult task of translating neural impulses to speech will suffer even further degradation of performance. Unsolved: at the moment, the device has a limited vocabulary of 150 words and phrases.

A long way to go.


They still have a year and a half, which means twice the number of transistors on their next hardware. Give them the benefit of the doubt.


No, I won't. The problem is not transistors; the problem is in the lack of understanding how speech recognition works. It is very difficult to get right and it is highly unlikely that those problems will be solved completely within the next 18 months.

They could double the transistor count now by running the algorithm on a better DSP or Cell processor.


I laid out my objections to your objections here:

http://www.kurzweilai.net/mindx/frame.html?main=show_thread....

I wouldn't be surprised if he's spot-on about the technology of speech recognition getting to that practical level exactly on his predicted schedule -- it's one of those kinds of things he has been obsessed with on a very technical level for many years, and it really has been improving rapidly.

The chief reason I doubt most people would choose it is that it just doesn't work in an environment with other people around. However, since developing tendonitis in my right hand, I've often thought I'd try it just to give my hands time to heal.

It could get good enough within three or four years that people will do it anyway, just because it will be faster and more comfortable than typing for all but a handful of people.

I agree that some of his other predictions are a little nutty, but only by degrees.


Thanks much for the much better articulation!


I don't think it is possible to suggest that you could live forever in a world that will be completely different without appearing a bit religious/cult-ish. That really is pretty much exactly the same message as many religions. But I don't think this alters the merits (positively or negatively) of his argument.


Yes, I felt those too. I thought it was only me. His idea of the future scares me.


I think it is important to remember that the immortality mentioned here is a protection from biological causes of death, but there are plenty of other obstacles to prolonged existence (especially prolonged self-consciousness and memory, which is harder to wrap one's head around).

Take the popular theories of the universe's lifespan (or life cycle).

1) The universe is expanding exponentially: eventually every star will die and there will be nothing to birth new stars. The universe will stop moving and hit a zero Kelvin freeze.

2) The universe cycles between big bangs and big crunches: information cannot survive through a crunch. Well maybe one binary bit? hehe.

3) Some sort of multi-universe collisions or string theory madness: who knows but there could be some event that wipes out what we know as reality.

Basically, I am trying to point out that immortality is probably a fundamentally flawed concept. A simpler approach might be to consider memory. Memory storage cannot grow forever (let alone be accessed efficiently enough), and while we could cap memory and still exist indefinitely, I think we would have to hit a ceiling of knowledge attainment and we would just be spinning our wheels like a more exciting version of a rock (not that humanity is any less of a _process_ right now).

Smarter people than I have already dwelled upon these issues and probably have better answers than mine (esp. since I have no answer). Asimov's "The Last Question" is an interesting short story that covers this area a bit.


I can't help noticing that while we are making these exponential advances in technology, we are also using exponential amounts of our limited supplies of cheap resources -- things like oil, coal, copper, gold, and helium. I sure hope Kurzweil gets his singularity before any of these limiting factors kick in, or it could put a nasty kink in his graph.


One of the most striking features of the 20th century is that the economy has gotten lighter per unit of wealth. New goods are made of fewer raw materials and more ideas. Greenspan has a famous piece on this, which unfortunately I was not able to google up. I am with Julian Simon on the issue, I doubt that raw material scarcity will be a significant brake on human advancement.


That's a good point but it's only one aspect. Another striking feature of the late 20th century is our ability to use advanced technology to extract resources optimally. So instead of getting a slow decline in supply and correspondingly higher prices slowing demand (the ubiquitous bell curve), you get a diagonal line followed by a vertical one -- in other words, a cliff.

That sounds alarmist but it's just because you don't notice until you're on the wrong side of the graph. Cod was one of the most plentiful fish around, and because it was good and cheap it became the base of the British "fish and chips" staple. Cod is now an endangered species. 90% of all large fish have disappeared from the world's oceans in the past half century. And that's just fish. We have new technology to extract oil efficiently, sometimes called "super straws". So instead of sputtering to a stop, we'll get there at full speed. It's easy to say, "Oh, we'll just slow down or find alternatives." That's like going 150mph and saying if the curve is too tight, we'll just slow down. Sure we will. James Dean style.


The situation that you mentioned with the cod fits the "tragedy of the commons" scenario. Where property rights are hard to define, resources are over-extracted because there is no owner with the incentive to conserve.

I challenge you to name one resource with normal property rights that fits the "diagonal line/cliff" scenario. I know of none, although it appears to be the common wisdom of the internet at the moment.


Exactly, they're only overused as long as they're cheap. We'll find alternatives quickly once the price signals that it's worth it. (The alternative might be a lower standard of living because no alternative can be found, but that's life). And usually, that shift creates an arbitrage opportunity that someone is going to make money off of.

For instance, while many people fear rising gas and oil prices, I'm pretty sure motorcycle manufacturers, bicycle shops, and rental property owners in dense, transit-rich cities are licking their chops. $10/gal gas makes NYC rent more reasonable.


I thought about the problem of entropy too. This is the thing that worries me most about the future at all. One thing that I hope for, but am not sure of, is that the number of atoms in the universe is somehow a constant (it might be true for smaller, more essential particles), and if those atoms could once be created (I am a creationist, but it holds true for evolution as well), they COULD be recreated, replenishing the supplies. In fact, we could become very, very good at recycling and turn all our trash into a source instead of waste. Maybe with nanotech? I believe it would be very easy for a bunch of nanorobots to grab atoms of gold in an old discarded motherboard and pile them up, creating a fresh gold bar from them. Is it too far-fetched?


It would be very hard to argue that the singularity won't arrive eventually, provided humans survive long enough. It's just the date that's tough to predict. Sure, computing power has been growing exponentially, but as they say, "past performance is no guarantee of future results", and there are so many other variables aside from raw processing speed.


It would be very hard to argue that the singularity won't arrive eventually

Really? What if consciousness is not an algorithm?


What does consciousness have to do with it? Eventually there will be machines that are capable of passing the Turing test or something similar. The singularity is what happens next.

"To the question, 'Will there ever be a computer as smart as a human?' I think the correct answer is, 'Well, yes... very briefly.'" - Vernor Vinge


What does consciousness have to do with it?

It's human. If one or more aspects of being human aren't computable, then it's plausible that computers will never pass the Turing test.

Of course, if you assume that the space of what can be expressed algorithmically is equivalent to the space of all that is, some of these claims become trivial. But that's a whopping leap of faith. Perhaps Richard Dawkins could get around to writing a book about that one when he's finished demolishing its more popular competitor. :)


[Consciousness] is human. If one or more aspects of being human aren't computable, then it's plausible that computers will never pass the Turing test.

You seem to be saying that a computer would have to actually be human or be conscious in order to pass the Turing Test. But passing the Test only requires the computer to simulate some observable behaviors of a conscious human well enough to fool a skilled interrogator. In particular, it only has to simulate those behaviors that are observable through a low-bandwidth TTY.

I suppose one could argue that there simply does not exist an algorithm that could pass the Turing test (when run on the powerful computers of the future), but that seems like a very hard case to make, particularly here on Hacker News. One could instead try to argue that though such an algorithm may exist, we will never find it, but that doesn't seem to be what you are saying either.


I'm not saying there simply does not exist such an algorithm; I don't have an opinion on that one way or the other. I'm questioning your claim that "it would be very hard to argue that the singularity won't arrive eventually" because "eventually there will be machines that are capable of passing the Turing test" (emphasis added). You don't know that. Nobody does. Some people like the idea, some people don't; it's wishful thinking on both sides.

My point was to offer one possible scenario under which computers might not pass the Turing test: namely, if there turns out to be some essential aspect of humans (capable of being observed through a low-bandwidth TTY, if you insist) that can't be captured algorithmically. Consciousness might be one such feature. (It's a reasonable one to suggest for two reasons: it's central to human existence and no one has a clue what it is.)

"We'll just simulate it, then" is, in my view, a dodge. If there isn't an algorithm for X, why should there necessarily be one that approximates X arbitrarily closely?

Partisans of the "singularity" are fond of insisting that it's all a matter of processing power, but that's only true if there's an algorithm for everything. That blithe assumption seems to me an article of faith. It reminds me of G.K. Chesterton's line, "Give us one free miracle and we'll explain everything."


You are suggesting that there may be some aspect of humans which (a) can be observed by another un-augmented human over a low-bandwidth TTY and (b) can never -- at any point in the future -- be simulated by a computer, no matter how complicated the algorithm or how powerful the computer.

While I admit that this is possible, it strikes me as implausible for the following reason: the capabilities of computers are increasing as time goes by, while those of humans are not (the Flynn effect notwithstanding).

I could understand if you were to say that we won't have the technology to make computers that pass the Turing Test in the next x years, for a sufficiently large x. But you are suggesting that it is plausible that we will never do it, in the same sense that a physical object will never move faster than the speed of light. In the absence of a well-accepted theory supported by evidence (as we have for the limit of the speed of light), this strikes me as a very strong claim that is difficult to defend given the existence of technological evolution in the world.


At this point I can only appeal back to what I already said: it depends on whether or not there's an algorithm for everything. If this is true, then sure: assuming enough processing power (and programming power), anything can be simulated. And if it's not true, well, it follows that there's a hard limit, à la your speed-of-light analogy, to what computers will ever be capable of. Doesn't it?

How is that a strong claim? I don't state that the "algorithms for everything" thesis is true or false, because I don't know. Nobody does. My point is that singularianism depends on a belief about this; a belief that is, based on what we currently know, rather miraculous, and therefore not very hard to dispute.

Thanks, by the way, for quoting me as saying "may be". I was starting to doubt my ability to get my point across!


[I]t depends on whether or not there's an algorithm for everything.

We already know that there is not an algorithm for everything. Indeed, Turing himself showed that there is not an algorithm for solving the halting problem <http://en.wikipedia.org/wiki/Halting_problem>. But that has no bearing on whether or not there exists an algorithm that would someday allow a computer to pass the Turing Test.


Replace that phrase with algorithm for everything that's intrinsic to being human.

Or, if you prefer, algorithm for everything necessary to simulate being human sufficiently well to consistently fool humans via low-bandwidth TTY. My point remains the same: it's a belief either way. And throwing processing power at it will only yield the predicted effect in one of those cases.

Edit: the existence of undecidability results (and the general failure of the formalist project) ought to make us more, not less skeptical of grandiose claims concerning computability.


[I]t depends on whether or not there's an algorithm for everything necessary to simulate being human sufficiently well to consistently fool humans via low-bandwidth TTY.

But clearly such an algorithm does exist, so this objection fails.

Its existence follows from these facts:

(1) only a fixed amount of data in the form of questions can flow through the low-bandwidth TTY,

(2) only a fixed amount of data in the form of responses can be sent back, and

(3) there exists an algorithm for any function with a bounded input and output. (To see this, note that the algorithm could consist of a lookup table containing a correct output for each of the finite number of inputs it has to deal with.)

I concede that it is another matter entirely whether or not we will ever discover such an algorithm and implement it on a sufficiently powerful computer. But the fact that an algorithm for passing the Test must exist takes the discussion out of the realm of physical and/or philosophical possibility and into the realm of technology and engineering.
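Fact (3) above can be made concrete with a toy sketch. The dialogs and replies below are invented placeholders, and a real table would be astronomically large; the point is only that such a total function exists:

```python
# A function with finitely many possible inputs always has a trivial
# "algorithm": a lookup table. This tiny table is a stand-in for the
# astronomically large one the argument posits.

# Key: everything the interrogator has typed so far (bounded by the
# TTY bandwidth and the test's time limit).
# Value: one Turing-test-passing reply for that history.
LOOKUP = {
    "Hello?": "Hi there.",
    "Hello?\nAre you a machine?": "Ha, I get that a lot. No.",
}

def reply(transcript: str) -> str:
    # Bounded input and bounded output mean the table is finite, so
    # this function exists even if we could never write it down.
    return LOOKUP.get(transcript, "Sorry, could you rephrase that?")

print(reply("Hello?"))  # Hi there.
```

This is an existence argument only; nothing about it suggests the table could ever be constructed in practice, which is the engineering question the comment concedes.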


Uh oh, you got technical on me!

It's been a long time since I've studied this stuff, but for what it's worth, your argument feels incorrect to me. It might not be possible to address that effectively here, but let me try.

Let's call a particular run of the Turing test a "Turing dialog". Say a Turing dialog D consists of alternating statements C1,H1,...,Cn,Hn where C stands for "computer" and H for "human". Say a "successful Turing dialog" is one that succeeds in fooling the human participant - that is, Hn is something like "I think you're human".

You're right that for any Turing dialog D, D is finite and therefore an algorithm exists to reproduce it (a simple lookup table will do). But such an algorithm is only good for one specific D. To build it, you'd have to know all of D in advance.

To pass the Turing test, we need more. We need an algorithm, which, given any valid prefix of a Turing dialog, i.e. any sequence C1,H1,...,Ck,Hk, knows how to produce Ck+1 in such a way that the dialog will end successfully. That's not the same thing, is it?


We need an algorithm, which, given any valid prefix of a Turing dialog, i.e. any sequence C1,H1,...,Ck,Hk, knows how to produce Ck+1 in such a way that the dialog will end successfully.

Exactly, but since each Turing dialog is of bounded length (say, no more than a megabyte in size, depending on the bandwidth of the TTY and the predetermined maximum length of the test), the set of all Turing dialogs is finite and is therefore subject to the dictionary-algorithm approach.

It's rather like building out the game tree of optimal play for White in chess. The tree for the Turing Test "game" would have to include all the questions that could be asked at each point in the tree, but you only need one possible correct ("Turing-test passing") response at each node.
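The dictionary approach can be made concrete. A minimal sketch, with toy placeholder entries; actually populating such a table with a passing response for every prefix is the (astronomically infeasible) hard part:

```python
# Sketch of the lookup-table argument: because every Turing dialog is
# bounded in length, the set of valid prefixes is finite, so in principle
# a table can map each prefix seen so far to one passing response.
# The entries below are invented placeholders, not a real strategy.

TABLE = {
    (): "Hello! Nice weather today, isn't it?",
    ("Hello! Nice weather today, isn't it?", "Are you a computer?"):
        "Ha, I get asked that a lot. No, I'm just slow at typing.",
}

def respond(prefix):
    """Return the table's move for a dialog prefix (C1, H1, ..., Ck, Hk)."""
    return TABLE.get(tuple(prefix), "Sorry, could you rephrase that?")

print(respond([]))
```

The default response stands in for the vast majority of prefixes a real table would have to enumerate explicitly.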


Why is consciousness special? What do you think about IBM's project to simulate a rat brain neuron by neuron?


I'm not sure consciousness is special (or what "special" means). I brought it up as an example. But it doesn't really matter. "What if consciousness is not an algorithm" is just a special case of "What if not everything about humans is computable". That's the real point. Any grandiose claim about this seems to me laughably unsubstantiated. Yet there it is, giggling underneath the solemnity of the Singularians.

Edit: don't know about the rat project. Curious to know the results.


What if consciousness is not an algorithm?

Then it can be simulated using one. If chemical reactions can be simulated, it's just a matter of processing power.


There is a huge fallacy to this.

You would think that any formal system could be defined with axioms, but there are undefinable formal systems.

In GEB, Hof explains that he thinks that this could be the missing piece to creating AI/life.


I have never heard of any such thing, and neither has Google. I suspect you are very confused. Can you give an example of an "undefinable formal system" and say (1) how it satisfies the definition of "formal system" and (2) the sense in which it is "undefinable"?


For Kurzweil, it's an interesting way of asserting - advertising, if you will - his own confidence in his capacities as a futurist.


To address the points raised in these comments:

1. We do NOT have "the equivalent computing power of a kitten's brain." That's like saying that because we have apples we have the equivalent taste of an orange.

The only real way to translate current silicon processor-based computing into a biological brain-based form is to ask how <a href="http://www.technologyreview.com/Biotech/19767/">many neurons can we model on current hardware.</a> The answer to that is: not nearly enough... yet. At the current rate of technological growth, we can expect to model a full rat brain within ten years, with a human brain shortly to follow (remember, exponential growth moves fast!) Once we can fully simulate a brain, the only limit to its power is the speed of the processor it runs on.
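For a rough sense of how the exponential claim cashes out, here is a toy doubling-time calculation. The starting capacity and doubling period are invented assumptions, and the brain sizes are order-of-magnitude textbook figures, not numbers from the linked article:

```python
# Toy compound-growth sketch: how many doublings to go from simulating
# N0 neurons to a target brain size? All inputs are illustrative.
import math

N0 = 1e6            # assumed neurons simulable today (placeholder)
RAT = 2e8           # rough order of neurons in a rat brain
HUMAN = 8.6e10      # rough order of neurons in a human brain
DOUBLING_YEARS = 2  # assumed doubling time (placeholder)

def years_to_reach(target, start=N0, t_double=DOUBLING_YEARS):
    """Years of steady doubling needed to grow from start to target."""
    return math.ceil(math.log2(target / start)) * t_double

print(years_to_reach(RAT))    # 8 doublings -> 16 years under these assumptions
print(years_to_reach(HUMAN))  # 17 doublings -> 34 years under these assumptions
```

Under these made-up inputs the rat-brain milestone lands well before the human one, which is the shape of the argument even if the specific dates are anyone's guess.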

This approach also addresses the "recognizing a dog" point. By accurately simulating a brain, they will use the same algorithm the brain does to recognize objects. It is also worth pointing out that <a href="http://www.onintelligence.org/index.php">advances</a> <a href="http://www.numenta.com/">are being made</a> in deriving the cortical algorithm directly and using it for the very type of pattern recognition you mention.

Finally, the point is subtly flawed. Cats recognize dogs "without having ever seen one before" because the knowledge of what a dog is is hardwired into their brains from birth. In effect, they <em>have</em> seen dogs before, or some part of their brains have.

2. The point about applying survival of the fittest to robots is ridiculous. I barely know where to begin.

First, we will use the term AI instead of robot, as a robot is just an AI with a body. Second, the only thing that matters to an AI is how "smart" it is - how efficient its algorithms are and how fast those algorithms run. Having a big strong body doesn't even make sense in this context.

Additionally, the idea that AIs will design AIs smarter than them is also flawed. If an AI figures out an algorithm that allows for a more efficient thought process, what's to stop the AI from modifying itself to use that algorithm?

3. Pills: Pharmaceutical drugs have (often serious) side effects. Homeopathic medicines usually have very few side effects. I'm not saying that taking all those pills is healthy or beneficial, but the rampant side effects people seem to be suggesting probably won't manifest.

The heuristic that seems to crop up in all matters of health/eating is "in all things moderation."

4. Ray Kurzweil suggests that the final form of immortality will manifest as computer systems able to model/run our consciousnesses. That way we will be able to exist for as long as the machines do, as well as back up and transfer our consciousnesses.

I submit that, even if this is possible, it wouldn't make <strong>you</strong>. Rather, it would merely ensure that some copy of you persisted. Consider if you could make perfect clones of yourself, with all of your memories and developments. Those clones would act exactly like you, but they still wouldn't be you.

5. On a personal note, if we agree that it's possible that a method for achieving immortality will be discovered in our lifetimes, the logical course of action is to devote the entirety of one's efforts towards realizing this possibility. The reward surely justifies the risk.


>Homeopathic medicines usually have very few side effects.

Yes, because they don't do anything. They are based on a false idea that can never work. There is no scientific or clinical evidence that homeopathy works. It is ridiculous to even mention it as an alternative.


Homeopathy is based on the idea of taking a solution of some chemical, then diluting it so much that no molecules of the original solution can remain. The BBC did a documentary on it a few years back, and they found that lo and behold, under laboratory conditions, homeopathic solutions behaved indistinguishably from water.
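The dilution point is easy to quantify. A common homeopathic potency is "30C", i.e. thirty successive 1:100 dilutions; the arithmetic below shows why essentially no molecules of the original substance can survive:

```python
# Expected molecules of active substance left after a 30C homeopathic
# preparation (thirty 1:100 dilutions), starting from one full mole.
AVOGADRO = 6.022e23          # molecules per mole
dilution_factor = 100 ** 30  # 30C = a factor of 1e60

expected_molecules = AVOGADRO / dilution_factor
print(expected_molecules)    # ~6e-37: effectively zero molecules remain
```

Even starting from a mole of active ingredient, the expected count of surviving molecules is about 37 orders of magnitude below one.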


I strongly disagree with Point #2. How we interact with our environment contributes greatly to how we perceive the world. Intelligence that is synthesized outside of natural processes (I think the term "AI" is paradoxical when describing super-human intelligence) will require means of interacting with the environment in the same way we require our bodies.

It would be evolutionarily advantageous for a superhuman intelligence to devise better ways of grasping/flying/swimming in order to expand upon the existing knowledge base that it would acquire. Also, from the perspective of experience-based knowledge, having a "body" would be invaluable.

In other words, reading books and doing thought experiments do not contribute to our intelligence as much as these things in combination with experimentation and interaction with our real world through what our bodies can manifest. If a super human intelligence does come into consciousness, then it's possible that it would see the advantage of this...

Could you please elaborate upon Point #1?


If you boil down what the brain really does, it chooses actions based on experience and inputs. If we want an AI that will operate in the real world, then yes, it will need some sort of body so it can learn how the world works (physics simulations are nowhere near as complete as the real world is). If we want an AI that makes financial predictions, then streaming in stock prices, financial reports, news data, etc is enough input and no body is necessary. An AI that isn't going to produce any physical action doesn't necessarily need to learn physical movements, etc.

Having said that, physical environment simulations are ahead of where AIs are right now so they're a good place to start.


But limiting an intelligence to simply making financial decisions is not the type of intelligence Kurzweil is talking about. What you are describing is much more limited than what "strong AI" is actually supposed to be.

You bring up an EXCELLENT point about robotics being more advanced than AI.


True, I think the discussion was about the broader AI, but Kurzweil is all about AGI.


RE: #1, even when we can simulate the entire brain, how will we know what topology the network should have and what the node nonlinearities should be? 100 billion nodes can be connected a few different ways. Parameter estimation of a dynamic nonlinear network of this size is the real limitation.
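To put rough numbers on the scale of that estimation problem (both counts below are order-of-magnitude textbook figures, not values from this thread):

```python
# Back-of-the-envelope scale of the parameter-estimation problem:
# ~1e11 neurons with ~1e4 synapses each gives ~1e15 connection weights
# to estimate, before even settling on a topology or nonlinearities.
neurons = 1e11        # ~100 billion neurons (order of magnitude)
synapses_each = 1e4   # ~10,000 synapses per neuron (order of magnitude)

parameters = neurons * synapses_each
print(f"{parameters:.0e}")   # prints 1e+15
```

Raw compute to run the network is one problem; identifying 10^15 coupled parameters from measurements is arguably the harder one.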


Great points.

> Consider if you could make perfect clones of yourself, with all of your memories and developments.

> Those clones would act exactly like you, but they still wouldn't be you.

I believe the argument is something along the lines of: if you replace a neuron with some sort of artificial neuron, are you still "you"? How about another 10? How about a handful here and a handful there over the course of a year? What if 10% of your neurons are now artificial, along with fake hips and knees and regenerated biological teeth, not to mention the tip of your finger that you accidentally sliced off?


By the same token, if you cloned me, and he was sitting next to me and I had to choose who would get shot, I'd choose him. He isn't me because I can't access his thoughts. I think identity is tied to physical and temporal location and is in continual flux, so a clone is never me--it never occupies the same points in space and time. The neuron replacement idea is tough, though. Maybe our idea of identity is wrong, artificial, or too limiting.


Of course, he'd definitely pick you to get shot!


True, but what I'm getting at is if we were the "same" we wouldn't care.


Ah, the old "different forms of equality" problem. You're thinking === (same object in memory) while I'm thinking == (identical copies).
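In Python terms, where `is` plays the role of `===` (same object in memory) and `==` checks for identical copies:

```python
# Reference identity vs structural equality: the distinction the
# clone debate turns on. The "person" here is an invented toy record.
import copy

you = {"name": "me", "memories": ["first bike", "graduation"]}
clone = copy.deepcopy(you)

print(you == clone)   # True: identical copies ("==")
print(you is clone)   # False: not the same object in memory ("===")
```

The clone passes every structural test you can throw at it, yet mutating one record never touches the other, which is roughly the "I can't access his thoughts" point above.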


Well said.


I wouldn't care, and I'd bet that neither would you.

Think of it this way - were you ever shocked by the teleporters in Star Trek? After all, this is precisely what they did: make a copy of someone, then dissolve the old copy.


Have you seen the less advanced version of those teleporters in the movie 'The Prestige'? Were you shocked by that?


If you put an old hard disk in a new computer, does it boot to the same desktop? If you say "That doesn't apply to people because...", then Ray counters that we'll find out if this is the case within our lifetimes.


Well, you'd have to remap all the sensory inputs (computers don't have that problem since they have standard interfaces). For instance, if you got put in a smaller body you'd keep reaching for things and coming up short; you'd have a different sense of balance; speech would be hard since mouth and vocal cords would be different shape and quality; lots of things like this. The brain is extremely adaptable and could theoretically learn to do this (esp with something like rehabilitation therapy), but trying to do it all at once would be a tall order.

Basically instead of a computer's fixed API, you are continually negotiating a new interface between brain, body, and world with every action you take. Keeping it updated isn't hard but starting from scratch takes lots of trial and error (just watch a baby grow and you'll see what I mean).


RE #4: He (the clone) may not be you, but he will surely think of himself as himself, and after the fork, as you go on living different lives, who's to say he's wrong?


I think of this similarly to cloning.

Anyone egocentric and arrogant enough to think "what the world really needs is another, identical copy of me" should be prohibited from doing so on principle.

For the case of AI/robotic duplicates, make that "what the world really needs is another, identical, immortal copy of me."


we will use the term AI instead of robot

I use the term "robot" because these things need a physical component, especially if they are going to be manufacturing other physical components that contain themselves. AI is attached to atoms no matter what. Atoms are expensive. As soon as expense is involved, economic scarcity is involved. As soon as scarcity is involved, competition is involved.

the only thing that matters to an AI is how "smart" it is - how efficient its algorithms are and how fast those algorithms run

What is this "algorithm" thing? A newborn baby, I strongly suspect, contains very few algorithms.

If an AI figures out an algorithm that allows for a more efficient thought process, what's to stop the AI from modifying itself to use that algorithm

Nothing. And once you have billions of robots [see above] modifying themselves, duplicating themselves, etc, you are going to start running low on resources. Once one of them learns to...instead of making itself "better" [in our opinion]...make itself competitive, the race to higher intelligence will end, as with humans.


A newborn baby, I strongly suspect, contains very few algorithms.

Few, but they are meta-meta-meta algorithms like "Try something. If it makes me happy, do it more. If not, do it less." Then it builds up more advanced rules and patterns to act on later.
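That "try something; if it makes me happy, do it more" loop is essentially a multi-armed bandit. A minimal epsilon-greedy sketch, with invented actions and payoffs:

```python
# Minimal "try something; if it makes me happy, do it more" loop:
# epsilon-greedy action selection over a few invented actions with
# invented (and here deterministic) happiness payoffs.
import random

random.seed(0)
actions = ["babble", "grab", "smile"]
payoff = {"babble": 0.2, "grab": 0.5, "smile": 0.9}  # hidden, invented
value = {a: 0.0 for a in actions}   # learned estimate of each payoff
pulls = {a: 0 for a in actions}

for _ in range(1000):
    if random.random() < 0.1:            # occasionally try something new
        a = random.choice(actions)
    else:                                # otherwise do what worked before
        a = max(actions, key=value.get)
    r = payoff[a]                        # how happy did that make me?
    pulls[a] += 1
    value[a] += (r - value[a]) / pulls[a]  # running average of happiness

print(max(actions, key=value.get))       # the loop settles on "smile"
```

Nothing in the loop knows about the payoffs in advance; the preference for "smile" is built up entirely from feedback, which is the meta-algorithm's whole trick.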

Incidentally, humans have much less "firmware" (like the kitten hating the dog) than other animals. That's why our young are vulnerable for so much longer, but also why we do so much more as a species - we learn our environment rather than being pre-programmed for it.

Having a kid and watching it grow is very instructive for anyone interested in the only successful experiment in creating human-level intelligence.


> A newborn baby, I strongly suspect, contains very few algorithms.

I would say "very few" is going way too far. There is essentially language recognition (Broca and Wernicke's areas), pattern recognition, and a large body of instincts and social skills (via pre-programmed behaviors and mirror neurons) built-in already.


But it contains millions at the cellular level.


I worked with that Matt Phillips guy. Glad to see he spent his Yahoo money on becoming immortal.




