Hacker News
Create an AI on Your Computer (singularityhub.com)
24 points by kkleiner on May 29, 2009 | hide | past | favorite | 8 comments


"Problems in chemistry, biology, physics, economics, engineering, and astronomy, even questions of philosophy could all be helped by the application of an advanced AI."

One of the indications of maturity in "artificial intelligence" research is the emergence of more bounded and tractable sub-fields devoted to specific A.I. problems: machine learning, computer vision, robotics, etc. The tangible success of these research areas makes me wonder if simulating human brains is the most useful approach. Which reminds me of this Tom Mitchell quote:

"My late colleague Herb Simon used to talk about how aircraft are artificial birds. In many ways aircraft exceed birds in their ability to fly, but in many other ways they do not. The path to recreating human intelligence may deliver a similar outcome. Just as birds and aircraft are similar but different, we may create artificial intelligence in the future that mimics human brains, yet also differs greatly in its implementation and capability in a variety of arenas."

http://singularityhub.com/2009/04/24/devices-that-read-peopl...

Is simulating a human brain the shortest path to solving "problems in chemistry, biology, physics, economics, engineering, and astronomy?" Machine learning techniques that are very different from what a human brain does, working in concert with the brains of human researchers, might turn out to be the most productive path.


You're absolutely right: simulating a human brain is the shortest path to solving problems that relate... to the human brain. No other guarantees exist.

It's going to be difficult to make a computer architecture that approaches the parallel ability of the human brain.


Definitely, especially as it's such an 'analogue' problem; maybe digital computers aren't the right tool for this.


I think efforts along these lines are hopelessly misplaced; all it's going to produce is a simulation of an autistic brain. That's not without scientific value, but it's not going to cross a threshold and say 'hi there!'.

When reading proposals for creating AI, few researchers discuss (at least publicly) the idea of simulating hunger or pain. How are we going to create intelligence without simulating qualitative experiences which give rise to the basic drives of avoidance and desire, which intelligence helps to fulfill?

At the risk of stating the obvious, a cat brain is about 3 orders of magnitude smaller than a human's, but most will agree that a cat has personality (maybe mice do too, but I've never kept one). What we really want to create with AI is something that has enough personality for us to form a relationship with it, without requiring us to pre-abstract our communication. So we would be far better off building something about as capable as (and no more dangerous than) a cat, and building 10 of them, which we can then play with and try to train. We can probably dump a whole lot of brainspace since we don't need to simulate the full complexity of an endocrine system etc.; but we do need our ACats to have enough autonomy to get themselves into trouble. We are never going to develop a machine that thinks if we don't give it something to think about, and it will never have anything worth thinking about if it doesn't have a sense of self, and it won't develop that without having to make decisions which feed back into its qualitative experience.

I am not an adherent of Searle's Chinese room argument, but on the other hand asking the guy in the room whether he prefers noodles or rice is devoid of meaning if he subsists on an endless supply of burgers and fries; all you get in that case is an expert system which parses Chinese but has nothing of note to say in that language.


When you refer to a machine needing a 'sense of self' it sounds as if you are referring to consciousness. To an extent you could maybe say even a thermostat has a limited form of consciousness, in that it holds a belief about the temperature a room should be at and is able to effect this through its own output, functioning as a cybernetic system.
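The thermostat-as-cybernetic-system idea above can be sketched in a few lines: the device holds a "belief" (its setpoint), senses the world, and acts to close the gap between the two. This is a minimal illustrative sketch; the function names and the gain/drift numbers are hypothetical, not from the article.

```python
def thermostat_step(current_temp, setpoint, heater_gain=0.5, drift=-0.2):
    """One control cycle: switch the heater on if the room is too cold.

    The room cools a little each step (drift); the heater warms it
    when on. The output feeds back into the next sensed temperature,
    which is what makes this a closed feedback loop.
    """
    heater_on = current_temp < setpoint
    next_temp = current_temp + drift + (heater_gain if heater_on else 0.0)
    return next_temp, heater_on

def run(setpoint=20.0, start_temp=15.0, steps=50):
    """Iterate the loop; the temperature settles near the setpoint."""
    temp = start_temp
    for _ in range(steps):
        temp, _ = thermostat_step(temp, setpoint)
    return temp

print(round(run(), 2))
```

Whether holding a setpoint and acting on it counts as a "belief" is, of course, exactly the philosophical question under discussion.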

I think you could certainly give such a system the impression of having feelings by implementing some kind of stimulus and reward process, though I think that needs to come second to appearing intelligent (albeit in a rather emotionless fashion). Still, it would make the UI much more friendly.
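A stimulus-and-reward process of the kind mentioned above could be as simple as a running value estimate per stimulus, nudged toward each observed reward. This is a hedged sketch under assumed names and numbers (the stimuli, rewards, and learning rate are all illustrative), not a proposal from the article.

```python
def update_value(value, reward, learning_rate=0.1):
    """Move the stored value a small step toward the observed reward."""
    return value + learning_rate * (reward - value)

# The agent starts indifferent to both stimuli.
values = {"petting": 0.0, "loud_noise": 0.0}

# Repeated experience: petting is rewarding, loud noises are aversive.
for _ in range(30):
    values["petting"] = update_value(values["petting"], reward=1.0)
    values["loud_noise"] = update_value(values["loud_noise"], reward=-1.0)

# After training, the learned values diverge: the agent now behaves
# as if it "prefers" petting to loud noises.
print(values["petting"] > values["loud_noise"])
```

From the outside this looks like desire and avoidance; whether it amounts to a qualitative experience is the question the parent comment raises.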

I like the idea of trying to mimic the human brain as a path to AI, as it would give the system the ability to theoretically cope with a massive set of problems.


Thou shalt not make a machine in the likeness of a man's mind.


Come on, Butlerian Jihad? Please tell me that was a humorous quip and not a real opinion ;)


It's interesting. I have a couple of spare machines lying dormant. But I'm kind of thrown off by the commercial intentions. It was only a brief mention but I need to read more about it before jumping in.



