
Conscious robots: a product brief

If HTM (Hierarchical Temporal Memory) theory describes how brains work, could you build a conscious machine? Well, first you’d need a theory of consciousness. David Chalmers divides consciousness theory into “easy” and “hard” problems. “Easy” problems involve cognitive capabilities like attention, autonomy, and being awake. To explain these, you “need only specify a mechanism that can perform the function.” Easy peasy.

The “hard” problem is explaining why we have subjective experience (feelings like pain, emotions, and wonder). Chalmers argues that theories ignoring the hard problem are cop-outs.

But if I wrote a product brief for a conscious machine, the first question I’d ask is: who needs consciousness? That’s because I don’t see HTM leading to sentient machines, but I do think HTM could solve problems people assume require consciousness.

For example, conscious machines might excel at autonomous problem solving. But you could also imagine programming autonomy without sentience. Army ants aren’t conscious, but they’re autonomous enough to ruin your safari.

Maybe sentient robots would be creative. But you can talk about creative HTM networks without introducing consciousness.

Or maybe conscious robots would seem more “human.” But if HTM leads us to creative, autonomous robots, our instinct to anthropomorphize could make clever UI tricks eerily effective. For starters, replace the vacant stare of sci-fi robots with eyes that track speakers and have simulated saccades.
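The speaker-tracking trick is simple enough to sketch. Here’s a toy Python illustration (every name in it is hypothetical, and it has nothing to do with HTM itself): a large gaze error triggers a fast ballistic jump with slight undershoot, while small errors get random micro-saccade jitter, so the eyes never sit dead still.

```python
import random

# Toy sketch of saccade-like gaze control. All names here (GazeController,
# its parameters) are made up for illustration; this is the UI-trick layer,
# not an HTM or robotics API.

class GazeController:
    def __init__(self, fixation_jitter=0.5, saccade_threshold=5.0):
        self.angle = 0.0                      # current gaze angle, in degrees
        self.fixation_jitter = fixation_jitter
        self.saccade_threshold = saccade_threshold

    def update(self, target_angle):
        """Shift gaze toward the speaker's direction.

        A large error triggers a ballistic saccade (a near-instant jump
        with slight undershoot, roughly how human eyes behave); a small
        error produces micro-saccade jitter so the eyes never look frozen.
        """
        error = target_angle - self.angle
        if abs(error) > self.saccade_threshold:
            self.angle += 0.9 * error         # jump ~90% of the way
        else:
            self.angle = target_angle + random.uniform(
                -self.fixation_jitter, self.fixation_jitter)
        return self.angle

if __name__ == "__main__":
    gaze = GazeController()
    # A speaker at 0 degrees moves to 30; the gaze saccades after them.
    for target in [0, 0, 30, 30, 30]:
        print(f"target={target:5.1f}  gaze={gaze.update(target):6.2f}")
```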

In other words, if C-3PO brings you vodka tonics and shoots Imperial Stormtroopers, why upgrade to the sentient model? Could you even tell the difference? (Philosophers call these “zombies”: not the flesh-eating undead, but indistinguishable imitations of sentient beings.)

On the other hand, if consciousness could provide benefits beyond advanced intelligence, what are they? If you had a “conscious” Roomba with the intelligence of a pigeon, it might just be annoying.

Don’t get me wrong. Sentient robots would be fascinating. And maybe they would be better (say, if the recursive self-observation required for self-awareness lets sentient machines solve harder problems). I’m just suggesting that HTM may not be a path to “hard-problem” consciousness, but that an “easy-problem” level of intelligence would enable a pretty kick-ass robot.

Thanks: Rob Haitani
