I’m Sorry, Dave…


or, what are Turing tests trying to tell us?

On Monday, I learned that a CAPTCHA implementation used by at least one blog hosting company (to prevent comment spam) was beaten — thanks, Scott.

That same day, I also learned of a proposed anti-spam system that makes heavy use of CAPTCHAs built from 3-D images: applying different lighting effects to a single 3-D model yields a very large number of images that are all recognizably the same object to a human, yet look like very different images to a machine.
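The relighting trick can be sketched with a toy Lambertian shader. Everything here is an illustrative assumption rather than the proposed system: the "model" is just a handful of surface normals, and each new light direction produces a numerically different rendering of the same object.

```python
import math
import random

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def render(normals, light):
    """Lambertian shading: the brightness of each surface patch is the
    (clamped) dot product of its normal with the light direction."""
    lx, ly, lz = normalize(light)
    return [max(0.0, nx * lx + ny * ly + nz * lz)
            for (nx, ny, nz) in normals]

# A toy "3-D model": surface normals of a few patches (hypothetical data).
model = [normalize((math.cos(a), math.sin(a), 1.0))
         for a in (i * math.pi / 4 for i in range(8))]

# Relighting the same model from random directions produces many
# distinct "images" of one object.
random.seed(0)
renders = [render(model, (random.uniform(-1, 1),
                          random.uniform(-1, 1),
                          random.uniform(0.1, 1)))
           for _ in range(5)]
```

A human looking at the renders sees one shape under changing light; a naive pixel-matching bot sees five unrelated arrays, which is the asymmetry the scheme exploits.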

As CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart,” these posts got me thinking about Turing tests in a more general way. When Turing wrote “Computing Machinery and Intelligence” in 1950, it was as a framework for considering what “intelligence” is, whether machines can be intelligent, and if so, whether an intelligent machine necessarily has to “think” in the same way that a human being thinks. An interesting area to explore, to say the least.

That said, I just can’t get past the fact that in 2005 the most common use of Turing tests is not to determine whether or how machines might be intelligent, but rather to force people to prove that they’re human. Our culture, and our technology, appear to have moved in a direction very different from that which Turing envisioned…