Smart Cameras Don’t Kill People

My field of research being what it is, this article on Boing Boing caught my eye as I scrolled through my feeds just now.

You’d think I’d have some huge conflict of interest working on distributed smart cameras, seeing as surveillance is currently their most prevalent application and considering that I am a huge civil liberties nut. I wouldn’t say that privacy is currently a major concern for me personally, as I’m quite open about most things, but I do believe that the option of privacy is imperative (Wikileaks is a perfect case in point). The EFF is one of my six charities, and I regularly use Tor and GnuPG.

I’m also a huge fan of Cory Doctorow’s, so it was decidedly uncool to see his knee-jerk reaction to the technology itself rather than to the questionable circumstances of its deployment. Specifically, he talks about the “fallacy” of trusting these devices to “catch all the bad guys” without human intervention. Even if we ignore the fact that the excerpt itself says possible positives are passed on for further human inspection, and even if we imagine some horrific future system that automatically dispatches RoboCop to immediately arrest any disruptive persons, the fallacy here lies in attacking the technology itself based on disagreement with one implementation of one of its applications.

Don’t get me wrong. Taken with my recommended grain of anti-anthropocentric salt, some technologies are inherently evil (read: counter to human values). Weapons designed to kill people, for example. So, being no fan of Big Brother, I understand the problem, and I’m not trying to say anything to the effect of “guns don’t kill people…” here.

My point is that distributed smart cameras, and even human motion analysis, have plenty of applications that Cory and I don’t have beef with, so can we go back to picking on the bad people who want to control our thoughts?

May 15th, 2008