Let’s start with an example of privacy issues that doesn’t involve AI. Here is how the ACLU describes the US government’s PRISM program:
- “PRISM is a warrantless wiretapping program that operates around the clock, vacuuming up emails, Facebook messages, Google chats, Skype calls, and the like. Government agents do not review all of the information in real-time — there’s simply too much of it. Instead, the communications are pooled together and stored in massive NSA, FBI, and CIA databases that can be searched through for years to come, using querying tools that allow the government to extract and examine huge amounts of private information.”
From just the above, two issues stand out as offensive: the warrantless collection and storage of private information, and the subsequent viewing of it by government agents. There is a third issue, concerning how the data is used, that we’ll return to later. For this argument, let’s sidestep the first issue simply because it’s a common factor with or without the use of AI.
There are legitimate privacy concerns around the use of AI. For example, China is exporting its AI camera technologies to monitor populations around the world. At this time, although AI is in the loop, these systems ultimately rely on humans (e.g. government agents) to review the footage and make decisions about the monitored populace. The videos are captured in public areas, however, so there isn’t much expectation of privacy to begin with. But as surveillance states expand their scope into private lives, AI will need to play a greater role in the decision-making process.
Current AI technologies are severely flawed. They suffer from bias introduced either by the humans who build the algorithms or by the data used to train the systems. Ultimately, though, AI will need to move past these traditional, deeply broken techniques toward systems that reason in a more human-like way, what’s known as “AGI”, or Artificial General Intelligence. As AGI technologies improve, fewer humans will need to view private information; that work can be offloaded entirely to machines. For this specific issue, AGI can serve to increase privacy by eliminating the need for humans to view your data.
Privacy issues and injustice are two sides of the same coin. The manner in which data is used can become a more grievous violation than the way in which it was obtained. Fundamentally, the main objection to other people viewing our private information is their biased judgement of us based on nothing but a sliver of data, stripped of context. All sensors are limited in the information they carry. Electronic sensors, even if more acute than human eyes and ears, are still limited in scope. Mechanically applying rigid rules to this limited data, without the context the sensors fail to capture, can lead to greater injustice stemming from a complete lack of empathy.
For example, consider red-light cameras. Cars are issued tickets whenever they cross the intersection line against a red light, regardless of the driver’s reason. The camera (and the human analyst who double-checks the footage) may not see that the driver was making way for a passing ambulance that then turned off in another direction. Or consider a funeral procession motorcade. If you’ve been in such a motorcade, you’ve likely experienced falling further behind the hearse as other traffic veers in and out of your procession regardless of your blinking hazard lights. (Can’t make them blink any brighter, people!) Pass through a red light after the hearse has gone through a green, and you’re guaranteed a ticket by that cold lens and its myopic human partner in bureaucracy. Had a cop been nearby in either of those scenarios, human compassion would have prevailed and likely assisted in your safe passage. Was either of the above drivers wrong in their actions, even though both actions violated the law? Any reasonable person aware of just a bit more of the bigger picture would not think so.
Ultimately, AI, like any other technology, has the potential to improve lives, reduce injustice and inequality, and unburden people. Alternatively, it can be used to reduce quality of life, increase injustice and inequality, and unfairly burden people. It’s all a matter of how people decide to use it. It’s never a matter of the technology itself.