Differential Privacy: Analyzing Sensitive Data and Implications
Saturday, 14 February 2015: 10:00 AM-11:30 AM
Room LL21C (San Jose Convention Center)
To realize the full potential of big data for societal benefit, we must also find solutions to the privacy problems raised by the collection, analysis, and sharing of vast amounts of data about people. As discussed in the 2014 AAAS Annual Meeting session "Re-Identification Risk of De-Identified Data Sets in the Era of Big Data," the traditional approach of anonymizing data by removing identifiers does not provide adequate privacy protection, since it is often possible to re-identify individuals using the seemingly innocuous data that remains in the dataset together with auxiliary information known to an attacker and/or present in publicly available datasets. Differential privacy offers the possibility of avoiding such vulnerabilities. It provides a mathematically rigorous formalization of the requirement that a data-sharing or analysis system should not leak individual-specific information, regardless of what auxiliary information is available to an attacker. A rich body of work over the past decade has shown that a wide variety of common data analysis tasks are compatible with the strong protections of differential privacy, and a number of promising efforts are underway to bring these methods to practice. In addition, differential privacy has turned out to have powerful implications for questions outside of privacy, in areas such as economics and statistics. This symposium will discuss these facets of differential privacy.
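For reference, the formalization mentioned above is usually stated as the standard ε-differential-privacy definition from the literature: a randomized algorithm M satisfies ε-differential privacy if, for every pair of datasets D and D′ differing in a single individual's record and for every set S of possible outputs,

    Pr[M(D) ∈ S] ≤ exp(ε) · Pr[M(D′) ∈ S],

so that the presence or absence of any one individual's data can change the distribution of results only by a factor of about exp(ε), no matter what auxiliary information an attacker holds.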
Organizer:
Salil Vadhan, Harvard University
Co-Organizer:
Cynthia Dwork, Microsoft Research, Silicon Valley
Speakers: