Highlights of the IAPP Global Privacy Summit 2018

 
 

This is a round-up of the global IAPP Summit that takes place in Washington DC every year near Easter. It is a massive event, with over 3500 people moving across 4 floors plus an exhibition hall. There were more than 30 sessions (with many running concurrently), plus five keynotes. The sessions covered topics including blockchain, data scraping, GDPR compliance, data breach notification and response, Privacy Shield, Artificial Intelligence (AI), Smart Cities, Big Data and online reputation.

The Summit took place against the backdrop of another turbulent week in US politics including the Stormy Daniels interview and the student 'no guns in schools' march on the Capitol, as well as the Facebook/Cambridge Analytica revelations and Mark Zuckerberg's interview acknowledging that Facebook breached its users' trust.

The keynote speakers Monica Lewinsky, Simon Schama and Jon Ronson gave a fresh take on privacy from a personal, research and historical perspective respectively, and there was a call to arms from the very impressive European speakers, MEPs Birgit Sippel and Viviane Reding.

As Australian-based privacy professionals, we left the Summit with a real sense that this is the time of the privacy professional and that we need to be setting the agenda. GDPR is the game changer, but no one has all the answers; in fact, more questions were asked than answered. Trust, user rights and control, digital reputation and data ethics were common themes across the sessions.

Some highlights from the sessions we attended (which were very hard to choose between!) are set out below.

Perspectives on the Online Reputation Debate

Scraping the Web - The Ethics of Collecting Public Data

Vendor Risk 2.0 – Are You Sure You Know Where Your Data Is?

Artificial Intelligence:  Benefits and Harms of Always-Listening and Always-On

Can I Have a Do-Over?  The Top 10 Greatest PR #Fails in Data Breach Response

How Blockchain Will Transform Privacy and Identity

Conflicts between US Legal Demands for Data and Global Data Protection Laws

 

Perspectives on the Online Reputation Debate

This session tackled questions about whether our online reputation deserves protection and, if so, by what criteria, picking up the themes of Monica Lewinsky's very personal keynote. The Privacy Commissioner of Canada spoke about his office's recent consultation and research into the issue, undertaken to develop its position on the ability to control one's online reputation through new remedies such as de-indexing or take-down requests, in light of the right to be forgotten found in Article 17 of the GDPR.

So much of the public space in which we express ourselves is now defined by very few players, who determine what is visible online and act as information gatekeepers. Our online status depends on our online engagement, which is shaped by those players and the devices we use. Personal data online can be manipulated. But whose responsibility is it to protect freedom of speech or our privacy?

The starting point is that an individual can be harmed simply by their information being available online, because it is used for so many key decisions about them, even if it is out of date or inaccurate. This creates new risks of harm and ongoing negative consequences for individuals. Social media did not exist when defamation laws were adopted, so defamation remedies are not necessarily appropriate. The aim is therefore to develop mechanisms that give individuals some measure of control and strike a balance between privacy and freedom of expression. Canada's focus is on the principle of accuracy: ensuring that the most accessible version of one's history available in search results is accurate, up to date, complete and in the public interest, rather than changing the past. Its solution is not to delete everything. The primary focus should be on privacy education to prevent harm rather than on remediation, so that individuals can learn about and manage the risks, knowingly accept them, and be good online citizens.

However, after-the-fact solutions are also required where individuals make mistakes. This requires a balancing act between freedom of expression and the right to a good reputation, which should also be protected and is important to self-realisation and individuals' desire to express themselves. This leads to the following questions:

  • Who should decide how to arbitrate a balance between these various rights? Should it be the domain of private sector enterprises? 
  • What parties should be represented in any decision-making?  Should it include the original sources of the information?
  • While it may be ideal to have independent and fully funded agencies decide contested cases after search engines have made an initial decision, is it appropriate for search engines to make those initial decisions? Does this promote or hinder access to justice, and can it provide a practical and efficient means of resolving issues? 

According to the Canadian Privacy Commissioner, search engines are doing a reasonably good job in the EU. Less than half of take-down requests have been granted and, when reviewed by Data Protection Authorities, the take-down decisions have for the most part been upheld. While not perfect, he sees this as a better avenue than defamation and a pragmatic way to give some meaningful control to individuals. It will involve considerations such as whether the individual contributed to the information, or whether it is deceptive or inaccurate.

The Canadian proposal is therefore what it says is a modest development of the data quality obligation, in the form of a right to have an individual's data rectified or erased. 


Scraping the Web - The Ethics of Collecting Public Data

Data scraping and how to control it is becoming an increasingly hotly debated topic, particularly given the impact of the ability to combine and use large amounts of publicly available data for different purposes. 

With the push for open data and the development of smart cities, the debate about both the legal and ethical issues of scraping publicly available data is an important one to have. It's also another example of law and practice not keeping up with technology. This session sought to analyse the legal and technical issues and the public interest arguments, and to come up with some practical answers.

There are competing interests in public data. If data scraping is technically illegal, can it be ethically justified? Your view of what is acceptable data collection will also depend on who you are (researcher, journalist, law enforcement, recruiter, start-up, Cambridge Analytica, etc) and what the purpose of your data collection is – private or public interest?

The session looked at three recent US cases on the scope of the US Computer Fraud and Abuse Act (CFAA), which prohibits unauthorised access to computers: LinkedIn's case against startup hiQ, actions by Craigslist, and an appeal by David Nosal, a former employee of Korn Ferry International. The CFAA is currently the subject of a number of constitutional challenges by researchers and journalists on the grounds that, by making scraping the internet a crime, it has a chilling effect and offends the First Amendment. The work of David Fahrenthold was cited as an example. He writes for The Washington Post and has been actively investigating the Trump administration; he pieced together evidence from public sources to 'follow the money' in relation to Trump's claims about charitable donations, for which he won the Pulitzer Prize for National Reporting last year. The other side of the coin is activities such as government surveillance that scrapes the social media data of targeted communities of colour and vulnerable groups, demonstrating how different people are treated differently online through a practice known as 'weblining'.

While companies should be developing their own policies on acceptable uses of data, the challenge is how to audit those uses. Cambridge Analytica is a prime example: it violated Facebook's policy, but once the data has left the building it is hard to know what happens to it.

So what's the answer? Is data scraping effectively a data security breach? While the legal answer may be no, what is the policy and ethical position where the law doesn't reach? How can websites address these issues for themselves? Protecting privacy is not just about compliance: it is protected as much by friction as by law, and a set of rules and policies that match user expectations will protect users. So instead of simply declaring that no scraping is permitted and calling your lawyer, consider a combination of the following tools that were proposed:

  • be transparent and explicit about what data you are collecting and why;
  • use front-line user controls;
  • set more private default settings;
  • deploy defensive technical measures and strong APIs; and
  • adopt nuanced policies with explicit and transparent statements about limitations on data use.

The right mix will vary, because not all types of 'scrapers' should be treated in the same way. One of those defensive technical measures is sketched below.
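To make the 'defensive technical measures' point concrete, here is a minimal sketch of a per-client rate limiter of the kind a website or API might use to slow bulk scrapers without locking out ordinary users. The thresholds (and the choice of Python) are our own illustrative assumptions, not recommendations from the session.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60   # illustrative: look at the last minute of requests
    MAX_REQUESTS = 100    # illustrative: at most 100 requests per window

    _history = defaultdict(deque)  # client_id -> timestamps of recent requests

    def allow_request(client_id: str) -> bool:
        """Return True if this client is under the rate limit."""
        now = time.monotonic()
        window = _history[client_id]
        # Drop timestamps that have fallen outside the window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_REQUESTS:
            return False  # likely bulk scraping; throttle or challenge
        window.append(now)
        return True

In practice a site would pair this kind of friction with authentication, API keys and the transparent usage policies listed above.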


Vendor Risk 2.0 – Are You Sure You Know Where Your Data Is?

This was a panel discussion featuring Charlotte Young and Michelle Beistle.  On vendor risk, the panellists advocated for:

  • A risk-based, consistent and reasonable approach – i.e. there is no magic bullet;
  • Tackling the highest-risk data sets first, then working with data owners and key stakeholders to understand systems, safeguards and risks;
  • Developing a questionnaire for vendors (including a threshold test which would then open out to a full privacy and security assessment);
  • Ensuring that contractual terms deal with the whole data lifecycle through to expiration of the contract;
  • Making staff aware of the process and positioning supplier risk management as a commercial requirement rather than a “compliance” one, for maximum buy-in;
  • Not exempting law firms and consultancies from supplier vetting processes;
  • Installing an assurance layer through audit; and
  • Picking your battles – e.g. it may be sufficient to ask the vendor to maintain their ISO certification or to present an audit report by an auditor of their choosing.

As for responding to a supplier assessment as a vendor:

  • Responses should always be transparent and not profess to know answers that the organisation does not know;
  • Responses should be current and not cite outdated frameworks like “Safe Harbour”. Vendors should ensure that privacy policies and other promises are up to date;
  • Sales jargon and bluffing will always be seen for what they are;
  • Stand out from the competition with a simple and honest one-page summary of who the entity is, what they do, what data they collect and what they do with it; and
  • Make sure that any links work.

Artificial Intelligence:  Benefits and Harms of Always-Listening and Always-On

This was a panel discussion featuring Andrew Dale of dataxu, Vivek Narayanadas of Shopify and Pedro Pavon of Oracle. 

Some of the key comments from panellists were:

  • The notice-and-choice bargain does not work with artificial intelligence (AI)/machine learning.  Even if compliance obligations are met, from a philosophical standpoint the mechanism falls short;
  • In view of this, a social roadmap is needed, i.e. an ethical framework for what is OK and what is not.  Ethical frameworks should be set collaboratively, with input from the various functions of the company, and shared with customers;
  • GDPR Art 22 (automated decision making) raises the question: how do you tell someone what you’re doing with their data when you don’t yet know, because that will be determined by the algorithm?  What is reasonably foreseeable? 
  • Machine learning requires privacy by design, i.e. decision makers need to identify the potential harms that inputs could produce.

The panellists were then asked - as smart devices in the home proliferate, will they have a chilling effect on how people behave i.e. will people be “less free”?  The main risks identified included:

  • Everything you say can be picked up, recorded and used against you and therefore devices take away your personal space.  
  • The power of combined data sets to identify and discriminate.
  • The physical risks of a connected home – for example, what if your stove can be intercepted and activated without your control?

However, one of the complexities for policy makers is that attitudes vary. For example, one of the panellists imagined a world where his smart car could act as a therapist, having gathered enough information to be able to counsel him.

On the question of whether consumers have a right to know if they’re engaging with a bot rather than a person, panellists agreed that it is contextual – for example, in a sales context consumers would often be unlikely to care, but this would differ for online dating apps, life insurance products and other products with significant legal consequences for the individual.  One panellist advocated a “right to meaningful human interaction” – i.e. not just knowing the difference between a bot and a human, but being given the choice to speak with a human.


Can I Have a Do-Over?  The Top 10 Greatest PR #Fails in Data Breach Response

Tanya Forsheit of Frankfurt Kurnit Klein & Selz and Siobhan Gorman of Brunswick Group provided practical advice and lessons from the Top 10 most disastrous PR responses to massive data breaches in recent years, including Target (2013), Sony (2014), TalkTalk (2015), Ashley Madison (2015), OPM (2015) and Equifax (2017).

One key message is to avoid presenting to the world as not being in control of the situation.

The top 10 fails include:

  1. Saying too much too soon – only disclose information that is ironclad.
  2. Saying too little too late – you need to strike the appropriate middle ground, and your response should demonstrate you are on top of the situation.
  3. Social media missteps – only use approved corporate messaging; avoid missteps, as social media snowballs very quickly.
  4. Tone-deaf CEO – CEOs need to respond appropriately in the media. 
  5. Forcing affected individuals to waive rights.
  6. Overpromising – such as saying the system is completely secure when it isn’t.
  7. Insider trading.
  8. Careless internal communication without privilege.
  9. Minimising the impact – it isn’t helpful to keep changing the scope of the data breach event, e.g. revising the number of customers whose information has been breached.
  10. Vendors speaking for the company.

In summary, preparation is key to responding to data breach events.  Organisations need to be responsive – the challenge is striking a balance, ensuring what is said is responsive and accurate and does not make the situation more damaging for your organisation or its customers.


How Blockchain Will Transform Privacy and Identity

This session was presented by Stuart Levi of Skadden and Allison Clift-Jennings of Filament.  A quote from Roy Amara of the Stanford Research Institute set the scene:

       "We tend to over-estimate the impact of major technological breakthroughs in the short run and underestimate the impact in the long run."

During 2017, blockchain was adopted at breakneck speed across numerous industries, ranging from financial services to logistics management. In this presentation, the speakers explained blockchain and the significant impact the technology is expected to have on privacy and identity.

Blockchain technology uses a distributed ledger in place of the physical currency of a traditional marketplace exchange or a trusted third-party solution.  The problems with the trusted third-party solution include: security, as it relies on a single third party; transaction fees; and, in cross-border transactions, delays of hours or days to finalise transactions.

The distributed ledger system enables everyone to have a copy of the ledger and to see it in real time without the involvement of a trusted third party.  The key challenge is how to verify each transaction and ensure that each block in the chain is legitimate, in an environment where participants cannot fully trust each other.  This is done using Private Key/Public Key cryptography, so that as things are added to the network the identity of the sender can be authenticated.  The Public Key allows one to verify that the holder of the paired Private Key sent the message, and only the paired Private Key holder can decrypt a message encrypted with the Public Key.
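As a rough sketch of how this works in practice, here is signing and verification with Ed25519 keys using the Python cryptography library (the library and the example transaction are our illustrative choices, not something demonstrated in the session):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The sender keeps the Private Key secret and publishes the Public Key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    transaction = b"Alice pays Bob 5 units"
    signature = private_key.sign(transaction)

    # Anyone holding the Public Key can confirm that the paired Private
    # Key produced this signature, authenticating the sender.
    try:
        public_key.verify(signature, transaction)
        print("valid transaction")
    except InvalidSignature:
        print("forged or altered transaction")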

To add a new block to the chain, a message is sent to all the nodes on the network (the ‘miners’: computers on the network that work on a complex mathematical problem).  Once 51% of the nodes confirm that it is a valid transaction, the new block is added to the chain and all the ledgers are updated.  This is referred to as ‘Proof of Work and the Consensus Algorithm’.  Once the transaction is validated, the miner is financially rewarded and the process begins again.
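A toy version of the proof-of-work step looks like the following: the miner searches for a 'nonce' that gives the block's hash a required number of leading zeros, and any node can check the answer with a single hash. This is a deliberate simplification of ours; real networks use far harder targets.

    import hashlib

    DIFFICULTY = 4  # illustrative: hash must start with four zero hex digits

    def mine(block_data: str) -> tuple[int, str]:
        """Search for a nonce whose hash meets the difficulty target."""
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * DIFFICULTY):
                return nonce, digest  # proof of work found
            nonce += 1

    nonce, digest = mine("Alice pays Bob 5 units")
    print(nonce, digest)  # any node re-hashes once to verify the work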

Blockchain is secure and immutable because each block is transformed into a unique 64-character string of letters and numbers (its hash), which cannot be reverse-engineered, and even a small change to a block dramatically changes its hash.  The algorithm for adding a new block incorporates the hash of the prior block.  So if you go back and try to change a transaction from four blocks ago, you will change that block's hash, and every block after it will show an error.
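The tamper-evidence property is easy to demonstrate. In this minimal sketch of ours (which ignores mining), each block's hash incorporates the previous block's hash, so editing an old transaction invalidates every block after it:

    import hashlib

    def block_hash(prev_hash: str, data: str) -> str:
        # Each 64-character hash incorporates the previous block's hash.
        return hashlib.sha256(f"{prev_hash}{data}".encode()).hexdigest()

    chain, prev = [], "0" * 64  # "0" * 64 stands in for the genesis block
    for data in ["tx1", "tx2", "tx3", "tx4"]:
        prev = block_hash(prev, data)
        chain.append({"data": data, "hash": prev})

    def verify(chain) -> bool:
        prev = "0" * 64
        for block in chain:
            if block_hash(prev, block["data"]) != block["hash"]:
                return False  # every block after an edit fails this check
            prev = block["hash"]
        return True

    chain[0]["data"] = "tx1-tampered"  # rewrite history...
    print(verify(chain))               # ...and verification fails: False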

Issue of Identity

The issue of ‘identity’ may include the sovereignty of the individual, the sovereignty of the machine operating on behalf of the individual, authentication of the other party to the transaction, and whether they have the ability to do certain things – referred to as authorisation.  These concepts are important because if you want to store identity in a blockchain you need to consider what to include.  Whatever identity, or parts of identity, you want to use can go in the payload – e.g. real name, pseudonym, social security number – and it can be private, but still verifiable.  Cryptography enables you to verify without revealing your identity.  This is where the concept of digital sovereignty becomes important.  There are many start-ups trying to enable verified digital identity on a blockchain.  Digital identity can be used where people may not otherwise have an identity, for example in a developing or war-torn country where individuals do not have official identification documentation.

Presently we use numbers as unique identifiers for humans – e.g. passport, driver’s licence and social security numbers.  When you bring in cryptography, such as the Public and Private Keys, you no longer need to trust the numbers themselves: the Public and Private Keys give you the mathematical machinery to prove who you are and that you will do what you say you will do.  A good analogy is an email anyone can send to you, but only you can read: a Public Key allows anyone to send you something encrypted, and only you can decrypt it with your Private Key.
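The email analogy maps directly onto public-key encryption. A brief sketch, again using the Python cryptography library, with RSA-OAEP as our illustrative choice:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # You publish the Public Key; the Private Key never leaves you.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Anyone can 'send the email': encrypt with your Public Key...
    ciphertext = public_key.encrypt(b"for your eyes only", oaep)

    # ...but only you can 'read' it: decrypt with your Private Key.
    print(private_key.decrypt(ciphertext, oaep))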

The right question is: how do we rethink identity given mathematically secure verification and the speed of modern computers?

Currently there is not much privacy on a blockchain, because first generation blockchains require you to see the entire ledger history to verify that none of the hashes have been changed.  Transactions will always be public on first generation blockchains such as bitcoin.

Some of the new and emerging technologies that enhance personal privacy are:

  • Zero Knowledge Proofs – these allow one party to reveal enough about a transaction, but not the whole transaction, for the other party to verify it.  An example is Zcash (similar to bitcoin, but anonymous).  This is likely to be the way forward, moving away from the publicly viewable transactions of first generation blockchains.
  • Next generation consensus algorithms and machine anonymity – this is about enabling machines to transact with each other.  Machines that can add to or change the ledger can remove friction from enterprise.  Machines have the same issues around identity and privacy as people do, because they operate on behalf of their owners, and they will need anonymity and privacy in the same way as humans.  Through cryptography, a group of 100 machines can share the same Public Key while each holds its own Private Key, so a machine can prove membership of the group without being individually identified.  There is considerable research and technology development in this area, such as Intel’s EPID (Enhanced Privacy ID).

If a blockchain is immutable, what does that mean for GDPR compliance?

Early blockchain technology is the dial-up era of blockchain and is not very compatible with the GDPR.  While the right to be forgotten cannot be implemented on first generation blockchains, it will be able to be implemented on the next generation.  One practical approach is to include an expiration date on the transaction, akin to a statute of limitations period.
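By way of illustration only (this is our own construction, not a design presented at the session), an expiration date could sit in each transaction's payload, with reads filtering out anything past its date so that expired personal data is no longer served:

    from datetime import datetime, timedelta, timezone

    now = datetime.now(timezone.utc)

    # Hypothetical ledger entries whose payloads carry their own expiry dates.
    ledger = [
        {"tx": "tx1", "personal_data": "jane@example.com",
         "expires": now + timedelta(days=365 * 7)},   # still within retention
        {"tx": "tx2", "personal_data": "old subscriber record",
         "expires": now - timedelta(days=1)},         # retention period lapsed
    ]

    def readable(ledger):
        """Serve only entries whose retention period has not expired."""
        current = datetime.now(timezone.utc)
        return [entry for entry in ledger if entry["expires"] > current]

    print(readable(ledger))  # tx2's personal data is no longer served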

In summary, developments in blockchain technology will enable people to control their own data and determine who gets to use that data, how much of it and for what purpose. This paradigm shift in personal privacy, data monetisation, trans-border data flow and government access to personal information will significantly influence the future of transactions, many of which are already being fully and securely verified without knowing the identity of the user.


Conflicts between US Legal Demands for Data and Global Data Protection Laws

This session highlighted the growing complexity for organisations operating across borders, arising from the growth in data localisation laws and regulations.  The presenters were Walter Delacruz of Deutsche Bank, Brian Hengesbaugh of Baker & McKenzie and Hugo Teufel III of Raytheon.

While the session was presented from a US perspective, the issues are similar for Australian and New Zealand companies operating across borders.  The session explored differing legal demands for global data, and global data protection and other legal restrictions on data transfer and disclosure.  The different types of legal demands, and the types of data restrictions that need to be taken into account when responding, are set out below for the five types of demands identified.

1. Cross-border internal investigation

Legal demands include anti-bribery, financial fraud, Code of Conduct Violation, and other compliance issues.

Key Data laws include Data protection, telecoms privacy, labour laws, anti-investigatory/blocking statutes (maybe), and professional secrecy.

Elements to address include privacy solutions, key facts (location of data, controller, restrictions), collection and production protocols, negotiation with authorities and protective orders.

Indicators of data law risk include aggressive data protection officers or works councils, high profile company, media attention, actual wrongdoing and disputes.

2. Cross-border penetration testing and security monitoring

Legal Demands include security requirements due to industry-specific regulation, public company requirements, data protection laws and contractual duties.

Key Data laws include Data protection, telecoms privacy, labour laws, computer misuse, data localisation and blocking statutes.

Elements to address include privacy solutions, focused and locally-approved exercises, protocols on data use and local segmentation.

Indicators of data law risk include aggressive data protection officers or works councils, use of data for discipline, high profile company and media attention.

3. Cross-border third-party due diligence and background screening

Legal Demands include third party due diligence and background screening as needed to address EAR/ITAR, OFAC, AML or other requirements.

Key Data laws include Data protection, labour laws, blocking statutes.

Elements to address include privacy solutions, protocols on data collection and use, local segmentation.

Indicators of data law risk include aggressive data protection officers or works councils, use of data for discipline, high profile company, actual disqualification of third parties.
 

4. Cross-border regulatory demands

Legal Demands include industry specific regulatory requirements due to oversight or investigatory powers (typically home office, but can be local).

Key Data laws include Data protection, telecoms privacy, labour laws, anti-investigatory/blocking statutes, professional secrecy.

Elements to address include privacy solutions, key facts (location of data, controller, restrictions), protocols on data collection and production, negotiation with authorities and protective orders.

Indicators of data law risk include aggressive data protection officers or works councils, use of data for discipline, high profile company, and actual disqualification of third parties.
 

5. Cross-border national security or law enforcement demands

Legal Demands include law enforcement or national security demands.

Key Data laws include Data protection, telecoms privacy, anti-investigatory/blocking statutes, labour laws, professional secrecy and data localisation.

Elements to address include privacy solutions, key facts (location of data, controller, restrictions), protocols on data collection and production, negotiation with authorities and protective orders and litigation/defence.

Indicators of data law risk include third-party data, high profile company, media attention, frequency and expansiveness of demands.

In summary this session highlighted:

  • Companies are being caught between various regulations, data localization laws, and law enforcement.
  • Data laws are proliferating.
  • Legal Demands for global data are increasing.
  • The digital age is accelerating cross-border integration.
  • Addressing conflicts requires a multi-layered approach that:
      - reduces data law risk;
      - manages legal demands; and
      - importantly, handles remaining conflicts in a consistent manner.


Veronica Scott, Vice-President of iappANZ, Special Counsel, Minter Ellison

Melanie Marks, President of iappANZ, Advisory Board Member Information Governance ANZ, Principal, elevenM

Susan Bennett, Co-Founder Information Governance ANZ, Principal, Sibenco Legal & Advisory