Personalised services have become a major challenge in the IT sector, as they require actors to improve both the quality of the data they collect and their ability to use it. Many services are running the innovation race, notably those related to corporate information systems, government systems, e-commerce, access to knowledge, health, energy management, leisure and entertainment. The goal is to offer end-users the best possible quality of experience, which in practice means assessing the relevance of the information provided and continuously adapting services to consumers’ uses and preferences.
Personalised services offer many perks, including targeted recommendations based on interests, events, news, and special offers for local services or goods, movies, books, and so on. Search engines return results that are usually personalised based on a user’s profile, and actually start personalising as soon as a keyword is entered, by identifying its semantics. For instance, the noun ‘mouse’ may refer to a small rodent if you’re a vet, a stay mouse if you’re a sailor, or a device that moves the cursor on a computer screen if you’re an Internet user. Mobile phone applications in particular use personalisation; health and wellness apps (e.g. the FitBit and Vivosport trackers) can come in very handy, as they offer tips to improve one’s lifestyle, help users receive medical care remotely, or warn them about any possible health issue they detect as being related to a known illness.
How is personalisation technologically carried out?
When surfing the Internet and using mobile phone services or apps, users are required to authenticate. Authentication allows providers to connect their digital identity with the personal data that is saved and collected from exchanges. Some software packages also include trackers, such as cookies, which are exchanged between a browser and a service provider (or even a third party) and make it possible to track individuals. Once an activity is linked to a given individual, a provider can easily fill in their profile with personal data, e.g. preferences and interests, and run efficient algorithms, often based on artificial intelligence (AI), to provide them with information, a service or targeted content. More rarely, personalisation may rely solely on the situation a user is in: the simple fact that they are geolocated in a certain place can trigger an ad or targeted content to be sent to them.
What risks may arise from enhanced personalisation?
Enhanced personalisation creates risks, for users first and foremost. Based on geolocation data alone, a third party may determine that a user visits a specialised medical centre to be treated for cancer, or that they often spend time at a legal advice centre, a place of worship or a political party’s local headquarters. If such personal data were sold on a marketplace and thus made accessible to insurers, credit institutions, employers and landlords, its use could breach users’ privacy and freedom of movement. And this is just one kind of data. What if it were cross-referenced with a user’s pictures, Internet clicks, credit card purchases and heart rate? What further behavioural conclusions could be drawn? How could those be used?
One example that comes to mind is price discrimination, i.e. charging different prices for the same product or service to different customers according to their location or social group.
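As the footnote below explains, price discrimination rests on a statistical model that predicts each user’s willingness to pay. The following toy sketch shows the principle; the features, weights and threshold values are entirely invented for illustration, not taken from any real pricing system.

```python
# Toy illustration of personalised pricing (all features and weights are
# invented for this example): a provider predicts a willingness-to-pay
# multiplier from profile attributes and quotes a per-user price.

BASE_PRICE = 100.0

def predicted_willingness(profile: dict) -> float:
    """Hypothetical linear scoring: premium-device owners and users in
    affluent areas are predicted to tolerate higher prices."""
    score = 1.0
    if profile.get("device") == "high-end":
        score += 0.2
    if profile.get("area_income") == "high":
        score += 0.15
    return score

def personalised_price(profile: dict) -> float:
    """Same good, different price, depending on the profile."""
    return round(BASE_PRICE * predicted_willingness(profile), 2)

print(personalised_price({"device": "high-end", "area_income": "high"}))  # → 135.0
print(personalised_price({"device": "budget"}))                           # → 100.0
```

Two users buying the same product are quoted different prices purely because of what the provider has inferred about them, which is exactly the discrimination risk at stake.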
Democracies can also suffer from personalisation, as the Cambridge Analytica scandal has shown. In April 2018, Facebook acknowledged that the personal data of millions of its users had been harvested and used for targeted political messaging aimed at influencing U.S. citizens’ votes in the 2016 election.
Responsible vs. resigned consumers
As pointed out in a survey carried out by the Chair Values and Policies of Personal Information (CVPIP) with French audience measurement company Médiamétrie, some users and consumers have adopted data protection strategies, in particular by using software that prevents tracking or enables anonymous online browsing. Yet this requires effort on their part. Depending on their purpose, they choose either a personalised service or a generic one, so as to keep a certain control over their informational profile.
What if technology could solve the complex equation opposing personalised services and privacy?
Based on this observation, the Chair’s research team carried out a scientific study on Privacy Enhancing Technologies (PETs). In this study, we list the technologies best able to meet the needs of personalised services, give technical details about them and analyse them comparatively. As a result, we suggest classifying these solutions into eight families, which are themselves grouped into the following three categories:
- User-oriented solutions. Users manage the protection of their identity themselves by downloading software that allows them to control outgoing personal data. Protection solutions include attribute disclosure minimisation and noise addition, privacy-preserving certification, and secure multiparty computation (i.e. computation distributed among several independent collaborators).
- Server-oriented solutions. Any server we use is by nature strongly involved in personal data processing. Several protection approaches therefore focus on servers, which can anonymise databases before sharing or selling data, run heavy computations on encrypted data at the customer’s request, implement automatic data self-destruction after a certain amount of time, or deploy Private Information Retrieval (PIR) solutions, i.e. non-intrusive search tools that confidentially return relevant content to customers.
- Channel-oriented solutions. What matters here is the quality of the communication channel that connects users with servers, be it intermediated and/or encrypted, and the quality of the exchanged data, which may be deliberately degraded. There are two approaches to such solutions: securing communications, and using trusted third parties as intermediaries in a communication.
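To make the first category concrete, here is a sketch of one user-oriented technique from the list above, noise addition, in its simplest classical form: randomised response, a basic local-differential-privacy mechanism. Each user perturbs their answer locally before sending it, so the server never learns any individual’s true answer, yet can still estimate the population-level rate. (This is an illustrative sketch of the general technique, not code from the study.)

```python
import random

def randomised_response(true_answer: bool) -> bool:
    """With probability 1/2 report the truth, otherwise report a fair
    random bit. The server cannot tell which case occurred."""
    if random.random() < 0.5:
        return true_answer
    return random.random() < 0.5

def estimate_rate(reports: list[bool]) -> float:
    """Invert the noise on the server side:
    E[observed] = 0.25 + 0.5 * true_rate, so
    true_rate = (observed - 0.25) / 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - 0.25) / 0.5

random.seed(0)
# Simulate 100,000 users, 30% of whom would truthfully answer "yes".
truth = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomised_response(t) for t in truth]
print(round(estimate_rate(reports), 2))  # close to 0.3
```

Each individual report is plausibly deniable, yet the aggregate statistic the service needs for personalisation or analytics remains accurate, which is the trade-off most PETs aim for.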
Some PETs are strictly in line with the ‘data protection by design’ concept, as they implement data disclosure minimisation or data anonymisation as required by Article 25(1) of the General Data Protection Regulation (GDPR): data and privacy protection measures should be implemented at the earliest possible stages of designing and developing IT solutions.
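Attribute disclosure minimisation, mentioned among the user-oriented families, is the most direct embodiment of this principle. The sketch below (with invented field names, and without the cryptographic proof that a real privacy-preserving certification scheme would add) shows the design choice: a service that only needs to verify majority receives a single derived boolean, not the full identity record.

```python
from datetime import date

# Hypothetical full identity record held on the user's side.
FULL_RECORD = {
    "name": "Alice",
    "birth_date": date(1990, 5, 1),
    "address": "12 rue X, Paris",
}

def minimised_disclosure(record: dict, today: date) -> dict:
    """Release only what the stated purpose requires: an 'is_adult'
    flag derived locally, instead of the birth date itself."""
    age_years = (today - record["birth_date"]).days // 365
    return {"is_adult": age_years >= 18}

print(minimised_disclosure(FULL_RECORD, date(2019, 4, 11)))
# → {'is_adult': True}
```

The service can fulfil its purpose (age verification) while the name, exact birth date and address never leave the user’s device, which is data minimisation applied at the design stage rather than bolted on afterwards.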
Our state of the art shows that using PETs raises many issues. Through a cross-cutting analysis linking CVPIP specialists’ different fields of expertise, we were able to identify several of these challenges:
- Using AI to better integrate privacy into personalised services;
- Improving the performance of existing solutions by adapting them to the limited capacities of mobile personalised services;
- Looking for the best economic trade-off between privacy, use of personal data and user experience;
- Determining how much it would cost industry players to include PETs in their solutions, in terms of development, business model and adjustments to their Privacy Impact Assessment (PIA);
- PETs seen as a way of bypassing, or conversely enforcing, legislation.
On 11 April 2019, the CVPIP team will hold a meeting on the specific features of personalised services, the balance to be found between using PETs and personalising services, and the challenges related to the industrial appropriation of such technologies.
The programme and registration details will be available soon.
An e-book for non-specialists summarising the main technologies will be introduced at this meeting. After talks by our guests on the technical stakes of implementing data protection at the design stage, as provided for by the GDPR, a round table discussion will confront views on what still needs to be done, after which the audience will be invited to speak.
Maryline Laurent, Professor of Computer Sciences at Télécom SudParis, co-host of the Chair
Nesrine Kaâniche, Research Engineer in Information and Communication Security at Télécom SudParis
 Personal data allows companies to build statistical models and predict the maximum price each user is willing to pay for the same good. Therefore, they can offer different prices or a wider range of products in order to make the maximum revenue out of each consumer.
Study by CVPIP and Médiamétrie, « Données personnelles et confiance : quelles stratégies pour les citoyens-consommateurs en 2017 ? » (Personal data and trust: which strategies can citizens/consumers adopt in 2017?), June 2017 (in French only).
 Laurent, M., Kaâniche, N. (March 2016), ”Identity or Attribute Credentials that Preserve Pseudonymity”, in Digital Identities, Handbook #1 by the Chair Values and Policies of Personal Information, https://partage.mines-telecom.fr/index.php/s/xXqDYWgANu5MwSQ
 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), EUOJ, L 119, 4.5.2016, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=EN, of which Article 25 states: “Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.”