
Tracking 'intimate traits and future behavior'

Ever larger digital footprints and ever stronger algorithms reveal our most intimate personality traits. Privacy is lost, Michal Kosinski told DW at the Global Media Forum, and we had better get ready to deal with that.

Dr. Michal Kosinski (Assistant Professor of Organizational Behavior, Stanford Graduate School of Business, USA) being interviewed by DW editor Matthias von Hein (Image: DW/K. Danetzki)

DW: People who use "free" online platforms are either unaware that they themselves are the product or unbothered by that fact. Is there anything we can do to ensure that the benefits of big data are reaped by society rather than just a handful of Silicon Valley tech tycoons?

Michal Kosinski: One of the major roles of our scientific research is to detect potential risks to privacy. And one thing has to be stressed: Whether we run this research or not is not going to stop companies from invading people's privacy. I am sure the models we are developing are nothing new to big companies. I know for a fact that both governmental and commercial institutions are working hard on building models that try to reveal intimate traits and predict people's future behavior from digital footprints. I think the public and policymakers deserve to know what is possible. So I am working hard to show the public and policymakers what those algorithms are presumably being used to do.

Cambridge Analytica CEO Alexander Nix has claimed that the company holds data sets on 230 million people in the United States, with 4,000 data points each. How accurate could predictions about the personality traits of those 230 million people be?

The research that other scientists and I are doing in this area shows that this is enough data to make very accurate predictions about intimate traits and future behavior. I don't know what data Cambridge Analytica is working with. In fact, now that they have gotten into trouble and are under investigation, they claim that they don't really have much data and never used any models, which directly contradicts what they were saying before. So they are either lying now or they were lying before. But whether they have done it or not, whether they managed to predict future behavior accurately, does not matter. What matters is: It is possible! And there may be other companies, now or in the future, that will do it, perhaps without telling anyone or boasting that they have done it.

Does this open the door to endless manipulation?

It is an open question where to put the boundary between acceptable and unacceptable uses of such algorithms. But one thing becomes pretty apparent: Those algorithms, and the users of those algorithms, are very difficult to police. You can police Facebook, Google, big insurers and health care companies. You can police responsible governments and governmental institutions. But it is difficult to police foreign governments. It is difficult to police small companies or individuals that do not have much to lose by doing something unethical.

A laptop with a strong processor and some memory gives individuals enormous power to invade other people's privacy, manipulate them and extract their private information. We perhaps have to focus not only on legislation that makes such actions illegal and that can be enforced against big companies, but also on making societies more tolerant, more open, more educated and more equal, to make sure that even when you are losing your privacy, or even if someone tries to manipulate you into not voting, they will have a much harder job.

Speaking of voting: We have an election year in Germany. Have you seen the use of psychometric technologies or psychometric advertising in campaigning here or elsewhere in Europe?

All responsible politicians are going to use tools like that, very much as they used radio, TV and newspaper ads in the past. And because all parts of the political spectrum are now using those technologies, they no longer give anyone an undue advantage. Both Hillary Clinton and Donald Trump used psychological targeting extensively. In fact, Hillary Clinton, if I understand correctly, spent three or four times more on doing exactly the same thing that Donald Trump is accused of and blamed for. At the end of the day, this is simply what is happening now, and I don't think there is a way we can prevent it. That would require switching off Facebook, switching off Google, or taking away their ability to sell adverts.

In fact, even if you as a politician did not use psychological targeting in your communication, the tools you use for marketing, such as Facebook or Google, have optimization built in: you post an advert, the platform works out which people are most receptive to clicking on it or reacting to it positively, and then focuses the advert on those people. The whole model of digital marketing makes psychological microtargeting largely unavoidable. There is no going back.

Michal Kosinski (left in photo) is a psychologist and data scientist, currently at the Stanford Graduate School of Business. His research focuses on studying humans through the digital footprints they leave behind while using digital platforms and devices.