SECURITY
Privacy Future Threads. The Subconscious E-Mirror
2014-11-14 | by David "DeMO" Martínez Oliveira

It is 2014. It took some time for people to realise the threats posed by current social networks, especially with regard to privacy. Many people had been raising their voices on this specific topic for years. A lot of bad things had to happen to reach this point: bullying, fraud... Some people have even died. It is pretty sad.
Recently I was watching a documentary about this topic on TV. It was nice to see that even mainstream TV is starting to address these issues properly. Anyway, the documentary fired up a nice discussion on how to protect the young ones from today's Internet threats. The conclusion was the usual one: education.

But I do not want to talk about that. It is a very interesting topic, but our discussion drifted towards quite a different concept: some hidden threats that are shyly starting to pop up, and that are far more dangerous than the weak security of massive websites in recent years. I will call these threats The Subconscious E-mirror.

The subconscious e-mirror is the projection of our subconscious thoughts onto our digital profiles. I will talk about two obvious ways to exploit all the information we are exposing publicly.

The first one is the biometric/behavioural information we are sharing through different media. As technology advances, the quality of the media everybody uploads to the Internet gets better and better. Nowadays most smartphone cameras can produce Full-HD video with plenty of information for any kind of analysis. Face recognition is already a reality, and there is no technical reason not to go further and extract all kinds of additional information: movement and voice patterns, biometric body parameters (height, ratios between different parts of the body)...

Wikipedia actually identifies this as Soft Biometrics (http://en.wikipedia.org/wiki/Biometrics#Soft_biometrics) and classifies it as privacy-safe because of its low discrimination rate and because the information is publicly available. That is a reasonable point of view for current technology, but as technology evolves those arguments will slowly vanish.
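
Just to illustrate how low the entry barrier already is, here is a minimal sketch that finds faces in every frame of a video. It assumes a recent opencv-python package (which bundles the Haar cascade files); the input file name is just a placeholder:

    import cv2

    # Load OpenCV's bundled frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    video = cv2.VideoCapture("uploaded_clip.mp4")   # hypothetical input file
    frame_no = 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Each detection is an (x, y, width, height) bounding box.
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            print("frame %d: face at (%d, %d), size %dx%d" % (frame_no, x, y, w, h))
        frame_no += 1
    video.release()

That is a couple of dozen lines with an off-the-shelf library. Going from "there is a face here" to identifying whose face it is, or measuring body proportions, is a matter of more of the same, not of some exotic capability.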

Many of these technologies already exist. Maybe they are not fast enough yet to be applied to the massive amount of information uploaded to the Internet every minute, but it is just a matter of time before this starts to happen. These technologies will not just produce biometric data to identify people very easily; they can, potentially, provide emotional information. Is that person happy, nervous, depressed? And all that just by looking at a video and comparing it with previously recorded patterns.

Recent results show that the new generation of cameras is so sensitive that it can be used to estimate a person's heart rate just by analysing a video recording (http://www.geekwire.com/2013/xbox-watch-kinect-detect-heart-rate-room/). Similar results can be achieved with Google Glass (http://www.technologyreview.com/news/530521/google-glass-can-now-track-y...). This opens up new possibilities for obtaining physiological information without even touching a person.
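
The basic idea behind these results is remote photoplethysmography: the skin changes colour very slightly with every heartbeat, and a sensitive camera can pick that up. A minimal sketch of the principle, assuming NumPy and OpenCV and a hypothetical clip of a face at a known frame rate, could look like this:

    import cv2
    import numpy as np

    FPS = 30.0                                   # assumed frame rate of the clip
    video = cv2.VideoCapture("face_clip.mp4")    # hypothetical input file

    # Average the green channel over each frame; green carries the
    # strongest blood-volume signal.
    signal = []
    while True:
        ok, frame = video.read()
        if not ok:
            break
        signal.append(frame[:, :, 1].mean())     # frames are BGR: index 1 is green
    video.release()

    signal = np.asarray(signal)
    signal -= signal.mean()                      # remove the DC component

    # Find the dominant frequency in the plausible heart-rate band (0.7-4 Hz).
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FPS)
    band = (freqs > 0.7) & (freqs < 4.0)
    bpm = freqs[band][np.argmax(spectrum[band])] * 60.0
    print("Estimated heart rate: %.1f bpm" % bpm)

A real system would track the face region and filter out motion and lighting changes, but the point is that nothing here is exotic: a camera, a Fourier transform and some filtering.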

This is scary enough, but there is something else, even more subtle and new. It is still at a very early stage, but it already has a name: sentiment/opinion mining, or sentiment analysis (http://en.wikipedia.org/wiki/Sentiment_analysis). These techniques use natural language processing to analyse text and extract information about somebody's feelings or opinions. We humans can, in a sense, do that automatically, but only with a limited amount of information. We cannot analyse millions of blog entries, tweets or messages on social networks to extract this information in a way that can be exploited.
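
To make the idea concrete, here is a deliberately toy lexicon-based scorer. The word lists and the messages are made up, and real systems use far richer models, but the mechanics are the same: scan everything somebody has ever written and reduce it to a mood score.

    # Toy lexicon-based sentiment scoring. Real systems use much richer
    # models; the word lists below are illustrative only.
    POSITIVE = {"happy", "great", "love", "excited", "fun"}
    NEGATIVE = {"sad", "tired", "hate", "alone", "worried"}

    def sentiment(text):
        """Return a score in [-1, 1]; negative means a gloomy message."""
        words = text.lower().split()
        score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
        return score / max(len(words), 1)

    # Hypothetical stream of public messages from one profile.
    timeline = [
        "had a great day, love this city",
        "so tired of everything",
        "feeling alone again tonight",
    ]
    for msg in timeline:
        print("%+.2f  %s" % (sentiment(msg), msg))

Run over years of somebody's public messages, even something this crude starts to draw a curve of that person's mood over time. Now imagine what a serious model can do.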

However, if a machine can do that, and if it does it properly, third-party entities may end up knowing more about us than we know about ourselves. Furthermore, if the machine does not do the analysis properly but the people behind it trust what it says, then... that will be even worse. And note that this has been happening for years... not with people listening to machines, but with people listening to other people making wrong (or self-interested) decisions.

I cannot imagine all the implications of this. But I know there are plenty of people out there who can figure out ways to exploit it.

I do not have a conclusion about this topic yet. For now, the only solution seems to be to just keep away from "social services" on the Net. This may work for somebody like me, but the young ones have a lot more pressure on them. It is not easy to be different from everybody else, and at some ages things look bigger than they really are. We are not there yet, but... I am having the same feeling I had about Facebook several years ago. It is pretty scary.

Be careful, you may have already said much more about yourself than you believe... How much of your subconscious is already reflected on the Net?

 