Free will is the basic principle of freedom, but technology seems to be making it very difficult to “enforce” as we move deeper into the digital age.
It’s been quite a few years since the “right to be forgotten” was first featured in the media, and even longer since it began seriously affecting people’s lives. Take, for instance, Amanda Todd, the Canadian teenager who was cyberbullied to death despite changing home and school three times.
Or Mario Costeja, the Spanish citizen who was cyber-branded as owing debts to the Spanish social security system. He has been struggling since 2009 – when the newspaper La Vanguardia, following a governmental order, digitized those public records and published them online – to get those records (which are no longer updated) erased.
When something was printed, it could simply be stored somewhere no one could access it, or be destroyed. How do you do this in the digital age? How do you erase your digital footprint, and, more importantly, how do you legislate? The current law in the EU dates from 1995 – smartphones back then were an exquisite luxury, and the internet had not yet taken over everything.
Regulators could not have imagined where this would lead, so without a proper legal framework (now under discussion in the EU), judges didn’t dare set precedents until a few months ago. The system took a stand on this growing issue, and Google was ordered to erase Costeja’s information from its search results.
Google reacted quickly and created a form allowing people to exercise the right to be forgotten. What are the implications of this? Why is it such a challenge, even for Google or any other digital company? What are the possible solutions?
This is a turning point. A big one, that could go either way – transparency or censorship (most probably both).
She [the internet] moves in mysterious ways
Google’s search engine (like everyone else’s) was built on a set of algorithms that operate automatically – it’s the only possible way it could crunch and serve so much information in near real time. This makes it very complicated to ensure a piece of information is deleted from every website, blog or social media post, or from any other reference or location on the web that mentions that content. It’s not only Google – it’s the whole web!
Furthermore, the digital breadcrumbs you create through the use of digital devices form patterns (check out some of the work from Alex “Sandy” Pentland), and those patterns repeat themselves online. Mankind is a pattern-repetition machine that now has AI (Artificial Intelligence) doing it much more quickly and efficiently than we ever could. It, or someone, will find a way back to that piece of content.
Another aspect to take into consideration is the vast reach of our personal networks through our social media connections, mobile phone contact lists and e-mail distribution lists. It’s not something you have to actively do – once you accept the rules of the game, you’re in, whether you want it or not – and this happens when you buy a phone, open an e-mail account or get a credit card.
Take a look at the DARPA Network Challenge, where a team of five people managed to identify and confirm the locations of 10 red balloons spread all over the US! They single-handedly enlisted the collaboration of over two million people across the country in just a few days.
The hacker’s Murphy’s law – if it exists, it’s meant to be hacked
One of Google’s first comments on the tool it made available concerned its fraudulent use. Almost 70,000 people have requested the takedown of roughly 250,000 pages. How many of those requests are legitimate, and how do you decide what information is “inadequate, irrelevant or … excessive”?
Obviously someone will find a way to use this to manipulate the information and systems. And someone will find a way to make money out of it. But that’s how the world works now – nothing is static – and laws and regulators need to step up and keep up.
When the going gets tough
There is a thin line between transparency and censorship in today’s digital world, and attempting to manipulate tools that are “unbiased” (they’re machines – they do what we tell them to do) is, in Google’s lawyers’ words, a work in progress (more work than progress).
I suspect there’s a solution for cases like Amanda’s – there must be a service or an organization, perhaps run by a non-profit, that allows people at risk to disappear from the system, helps them stay out, or helps them understand what they need to do to stay out.
This must be put together in collaboration with all parties on the web – search engines, publishers, social networks – and, further ahead, take the form of a framework anything on the web may adopt: true privacy by design.
The concept of erasing must be reinvented – the one from the analog era is not working properly.
More carrots, fewer sticks
I also think this should be a collective effort from the Internet giants, governments and regulators – it can’t be enforced, it must be co-created.
For Costeja’s case I see things a bit differently than for cyberbullying, and there are ways to use the transparency of the web to adapt concepts like the right to reply. Maybe, instead of trying to patch the whole search environment, there is a way to add a comment box next to the search results that verifies or disqualifies that information (e.g. Twitter’s verified icon).
Imagine an automated version-control mechanism applied to search results and, going a bit further, a Quora-like gamification system integrated into those comments. Every search result would then have input from a regulator as well as from users – a crowdsourced qualification system.
So when you searched for Mario Costeja, you’d get the search results, a quality value (true/false, perhaps) with references to relevant links, and a crowdsourced, review-like rating.
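As a thought experiment, the crowdsourced qualification idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any real search API: all names (`SearchResult`, `Annotation`, `quality`), the example URL and the 50% vote threshold are my own assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Annotation:
    """Hypothetical overlay attached to a search result."""
    regulator_verdict: Optional[str] = None  # e.g. "verified" or "outdated"
    upvotes: int = 0                         # crowd says the info is accurate
    downvotes: int = 0                       # crowd says it is wrong/stale

@dataclass
class SearchResult:
    url: str
    snippet: str
    annotation: Annotation = field(default_factory=Annotation)

    def quality(self) -> str:
        """Derive the displayed quality value: a regulator verdict
        overrides the crowd signal, otherwise majority vote decides."""
        if self.annotation.regulator_verdict:
            return self.annotation.regulator_verdict
        votes = self.annotation.upvotes + self.annotation.downvotes
        if votes == 0:
            return "unreviewed"
        ratio = self.annotation.upvotes / votes
        return "true" if ratio >= 0.5 else "false"

# Example: an outdated debt notice gets an official correction attached.
result = SearchResult(
    url="https://example.com/old-debt-notice",
    snippet="1998 social security debt auction notice",
)
result.annotation.regulator_verdict = "outdated"
print(result.url, "->", result.quality())  # -> outdated
```

The point of the sketch is the precedence rule: official input (the right to reply) sits on top of the crowd rating, so a verified correction is never drowned out by votes.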
This could help fight the fraudulent campaigns that bankrupt people every day, or illegal activities disguised through SEO/SEM manipulation, or even promote e-commerce directly from search results!
With so much investment going into tech startups, how is it possible that there isn’t a dedicated global initiative for companies (small and large) working with personal data, and especially the ones trying to answer these questions, and potentially save lives? Do you know any?
There are huge verification and user-interface challenges to guarantee a trustworthy solution to these problems, but I am sure that transparency can trump censorship in making the digital world a better place to live.