The first “all the cool kids are doing it” meme I saw in 2019 was the “10yearschallenge,” which involves posting a photo of yourself from 2009 and now. Another variation was your first and current Facebook profile photo.
Do we ever know where these memes and challenges actually start? Sure, on occasion there's something like the Ice Bucket Challenge (an ALS fundraiser) with a traceable origin, but most of the time, do you think you could trace a trend back to its source? Doubtful.
A few days after I started seeing the #10yearschallenge pop up, a media teacher friend posted: “I have this odd feeling that the ‘first and last profile pic’ thing is actually a training program for a facial recognition algorithm.”
And wouldn't you know it, there's a good chance he's right. The author of the piece making that case also published a Medium article on machine learning and public vs. training data, if you prefer non-Facebook content.
Some software developer friends have pointed out that building that kind of project wouldn't really be all that hard. Which rather misses the point: this isn't about technical difficulty.
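They're right that the mechanics are trivial, though, and that's part of what makes it unsettling. Here's a purely illustrative toy sketch (all names and filenames invented, no claim about what any platform actually does) of why the challenge format is so convenient: each post hands over a labeled "same person, ten years apart" photo pair, which is exactly the supervision an age-progression face model would need, with zero extra labeling work.

```python
# Toy sketch: turning "#10yearschallenge"-style posts into labeled
# training pairs. Purely hypothetical; strings stand in for image data.

from dataclasses import dataclass


@dataclass
class Post:
    user_id: str
    photo_2009: str  # stand-in for the "then" image
    photo_2019: str  # stand-in for the "now" image


def build_training_pairs(posts):
    """Each post becomes a (young, old, identity) training example."""
    return [(p.photo_2009, p.photo_2019, p.user_id) for p in posts]


posts = [
    Post("alice", "alice_09.jpg", "alice_19.jpg"),
    Post("bob", "bob_09.jpg", "bob_19.jpg"),
]
pairs = build_training_pairs(posts)
print(len(pairs))  # two labeled same-identity pairs, volunteered for free
```

The point isn't the ten lines of code; it's that the hard part of such a dataset (clean labels, a fixed time gap, self-verified identity) is done by the participants themselves.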
It’s almost closer to a question of medical ethics, and to the kind of research we used to do before there were ethics boards. (Or when ethics boards had a rather limited concept of who was a “human” who deserved “rights”…)
Others have noted that Google has done this kind of research experimentation for years using photos uploaded to their platform. I’ve no doubt that they have, and who knows what issues have come up in their experiments that we didn’t hear about?
Seems unlikely that my dozen years’ worth of Facebook profile photos have just been… stored. The practical applications probably aren’t quite as fanciful as the God’s Eye, but surveillance applications there and elsewhere are certainly plentiful.
We don't tend to care that much when we're being manipulated and our data is used without consent – as long as we don't know. Even when we find out, if the usage seems… passive (it's in the background, the photos were already there…) it doesn't seem so bad. And if it increases convenience, well... sign us up!
These memes and games are a stroke of genius on the part of the platforms. Served up right to us in a format we actively want to engage with – posts from our friends! The platforms don’t have to struggle to get us to notice or interact with the content, unlike serving ads.
We simply enjoy wasting time playing along, and doing so also fosters a sense of camaraderie with others who are playing. You’d think that upon learning that the experience was manipulation and privacy invasion we’d be livid. But plenty of people still don’t care.
I like to think (hope?) that we are getting more media savvy and literate. But after reading the articles about the latest Facebook data-related scandal, do we actually change our habits or usage?
I think the generation that's grown up exposed to social media their entire lives has very sophisticated analytical abilities regarding the internet and social platforms. But I also think their baseline expectations regarding privacy and other concerns are at a level that would make many older folks very uncomfortable.
After all, it's a generation that's been bombarded since birth with the idea of celebrity for celebrity's sake. (The reality TV genre has now been around for more than 20 years...) To achieve that kind of fame, you need all the exposure you can get.
When the audience/product is people who know the game that well, along with increasingly crotchety geezers who want their memes and privacy, too, companies must find ever more cunning ways to get us to voluntarily turn ourselves into sweet, sweet data.
The Romans knew the value of “bread and circuses” millennia ago, and we’re still happy to play along today. Given that we still engage with absurd “Like and share if you agree that kicking kittens is wrong!” offerings, I can’t imagine we’ll be shutting down our technological overlords any time soon.
Netflix may now be staring down a lawsuit due to their perhaps ill-advised (but highly recognizable) use of the phrase "Choose Your Own Adventure". But their recent interactive Black Mirror film, Bandersnatch, was a great success. Instead of just submitting more data hourly as we decide what to watch next, we can keep up a steady stream of clicks.
But like everything else online, every “adventure,” every choice, every click – from sugary breakfast cereal to murder – is just more sweet, sweet data. At least when we read Choose Your Own Adventure books as kids, our plot direction choices (or when we cheated and checked them all) were known only to us.
Sure, one could argue that Netflix is going to get some pretty skewed and unrealistic results by mining interactive data from Bandersnatch. Why, I hardly ever eat processed cereals or commit premeditated homicide!
However, as the Quartz article notes, our choices mean more than we may think. If you choose a violent option, that can tweak the Netflix recommendation algorithm, which in turn may recommend more violent content to you.
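That feedback loop can be sketched in a few lines. This is a minimal, hypothetical model (the tags, numbers, and scoring are all invented for illustration, not Netflix's actual system): each choice nudges per-genre weights, and recommendations simply follow the weights.

```python
# Minimal sketch of a choice-driven recommendation feedback loop.
# Tags, weights, and titles are invented; this is not any real system.

def update_weights(weights, choice_tags, bump=0.1):
    """Nudge the viewer's genre weights toward the tags of a chosen option."""
    for tag in choice_tags:
        weights[tag] = weights.get(tag, 0.0) + bump
    return weights


def recommend(catalog, weights):
    """Rank titles by the sum of the viewer's weights for their tags."""
    def score(title):
        return sum(weights.get(t, 0.0) for t in catalog[title])
    return sorted(catalog, key=score, reverse=True)


catalog = {
    "Downton Abbey": ["period-drama"],
    "The Punisher": ["violence", "action"],
}
weights = {"period-drama": 0.5}  # the viewer starts out a costume-drama fan

# The viewer picks the violent option in an interactive episode a few times...
for _ in range(6):
    update_weights(weights, ["violence"])

print(recommend(catalog, weights)[0])  # violent titles now outrank the drama
```

Six small nudges are enough to flip the ranking, which is the whole point: no single click matters much, but the accumulation quietly steers what you're shown next.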
Somehow you started off enjoying Downton Abbey and ended up bingeing season 2 of The Punisher. Uhh, I guess then you’d have a teaching moment with your kids regarding how British imperialism has killed far more people than Frank Castle ever will…?
These kinds of manipulative recommendations are already common; they're just spreading. Our old friend Facebook provides a million examples. You're casually scrolling through your feed and see "Like if you played pond hockey as a kid!" Which makes you feel all warm and fuzzy and nostalgic.
Then the next thing you know Ontario Proud has you in their clutches and you’re being served a steady content diet of virulent racism and xenophobia. (Canadaland has had some illuminating coverage of that phenomenon and the other <Province> Proud groups, if it’s of interest.)
Chris Hadfield has talked about a fundamental part of astronaut training being getting in the habit of asking “How could this kill me?” to ensure the best planning and preparedness.
For the average person, a solid equivalent for online life here on Earth is "How does this want to manipulate me?" – and to whose benefit? Or perhaps more fundamentally: do you want to be the watcher, or the watched?