Smartphones, technology and our hidden human app
I am a Black Mirror newbie. I just discovered it over the recent New Year's weekend, binge-watching season 4 practically in one sitting, my inefficient human need for sleep being the only thing keeping me from making it through the final episode.
After watching, I totally get its popularity and binge-worthiness—the integration, intrusion and even collision of technology into our lives eerily reflected in parallel realities or near future settings.
Curious where Black Mirror's creative team, Charlie Brooker and Annabel Jones, get their ideas, I did a little web surfing and was surprised to find that it's not what it seems. Each episode's plot kernels aren't lifted from the latest newsworthy tech trends, innovations and snafus, but from their own casual conversations turned "what if" brainstorming sessions, shaped (consciously or not) by their own exposure to and experience with technology. Aiming for universal, idea-based stories, Black Mirror episodes are inherently speculative and prescient without trying to be, born of their creators' sense of nostalgia and sarcasm rather than dystopian didacticism.
Yet the accidental timeliness that even strikes creator Mr. Brooker as mildly terrifying could make you wonder: are these eerie coincidences echoing a collective consciousness that's running in the background like low-grade feedback we can't hear? Is it like rubbernecking at a highway accident while also causing it to happen? Are we predicting our own doom, or simply orchestrating it subconsciously?
Coincidentally, a few days after my first viewing of Black Mirror, I heard about the open letter to Apple by two shareholders concerned about the addictiveness of iPhones among younger users. In the letter, the investors ask Apple "to offer parents more choices and tools to help them ensure that young consumers are using [the] products in an optimal manner," and go on to cite current statistics and rationale as to why it makes sense both ethically and financially for Apple to provide such safeguards and controls.
What struck me about this open letter was not the cited research, or the suggested action steps, but rather the tone and intent of the letter itself. In a sense, it acknowledges that smartphones are indeed dangerous and that we have to rein them in by way of reining in their usage among the most vulnerable users: children, adolescents and teens. Not to mention the fact that this kind of technology (and its applications) is persuasively designed to keep us addicted.
This is not really news to parents dealing with kids and their devices. Despite the letter coming from shareholders as stakeholders, their concerns are on the same plane as those of parents who already limit screen time and otherwise control their kids' tech access.
The other key witnesses to smartphones' influence over kids are those who see them daily, such as teachers and librarians like myself. We see firsthand how technology is both a boon and a bane among students, especially with the advent of one-to-one computing and BYOD (Bring Your Own Device) initiatives in schools as an answer to equity and access.
As part of an introduction to a PBL/Guided Inquiry/Design-Thinking project on Technology Innovations I’ve facilitated with students in recent years, I show them this video snapshot of Common Sense Media’s study on tween use of digital media. And it’s likewise no surprise that the tweens in the audience confirm what those on the big screen do with their little screens.
In Dr. Jean Twenge's excerpt from her recently published book iGen (she is also, coincidentally, one of the experts consulted and referenced in the open letter to Apple), she goes into extensive detail on how smartphones are changing, if not dramatically influencing, a generation. She cites an increase in loneliness, depression, and suicide rates among iGen teens who use smartphones as a primary tool for socialization and engagement, along with other issues of excessive screen time: a lack of face-to-face interactions and a resulting lack of essential interpersonal and social skills; a lack of desire to engage and explore outside of the home; and a lack of initiative to seek out and take on the responsibilities and independence of imminent adulthood:
The results could not be clearer: Teens who spend more time than average on screen activities are more likely to be unhappy, and those who spend more time than average on nonscreen activities are more likely to be happy. There’s not a single exception. All screen activities are linked to less happiness, and all nonscreen activities are linked to more happiness.
The smartphone has become a way to create experiences we should be having in real time, in real space, with real people.
Admittedly I have seen changes in the tween-age students I have worked with over the last ten years, and have wondered how differently their brains must be wired than mine—not just because I am now two generations out from theirs, but simply because of our respective generations’ relationships to technology in space and time.
Right now in the middle school in which I teach, the youngest students were born in 2007—the same year the iPhone was born. I know it sounds like a cliché, but they don’t know what life is like without the Internet—and certainly not one without smartphones.
I can’t speak directly to the issues cited in Dr. Twenge’s article and outlined in her recent book, but I can speak to what I have seen as a change among middle-grade students in their basic abilities to focus and use some of their own natural senses for interacting, listening, conversing, collaborating, and speaking in front of others.
What I see among this particular age group (11-14) is that their abilities to do things like listen and take notes, participate in a discussion, or engage in analog-style collaborative and independent work are not the same as they were ten or more years ago. Growth Mindset notwithstanding, it seems the bigger issue is not making them think they are capable, but engaging them long enough to make them care whether they are or not.
Kids tend to use their phones as their own distraction devices for lack of a better way to spend their time (shared firsthand by teens in this Common Sense video). Smartphones for tweens are also a huge attention suck.
At this age, a smartphone can be their own babysitter, best friend, and game arcade all in one. Whether it’s the games, the apps, the ability to create their own media or socialize through it, smartphones have become appendages for most of them. When beginning to teach a class, it’s rare that I have to ask a student to put away a book versus a phone—but I’m secretly pleased when I catch one sneaking a quick read under the table instead of sending a text, checking Snapchat, or playing a game on her phone.
There’s a lot of current literature on cultivating student motivation and engagement through choice, mastery-based, gamified, and personalized learning, alongside other inquiry-based models like PBL, Design Thinking and Challenge-Based Learning (CBL). Making the move from external to internal motivation among learners is actually a welcome challenge to have as a teacher; yet shifting from technology intrusion to integration can be a trickier move if we don’t deal with how the learners themselves view, think and feel about their own interaction with technology.
Maybe adding more controls is what we think will help with this perceived cognitive drift among youth, but are controls a real solution to the potential threats that technology like smartphones pose?
If the Black Mirror episode Arkangel is any answer to that question, it's not a good one: a scenario of helicopter parenting taken to the extreme, in which a parent trying to safeguard her daughter ultimately loses her because of the very technology designed to protect her.
Yet the control issue in question may not be about whether parents can mediate their kids' online time and consequently their well-being, but about how kids see themselves: as habitual users or as controllers, in control of their devices or controlled by them.
As suggested in Reinventing Learning for the Always-On Generation, one approach might be to simply embrace the common attitudes of digital learners and use them to our advantage rather than seeing them as problematic.
But kids and teens aren't the only ones who need some self-control—or cognitive control—when it comes to our smartphones. We already know that we grownups are not immune to the powers of the smartphone, as we theoretically are to other dangers that affect younger brains, like lead paint and honey.
Nicholas Carr's recent WSJ article outlines how smartphones are also hijacking our adult minds. This NPR On Point segment features Carr and other experts explaining how smartphones really are making us dumber through our own dependent and addictive behavior, "draining away" our thinking ability and inhibiting cognitive functions like working memory and fluid intelligence—even when we are not using them but they are simply near us. Proximity to tech can dominate processing.
After reading Carr's book The Shallows in tandem with Maryanne Wolf's Proust and the Squid a few years ago while designing a UbD project on Literacy (the original kind), I saw how my own reading had been changed by the Internet, so why shouldn't smartphones likewise change our brains and patterns of thinking? It makes sense, albeit in a very Black Mirror sort of way.
Another argument could be that it’s all on us as users. What’s the difference between using an Internet-connected laptop to search out and read current research on a relevant issue versus listening to a podcast on your smartphone? Whether it’s because of or in spite of so many options available to us, we should be able to delineate the good from the bad in what we consume and what we let consume our time, attention and minds, shouldn’t we? Yet the whole fake news phenomenon is just one potent example that even the best intentions cannot be enough to temper the temptations presented, or help us determine what’s worth consuming or not.
It might be way too cliché (or predictable) to do a Black Mirror episode on mind control, brain rewiring, brain upload/download or body invasion à la pod people via smartphone, just as "it is a cliché to point out the ubiquity of Apple's devices among children and teenagers," as mentioned in the open letter to Apple.
But even if smartphones are effectively making us “dumber” simply by the fact that they can “control” us in a way, what if they could actually make us smarter—by turning the paradigm on its microchip somehow?
Maybe instead of more controls over the devices themselves, we retool the concept of what a smartphone should be: not a device that makes our lives easier and more dependent upon it, but a tool that we truly design and control, and not just by which apps we download. Instead of offloading our brains onto it, or allowing the phone to shape our brains through what we do with it, we use it to build our brains in some other way—and not merely by doing a 5-minute meditation via text notification, learning a language via apps, listening to NPR podcasts or watching TED Talks on YouTube.
In one sense, I would say that my iPhone has not necessarily made me smarter—but it has helped increase my ability to learn and think “just in time” via its easy access to other ideas, whether through reading article feeds, streaming video content, or listening to audiobooks or podcasts, for example. In that sense, it truly is a tool, rather than a mediative device between me and the world.
Yet the temptations of a quick email scan, checking notification bings, and succumbing to the call of social media sirens are something I am still working to balance against my "pure" tech use. No self-control apps downloaded yet, though.
So the difference here seems to be that it's not always what you are consuming, but how and why: the intention behind the attention given to the information, interactions, and distractions onscreen and online.
Maybe instead of examining the ethics of persuasive design, installing controls or limiting screen time, we actually talk about what these devices are doing to all of us in a frank and honest way?
If we can ultimately control what food we consume, shouldn't we be able to control what technology we consume? Are we now being shaped by technology into a new kind of digital literacy?
Given the neuroplasticity of the brain, rewiring is possible. If young people's brains have in fact been wired differently by smartphones, could they be rewired by them in a positive, beneficial way, too?
Maybe it’s worth considering the interplay between genes and environment, as discussed in David Shenk’s The Genius in All of Us:
“Every human being (even a whole society) can grow smarter if the environment demands it…Genetic differences do play an important role, but genes do not determine complex traits on their own. Rather, genes and the environment interact with each other in a dynamic process that we can never fully control, but that we can strongly influence.”
If this is so, then the inverse could be equally true: every society and individual can grow dumber if the environment doesn't demand more of us.
Whether smartphones are really making us dumber or smarter may not be the point at all. Instead, this pocket technology could be preparing us for our ever-changing environment. Yet the unintended consequence is that we may be altering our own evolution in subtle ways that affect our essential capacities as people.
Our humanness is what might actually save us from our own [tech] designs.
We are ultimately social creatures, and at the moment, social interaction is something that both our brains and bodies crave and need. Whether our brains need it or we need it to extend our lives, socialization on the most basic interpersonal level is key to our survival—at least in our current human state.
In James Paul Gee’s book Teaching, Learning, Literacy in Our High-Risk, High-Tech World: A Framework for Becoming Human, developing literacy is tied to how our mind works. The human mind is neither a digital computer nor an information-processing device: “In reality, the human brain and body combined are an experience-processing device.”
So maybe the answer does lie in looking at a different kind of digital literacy—one that is more self-reflective of our own role in what it means to be digitally literate. And this reflection should include a more analog analysis of our own analog self:
If we allow technology to become the dominant force, then we will lose our biological advantage that we have as human beings. If we allow technology to control us, then we are at risk of losing our humanness, which is the real antidote to the threats that technology like smartphones pose.
It’s more than striking a balance between online and offline, screen time and real time. Integrating Social & Emotional Learning (SEL) and 21st Century Soft Skills could be a part of this re-humanization of our digital selves. Another Black Mirror irony here? These same attributes are sought out in employees hired by tech companies like Apple.