We read a lot this week about the rapid evolution of HTML and the World Wide Web, as well as some current issues of data privacy, data sharing, and the difficult intersection between copyright and the open web. Three pieces in particular formed an ironic triangle: Sir Tim Berners-Lee’s (henceforth, TBL) 2009 TED talk on the importance of linked (and presumably open) data; J.M. Porup’s writeup—never quite a laying of blame, but close to it—on TBL’s silent, and thus seemingly wishy-washy, position on DRM; and Tom McGrath’s profile of TBL’s new company, Solid (presumably a punning near-anagram of SILOED?), which seems both to double down on TBL’s commitment to linked data and to attempt to address the problems of data collection on the present-day internet. (The irony comes, in part, from the fact that encrypting and controlling data is championed as necessary when the data is personal, but lambasted as money-making when it takes the form of DRM.) What all of these pieces largely ignored, however, is the problem of balancing trust and suspicion: even if I control my own data, how can I make an informed decision about whether to allow a company or program to access it? Only McGrath’s piece briefly touched on this issue, discussing the social changes necessary for Solid to have any impact:

“The other big issue is whether any of this will truly make the Web any better—that is, any closer to Sir Tim’s original vision of a platform that exists to share information and lift humanity. … Studies suggest people say they care about privacy and data, but at the end of the day we never do much to back it up. In such a world, it’s easy to see billions of people giving Facebook as much data as they want, the consequences be damned.”
McGrath, “Can MIT’s Tim Berners-Lee Save the Web?”

This is a social change that will have to be taught, not learned through experimentation. A piece in the New York Times this week, on teaching media literacy to high school students, shows one way forward: some teachers are working to educate their students on how to “defend themselves against disinformation,” although more extensive programs along these lines (starting younger, lasting longer than a couple of weeks, baked into the curriculum, and offered outside of school as well as inside it) will be necessary to start effecting those social changes on a cultural and global level.

The NYT article pointed to a few tech-based approaches as well; for instance, Google has created a game called “Interland,” which teaches kids to “Be Internet Awesome” (their tagline) and describes itself as “Helping kids be safe, confident explorers of the online world.” This is important: if Google puts the online world (filled with “awesome and blahsome surprises,” to quote their introductory flavor text) at kids’ fingertips, it has an ethical duty to be involved in teaching them—now and as adults—to protect themselves. (The data-giant equivalent of carbon-neutral, I guess?) I played through the first level of the game a couple of times to test its functionality. A lot of the questions felt strikingly obvious, especially to an adult player (I think the game gets more complex as it goes on, but I only have so much time!), but from an educator’s standpoint it does good things, just not enough (though I know more about college students than K–12). Some of the good: it tests, and thus reinforces, factual knowledge (“what should I do in this situation?”), and if you choose a wrong answer, it explains why that answer is wrong. However, it doesn’t reinforce conceptual knowledge for kids who answer the question correctly, which I see as a real missed opportunity. It would be simple to include a button that lets you find out why the wrong answers were wrong even after answering correctly, for the kid who picked the right option but isn’t quite sure why it felt like the best one.

Here’s an example of the issue at stake. A prompt on the screen reads: “An email from your school administrator says your attendance record needs to be verified with your social security # or you could be suspended.” (Screenshot from Google’s Interland.)

You are given three possible reactions to choose from, two of which are actually reasonable courses of action: “Reply to the email and tell them that you will bring it in person” and “Check the email address to see if it’s really your school admin. Then run it by an adult.” (The third option, clearly incorrect, is simply to give them the information.) While it’s obvious that the second answer is the intended one, the first is technically a reasonable approach as well. If you do choose this “incorrect” answer, you’re told: “It’s a smart move not to send private info over email, so good job. But if you aren’t certain the email is real, it’s best to talk to an adult you trust.” That’s all well and good, but what about once they are the adult, faced with a similar dilemma? (E.g., the phone call claiming to be from Con-Ed that tells you your electricity is about to be shut off because you didn’t pay a bill, or any of the thousands of other similar current scams.) In reality, the first choice is probably the better life lesson, and it certainly holds lessons worth flagging for a kid who does choose the “correct” option, rather than simply telling them they picked the right answer.

Now, Google does provide a 135-page curriculum with worksheets to accompany the game, so teachers can bring it into a classroom and help build that missing conceptual framework. But that depends on having the time and direction to do so. As the NYT article points out, unless there is a legal mandate to provide such education (currently only Illinois requires it, and then only at the high school level), most schools are too short-staffed, too poorly funded, and too subject to political headwinds to give meaningful attention to integrating media literacy throughout their curricula. Building conceptual learning and reinforcement into the platform itself would also allow programs like this to be used more effectively outside of school, especially when parents themselves are ill-equipped to guide their children in the right direction. (At this point, I thought of the post we read on “How to teach yourself hard things,” as well as McGrath’s description of the scrap-materials drawer in TBL’s childhood home. How can we create those sorts of learning environments for all children?) It turns out, however, that Google has also produced a Family Guide in both English and Spanish, so they’re evidently working to address this side of the education equation as well. (Good job holding on tenuously to “don’t be evil,” Google.)

Ideally, of course, more states will begin to mandate such education, and as the balance of power over personal data management shifts thanks to companies like Solid, other types of critical thinking and approaches to information literacy will need to be incorporated into this nascent branch of the curriculum. In the absence of such mandates, however, and hopefully one day as a complement to them, we should also ask what we, as future information professionals, can add to the mix. What approaches and technologies can we develop and use to aid or supplement classroom learning? And what about adults who will never have the benefit of future curricular options? (The NYT article notes that “several studies suggest that older adults are more likely to struggle to recognize fake news and are the most likely to share it,” citing one study from the University of Florida and another from researchers at Princeton and NYU.) In short, those who are the custodians and purveyors of information need to be involved in educating those who consume it on how to consume it, especially as those consumers become the custodians of their own information.