A unified 20-year history of the radically changing way we relate to the Web

Over the past 20+ years, our understanding of what the Web is and how it impacts society has developed. So, I would argue, has our understanding of Web literacy.

By “Web literacy” I mean the skills and competencies that are required to read, write and participate on the open Web. It’s worth noting that “reading” and “writing” should be understood broadly to include multimedia and not just the written word. My focus here isn’t the history of the Web itself, but the history of the skills that people have thought necessary to use it. It’s a subtle but important distinction.

Not everyone uses the term “Web literacy” when referring to the skills required to read, write and participate on the Web. Many researchers, especially during the 1990s, preferred the term “information literacy.” Even today, there are overlapping terms (such as “media literacy” and “digital literacy”) used to describe similar areas. Proponents of each tend to advocate their favored term as a container for the others. My intention is not to do this with Web literacy, but to carve it out as something increasingly distinct from the kinds of literacies that have been identified before.

While the divisions are somewhat arbitrary, I’ve split the history of Web literacy into four eras plus the one in which we currently find ourselves. I’ve started from 1993, as this marked the year when graphical Web browsers such as Mosaic could browse a decent number of Web sites.

  • 1993-1997: The Information Superhighway
  • 1998-2002: The Wild West
  • 2003-2007: The Web 2.0 era
  • 2008-2012: The Era of the App
  • 2013+: The Post-Snowden era

It’s worth noting that what follows is partial, incomplete, focused on the developed, western world, and only a first attempt. I’d be very grateful for comment, pushback, and pointers to other work in this area.

1993-1997: The Information Superhighway

The Web was new in the 1990s, and a great deal of early research into Web literacy focused on the ways in which it could be considered different from what had gone before. Although there was an awareness that the Web potentially allowed anyone to be a producer as well as a consumer, the focus was very much on “reading” rather than “writing.” The organizing metaphors were of users accessing the world’s biggest “library,” navigating an “information superhighway” (to use Al Gore’s famous expression) or “surfing” a huge wave of information.

There was a sense at this time that humanity stood on the edge of a radical new way of connecting and organizing society. Lots of people got excited about hypertext. Researchers and thinkers discussed how access to linked knowledge could lead to massive changes; here, for instance, is the literacy researcher Bertram C. Bruce in 1997:

“The literacy practices one can engage in are to a large extent a function of the available technologies and not a property of the individual. The power of digital technologies to make possible new forms of literate practices thus leads us naturally to a transformative stance on technology. Instead of just asking, will there be equity or inequity as a result of new technologies, we might start instead with, ‘What sort of society do we want?’ With this frame, we see that if rich schools get all the new computers, it is not that things just happened to work out that way, but rather that we as a society chose to selectively empower one group at the expense of another through technology. New technologies make it easier to carry out society’s agenda; the key issue is what that agenda should be.”

Web literacy at this point in time was understood mainly in terms of understanding how to get access to the Web, and then being able to navigate hypertext. In other words, Web literacy was about procedural skills. Researchers experimented with non-linear research papers, narratives, and approaches. Bulletin Board Systems (BBS), perhaps one of the earliest examples of user-generated content, developed into Web forums.

As Wendy Hall and Thanassis Tiropanis pointed out in 2012, “for the general user, the Web was experienced as a single system that could be accessed using a personal computer, and which was primarily a source of information and news.” While this is still true of many people today, there were technical and financial barriers for people wanting to write and/or participate on the Web.

1998-2002: The Wild West

The year 1998 marked, in Britain at least, the point when anyone who wanted to could get online relatively easily. Alongside existing subscription services such as CompuServe and AOL came pay-as-you-go services from the likes of Freeserve. Along with their connection, Internet Service Providers (ISPs) gave users a small amount of Web space. Just as academics who were given space on university servers could publish to the Web, so anyone who could use a tool such as Macromedia Dreamweaver or Netscape Composer could do likewise.

The Web itself continued to be understood through the lens of traditional print media. This was a time of the so-called “browser wars” where users would be told that a particular Web site was “best viewed using Netscape Navigator” at a particular resolution. Indeed, the amount of technical knowledge required when building a Web site meant most users remained “readers” rather than “writers” of the Web. Participation was mainly through “Web rings” such as Bomis (if you had your own Web site) and Web forums.

Over the period of just a few short years, a great deal of money was invested in Web-based start-ups as part of the “dot-com boom.” Every major company began to have a Web presence and there was much talk in the media about online shopping and about the security of transactions. This also was the time of great (unfounded) fears around the havoc the “Millennium Bug” would wreak as clocks flipped from 1999 to 2000.

In this era, the Web was seen as a slightly dangerous new frontier. The metaphor that was often used was that of the “wild west.” It was unknown, there were no gatekeepers, and doing anything more than consuming the Web required skills in short supply. One of the most visible ways the Web seemed “wild” was a lack of standardization. We’ve since come to take for granted what one researcher, John McEneaney, complained about in the year 2000:

[W]e don’t yet quite know all the ins and outs of ‘Web literacy,’ and compounding the problem is the fact that the new technologies preserve much of the capabilities of the old ways while introducing new elements. The result is increased opportunities for complexity and as we increase complexity, we increase demands on those who seek to use the materials we produce.

The Web was familiar, yet alien. Reading on it was somehow different; writing was reserved for those with technical skills; and, while instant messaging was popular, participation on the Web was primarily through clunky, slow forum software. The skills required to use the Web at this point were ill-defined. But, more than that, the Web and the skills required to use it were seen as optional extras to “real life.”

2003-2007: The Web 2.0 Era

After the dot-com boom and bust, some predicted that the Web was a fad and had had its day. What happened instead was the birth of a new era, popularized by Tim O’Reilly in 2004 as “Web 2.0.” This signaled a shift in emphasis from users as consumers to users as (co-)producers. Blogs, wikis and podcasts were made possible by technologies and approaches such as JavaScript and AJAX. Users could “write” the Web in the browser itself, rather than using separate tools that required additional technical knowledge.

This was a time when the 2001 article by Marc Prensky (cited by Hall and Tiropanis above) was used extensively by the press. Labeling some people as “digital natives” and others as “digital immigrants,” it argued that because young people were growing up with the Web and other digital technologies, they automatically became adept at using them. Despite a number of articles rebutting Prensky (see “the ‘digital natives’ debate”), this view remains endemic in the wider population of Web users.

It was during the Web 2.0 era, write Johanna Ahtikari and Sanna Eronen, that the Web moved from being a “nice to have” to being essential in the personal and professional lives of most people in the western world:

If we understand Web literacy as a social practice, it also follows that it is historically situated and constantly changing…. At least in many western societies, the Web is an influential medium causing changes in literacy practices of working, public and private lives. The Web functions as a new source of information demanding new strategies of handling this information, as well as brings along new ways of communicating. Reading and writing related to work and personal lives is ever more often connected to using the Web, and Web literacy practices have become an integral part of society functions.

Social networks such as Friendster, MySpace, Bebo and then Facebook shifted participation away from the “traditional” forum/BBS model. In addition, video-sharing Web sites such as Vimeo and YouTube allowed Web users to communicate using multimedia. Reading, writing and participating on the Web became easier, lowering the barrier to entry. The Web became integrated with everyday life, meaning that, per Colin Lankshear and Michele Knobel’s 2007 “Literacies Sampler,” all of the following were recognized as literacies:

… blogging, fanfic writing, manga producing, meme-ing, photoshopping, anime music video (AMV) practices, podcasting, vodcasting, and gaming are literacies, along with letter writing, keeping a diary, maintaining records, running a paper-based zine, reading literary novels and wordless picture books, reading graphic novels and comics, note-making during conference presentations or lectures, and reading bus timetables.

Services were often free, looking to grow their user base in search of funding. This meant Web users regularly explored new services, and there were many and diverse ways to achieve the same outcome. The Web was interoperable through technologies such as RSS and RDF. This meant there was a low cost of switching services for those who wished to do so.

2008-2012: The Era of the App

During the Era of the App, it became increasingly controversial to point to a dichotomy between “real life” and the virtual world. Online, digital life and offline, analogue life were blended. One of the reasons for this was the growth in online access through smartphones and tablets. Although smart devices such as Personal Digital Assistants (PDAs) were available earlier, smartphones changed the way that people interacted with the Web. Instead of having to go to a particular place to do something online, we now had a search engine in our pocket.

People began to take it for granted that they had a camera on them at all times that could instantly send images and videos anywhere in the world. It was commonplace to search a person’s name to see what came up on the Web before meeting them for a job interview or a date. We could join in with multiple global conversations through the use of hashtags. As the developed world had near ubiquitous broadband connections in densely populated areas, we began consuming media by streaming instead of sharing physical media.

Native apps on smart devices started to augment existing services, and then became the primary way people accessed them. Indeed, some social networks, such as Instagram, have only recently added a Web presence accessible via a browser. Most of the remaining complexity of the Web was removed for users through apps. This often came through apps having a single purpose — making them simpler to use, but blocking users’ view of the Web. The cost of simplicity was silos.

The baseline level of Web literacy demanded by society shifted during this era from understanding how to “read” the Web to at least also being able to “write” it. However, the shift away from the open Web (even if Web technologies such as HTTP were still being used) came at a price: users gave up ownership of their own data and an understanding of the infrastructure behind what came to be called “the Cloud.” The seemingly public spaces in which our reading and writing of the Web took place became increasingly privately owned.

An important aspect of Web literacy in this era was appropriating these privately-owned tools to stand out from the crowd and form an individual identity. Being on the Web was no longer seen as different in and of itself. Researcher Rafi Santo has coined the term “hacker literacies,” which many might see as falling under the aegis of Web literacy in this respect:

I define hacker literacies as empowered participatory practices, grounded in critical mindsets, that aim to resist, reconfigure, and/or reformulate the sociotechnical digital spaces and tools that mediate social, cultural, and political participation. These ‘critical mindsets’ include perceiving how values are at play in the design of these spaces and tools; understanding how those designs affect the behaviors of users of those spaces and tools; and developing empowered outlooks, ones that assume change is possible, in relation to those designs and rooted in an understanding of their malleability.

This was a time when a majority of people in western societies were using the Web, yet the belief persisted that “use” led to proficiency. Reading and writing the Web was seen to be as simple as opening an easy-to-use app; participating meant getting involved in social networks. As a result, Web literacy was treated as something learned by osmosis rather than something that needed to be systematically taught.

2013+: The Post-Snowden Era

In mid-2013, the Snowden revelations lifted the veil on the shiny, easy-to-use landscape of smart devices and apps. We collectively realized that we didn’t understand what was going on underneath the surface. We were shocked to discover who had access to our data. We started to worry more about how to manage our increasingly-important online identity and reputation.

More people started to realize that questions they’d wondered about (such as: how come most apps and Web services are free?) had dark and scary answers. We found out that, along with untold benefits, our use of apps and the Web has a dark underbelly of corporate and government surveillance. Mozilla’s Executive Director Mark Surman believes this era could last for a while:

We’re at the beginning of this era of the Internet and it’s going to last at least to the end of this century. We’re really figuring out the infrastructure, the legal structure and the social norms for choosing how and when to be private, which is just a normal part of human society.

We’re at an interesting juncture in the history of Web literacy. There are those coming to the Web for the first time who need to learn how to read and write it. However, instead of the open Web that many of us experienced, their welcome comes from companies providing things for free in return for building up comprehensive, privacy-invading advertising profiles. Their “participation” will be tracked just as ours is.

Thankfully, there is a growing realization that Web literacy is not something that necessarily comes naturally. People are not “digital natives” and literacy is something that needs to be continually worked at. Web literacy is like traditional literacy practices — we evolve as they evolve, and vice versa.

My hope is that in this new, fifth era we come to understand the Web for what it is: a platform for human flourishing. Perhaps we will learn to teach current and future generations how to read, write and participate on it effectively.

