All that hardware, computers communicating around the world, serves only to provide a substrate on which we can build the real story: a virtual world that is popularly viewed as not being part of the ‘real world’. At the core of the issue is a symbiotic relationship between the content and software of an information service on one hand and the human mind on the other. While the presentation of and interaction with information that is generally available today falls well short of William Gibson’s 1984 definition of ‘cyberspace’ (see next section), that term has been widely adopted by technologists to describe what is experienced through the use of networked information services. So, just what does the human mind find in today’s primitive cyberspace? Even more importantly, how does the human mind find anything there at all? Concern about users becoming ‘lost in cyberspace’ has certainly sharpened the thinking of designers of newer information services.
Beyond the very earliest days of expecting users to memorise or keep at hand a long list of cryptic commands used to select and call up information, the simplest way to organise access to information proved to be through the use of a rough hierarchy of indexes or content lists. In my 1981 software design for a pioneering local information service, The Australian Beginning, I adopted that kind of approach, borrowing from the little we then knew of early U.S. information services. It took until 1992 for such an approach to become widely adopted for accessing the vast collections of information on the Internet, through the free distribution of ‘Gopher’ software from the University of Minnesota.
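The hierarchical approach can be pictured as nothing more than nested lists of choices leading down to documents. The following sketch is purely illustrative (the menu entries and file names are invented, and no actual Gopher protocol detail is implied):

```python
# A minimal sketch of a Gopher-style hierarchical index: nested dicts act
# as menus, with leaf strings standing in for documents. All names here
# are invented for illustration.
menu = {
    "News": {"Local": "local-news.txt", "World": "world-news.txt"},
    "Weather": "forecast.txt",
}

def navigate(menu, path):
    """Follow a sequence of menu selections down to a document name."""
    node = menu
    for choice in path:
        node = node[choice]
    return node

print(navigate(menu, ["News", "World"]))  # world-news.txt
```

The user never needs to remember commands; at each level the service presents the available choices, and selection alone carries the user to the item.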
The second obvious approach was to allow the user to search for particular words, or combinations of words, likely to be contained in an item of interest. Limited searching of keywords and names has long been a feature of library catalogues and other purpose-built systems. However, the goal of ‘free text searching’ continued to be over-hyped in areas outside such obvious specialities as legal information services. Also around 1992, the Wide Area Information Server (WAIS) brought the capacity for free text searching to the Internet.
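The essence of free text searching can be sketched with an inverted index: each word maps to the set of documents containing it, so a multi-word query becomes a set intersection. The document names and texts below are invented for the example, and real systems add much more (stemming, ranking, stop words):

```python
# A toy illustration of free-text searching via an inverted index.
# Document names and contents are invented for illustration only.
docs = {
    "doc1": "the court ruled on the contract dispute",
    "doc2": "weather report for the coast",
    "doc3": "contract law and court procedure",
}

# Build the inverted index: word -> set of documents containing it.
index = {}
for name, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(name)

def search(*words):
    """Return the documents containing every queried word."""
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

print(sorted(search("court", "contract")))  # ['doc1', 'doc3']
```

Even this toy version shows why the approach suits legal services so well: queries over exact terms of art retrieve precisely the documents using them.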
As I discuss in more detail in following sections, Ted Nelson’s concept of ‘hypertext’ has proved much more popular, with ‘links’ taking a user directly from one document to another and each link ‘button’ placed so that its context is clear. The release of Mosaic for popular personal computers to the Internet community in November 1993 provided a graphical user interface to the World Wide Web and quickly led to the Web’s dominance of information ‘browsing’.
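A hypertext collection can be pictured as documents carrying named links to other documents, with ‘browsing’ simply the act of following links in context. The page names and link labels below are invented for illustration:

```python
# A bare-bones sketch of hypertext: each page carries named links to
# other pages. Page names and link labels are invented for illustration.
pages = {
    "home": {"text": "Welcome", "links": {"about": "about", "news": "news"}},
    "about": {"text": "About this service", "links": {"back": "home"}},
    "news": {"text": "Latest items", "links": {"back": "home"}},
}

def follow(start, labels):
    """Start at a page and follow a sequence of link labels."""
    page = start
    for label in labels:
        page = pages[page]["links"][label]
    return pages[page]["text"]

print(follow("home", ["about"]))  # About this service
```

Unlike a hierarchy, the links form an arbitrary graph: any document may point anywhere, which is exactly what lets a reader move directly between related items.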
Hierarchical menus, free text search and hypertext are all predicated on the information being sought being predominantly text-based. It is certainly not coincidental that these approaches were adopted while text, in the form of ASCII character strings, remained the dominant form of computer-based information. However, the design of the Web and its graphical user interface also accommodates the growing range of graphical and other formats in which information is increasingly being stored and presented by a new breed of ‘multimedia’ computers. At the start of 1995, the physical medium of choice for multimedia content is CD-ROM—a product of the convergence of computing and audio-visual media technologies. As a cornerstone of their second major technology-facilitated repositioning in recent years, libraries are beginning to provide online access to CD-ROM titles.
Whatever the representation of their information content, the essential difference between electronic information services and those based on paper is the way they respond to the actions of the user. Electronic information can be readily set up to respond actively to the user. It is interactive. The user finds information by an active process which, amongst other things, promotes learning. The information itself may even have a capacity for some degree of interaction. It may be a simulation which the user controls. The magical capacity of computers to undertake complex actions based on complex sequences of instructions becomes entwined with the possibility of those instructions, and thus those potential actions, being managed and accessed in the same way as more static content. And, of course, especially compared to print, it is almost trivial to make corrections, changes and corruptions to electronically stored information.