 
Get Smart Fast

An analysis of Internet based collaborative knowledge environments for critical digital media autonomy

Joe Tojek PhD

Smashwords Edition 1

Copyright 2000-2009 Joe Tojek PhD

Learn about this author at

http://www.linkedin.com/in/joetojekphd

Abstract

This study examines the interactive features of Internet based collaborative knowledge environments from a democratic media perspective. Structuralist and semiotic media analysis methods are applied to develop a model of interaction for the investigation of social and cultural perspectives on knowledge construction and learning with digital media. Questions for problematizing common sense interface constructions are proposed for critical participation and to inform democratic educational practice.

Quotes

Joe has tackled a truly significant topic in his dissertation, namely, a grammar of interactive digital media. The ideas and approaches presented embody a substantial humanistic as well as scientific orientation and a value stance that fosters democratic media education.

-Michael Streibel, Program Chair University of Wisconsin-Madison, Educational Communications and Technology Program

At the 1998 IEEE Semiotics and Intelligent Systems conference, Joe introduced original semiotic models for analyzing interaction, content and user activity in digital media for social and cultural investigations. Mr. Tojek's work is well thought out and impressed me with a deep level of coverage of a wide field.

- Leonid I. Perlovsky PhD, Author of Neural Networks and Intellect, Inventor of Modeling Field Theory

Get Smart Fast

An analysis of Internet based collaborative knowledge environments for critical digital media autonomy

Chapter One: Background to the problem

Introduction

In this study significant technological developments and humanities computing techniques proposed for educational practice motivate an analysis of the collaborative knowledge environment program format (CKE) from a media education perspective. Whereas computers were originally conceived of as data processing devices, a classic collapse and reversal of the computer as data processor metaphor can be observed by acknowledging that in the techniques employed in these programs, representations of human activity and expression constitute the data and the focus of the underlying algorithmic processing. Constructivist technology and research which collapse psychological concepts and objects into the technical structures and functions of Internet hypermedia are examined.

This study proposes a formal descriptive interaction model for digital media; a collection of generic interaction codes that may be identified across the domain of electronic communications for the purpose of describing and analyzing the observable functionality of interactive systems from a user centered perspective. This approach applies close observation techniques to the analysis of two computer programs, Cognitive Flexibility Hypertexts (Spiro & Jehng, 1990) and the Knowledge Integration Environment (Bell et al., 1996). These sophisticated interactive systems combine Internet collaboration and cognitive hypermedia with content and user activity analysis. These approaches bear scrutiny from a media education perspective, which seeks to elaborate the claims made by corporate, government and educational interests with balanced critical knowledge and methods that enable users to develop a critical autonomy with regard to such systems in the context of their unique settings and agendas.

This framework privileges the diversity of active interpretations that audiences bring to their participation with a medium and shifts the locus of judgement and evaluation away from authoritative experts to the participants and practitioners who employ the systems in practice (Fiske, 1987; Masterman & Mariet, 1994). By focusing on investigative methods and techniques that foster the development of critical autonomy with regard to electronic media, the media education framework provides sound pedagogical and analytical approaches to address the growing complexities of current and future digital media practice.

Review of the literature

This section presents an overview and survey of the important literature that motivates the conduct of this study. The democratic media education and educational technology approaches are outlined by describing some of the conflicts and controversies that have emerged in the literature and by introducing aspects of the analytical perspectives employed in the field of educational communications and technology. Constructivist learning theory and the radical technology oriented approach that has been identified in the literature are then introduced before the collaborative knowledge environment format is presented by describing the systems and practices employed in the development of Cognitive Flexibility Hypertexts (Spiro & Jehng, 1990), and Knowledge Integration Environments (Bell et al., 1996).

The controversy: Interpreting educational technology

This section briefly introduces some of the conflicting claims in the debate between the proponents and critics of technology in education. While computer technology has always inspired passionate claims regarding its potential to "revolutionize" the educational process, the emergence of the World Wide Web has corresponded with an unprecedented resurgence of those claims from political, commercial and educational sectors. These questions are the source of heated debate among proponents and detractors of the various claims regarding the capabilities of Internet hypermedia systems and their application in education.

From the executive branch of the federal government comes President Clinton's declaration that, "Every single child must have access to a computer, must understand it, must have access to good software and good teachers and to the Internet" (U.S. Dept. of Education, 1996). Educational proponents have extended the previous claims of the benefits of multimedia and hypermedia technologies to the use of Internet hypermedia. The "vast range" of information is cited in claims that liken the Internet to a global encyclopedia for millions of school children. The conferencing capabilities are claimed to enable students to interact with remote educators, scientists and other professionals who are willing to share their knowledge and experience with today's students. Student design and creation of hypermedia documents is claimed to satisfy the active learning and knowledge construction requirements of the latest constructivist theories on learning and instruction (Bell et al., 1996; Lehrer, 1991).

Detractors oppose many of these claims with equally compelling arguments. At the heart of many critical analyses is the mediation of reality inherent in computer representations. The underlying abstract logic of computer programming, the two-dimensional screen of the display device, and the emphasis on textual and visual representations of knowledge, are all manifestations that are seen to result in a cognitive distance from the object of study (Monke, 1997). Education, particularly in early childhood, is seen as vulnerable to the effects of mediated experience, the beliefs being that, "The human and physical world holds greater learning potential," and, "Sensation has no substitute" (Oppenheimer, 1997). As an abstraction of direct experience, there are concerns about what is lost when computer learning replaces the rich context of experiential learning.

Traditional educational goals such as the pursuit of truth, the discovery of meaning and the generation of new ideas are seen as giving way to an emphasis on efficiency, measurability, rationality and progress (Monke, 1997). These established arguments have been supplemented by critiques of the updated claims regarding educational uses of the Internet. Of the "vast range" of information available, much of it is characterized as ill-informed and superficial, and the coverage of an idea or topic can at times seem to be shaped more by fashion or fad than significance (Oppenheimer, 1997). Regarding the communicative aspects of applications such as online chats, critics point out that correspondents are usually sitting alone and that, "The dialogue lacks the unpredictability and richness that occur in face to face discussions" (ibid., p. 2). Experts cite concerns that current interface hardware designs enforce individual use, which may encourage social isolation; in classroom environments with limited computing resources, "collaborative" projects that allow only one person to sit at the keyboard at a time often result in conflicts and competition between students (ibid., p. 3).

While the arguments presented on both sides may have merit and while even the most oppositional claims may each be supported by recourse to the diverse research on educational computing, they can each also be criticized for their tendency to oversimplify a complex debate. According to Tyner's (1998) review of educational computing research, a number of surveys find a preponderance of descriptive, single case studies that tend to overstate the significance of their results outside of their study context, and her review also showed a very limited number of studies that can plausibly link technology to student achievement (p. 72). This suggests that any claims that attempt to characterize technology practices and effects generically as either overwhelmingly positive or negative may ignore important considerations of the contextual aspects and contingencies required for critical understanding.

In seeking more sensitive methodologies Bromley suggests that, "... The impact (of a technological artifact) can vary with the context, according to the purposes of the humans involved in the particular situation" (Bromley, 1998, p.3). In an educational setting such contextual specifics can include, "...The culture of schooling, classroom pedagogy and curricular issues" (Tyner, 1998, p. 70). These contexts may be found to vary widely across educational settings and audiences. As Grint (1992) describes in his study of a computer based distance education course, "What is crucial, then, is to retain the ambiguity of technology in the sense that organizations and social relations are neither determined by technology nor are they determined by social agency; organizations are the contingent result of a permanently unstable network of human and non-human actors. Technology and its properties, then, are not fixed or determinate but contingent" (p. 155). To address these concerns and the increasing complexity of educational computing environments this study employs analytical perspectives from the media education framework introduced below.

The framework: Democratic media education

The democratic media education framework combines critical media analysis traditions with democratic and liberatory pedagogical practices in an approach to developing sophisticated communication skills and practices that privilege the cooperation and tolerance required to sustain democratic societies. The framework is employed to guide the application of the analytical methods and the interpretation of the study data and results. This important construct is introduced here by describing the educational goals, the pedagogical perspectives and processes, and the analytical concepts that are employed.

While modern institutional education research and practice can often be described as reflecting desires for efficient learning and the systematic replication of results (DeVaney, 1998), democratic and liberatory approaches foreground the importance of fostering the intellectual skills and curiosity that empower lifelong learners to critically interpret existing and future knowledge and the social and cultural interests that it serves (Dewey, 1902; Freire, 1970; Shor and Freire 1987). Educational goals of the media education framework are framed in terms of the development of a critical autonomy, or "...The independent capacity to apply critical judgement to media content" (Masterman & Mariet, 1994, p. 11). This approach shifts the focus from a privileging of the evaluative efforts of experts to a primarily investigative process of systematic group exploration which suspends judgement while fostering a diversity of interpretations.

Masterman summarizes the objectives of media education which he describes as, "Increasing our students' understanding of the media - of how and in whose interests they work, how they are organised, how they produce meaning, how they go about the business of representing 'reality' and of how those representations are read by those who receive them" (ibid., p. 29). These goals seek to provide students with interpretation skills that transfer to new communication experiences outside the classroom and an interest in critical practice that extends beyond the school years.

This framework also employs an inquiry based pedagogy and problem-posing processes to motivate learner desire and prepare them to create new knowledge and new approaches to significant problems in the world (Freire, 1970, p. 57). This view is structured in opposition to classical process models of education that seek to deposit existing knowledge into the minds of students and employ prescriptive pedagogical approaches in educational practice. Active and participatory approaches to investigating media analysis and production activities are integrated to inform and motivate each other in an inquiry based pedagogical approach (Tyner, 1998). In this view students explore the nature and origin of their own meanings and knowledge in what is called a, "Transactional pedagogy... that seeks to engage student understandings and sense of self quite directly" (Moore, 1991, p. 173). Problem posing dialogs and investigative processes are used to problematize or 'denaturalise' media practices in the process of illuminating and exploring student knowledge and interpretations (Masterman & Mariet, 1994). Active and participatory media analysis and production projects are employed to identify various production techniques and effects in a medium such as the construction of common sense or, what is termed the reality-effect, and to explore audience reception and response. The investigative process is privileged in this approach as well as the complexity and diversity of participant interpretations that arise in practice.

Finally, Masterman describes the media education framework as being organized around a group of key analytical concepts. Ideology, representations and codes are just three that are mentioned here. These concepts have been integrated from media and literary studies traditions and have been adopted widely in current communications research practice. Representation is considered a unifying concept of media education based on the fundamental assumption that media communications do not reflect reality so much as they re-present it. In media analysis projects representations, codes and ideologies are considered in terms of their organization in levels which are employed to provide useful but temporary structures for analysis.

In the media education framework these analytical concepts function as tools that are employed in the investigative process. This study employs these tools for investigating representations of interaction, content and user activity in the collaborative knowledge environment format. Figure 1-1 below provides a summary and organization of the educational goals, pedagogical processes and analytical concepts of the media education framework. The following discussion introduces the interests and focus of the field of educational communications and technology before describing some additional analytic methods employed in the study.

The field of educational communications and technology seeks to investigate, understand and inform the use of electronic media and technology for learning and education. This field fits within the democratic media education framework, while tending to focus on media that is developed for educational practice. As technological approaches and associated educational research continue to emerge from a range of authoritative interests, this community seeks to conduct analysis and interpretation in terms of best case theory and practice in education, media studies and technology. Analysis and evaluation seek to foster and validate the democratic and liberatory educational principles that require all classroom communications and practices to promote inclusive and equitable learning experiences for students (Apple & Beane, 1995). Research attempts to appraise the classroom ecology from an experiential perspective, articulating educational practices and media messages for implicit or common sense constructions; these may then be traced to the confluence of authoritative institutional and organizational interests that impact classroom practice. Practitioners in this field are quite often concerned with the intersection of constructed representations in educational media, educational practice and the requirements of culturally diverse learning communities in the classroom.

These differences in perspectives may be identified at the level of pedagogy through an examination of the educational communications and activities that are proposed for the learner. Thus, a drill and practice exercise, through its requirements of rote memorization and recall of previously presented facts, reflects a behavioral perspective and constructs a behavioral learner as a receiver of knowledge. Similarly, an educational documentary that proceeds to narrate a single view of events with an all knowing "voice of God" presentation style, can be characterized as a behavioral communication as well through the identification of empirical statements that imply factuality but actually refer to unobservable metaphysical realities that represent assumed cultural beliefs or worldviews. Even computer programming has been described as a behavioral practice in that the rational and procedural approaches that it requires are defined in terms of a hierarchy of system structures that already exist and require explicit conformity to their logic and procedural rules for success (Streibel, 1991).

The challenge: Constructivist technology

This section introduces constructivist learning theory and the radical technology-oriented constructivist research that have converged in the emergence of the collaborative knowledge environment format for education. Constructivist technology has been selected for analysis due to its sophisticated blend of constructivist learning theory and advanced technologies that propose to address a number of critiques and controversies from the literature. This initial overview will attempt to describe the convergence of constructivist learning theory and hypermedia design efforts that may be observed in collaborative knowledge environment programs from current practice.

American educational theory this century has developed an array of theoretical perspectives; authoritative interests in government and education research can be described as having gradually shifted their focus from objective, neutral scientific theories to more socially cognizant approaches such as constructivist learning theory. Behavioral and cognitive science approaches employ psychological concepts and constructs in the attempt to isolate mental aspects of the learner, such as their responses to various stimuli, and use this knowledge to design curriculum that satisfies desires for learning efficiency in educational practice. Cognitive science opened the "black box mind" of behaviorism with scientific approaches to the conception of generic schematic constructs and cognitive processes in the mind of the learner. A societal penchant for technology as progress, the experimental rigor of science, and robust research funding from authoritative interests have historically emphasized these perspectives in American education (Popkewitz & Shutkin, 1994).

Currently there are numerous examples of the human sciences revisiting their methods to address the social, cultural and historical aspects that were stripped away in experimental approaches. A 1998 issue of American Behavioral Scientist concerning the nature of social inquiry in a postpositivist era describes the methodological problem by stating:

A cardinal assumption of mainstream social science in the 20th century has been that human action and personality are simply a part of nature, to be explained by methods drawn from the science of nature. Then, when little predictive power is achieved or results seem trivial (or both), investigators conclude that the cure can only be more precise measurement and experimental rigor. In situations like this, it may not be possible to rethink one's methods without rethinking human nature, the very object of study (Richardson et al., 1998, p. 1).

In behavioral psychology and cognitive science approaches, the learner is constructed individually as receiving knowledge that exists independently and is possessed by others who present it to the learner, either personally or through an educational medium such as textbooks or electronic media. Today, the psychologically influenced education research can be seen as shifting the theoretical focus to integrate social concepts with cognitive scientific methods. Constructivist educational research proposes a theoretical perspective that expands individual cognitive psychological language to include a social context of communication and interpretation in the apprehension of human learning. Considering learning from this perspective requires recognizing learning as a social and cultural process of knowledge construction (Brown et al., 1989).

This discussion continues by focusing on constructivist research that has been termed, "A technology-oriented radical constructivist position" (Tergan, 1997, p. 269). This research emphasizes hypermedia based learning as benefiting from a perceived structural connection between the organization of information in hypermedia systems and the scientifically perceived psychological structures and processes of human cognition. These approaches seek to, "make knowledge visible," (Bell, 1997, p. 10) and when married with hypermedia environments for learning, constructing these knowledge representations explicitly in the interface becomes a design goal.

This research can be described as having expanded the use of objective, value-neutral language and methods in the study of individuals to the social contexts and the group processes of collaborative activity. Thus, in constructivist technology programs the unified subject constructions, the generic mental constructs and processes, and the psychological language of cognitive science may be identified, while the context of learning can be described as transformed from a consideration of generic individuals to interacting groups of generic individuals in networked hypermedia information systems. Next the collaborative knowledge environment format is described and the programs selected as sample data are introduced.

Collaborative knowledge environments are an example from current practice of constructivist technology that can be described as combining constructivist learning theory and cognitive psychology concepts with the hypermedia and social collaboration capabilities of the Internet. A basic metaphor of collaborative scientific or systematic inquiry is employed; these approaches attempt to create explicit representations of the actors, objects and processes involved in the real-world work settings of scientists and investigators and engage learners by constructing authentic subject positions for them to adopt. Aspects of expert cognition are foregrounded in the interface design to provide scaffolding and expert guidance for learners as they assume the roles created for them. In terms of form and function, the collaborative knowledge environment (CKE) format is here defined as computing environments that integrate some combination of cognitive hypermedia and Internet collaboration with automated content and activity analysis.

This discussion proceeds by introducing Internet hypermedia and then describing the approaches proposed in the educational applications of the Knowledge Integration Environment (KIE) and Cognitive Flexibility Theory (CFT). The World Wide Web is a technology that prescribes technical protocols to deploy hypermedia interfaces on the global Internet. Invented by scientists at the CERN laboratory in Switzerland to ease the tasks of distributed scientific collaboration, the web employs standard protocols that enable the global publishing of hypertext, multimedia information and sophisticated interactive software applications (Berners-Lee et al., 1995). Briefly, networked hypermedia can be described conceptually as multi-level web like meshes of information nodes (documents, pages, etc.) and arbitrarily associated relational links. Internet collaboration is typically described as shared access to content, authoring and communication resources which allow groups to interact and organize their efforts (Landow, 1997).
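
The node-and-link mesh described above can be pictured concretely. The following is a minimal illustrative sketch, not code drawn from the systems under study; all class and relation names are hypothetical, chosen only to show how arbitrarily associated relational links connect information nodes:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A hypermedia information node (a document, page or media object)."""
    node_id: str
    content: str
    # Arbitrary associative links to other nodes, keyed by relation label.
    links: dict = field(default_factory=dict)

class Hypermedia:
    """A multi-level mesh of nodes and arbitrarily associated links."""
    def __init__(self):
        self.nodes = {}

    def add_node(self, node_id, content):
        self.nodes[node_id] = Node(node_id, content)

    def link(self, src, dst, relation):
        # Links are directional and labeled; any node may link to any other.
        self.nodes[src].links.setdefault(relation, []).append(dst)

    def follow(self, src, relation):
        """Traverse a relation from a node, as a reader following links."""
        return [self.nodes[d] for d in self.nodes[src].links.get(relation, [])]

# A toy mesh: a case node linked to a thematic perspective node.
web = Hypermedia()
web.add_node("case1", "Patient history and presenting symptoms")
web.add_node("theme1", "Diagnostic perspective A")
web.link("case1", "theme1", "illustrates")
```

The point of the sketch is only that hypermedia structure is relational rather than hierarchical: meaning arises from the links a reader chooses to follow, not from a fixed sequence of pages.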

Cognitive Flexibility Hypertexts (Jacobsen & Spiro, 1995) and the Knowledge Integration Environment project (KIE) (Bell et al., 1996) are examples of CKE programs that are introduced and examined in the conduct of this study. Cognitive Flexibility Theory (Spiro & Jehng, 1990) considers the problems of advanced knowledge acquisition in complex domains and seeks to promote the process of considering evidence flexibly and from multiple theoretical perspectives; additionally, theoretical constructs, methods, evidence and interpretations must be presented in ways that are open to examination and validation by the community of interest. A case-based knowledge model is employed to describe the apprehension of evidence as it occurs in the world as captured by the investigator. In the practice of medicine for example, a patient with their history and their symptoms may represent a case and the knowledge of an expert practitioner is considered to occur in a complex, situated social context which defies simple rule based encoding. For the expert, no ultimate knowledge or endpoints in learning are thought to exist, rather the work occurs in a social context with others and the group knowledge and perspectives are dynamically and continuously brought to bear on situations that ideally result in thorough and negotiated understandings and collective achievements.

Constructivist technology approaches seek to parallel this activity in hypermedia environments where the underlying data is seen as representing the case evidence and inquiry involves analysis and interpretation as required in the investigative process. For example, in video based educational research a case based approach may require the selection of video segments from archives of classroom research video that support the interpretations drawn for a specific investigation. In such a scenario, researchers collaborate in the effort to analyze and interpret the underlying evidence and share information regarding the process and methods undertaken to enable peer validation of the work.

This overview has introduced the convergence of constructivist learning theories and hypermedia design efforts that may be observed in the collaborative knowledge environment format. It is proposed that the sophistication represented in the convergence of these technical, scientific and social approaches motivates an analytical approach that avoids oversimplification and seeks to provide participants and practitioners with critical knowledge and strategies for interpreting the complex techniques and issues that may be encountered. The next section introduces the structuralist and semiotic methods employed in this study which can be seen to converge in their efforts to advance the theory, practices and understandings of electronic and interactive media.

The methods: Structuralism and semiotics

In this study the application of structuralist and semiotic methods is proposed to provide detailed analytical power to the task of interrogating the sophisticated representations and techniques of the collaborative knowledge environment format. Structuralism and semiotics can be described as representing a range of complementary theories and methods concerned with the symbolic and highly coded nature of human communication systems. These methods share a basic tenet which states, where there is meaning there is structure (Fiske, 1982). More specifically, structuralism defines the interpretive act in the communicative process as one where meaning is a product of mechanism (Pettit, 1975). If the mechanism of articulation can be compared to a language, then an application of the linguistic model can proceed, "Without protest... in any area if there has not been a consideration of the possibilities which it opens for empirical inquiry" (ibid., p. 37). Articulating formal structure in communication practices is undertaken in attempts to relate identified formal elements to the culture in which they are found (Propp, 1968).

Structuralist and semiotic methodology for social and cultural analyses involves an application of the linguistic model to literary arts such as cinema and narrative (Barthes, 1957; Metz, 1974; Pettit, 1975; Scholes, 1982) and has also been applied to a range of cultural practices and expression such as video games, shopping malls, television programs and computer software (Aarseth, 1997; Andersen, 1990; Fiske, 1982). The many applications of these methods include analyses of the syntactic, semantic, and ideological or pragmatic aspects of the interpretation of meaning. Scholes (1982) describes the progression of methods in the interpretive tradition as follows, "Hermeneutic critics seek authorial or intentional meaning; the New Critics seek the ambiguities of 'textual' meaning; the 'reader response' critics allow readers to make meaning. With respect to meaning the semiotic critic is situated differently. Such a critic looks for the generic or discursive structures that enable and constrain meaning" (ibid., p. 110). Semiotic codes may be considered as interconnected multilevel meshes or web like networks shared by authors and readers that, "Enable their communicative adventures at the cost of setting limits to the messages they can exchange" (ibid., p. 110). From a semiotic perspective eliciting the generic patterns of how meaning is made takes precedence over determining what the meaning is.

Semiotic analysts may employ structural techniques in the conduct of their investigations, utilizing their descriptive power to elicit relevant patterns of code construction and choice for analysis. While in many ways unique, communications within a medium can be shown to share certain generic characteristics or codes which constitute their structure or form. The contribution of form to meaning, within the social context of the audience, may be explored by the careful application of semiotic techniques (Fiske, 1982). This perspective seeks to articulate the formal characteristics of a medium as a language that must be actively read for cultural meaning. DeVaney (1991) cites the difference in technical structures between cinema and television as implying the existence of a unique grammar, thus warranting the development of a separate framework for analysis.

In the study of signification it is generally agreed that a sign is defined as something that stands for something to somebody. In the Peircian model the parallel sign created in the mind of the person interpreting the original sign is called the interpretant (Fiske, 1990). The content that the sign stands for is termed the sign's object or referent. Saussure (1966) also split the concept of the linguistic sign to suggest that the separately conceived signifier and the signified are both arbitrarily and conventionally assigned meaning in different societies, cultures and languages. Semiotic interpretations attempt to identify and articulate formal aspects of the signification process and their impact on the interpretations that may occur.

The structural units of construction of a medium are referred to as signs, and the various ways they are employed is termed syntax. If within a syntax, specific patterns of use are established, they are called codes. Semiotics, then, calls for a description of the signs and codes that give meaning to the system (DeVaney, 1991). Peirce classified types of signs, creating categories of relations between a sign and its objects, which he described as iconic, indexical or symbolic (Danesi & Perron, 1999). An icon represents its object by resembling, replicating or simulating it in some way, such as the way a photograph resembles its subject. An indexical representation employs some form of indication such as a direct link to its object, and the common example given is that smoke is an index of fire. An entry in a table of contents is an index that points to its object's location in the book. A symbol is related to its object solely by convention, such as words in a language or letters in an alphabet. In a symbolic representation the sign user and the concept referred to are linked by cultural and historic convention. These categories are also considered in levels, with icons having a physical or sense based link, indices having a partial link and symbols having completely arbitrary links.

DeVaney's A Grammar of Educational Television (1991) is an example of a structural framework that may be employed in semiotic media analysis projects. Her proposed model for the grammatical analysis of television describes a series of increasingly detailed structures for analysis. Within a defined program format and segment, structural units such as sequence, shot and frame are specified for recording and analysis. Using trained observers and validation methods from the social sciences, specified segments are coded as data for analysis. This enables an investigator to utilize quantitative techniques in support of semiotic, cultural or other qualitative frameworks for analysis.
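DeVaney does not name a particular validation statistic, but chance-corrected agreement measures such as Cohen's kappa are a standard social-science check on trained-observer coding. The sketch below, with invented segment codes, shows the computation under that assumption:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two observers' segment codes."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement if the two observers coded independently
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two observers coding ten segments as 'seq', 'shot' or 'frame' (invented data)
a = ['seq', 'shot', 'shot', 'frame', 'seq', 'shot', 'frame', 'frame', 'seq', 'shot']
b = ['seq', 'shot', 'frame', 'frame', 'seq', 'shot', 'frame', 'shot', 'seq', 'shot']
kappa = cohens_kappa(a, b)
```

A kappa near 1 indicates agreement well beyond chance; values near 0 suggest the coding scheme or observer training needs revision before quantitative analysis proceeds.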

In subsequent work analyzing news and commercials presented to students at schools participating in the controversial Channel One program, DeVaney (1994) described and questioned the techniques used by the marketers for this captive audience of children. In this program, schools agreed to provide student audiences to content providers in exchange for free video equipment. The content included advertising created specifically for children, and the analysis identified cultural discourses highly questionable for an educational environment, including sexist, hedonist and consumerist messages presented by the advertisers during the course of the program. This powerful application of a descriptive framework demonstrates an important use for investigators concerned with critically analyzing the growing use of commercially sponsored media in educational environments.

In interactive media the analysis of formal structure has attracted sustained interest from a number of disciplinary perspectives. For instance, investigators in human-computer interaction seek to streamline the symbolic interpretation of graphical user interfaces by manipulating their structure. Heavy use of iconic representations and the tendency to utilize, rather than change, established interface codes dominate interface design (Shneiderman, 1990). Usability studies analyze changes in interpretation and use based on variations in presentation or conceptual designs. Similarly, analysts from literary theory such as Aarseth (1997), Joyce (1995) and Bernstein (1998) apply techniques from literary criticism, such as the analysis and discussion of the best existing work, to hypertext. In Bernstein's (1998) "Patterns of Hypertext" he states, "From observation of a variety of actual hypertexts, we identify a variety of common structural patterns that may prove useful for description, analysis and perhaps for design of complex hypertexts" (p. 1). In their effort to develop a richer vocabulary of hypertext structure, these analysts apply structural analysis techniques from a literary theory perspective.

Aarseth's Cybertext: Perspectives on Ergodic Literature (1997) employs a literary theory approach to describing and analyzing the interactivity of texts to develop a typology of textual communications for an aesthetic analysis. Interactive media typologies have been challenging, due to an emphasis on technical jargon or interface hardware that changes rapidly. Aarseth describes a typology of textual media whose terminology is, "Not grounded in computer industrial rhetoric (cf. hypertext, interactive, virtual, etc.) but purely on observable differences in the behavior between text and reader (user)" (ibid., p. 59). At this level, functionality described in the typology is freed from the complicating constraints of technological descriptions, as well as the descriptions of the appearance and characteristics of the content elements themselves (i.e. text versus graphics or animation, etc.).

Aarseth's investigation seeks to compare textual characteristics across both printed and electronic media by coding a range of textual communications using the descriptive variables and values of his typology. He examines texts from traditional literature and from ergodic literature, texts that require nontrivial reader effort beyond interpretation. His data set includes Moby Dick, electronic texts such as Joyce's Afternoon, and Internet based MUDs (multi-user dungeons) such as TinyMUD. Using an exploratory quantitative method called correspondence analysis (Greenacre, 1993), Aarseth is able to show that many characteristics of hypertext interaction are not unique to the digital medium and that some printed works exhibit characteristics that parallel the interactivity of electronic texts, such as the configurability of the I Ching. His work demonstrates the power of descriptive methodologies for historically describing and comparing the interactive characteristics of both printed and electronic textual expression.
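Correspondence analysis itself reduces a contingency table of texts by typology values to a low-dimensional map by taking a singular value decomposition of the standardized residuals from the independence model. A minimal sketch of the core computation follows; the toy table is invented and is not Aarseth's actual data:

```python
import numpy as np

def correspondence_analysis(table):
    """Row principal coordinates for a texts-by-typology-values count table."""
    P = table / table.sum()                 # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)     # row and column masses
    # Standardized residuals from the model of row/column independence
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    # Scale left singular vectors by row masses and singular values
    return (U / np.sqrt(r[:, None])) * sv

# Toy table: 3 texts scored on 4 typology values (feature counts, invented)
texts = np.array([[4., 1., 0., 2.],
                  [1., 3., 2., 0.],
                  [0., 2., 4., 1.]])
coords = correspondence_analysis(texts)
```

Plotting the first two columns of `coords` places texts with similar interactive profiles near one another, which is how printed works like the I Ching can land close to electronic hypertexts.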

The emerging field of computer semiotics is concerned with semiotic descriptions and analysis of program and interface design (Andersen, 1990; Brandt, 1993; Hasle, 1993; Winn et al., 1995). These investigations seek to answer questions about the nature of program creation and the impact of semiotic theory at every level from programming to interface construction. Hasle (1993) describes computer semiotics as, "...Concerned with developing a more systematical description of the interface level.... If any serious discipline concerned with the interface level is to emerge, then a systematical and unifying foundation must be established" (p. 2). Establishing this foundation in 1990, Peter Bogh Andersen published A Theory of Computer Semiotics, which describes the theory and characteristics of interactive and other computer based signs.

Andersen's approach is to examine and analyze computer systems as media. He describes seeing the computer as an empty expressive system as meaning, "That we see it as a 'palette' of expressive devices that the designer and user invest with meanings" (ibid., p. 131). By relating the design to other semiotic systems, such as work language or video games, the designer suggests interpretations that may or may not influence the user. He describes the designer's output in the interface as sign-candidates, some of which are accepted and become a part of the viewer's understanding, some of which are ignored, and some of which are reinterpreted and used by viewers in unforeseen ways. In this view then, the interface, "... Denotes a _relation_ between the perceptible parts of a computer system and its users" (ibid., p. 129). This relation is manifested in the expression of computer-based signs, which are signs, "Whose expression plane is manifested in the processes changing the substance of the input and output media of the computer" (ibid., p. 129). The interface then, is the collection of computer-based signs that are in any way experienced by a community of users.

Andersen (1990) seeks to inform system design in the workplace by systematically aligning the structures of information systems with the language used by workers in the execution of their work tasks. This pragmatic perspective foregrounds the social and political issues involved in the design of computer systems in the workplace. Andersen's design for a post office automation system involved the postal workers in the articulation of their work language and in the design of a system from their perspective for the accomplishment of their work tasks. In some European countries, workers and labor organizations have won the right to be fundamentally involved in the design and implementation of information technology in their workplaces. This approach reflects a democratic perspective on technological developments that balances power among all of the stakeholders and promotes socially equitable solutions that foster group cooperation and achievement.

Designers and theorists involved in interactive systems as complex as fully immersive virtual reality systems have described semiotic theory as important to their work (Biocca & Levy, 1995a, 1995b; Heim, 1993; Winn et al., 1995). In Internet hypermedia, a graphical user interface can transparently unite information, people, and programming from anywhere on the global Internet, creating complex and subtle interactions that cross geographic, social and cultural boundaries in unpredictable ways. The graphical nature of screen based GUIs and fully immersive virtual environments means that, according to Winn (1995), "...The signs that represent objects and events are completely computer-generated. This means they can appear in any form and may behave in any manner whatsoever" (p. 6). This highlights the importance of making explicit the hierarchy of abstractions that are constructed in interface representations.

Computer simulations provide an example where graphical output presents the results of underlying mathematical models while hiding the details of the assumptions made by the programmer in creating the models and representations (Streibel, 1991). The reductions and simplifications made in order to create the graphical output can be said to constitute a hidden layer of abstractions. Thus, a key representation for critical understanding is the explicit knowledge of the relationship of the content representation to reality and the underlying assumptions embedded in the algorithmic techniques applied in the creation of the representation.

Summary of the literature review

In summary, it is proposed that the sophisticated techniques and powerful interests of the constructivist technology research and development efforts motivate specific analytical efforts from the media education perspective. Identified here as the collaborative knowledge environment format, these programs combine Internet collaboration and cognitive hypermedia with user content and activity analysis and can be described as an intense integration of advanced technologies and techniques. Sponsored and developed largely through U.S. military and government supported education research, these sophisticated technologies and systems deserve critical scrutiny as they are proposed for educational practice by outside institutions and organizations with authority.

Media analysis efforts from television and computer research have provided impressive illumination of the concept of formal structure and its relationship to cultural and social practice. In educational settings the analysis of constructed representations in educational media has received deserved recognition as a potent tool for the participants and practitioners who seek to explore the relevant meanings and interpretations for their settings and agendas. In applying structuralist and semiotic methods to the analysis of interaction in the KIE and CFH collaborative knowledge environment programs, a generic approach is proposed which may serve to illuminate critical issues and questions for practitioners and participants in interactive media.

Problem statement

Given the application of collaborative knowledge environments for educational practice, this investigation proposes to examine these technologies critically and to articulate important issues, questions and implications for educational practitioners and participants. In adopting this approach the complexity of the debates surrounding the application of these sophisticated systems is respected by focusing on the development of critical knowledge and processes for practitioners and participants to make judgements suited for the specifics of their settings and agendas. The problem statement therefore is as follows:

The application of collaborative knowledge environment formats in educational practice demands an investigation from a critical perspective that seeks to problematize sophisticated interface constructions and generate critical knowledge and questions in terms of a democratic media education framework that promotes the development of critical autonomy for practitioners and participants with regard to interactive media.

Goals of the study

The following goals are presented to motivate the conduct of this investigation:

Goal 1: To analyze interaction in the KIE and CFH collaborative knowledge environment programs and to develop a formal descriptive grammar of interaction that may serve as a framework to organize and guide analytical projects in digital media.

Goal 2: To interpret the interaction codes identified in terms of the development of an organized approach to problematizing participant knowledge of common sense interface constructions in digital media programs for democratic media education practice.

Focus questions

The following focus questions are posed to frame and guide the investigation.

Question 1: What structural units or codes of interaction can be identified in the CFH and KIE programs?

What structures for analysis and codes of interaction can be identified and articulated for analytical projects in digital media?

Question 2: What domains of origin can be traced for the borrowed codes identified in the formal analysis of the KIE and CFH programs?

Question 3: From the originating domains identified in #2 above, what institutions and organizations are involved, and what relevant practices can be identified and described?

Question 4: How are the meanings of the borrowed codes transformed in their new use in the CFH and KIE programs?

Question 5: What assumptions and implications underlie the identified characteristics of the textual structure from a media education perspective?

Question 6: What methods and critical questions can be proposed to problematize participant knowledge and to foster critical autonomy for users of collaborative knowledge environment programs?

Delimitations

The delimitations of this study are shaped by the theoretical frameworks and methods selected and the conduct of their implementation. This analysis combines formal and informal techniques that are subject to important limitations, which are described here and addressed through their careful application in the course of this study. The shortcomings and limitations of structuralism and semiotics have been thoroughly explored in a subsequent movement termed post-structuralism, which has produced critiques focused on three areas of concern: the empirical nature of the method, the influence and bias of the analyst, and the isolation of the sign from the context of signification (Seiter, 1992). The attempts undertaken to minimize these concerns in this study are described in the following discussion.

First, structuralist and semiotic methods have been selected to provide rigorous descriptive power to the object of interest before broader models are applied in subsequent studies. The recent emergence of the format and the relative sophistication of the objects of analysis suggest this initial approach to inquiry. The structural approach as applied is not intended to be scientific, but in this context it does allow an organized analysis that goes beyond ad hoc observations (Pettit, 1975). Therefore, results must be considered in terms of their ability to expand the context of discovery by introducing new theoretical objects and ways of approaching a given area of exploration. This approach seeks to provide a deeper understanding of the area of study as opposed to providing a mechanism to predict some future state. Also, any semiotic or structural analysis must be considered as necessarily partial and incomplete, since creating a model or structure requires a reductive process that excludes important aspects of the object of study. In this study the models employed are considered provisional and are subject to modification as necessary.

The second area of concern regards the limitations and biases of the analyst. Research that employs the theoretical perspectives of the interpretive framework assumes the presence of implicit biases associated with the investigator's membership in social and cultural groups (Guba & Lincoln, 1989) and requires explicit acknowledgement by the investigator (Banks, 1998). Knowledge construction as a social process is seen as connected to multiple group affiliations and social statuses that influence the perspectives and questions undertaken by the researcher (p. 7). The author of this study is a thirty-something white male working toward a Ph.D. in Educational Communications and Technology at the University of Wisconsin-Madison, who has over fifteen years of educational and commercial experience working with computers, video and Internet hypermedia technology as well as studying the social, cultural and educational implications of electronic media. From a perspective that recognizes a culture of information technology and technical knowledge, I would fit into Banks' (1998) typology of cross-cultural researchers as an indigenous-insider, and more recently I have experienced the perspective of the indigenous-outsider as my work has shifted from the technical to the social and critical.

Finally, by adopting important tenets from the media education framework to guide and shape the analytical efforts of the study, I seek both to address the limitations of my own perspective and to respect the contingency of signified meaning in cultural practice pointed to by the post-structuralists. Thus, the formal structures identified in the analysis are interpreted in terms of processes for critical practice which shift the locus of interpretation from the analyst to the participants and the specifics of their settings and agendas. This study avoids interpreting meanings from specific texts but seeks to provide entry for others to explore the meanings produced in their own interactions with and relationships to these and similar texts.

Organization of remainder of dissertation

The remainder of this dissertation study is organized as follows. Chapter two presents the theoretical construct employed in the study. The democratic media education framework for interpretation is introduced as well as the theories and methods of the methodology that is employed. Chapter three presents the methods, including the interaction structure and the pilot study analysis of the sample data. The pilot study includes the code set analysis of CKE's, as well as initial syntagmatic and paradigmatic analyses from Cognitive Flexibility Hypertexts (Jacobsen & Spiro, 1995) and the Knowledge Integration Environment (Bell et al., 1996). Chapter four expands the pilot study and organizes and presents the results of the analysis. The interaction model is revisited and revised in light of the insights gained in the analysis. In Chapter five the implications of the analysis and results are discussed in terms of the media education framework. Critical knowledge and questions for practitioners and participants are proposed and organized in a problem posing process for educational practice. Chapter five also examines the methodology for shortcomings and proposes future approaches to inquiry based on the results of this investigation.

Chapter Two: Theoretical construct

In this study an interpretive methodology based in linguistics and anchored in the media education and analysis tradition is assembled. Structuralist and semiotic methods are employed to identify and articulate observable interaction codes in digital media. The implications of the identified formal structures are discussed in terms of the democratic media education framework and are presented in the form of a problem posing process that seeks to foster critical autonomy for participants. The theoretical construct is outlined next by describing the specific aspects of the media education framework and the structuralist and semiotic methods employed in the conduct of this study.

Democratic media education framework

Although the media education framework introduced in Chapter one guides this investigation in a number of important ways, the primary impact is to shape the form of the study analysis and results for application by practitioners and participants. This is achieved by employing the proposed theoretical model of interaction not as a model for judgement but instead as a tool for developing a critical framework for practice. This approach works to minimize the imposed cultural values of the author and aligns this study with the stated principle that media education is primarily investigative (Masterman & Mariet, 1994). Important objectives of this principle are, "...To produce well-informed citizens who can make their own judgements... Encourage students to explore the range of value judgements made about a given media text and to examine the sources of such judgements (including their own) and their effects" (ibid., p. 54). This study attempts to address the need for effective approaches to interpreting sophisticated interactive media formats through the development of robust and accessible methods that may be employed in critical practice.

The media education approach to analysis and interpretation identifies central objects of interest and employs critical questions to motivate inquiry and investigation. Three critical areas of investigation for a given medium include the programs, the contexts and the audience (Buckingham, 1991). The following discussion describes the specific aspects of these areas that provide the methodological framework for this investigation. In approaching an analysis of the programs of a given medium, the framework proposes inquiry based processes that include identifying observable formal and structural properties of the text, identifying contexts of production and consumption, and articulating the techniques employed in the construction of common sense or reality. The focus of this study is on the development of a formal descriptive model of interaction in digital media which is presented in Chapter three. In the following section on structuralism and semiotics the methods for identifying formal structures and their relations in a program are described.

An analysis of the formal structure of a communication may also include an articulation of the characteristics of an implied receiver of the message, which is called the textual structure (Fiske, 1987). An articulation of observable characteristics of the textual structure can be used to interpret ideological or cultural themes in a program. For example, drill and practice software that employs a feedback mechanism to guide the learner toward a pre-specified goal can be said to construct a behavioral learner (Streibel, 1991). In this program format the learner's behaviors are, "Shaped by an external, mechanical process," (ibid., p. 291) which indicates a process model of education that can be interpreted in terms of an ideology of control (Freire, 1970). Textual structure may be appraised in the attempt to articulate how and why socially and culturally constructed artifacts position their users as they do and to explore the implications for teachers and learners in educational practice.
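The "external, mechanical process" that constructs a behavioral learner can be pictured as a simple mastery loop. The sketch below is illustrative only; the mastery criterion, items and scripted responses are invented:

```python
def drill(items, answer_fn, mastery=2):
    """Re-present each item until it draws `mastery` consecutive correct
    responses. The pre-specified goal, not the learner, ends the loop."""
    attempts = {}
    for prompt, correct in items:
        streak = 0
        while streak < mastery:
            attempts[prompt] = attempts.get(prompt, 0) + 1
            # Feedback mechanism: only agreement with the stored answer advances
            streak = streak + 1 if answer_fn(prompt) == correct else 0
    return attempts

# A scripted 'learner' that errs once on the second item (invented data)
script = iter(['4', '4', '7', '9', '9'])
log = drill([('2+2', '4'), ('4+5', '9')], lambda prompt: next(script))
```

Note how every branch in the loop is keyed to the program's stored answer: the structure leaves no path for learner-initiated meaning, which is precisely the textual positioning the analysis above describes.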

Educational media practices in the classroom have been criticized for presenting unbalanced and inequitable portrayals of exclusive or "dominant" cultural representations (Ellsworth, 1987). Currently, the application of unified subject constructions is considered problematic outside of their narrow and specialized research domains. For example, research has shown that applying psychological constructions and experimental results outside of their research contexts, such as in educational settings, is not only patently inaccurate but also constitutes a socially inequitable oversimplification that should be avoided (Magnusson, 1975).

In digital media, participation can be described as resulting in multiple, simultaneously realized constructions, including observable textual structure which is here termed the digital media textual machine, and a social subjectivity actively constructed in the mind of the participant (Fiske, 1987). Social subjectivity is largely beyond the scope of this study which focuses on the textual structure and the important functional properties of interaction that have been identified across a range of digital media applications. A digital media textual machine may also include an invisible but empirically constructed computable structure managed by the software program. In technologies such as user profiling and personalization, computable structure may be employed in the generation of content and interface selections for the user. In current practice these techniques are largely invisible to the user and require recourse to the technology literature for analysis. These techniques are described in more detail in Chapter four.
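A hypothetical sketch can make the invisibility of such computable structure concrete: a profile silently accumulated from clicks and then used to rank the content a user is offered. All names and data here are invented for illustration, not drawn from any actual CKE program:

```python
from collections import defaultdict

class Profile:
    """Invisible computable structure: tallies topic views per user."""

    def __init__(self):
        self.weights = defaultdict(int)

    def record_view(self, topic):
        # Called on every click; nothing in the interface reveals this tally
        self.weights[topic] += 1

    def rank(self, items):
        """Order candidate (title, topic) pairs by accumulated interest."""
        return sorted(items, key=lambda item: self.weights[item[1]], reverse=True)

profile = Profile()
for topic in ['physics', 'physics', 'poetry']:
    profile.record_view(topic)
feed = profile.rank([('Ode to Autumn', 'poetry'), ('Quantum Notes', 'physics')])
```

Even in this toy form, the user sees only the reordered feed, never the weights that produced it, which is why analysis of such techniques requires recourse to the technology literature.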

Context refers to a number of important aspects that exist outside of the program itself, broadly defined here to include the technologies of distribution, production and reception for a given medium. These may be considered from social, political or historic perspectives, or by identifying the interests of institutions and organizations with authority (DeVaney, 1998). For example, in this study technologies identified in the collaborative knowledge environment format are traced to their original domains of application in military and government intelligence and communication projects (see Chapter three). Examining the complicated contexts of communication is regarded as crucial to the process of interpretation and is required to situate observed practices accurately within the social and cultural contexts of their occurrence. For instance, in the drill and practice example presented above it has been shown that in certain learning contexts, such as beginning skill-building, performance gains from the use of drill and practice techniques have been documented (Streibel, 1991). Thus, for a specific audience and learning objective an application of the drill and practice format may be appropriate.

Finally, the audience is considered as an important area of inquiry in the media education framework. This complex and heavily researched area includes many aspects of media reception such as audience responses, subjectivity, viewing contexts and behavior (Masterman & Mariet, 1994). This study will consider educational viewing contexts primarily and the detailed approach to formal structure undertaken proposes to provide robust analytical objects to facilitate investigation in these and additional research areas. In informing an inquiry based pedagogical approach, this study seeks to provide participants with tools to entertain critical questions such as, "How do we read and respond in collaborative knowledge environments?" Because answers to important interpretive questions are both personal and social, this study focuses on methods of investigation that seek to empower participants to explore and map the complex terrain of their individual and communal meanings.

Figure 2-1 below organizes the objects of inquiry described here and some of the critical questions employed in the media education framework. This study proposes to elaborate and expand this framework to the domain of digital media by interpreting the formal structures identified in terms of these critical areas of investigation. In the next section, the structuralist and semiotic methods employed in the formal analysis of interaction in the collaborative knowledge environment format are described.

Formal structure in digital media

The theoretical construct of this study is informed by the work of analysts of traditional media, such as literature, television and film, who have developed and applied structuralist and semiotic frameworks and perspectives (Barthes, 1977; DeVaney, 1991; Fiske, 1982; Metz, 1974; Morrow, 1975; Propp, 1968). In interactive media, literary theorists have contributed important examinations of the concept of text and its expression in electronic forms (Aarseth, 1997; Bolter, 1991; Joyce, 1995; Landow, 1997). Semiotic perspectives have also been applied to the analysis and design of a range of interactive systems from the World Wide Web to immersive virtual world environments (Andersen, 1990; Arnold, 1995; Brandt, 1993; Hlynka, 1989; Lee, 1996; Ma, 1996; Tucker & Dempsey, 1991; Winn et al., 1995). The range of disciplines and methodological perspectives represented suggests the perceived potential of the application of structuralist and semiotic methods to analytical problems in digital media.

In social and cultural investigations, meaning is often interpreted in terms of what are called signifying orders or cultural discourses. These are defined as the shared, dynamic and interconnected patterns of meaning that enable and constrain cultural practice and expression for individuals and groups (Danesi & Perron, 1999). A micro semiotic view examines the selection and organization of signs in a given text and attempts to identify and articulate connections and relationships to wider cultural discourses or signifying orders that may exist in the culture. In this view, meanings from larger signifying orders are seen as projected or collapsed into specific examples of communication practices or texts. In the example of the drill and practice format in the previous section for instance, the articulation of a behavioral learner in the textual structure may be described as a projection of the cultural discourse of behavioral psychology.

An investigation involves identifying minimal signifying units, called codes, in a text and investigating what something means, how it means what it means, and why that meaning may be interpreted (ibid., p. 291). Because codes assume meaning within systems of structural relations, the various structures of selection, combination and organization must be examined. In the following three sections, the important structures for analysis and structural relations are introduced. First, syntax, paradigm and analogy are described as basic structural relations that characterize all codes. Then, the analytical concept of organizing levels is introduced before describing the program format structure and its application in digital media analysis projects.

In structural and semiotic analysis it is commonly held that there are multiple, simultaneous organizing levels or dimensions involved in the interpretation of meaning. This is termed the dimensionality principle which is defined as reflecting, "The interconnectedness among the multifarious dimensions of representation and signification" (ibid., p. 95). For example, Barthes (1977) described denotation and connotation as analytical concepts employed to discern the multiple levels of meaning that may be interpreted in photographic images.

Denotation is the meaning interpreted in a first order of signification, or the explicit concept of what the content of the picture is. The second order signified, or the connotation, is the additional meaning ascribed through the relation of the original sign to a single concept that may have conventional or ideological meaning. For example, while a graphic icon on a computer desktop may denote a trash can, the second order connotation assigned in the grammar of the user interface exists in the practical interpretation of the icon as the location for files that are to be removed from the system.

In Television Culture, Fiske (1987) describes the following three "...Arbitrary and slippery," (p. 4) levels that can be applied to interpreting codes in electronic media:

Level one is the reality level which consists of the social codes through which we make sense of the actors, objects and processes of the world. Appearance, behavior, sound and expression make sense according to our shared cultural understandings at this level. Reality is encoded technically in a medium and the technical characteristics of the medium function to structure and shape communications.

The conventions employed in a medium make up level two, the conventional representational level. Narrative, character, dialogue, knowledge, etc. may be represented at this level.

Communication at the representation level makes sense through explicit and implicit relationships with a transparent network of shared cultural and political beliefs, worldviews or discourses that make up the third level which is called the ideological level.

Fiske (1990) describes the coding of common sense meaning as follows:

...Ideological codes work to organize the other codes into producing a congruent and coherent set of meanings that constitute the common sense of a society. The process of making sense involves a constant movement up and down through the levels of the diagram, for sense can only be produced when "reality," representations and ideology merge into a coherent, seemingly natural unity. Semiotic or cultural criticism deconstructs this unity and exposes its "naturalness" as a highly ideological construct (p. 6).

Thus, the use of organizing levels in an analysis allows us to discern multiple, simultaneous levels of meaning and the complex interactions that occur in the experience of interpretation. In the next section, the basic structural relations of syntax, paradigm and analogy are described.

Structural relations: Syntax, paradigm and analogy

In the linguistic model, message signs and codes are considered meaningful in terms of the system of structural relations that enable and constrain their interpretation. The structural relations that are employed in this study and are described here are syntax, paradigm and analogy.

Syntax refers to the organization of signs and codes that are presented to the viewer within a given communication. If the signs are organized according to patterns that may be found to repeat in practice, these patterns are called codes (DeVaney, 1991). Syntagmatic meaning refers to the patterns of combination and organization of the signs over time and the meaning that may be derived from those patterns. For example, melody in music is based on recognizable sequential patterns of notes. Each note in the temporal string is also meaningful due to its selection from a paradigm of choices; this is called paradigmatic meaning.

Paradigmatic meaning is produced through the selection of structural units and syntax from a set of code choices, some of which may have meaning previously established from their use in other media. In a video program a slow dissolve between shots may be employed to represent the passage of time, borrowing that meaning from the established codes of Hollywood film. In this case paradigmatic meaning must also be considered in relation to the other transition codes from the set of possible transitions that were not employed at that instant. Fiske (1982) sums up the choice aspect of paradigmatic meaning when he says, "Where there is choice there is meaning, and the meaning of what was chosen is determined by the meaning of what was not" (p. 58). Signs signify paradigmatically based on their distinctiveness within a range of available choices, and those choices are differentiated from one another by some minimal feature or characteristic of the signifying unit.

Analogy is the third structural relation employed to describe the way a sign or sign type can substitute or replace another in a certain way (Danesi & Perron, 1999, p. 91). An analogical relation describes a mapping of a concept from a source domain to a target domain which results in the production of a new abstract concept. In this process the conceptual transformations that may be observed may be significant in an analysis and are considered to be a force of change in sign systems (ibid., p. 92).

Sometimes, what were initially metaphoric comparisons may become collapsed or permanently displaced through an analogical relation that results in a shift of the meaning that is interpreted. For example, in cognitive science discourses conceptual objects from computer technology are employed metaphorically to describe the workings of the human mind. When these concepts are transferred literally to the design of hypermedia information systems for learning in the constructivist technology research, the metaphoric language is collapsed into the formal structures of the design and the transformations that can be articulated have important educational implications. This important relation will be demonstrated below in the section on interaction in constructivist technology.

When certain rules or conventions are found to guide the structural coding of signifying units in a medium, this is called the grammar (Fiske, 1982). For instance, in a graphical user interface, or GUI (pronounced gooey), textual signs across the top of the visual frame are employed to signify the presence of drop down menus of choices related to the word displayed. A number of programs from current practice are found to share combinations of word choices and their organization on the screen. For example, the word 'edit' always comes to the right of the word 'file' and the menu displayed always contains the choices cut, copy and paste. This convention can thus be said to constitute a shared code in a socially constructed grammar of the GUI that structures communication for both authors and participants. Describing the grammar of a medium, then, suggests the application of the linguistic model in the attempt to identify and articulate the formal structures that emerge in practice to organize the communication codes of the medium.
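The shared grammar described above can be sketched as a simple data structure. This is an illustrative sketch only: the menu names and ordering reflect the convention discussed in the text, not the API of any actual windowing toolkit.

```python
# A minimal sketch of the shared GUI menu grammar described above.
# The menu names and item lists are the conventions named in the text;
# everything else is a hypothetical illustration.

GUI_GRAMMAR = {
    # Top-level menu words, ordered left to right across the visual frame.
    "menu_order": ["File", "Edit", "View", "Help"],
    # Conventional choices signified by each menu word.
    "menu_items": {
        "File": ["New", "Open", "Save", "Quit"],
        "Edit": ["Cut", "Copy", "Paste"],
    },
}

def follows_convention(grammar):
    """Check two conventions of the shared grammar: 'Edit' appears to the
    right of 'File', and the Edit menu contains cut, copy and paste."""
    order = grammar["menu_order"]
    edit_after_file = order.index("Edit") > order.index("File")
    has_triad = {"Cut", "Copy", "Paste"} <= set(grammar["menu_items"]["Edit"])
    return edit_after_file and has_triad

print(follows_convention(GUI_GRAMMAR))  # prints: True
```

A program that satisfies such checks participates in the socially constructed grammar; one that violates them deviates from the shared code and risks misinterpretation by participants.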

Program formats and code set theory

Identifying categories of existing program formats is an important step that is required in media analysis applications. In DeVaney's (1991) grammar of educational television she states, "Format codes supply the infrastructure in which to examine syntax" (p. 269). Metz's (1974) code set theory of film establishes the format as a powerful descriptor for tracing codes and interpreting meaning in a medium. Formats, which are often called genres in film theory, serve to describe program categories that exist in practice in a medium.

Code set theory introduces the idea that codes are constructed within general domains of influence, such as general culture, cinema and theater. For example, lighting conventions in film typically share characteristics with everyday experience, or what Metz calls the domain of general culture. Codes of lighting that deviate from this domain are commonly used to suggest surreal or extraordinary events and are often associated with specific film formats, such as horror or science fiction. Metz believed that the codes identified in a filmic sequence should be traced to their domain of origin in an analysis and that shared codes may have meaning that can be described in relation to other domains of use or origin. Code set theory allows the identification and analysis of relationships between domains of influence and specific formats in a medium.

In an example from digital media, Lee (1996) describes the hybrid codes of the electronic mail format and identifies conventions from oral and written communications domains in current practice. She states that, "With its immediate electronic transmission of typographic text, e-mail stands midway between the telephone call and the letter" (ibid., p. 277). The codes of electronic mail are shown to blend characteristics from their domains of origin, such as the informal qualities of verbal conversation and the formal structure of the business memo's to, from, subject and date fields. Lee's analysis found that the informal communication style of electronic mail messages, such as the absence of formal greetings and closings, may be influenced both by formal characteristics encoded into the application, such as the business memo fields, and by the immediate transmission style of the telephone conversation.

This study proposes a digital media program format structure which defines separate activity, content and interaction structures to isolate these important aspects for analysis. Content is considered separately from interaction and activity to allow a consideration that distinguishes the interactive functionality of a program from the content provided through the use of the program. This important distinction allows the articulation of a distinct grammar of interaction. The activity structure provides a higher level social view and the interaction structure describes a more detailed collection of functional-descriptive codes that may be articulated. Figure 2-2 below shows the structures and their organization defined for the analysis of program formats in digital media.
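The three-part program format structure can be sketched as follows. This is an illustrative sketch only: the field names are the author's analytical categories from the text, while the example values are hypothetical placeholders, not the study's actual coding of any program.

```python
# A minimal sketch of the proposed digital media program format structure,
# with separate activity, content and interaction structures. The example
# values below are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class ProgramFormat:
    name: str
    # Higher level social view of what participants do with the program.
    activity: list = field(default_factory=list)
    # Subject matter provided through use of the program, considered
    # separately from the program's interactive functionality.
    content: list = field(default_factory=list)
    # Functional-descriptive codes of the program's interactive behavior.
    interaction: list = field(default_factory=list)

cke = ProgramFormat(
    name="collaborative knowledge environment",
    activity=["exploration", "authoring", "collaboration"],
    content=["expert knowledge models", "student arguments"],
    interaction=["hypertext linking", "web publishing", "messaging"],
)
print(cke.activity)
```

Keeping the three structures as separate fields mirrors the analytical move the text describes: content can be examined independently of the interactive functionality through which it is delivered, which is what permits a distinct grammar of interaction.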

Activity formats in digital media

In examining program formats in digital media, an activity based perspective provides a useful set of categories for consideration. Because interaction in digital media is extremely diverse, this initial classification allows us to discern the salient characteristics for an analysis. An activity based perspective is also important because it centers the analysis on the user. This focus is crucial in the study of interactive media, which by definition requires user action, and a similar perspective proved useful in Propp's (1968) formal analysis of folktales.

Propp's approach isolated acts of characters in relation to their significance in the course of the action. He based his method on the observation that, "Characters of a tale, however varied they may be, often perform the same actions" (ibid., p. 20). From this perspective, a limited number of stable, repeatable functions could be identified within the folktale format, providing the descriptive rigor required for satisfactory classification and analysis efforts. In this study activity and content are separated for the purposes of analysis, and an initial consideration here proposes three major functional activity categories in digital media. A more detailed terminology for considering interaction is proposed in Chapter Three.

Based on an initial examination of the sample data and the literature concerning hypermedia and the Internet, a large number of analysts and practitioners appear to be in agreement that exploration, authoring and collaboration represent important categories of activity in Internet hypermedia (Aarseth, 1997; Bell et al., 1996; Bolter, 1991; Jacobsen & Spiro, 1995; Landow, 1997; Lehrer, 1991; Winn et al., 1995). These social-technical categories use social terminology in describing technical characteristics and are employed in this study to discern the major functional activity categories in digital media. Joyce (1995) uses structural characteristics to distinguish between two major categories of hypertexts which he names exploratory and constructive.

Exploratory hypertexts suppose an audience that is provided with navigation functionality and tools that assist in the process of exploring an information space. In Internet hypermedia for example, individual web pages contain multimedia information and may include hypertext links that can be selected to move between or within documents. Mechanisms to assist in navigation are included in the web browser and may also be designed into the individual web sites that are encountered. An individual site may provide a content index, map, search function or structured menu of choices to guide the user. The browser maintains a history of the pages viewed and allows the user to navigate forwards and backwards in the list, or to view the history and select a destination directly.

Joyce's second category of constructive hypertexts supposes an author, requiring, "... A capability to act: to create, change, and recover particular encounters within the developing body of knowledge" (ibid., p. 42). Here termed authoring, this category distinguishes the ability to extend content or interaction capabilities of a system through the creation of multimedia content or interactive functionality. In Internet hypermedia, the authoring function is extended by the ability to publish the work created globally, making it accessible by anyone with access to the Internet.

The third major category is employed to represent the interpersonal communication capabilities enabled in Internet hypermedia. This category is termed collaboration and it is defined here to include interpersonal communications functionality such as synchronous or asynchronous messaging and conferencing, document sharing, and real-time remote interface control. This category is regarded as important in the literature and it provides the focus for discrete fields of research such as computer-mediated communications and computer supported cooperative work. These functional activity categories are employed in the next section in an initial consideration of the formal structures identified in the Knowledge Integration Environment program.
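The three functional activity categories can be sketched as a small classification scheme. This is an illustrative sketch only: the function names are hypothetical examples of Internet hypermedia functionality, not features drawn from any specific program in the study data.

```python
# A minimal sketch of the three major functional activity categories,
# mapped to hypothetical example functions of Internet hypermedia.

ACTIVITY_CATEGORIES = {
    "exploration": {"follow_link", "search", "view_history", "browse_map"},
    "authoring": {"create_page", "edit_content", "publish"},
    "collaboration": {"send_message", "share_document", "remote_control"},
}

def classify(function_name):
    """Return the activity category an example function belongs to,
    or None if it falls outside this initial classification."""
    for category, functions in ACTIVITY_CATEGORIES.items():
        if function_name in functions:
            return category
    return None

print(classify("publish"))       # prints: authoring
print(classify("send_message"))  # prints: collaboration
```

In practice a single program feature may serve more than one category at once (publishing, for example, both authors content and enables collaborative exchange), which is why the text treats these as an initial classification rather than an exhaustive typology.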

Constructivist technology and digital media interaction

In this section the theoretical construct is elaborated further by demonstrating an application of the methods on a selection of the sample data. The activities from constructivist learning theory are introduced before examining a sample program from the Knowledge Integration Environment (Bell, 1997). Using the digital media activity categories described above, activities from the discourse of constructivist learning theory are identified as projections in the construction of the KIE program. Finally, a brief consideration of these structures examines their transformation in the formal structures of the medium and the implications that may be considered from a media education perspective.

The constructivist learning theory research proposes integrative approaches to cognition and learning that emphasize active learning processes and the social contexts in which they occur such as knowledge construction (Resnick, 1989), cognitive apprenticeship (Collins et al., 1989) and knowledge integration and scaffolding (Bell, 1997). This important framework attempts to redress the shortcomings of the process model of education, where knowledge is seen as abstracted from authentic practice into the cultural context of a school where it becomes decontextualized and formalized for efficient and replicable transfer to learners.

Constructivist pedagogy situates knowledge and learning in the context of the authentic activities of a culture and describes learning as a process of enculturation (Brown et al., 1989). Learners are conceived of as active and communicative participants in social contexts designed to facilitate knowledge development and discovery through activities such as communication, collaboration, design, construction, and exploration (Brown et al., 1989; Resnick, 1989; Lehrer, 1991). It is important to note here the similarities between the terminology employed to describe activity as conceived in constructivist learning theory and the social-technical activity categories of digital media introduced above.

In the Knowledge Integration Environment research, investigators and designers describe their explicit attempts to make expert and learner knowledge visible for K-12 science instruction (Bell, 1997). Hypermedia design that purports to parallel or mimic human knowledge structures is referred to here as cognitive hypermedia. A suite of web based applications, called the KIE, is provided to enable students to interpret, construct and evaluate arguments based on information and evidence that is developed and provided for the students or discovered by the students on the Internet.

In the SenseMaker module of the KIE, a software program designed to model and support scientific argumentation, the goal of making knowledge visible is supported via three techniques (ibid., p. 4). First, the program is used by the researchers to model the scientific knowledge of expert or historical scientific arguments. This requires a formal organization of existing evidence and arguments into a web based hypermedia design that seeks to mimic the mental schemas and cognitive processes of an expert engaged in scientific inquiry. In this effort the authors employ a configurable template to isolate and order key concepts and evidence and make their relationships and interconnections explicit through the use of hypermedia linking and organization techniques. Students explore these hypermedia knowledge models in the discovery and research phases in the classroom.

Second, individual or group authoring is enabled through the provision of tools that allow students to build their own hypermedia representations of scientific arguments. Student claims are argued and supported through the presentation of supporting research, evidence and data. The use of the same configurable template employed to model expert knowledge guides student practices of knowledge construction in this phase. Finally, the collaborative exchange and evaluation of arguments is enabled through the ability to web publish student arguments for discussion. Learners explore and evaluate the arguments of others in their class before engaging in a group process to bring the various perspectives together for discussion. A stated goal of the process is that, "Particular conceptual and epistemological ideas of the groups are made visible and can easily become productive topics of conversation" (ibid., p. 4). As learners engage in discussion and debate of the claims and evidence presented, cognitive reflection practices are promoted and encouraged.

Identifying formal structures allows us to consider how the activity concepts from constructivist learning theory are projected and transformed in the construction of the KIE program. Figure 2-3 below aligns these concepts and structures to illustrate the important relationships that have been identified.

For example, the exploration of hypermedia based expert knowledge models may be identified as a projection of the concept of cognitive apprenticeship in the KIE program. A semiotic analysis could identify an analogical structural relation between the constructivist learning theory discourse and the formal structures observed in the program. By citing relevant research from learning with hypermedia, the transformation of the concept from the originating domain to its realization in the text could be described as a problematic oversimplification that is implicit in knowledge representation efforts that attempt to mimic mental phenomena through the use of hypertext structure (Tergan, 1997). A technical worldview may be implied in a digital media textual machine that constructs an objective explorer in a hypertext knowledge space of expert knowledge schemas and cognitive processes.

Paradigmatically, an analysis may consider the implications of selecting the exploration of hypermedia knowledge models over other approaches such as peer mentoring that have been found to support cognitive apprenticeship. In considering the syntagmatic dimension, the sequential process of exploration, authoring and collaboration in the KIE program may be seen as constraining when compared to actual inquiry which typically involves a great deal of unspecified exploration and unsequenced repetition that may be structured out of the formal representation presented.

Additionally, a media education perspective may consider how the use of expert knowledge models works to naturalize the rational logical representation of knowledge as common sense (Masterman & Mariet, 1994). Questions for student exploration may include: Who are the interested institutions and organizations with authority behind the production and use of this program? What forms of knowledge are structured out of this representation? And, how does this work with my learning style? While not intended as a full analysis, this section has attempted to describe and demonstrate how the theories and methods of the theoretical construct are employed in the conduct of this study.

Summary of proposal discussion

This discussion summarizes the study proposal as it has been presented in the first two chapters. In the complex debate on educational computing, this study seeks to propose a formal descriptive model of interaction which is interpreted in terms of a media education framework to yield a critical framework for practitioners and participants. A confluence of authoritative government, military and commercial interests have been identified in connection with a new form of computer program which is here termed the collaborative knowledge environment format. In current practice, this sophisticated format is observed to consist of some combination of Internet collaboration, cognitive hypermedia, content analysis and activity analysis. Constructivist learning theory is shown to be interpreted from a technological perspective in research that is termed a constructivist technology perspective. Given the proponents' claims that participation will enable users to get smart fast, this new form demands investigation from a critical media analysis perspective.

The theoretical construct assembled to frame and organize the conduct of this investigation includes the democratic media education framework and structuralist and semiotic analysis methods. The media education framework provides the grounds for interpretation by focusing on the importance of providing a liberatory educational ecology where students problematize their world as they discover new and existing knowledge in a problem posing educational process. Important contributors include Shor and Freire (1987), Apple (1995), and Masterman & Mariet (1994). Authority in this view shifts from teachers to students as they cooperate in the realization of mutually developed goals and facilitate collective achievement for the group. The results of this study seek to develop a generic problem posing process for digital media literacy in collaborative knowledge environments; a critical framework for problematizing sophisticated information systems and their underlying assumptions and implications. Developing critical autonomy for participants and practitioners is intended to foster interpretive skills that serve stakeholders as they encounter increasingly sophisticated interactive systems in the digital media life-world of school, work and home.

The structural linguistic model as presented in a series of lectures by Saussure (1966) early in the twentieth century forms the foundation of the methodological approach. Pettit's (1975) critical analysis of structuralism articulates variations on the approach and explores its application in the non-literary arts, while carefully describing the limitations of the theoretical model. The film and television theorists selected have extended and applied the linguistic model successfully in media analysis projects (DeVaney, 1991; Fiske, 1987). Metz's (1974) code set theory of film is employed in articulating format categories and in tracing the historical origins of identified interaction codes. Semiotic techniques such as syntagmatic and paradigmatic analysis and the dimensionality principle are employed to articulate formal structures from the study data. Finally, a hypothetical model of interaction in digital media is proposed that may be employed across a range of networked electronic communications in current and future practice.

Assumptions

The assumptions underlying the conduct of this study are discussed here in terms of the theoretical construct that has been assembled and the data selected for the analysis. A primary assumption of the Saussurean linguistic model is that the sign system may be inferred by examining individual cases in practice (Seiter, 1992). This assumption has been successfully adopted by many analysts and is seen as a requirement by Barthes (1977) who suggests that a deductive procedure is required by the impossibility of inductively examining the "Millions of narratives," (p. 81) faced by the analyst. In this approach a hypothetical descriptive model is proposed which is then employed as a tool in examining examples from current practice which can then be discussed in terms of their convergence and divergence from the model.

According to Pettit's (1975) critique, a structural approach may not be deemed valid until there has been a consideration of the possibilities which it opens for further inquiry. The literature survey found evidence of structuralist and semiotic approaches to interactive media but did not find any applications to programs from the collaborative knowledge environment format. This implies an assumption of significance which is termed the principle of reflective equilibrium (ibid., p. 41). This principle proposes that in a stylistic analysis the account of significant patterns must be in equilibrium with our intuitive sense of what effects matter in the analysis. The author contends that in this analysis the proposed interaction model provides excellent detail and introduces an important and useful non-technical terminology for analytical projects in digital media.

Chapter Three: Conduct of the study

This chapter introduces the design, methods, data and procedures employed in the conduct of the study before presenting the pilot study analyses and interpretations.

Design

In this interpretive study a mix of formal and informal methods is employed in the effort to address the focus questions and study goals. The study design can be described in terms of three distinct stages: observation, analysis and synthesis (Danesi & Perron, 1999). The observation stage entails the examination of specific examples from the medium as it exists in practice. The study proceeds deductively from an examination of the data to the proposal of a hypothetical model of description, with the order of presentation reversed for clarity (Propp, 1968). This approach has been recommended to address the complexity of analyzing linguistic forms such as languages and narratives (Barthes, 1977).

The analysis phase involves the identification and articulation of the minimal structural units of construction and their organization. This study focuses on the functional components or codes of interaction in the domain of digital media, and the codes are identified in terms of specific cultural practices and contexts which in this case are supplied by a consideration of the media education framework and constructivist learning theory. A subset of the data are then reconsidered in terms of the formal model in the development of the results and conclusions. This type of design replaces the validation of a hypothesis with the generation of focus questions that are systematically addressed in the execution of the study. The goals of the analysis are to provide reasonable and supportable answers to the focus questions while also proposing new objects for exploring the domain.

Finally, the synthesis stage considers how the codes combine to produce meaning in specific contexts. An interpretive methodology explicitly acknowledges the social construction of knowledge and the influence of those participating in the investigative process by including self reflexive procedures and by structuring results in terms that privilege stakeholder autonomy (Guba & Lincoln, 1989). In this study, the goal of the synthesis stage is shifted to focus on interpreting the implications of the results in terms of the development of critical questions that empower participants to investigate and interpret their own meanings. The methods section below introduces the textual machine perspective and the structures for analysis before the interaction model for digital media is presented.

Methods

Structural analysis methods involve the application of the linguistic model to the analysis of literary arts such as cinema, television, or as proposed in this study, digital media. Analysis requires the close observation of specific examples from the medium as they exist in practice, and the identification and articulation of the structural units of construction and their organization. Among the various structural methods that have emerged, two distinct approaches have developed, each utilizing slightly different techniques in their application (Pettit, 1975). Straight analysis techniques involve an ad hoc examination of the object of study, in which interesting patterns of construction are identified and articulated according to the priorities of the investigator. A large number of semiotic analyses in the literature employ this technique, to varying effect given the variability this approach allows.

In contrast, a systematic analysis involves the step by step application of a suitable descriptive model to the object of study. Descriptive models or frameworks may be developed to represent the characteristics of a class of objects within a specific format, providing an analyst with an organized scheme for analysis. A framework can be described as a meta-model, or a model that describes a range of models within the defined domain. Structures for analysis are defined and within each structure a set of variables and their exhaustive range of possible nominal values are identified. This approach has been successfully employed by a number of theorists involved in computer and media analysis projects (Aarseth, 1997; Andersen, 1990; DeVaney, 1991; Metz, 1974; Shneiderman, 1995), and is applied in the conduct of this study.
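A descriptive framework of this kind, with structures containing variables and their exhaustive ranges of nominal values, can be sketched directly. This is an illustrative sketch only: the variable and value names below are hypothetical stand-ins for the kind of scheme described, not the study's actual model.

```python
# A minimal sketch of a descriptive framework for systematic analysis:
# each analysis structure defines variables, and each variable has an
# exhaustive range of possible nominal values. All names are hypothetical.

FRAMEWORK = {
    "interaction": {
        "user_function": {"explore", "configure", "author"},
        "timing": {"synchronous", "asynchronous"},
    },
    "activity": {
        "category": {"exploration", "authoring", "collaboration"},
    },
}

def describe(structure, observations):
    """Apply the framework step by step: every observed value must fall
    within the nominal range defined for its variable."""
    variables = FRAMEWORK[structure]
    for variable, value in observations.items():
        if value not in variables[variable]:
            raise ValueError(f"{value!r} is outside the range of {variable!r}")
    return dict(observations)

# Describing a hypothetical messaging feature within the framework:
result = describe("interaction", {"user_function": "author",
                                  "timing": "asynchronous"})
print(result)
```

The validation step captures what distinguishes systematic analysis from the straight technique: observations are admitted only in terms the framework defines in advance, so that convergence with, and divergence from, the model can be stated explicitly.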

The model presented provides an organized scheme which serves to guide the analyses and facilitate the effort to identify and describe structural units and noteworthy patterns in the study data. The model should also be considered provisional and subject to modification if necessary; its purpose is to illuminate the important relationships and characteristics that have been discerned. In the following discussion the textual machine perspective is described as well as the structures that are introduced for this analytical project.

A digital media textual machine

In the effort to provide a rigorous terminology for describing program formats in digital media it is necessary to undertake the development of a scheme for the analysis of formal structure. The linguistic methods described thus far must be supplemented with an approach that includes the perspective of the user as a character in the action. Aarseth's (1997) textual machine model defines a useful perspective for analysis, one which proposes to illuminate the relationship of the operator with the functions of the electronic medium and the communicative signs that are generated.

The interaction structure employs this concept of a textual machine which attempts to clarify the variety of interaction possibilities by isolating, "The different ways in which the reader is invited to 'complete' a text - and the texts' various self-manipulating devices" (ibid., p. 20). In the case of digital media, Aarseth recognized that the sophistication of the interactive media must be taken into account as well so that in this view the text is seen as a machine, "...A mechanical device for the production and consumption of verbal signs... The machine, of course, is not complete without a third party, the (human) operator, and it is within this triad that the text takes place" (ibid., p. 21).

This is similar to Propp's (1968) efforts to study the fairy tale, "According to the functions of its dramatis personae" (p. 20). The textual machine employs a similar perspective as it shifts the articulation of such functions from narrative characters to the operator or user of an interactive system. Thus, this study proposes that in an analysis of interaction a stable, repeatable collection of functions may be identified across the domain of digital media.

This analytical perspective is here termed a digital media textual machine which can also be described as representing important elements from the communications discourse model that include a consideration of the author and the reader as well as the text, the medium and the codes. The interaction model may also be described as a social-technical or semiotic model as each variable and value range reflects an observable characteristic of the system from the perspective of the codes that are shared by the authors and readers of computer programs. Figure 3-1 below shows the elements of the discourse model that may be reflected in the configuration of a digital media textual machine.

An interaction structure for digital media

In this section a formal descriptive framework for the analysis of interaction in networked electronic communications systems is presented (also called the domain of digital media). Interactive media typologies have proven difficult to construct, owing to an emphasis on technical jargon or on interface technologies that change rapidly. An important goal of this linguistic approach is to discriminate and isolate key characteristics of interactive functionality descriptively and historically in terms of communications theory and practice (Sterne, 1998).

The interaction framework is developed from an examination of software programs from current and historical practice, as well as from an analysis of the literature on media analysis, literary theory, computer semiotics and human-computer interaction. This framework seeks to provide a shared grammar of interaction from which multiple disciplinary perspectives and complex methodological approaches may build in continuing efforts to investigate the Internet and other forthcoming technologies of electronic communication from social and cultural perspectives (Jones, 1998; Sudweeks & Simon, 1998; Sterne, 1998).

In order to isolate the focus of the interaction structure it is necessary to describe what is not included. A consideration of the frame, which is defined as the contents of the audio and visual tracks, is excluded to avoid an examination of the variation in the appearance and characteristics of the content elements themselves (i.e. text versus graphics or animation, etc.). Interface hardware characteristics are excluded as well, allowing the interaction structure to provide descriptive power regardless of the capabilities of the interface device employed. This framework seeks to apply to applications across the domain of digital media from interactive television to wireless communications.

Employing these structures for analysis enables an investigation to proceed systematically within or across given classes of programs in the medium. For example, in a comparative analysis of news websites, the framework offers a standard descriptive language to differentiate the sites based on the interactivity offered to the viewers. This section has introduced the structuralist and semiotic methods that have been assembled in the conduct of this study. The next section demonstrates how this approach is employed to identify textual structure and structural relations in the collaborative knowledge environment format and how they may be interpreted from a media education perspective.

The following discussion uses the term operator to denote the user, and introduces the codes identified in the interaction structure. First, the interface, metaface, memory and reality codes are defined to describe multiple levels of observable functionality in a relationship between an operator and the content and interactive functionality of a system. These are the lowest level technical or syntax codes and they are defined to articulate important characteristics of the interface, content and memory mechanisms of the system.

Four higher level conventional representational codes are proposed to articulate formal characteristics of representations of presence, identity, perspective and function that may be discerned in interactive systems. Figures 3-2 and 3-3 below show the codes of the interaction framework and their organization in levels.

The following discussion introduces the code definitions and presents the range of nominal values and their definitions with some examples. The intent is to provide a general feel for the analytical posture imposed by the structure; more detailed examples are offered when specific codes are cited in the analysis or interpretation. Additionally, each code range may be described with two special modifier variables, scope and dynamics. The scope variable has values of individual, communal and universal to allow the specification of the range of application of a code within the defined domain. The dynamics variable may assume values of static or dynamic to indicate whether a code value may change in the course of an interactive session.
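The organization described above, a code carrying one nominal value plus its scope and dynamics modifiers, may be sketched as a simple data structure. The sketch below is illustrative only; the type and field names are my own and are not part of the framework's formal vocabulary.

```python
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    INDIVIDUAL = "individual"
    COMMUNAL = "communal"
    UNIVERSAL = "universal"

class Dynamics(Enum):
    STATIC = "static"
    DYNAMIC = "dynamic"

@dataclass
class CodeAssignment:
    """One interaction-structure code as applied to a system under analysis."""
    code: str           # e.g. "interface function"
    value: str          # one nominal value from the code's defined range
    scope: Scope        # range of application within the defined domain
    dynamics: Dynamics  # whether the value may change during a session

# Illustrative assignment: a personalized news site whose content selection
# is configurable per individual user and may change during a session.
assignment = CodeAssignment("interface function", "configurative",
                            Scope.INDIVIDUAL, Dynamics.DYNAMIC)
```

An analysis of a given system would then amount to a set of such assignments, one per code in the framework.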

Technical interaction codes

The interaction codes introduced here constitute the technical codes identified in this framework. These low level codes attempt to capture the salient characteristics of interactive functionality as a multilevel relationship between the operator and the content and interactive functionality of the system. The codes that are introduced to discern the relevant characteristics of this relationship are called the interface, metaface, memory and reality functions.

Interface function

The values of the interface function distinguish a range of functional capabilities in a relationship between an operator and the content of a given system. The values defined are static, selective, configurative, constructive and autonomous.

A value of static describes a system without a content selection mechanism, such as a radio or television designed to receive a single station. A value of selective describes a system such as a television which provides a simple mechanism that allows the operator to select between multiple content sources. A value of configurative describes a higher level of selection that is configurable in relation to the content. For example, in a personalized news site an operator may configure a selection of news sources before selecting from articles provided by those sources.

A value of constructive describes an interface function that enables an operator to create content on the system. A telephone or video phone would be examples of systems with constructive interface functions. Finally, a value of autonomous is employed to refer to a system that generates content based on a hidden algorithmic process. A chatter bot program that processes the last operator statement to select and generate an answer would have an interface function value of autonomous. Table 3-1 below shows the interface function values and definitions.

Table 3-1. The values and definitions of the interface function.

**Interface function** (Static, Selective, Configurative, Constructive, Autonomous)

The interface function describes a range of functionality in a relationship between an operator and the content of a system

**Static** \- Content is fixed within the system

**Selective** \- Content may be selected by a simple mechanism

**Configurative** \- Content is fixed but access is configurable

**Constructive** \- Content creation is enabled, syntax and scope describe range

**Autonomous** \- A hidden algorithmic process is employed to generate the content
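As an illustrative sketch, the value range of the interface function may be rendered as an enumeration and applied to the example systems discussed above. The classifications simply restate the examples from the text; the names are my own shorthand.

```python
from enum import Enum

class InterfaceFunction(Enum):
    STATIC = "static"                # content is fixed within the system
    SELECTIVE = "selective"          # simple selection among content sources
    CONFIGURATIVE = "configurative"  # the selection mechanism is configurable
    CONSTRUCTIVE = "constructive"    # the operator may create content
    AUTONOMOUS = "autonomous"        # a hidden algorithm generates content

# The example systems discussed above, classified by interface function.
examples = {
    "single-station radio": InterfaceFunction.STATIC,
    "multi-channel television": InterfaceFunction.SELECTIVE,
    "personalized news site": InterfaceFunction.CONFIGURATIVE,
    "telephone": InterfaceFunction.CONSTRUCTIVE,
    "chatter bot": InterfaceFunction.AUTONOMOUS,
}
```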

Metaface function

The metaface function describes a range of functionality in a relationship between an operator and the interface level of a system. The functionality available for an operator to create and publish interface elements for other participants in the system is described by the values of the metaface function. The values defined for the metaface function are static, configurative, constructive, and autonomous.

A value of static describes an interface that presents fixed functions and elements. A value of configurative describes functionality that enables an operator to select, manipulate and configure the elements of an interface. For example, the interface elements of a digital cell phone may be unchangeable or static, while the interface elements in a paint program may be configured and arranged for customized use. A value of constructive denotes the functionality to create and publish interfaces that other system operators may access. For instance, a homepage building website would be described as having a metaface function of constructive. If a hidden algorithmic process is employed to generate interface elements for the operator, the metaface function is termed autonomous. Table 3-2 below shows the metaface function values and definitions.

Table 3-2. The values and definitions of the metaface function.

**Metaface function** (Static, Configurative, Constructive, Autonomous)

Describes a range of functionality in a relationship between an operator and the interface level of a system

**Static** \- Interface functions and elements are fixed

**Configurative** \- Interface functions and elements are selectable and configurable

**Constructive** \- Interface functions and elements may be created, syntax and scope describe range

**Autonomous** \- A hidden algorithmic process is employed to generate interface elements

Memory function

The memory function describes a relationship between an operator and recorded representations of the use of the system. Memory representations in digital media may embody content or activity based perspectives. For instance, telephone records are typically activity based with call times logged by the system but not the content of the calls. In contrast, the history function of a web browser may record actual content that was viewed by the operator.

The memory function code is here defined from an ownership perspective with values of none, system or operator. A value of none describes a system such as broadcast television that has no memory mechanism available. A value of system indicates that a mechanism is available to the system owner and a value of operator describes a mechanism that is available to the operator. If further definition is required the characteristics of a memory mechanism may be described in terms of the interface and metaface functions.

The characteristics and ownership of memory representations can be significant from experiential as well as from political perspectives in an analysis. For instance, detailed content maps have been shown to benefit operator use and orientation within a hypermedia network (Chen, 1996). In corporate information systems, memory traces of activity in the system can be employed to build system wide organizational memories that facilitate knowledge sharing among employees (Zuboff, 1988). Meanwhile, records of telephone activity are routinely used as evidence in criminal trials, and a growing number of subpoenas are now being issued for online chat room transcripts for use in legal disputes. Thus, the ownership of memory functions in a system may be significant in an analysis. Table 3-3 below shows the values and definitions of the memory function.

Table 3-3. The values and definitions of the memory function.

**Memory function** (None, System, Operator)

Describes the existence and ownership of a memory mechanism for a given system. Memory content characteristics may be described in terms of the interface and metaface variables.

**None** \- No memory mechanism is available for the system

**System** \- A memory mechanism is available for the system owner

**Operator** \- A memory mechanism is available for the operator

Reality function

The reality function is an open ended but important code that proposes a consideration of the relation between the digital content and the reality it seeks to represent. Two values are presented here to identify distinct technical approaches that may be observed in digital media representations, namely modeled and digitized. These terms are employed to discern quantitatively derived representations and digitized content respectively.

When reality is modeled in computer applications such as simulations, underlying computer models are employed in the construction of representations that are presented to the user. User understanding includes an apprehension of two key relationships: first, how the representation relates to the model, and second, how the model relates to the reality of the actors, objects or processes it seeks to represent. When reality is digitized, digital video content may need to be considered in terms of two levels of reductive abstraction from the reality captured: the first occurs when the scene is encoded onto video tape, and the second when that tape is digitized into the system.

Digitization involves converting analog information to digital form and usually requires a reduction in the quantity and a transformation of the quality of audio and visual information present in the original signal. Explicit knowledge of these processes may be significant for interpreting representations rendered this way. This value range is not meant to be exhaustive but merely to identify two significant types of representation for further analysis. Table 3-4 below shows the values defined for the reality function. The next section describes the conventional representational codes from the interaction framework.

Table 3-4. The values and definitions of the reality function.

**Reality function** (Modeled, Digitized)

Describes a consideration of the relationship between the content of the system and the reality of the actors, objects, or processes represented. Not an exhaustive list.

**Modeled** \- Describes a representation based on a formal or quantitative model

**Digitized** \- Describes a representation based on a conversion from analog to digital form
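The reduction in quantity and transformation in quality involved in digitization can be sketched in miniature: an analog signal is sampled at discrete times, and each sample is quantized to a limited number of amplitude levels, with information discarded at both steps. The function and parameter names below are illustrative assumptions, not drawn from any particular system.

```python
import math

def digitize(signal, sample_count, levels):
    """Sample a signal on [0, 1) and quantize each sample, assumed to
    lie in [-1, 1], to one of `levels` evenly spaced amplitude values."""
    samples = [signal(i / sample_count) for i in range(sample_count)]
    return [round((s + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1
            for s in samples]

# A smooth sine wave reduced to 8 samples at 5 amplitude levels: both
# the quantity and the quality of the original signal are transformed.
analog = lambda t: math.sin(2 * math.pi * t)
coarse = digitize(analog, sample_count=8, levels=5)
# coarse → [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
```

Explicit knowledge of parameters such as the sampling rate and quantization depth is precisely the kind of knowledge the reality function asks an analyst to consider.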

Conventional interaction codes

The conventional representational codes at this level identify textual characteristics of interactive functionality in the system that may also be described as social-technical. The operator function and operator perspective codes are proposed to describe formal characteristics that shape an operator's relationship to system content and activity, while the operator identity and operator presence codes describe characteristics that shape the operator's ability to interact with and manage representations of identity in the medium.

Operator function

The operator function code describes a range of functionality in a relationship between the operator and the proposed purposes for the textual machine. The values defined for this code are explorative, constructive, communicative and collaborative. This code condenses the descriptive power of the lower level technical codes described above, or may be used in conjunction with them to provide more specific analytical detail regarding the generic functional purpose proposed in the design of the artifact.

The values discern various classes of activity, similar to the categories of the activity structure defined in Chapter two. The value "explorative" describes functionality that enables an operator to interact with the content and interface elements of a system. The value "constructive" describes functionality that enables an operator to construct content on a system. The value "communicative" describes functionality that enables an operator to communicate interpersonally, either synchronously or asynchronously with other operators. The value "collaborative" describes functionality that enables an operator to synchronously control remote systems with other operators. Table 3-5 below shows the operator function code and value definitions.

Table 3-5. The values and definitions of the operator function code.

**Operator function** (Explorative, Constructive, Communicative, Collaborative)

Describes a range of functionality in a relationship between an operator and the proposed purposes of a textual machine.

**Explorative** \- Describes functionality that enables an operator to interact with the various content and interface elements present in a system

**Constructive** \- Describes functionality that enables an operator to create and publish permanent content or interface elements on the system

**Communicative** \- Describes functionality that enables synchronous or asynchronous communication with other operators

**Collaborative** \- Describes functionality that enables synchronous control of remote systems

Operator perspective

The operator perspective code articulates a range of content presentation styles in the system. Values of subjective, objective, universal and reflexive describe a range of specific content presentation styles that may be observed.

For example, when textual or audio-visual content is played for the user a value of "subjective" would be assigned to the operator perspective variable because the presentation style of the content suggests a subjective position for the user. While this doesn't determine how an actual user may respond, it is meant to describe a presentation style that does not afford the user a large degree of choice or control. In a website design that employs a hierarchical menu, a value of "objective" describes a relationship where a range of choices are offered and an operator may select from those given. The term "universal" indicates a presentation style that suggests complete knowledge of the content domain such as a search function or a topic map constructed with complete indexical knowledge of the program. A value of "reflexive" describes a perspective on the operator's own content or activity such as that provided by a memory function.

In interpreting the popularity of various interactive features for a given system, it may be useful to consider the operator perspective variable for content or activity in the system. For instance, search engines which claim a universal perspective on the content of the World Wide Web are popular destinations, and search functions can be observed widely on websites in current practice. In telephone systems the popular Caller ID feature gives the receiver of a call an objective perspective on the identity of the caller, allowing them time to decide whether or not to accept the call. Without this objective perspective, the receiver can be described as assuming a subjective position to the caller's initiative by answering without knowing the identity of the caller. Also, the Caller ID memory provides a universal perspective on calls received on the line whether or not the caller chooses to identify themselves. The wide appeal of these features lies in the way they expand users' perspectives on system content and activity. Table 3-6 below shows the operator perspective values and definitions.

Table 3-6. The values and definitions of the operator perspective code.

**Operator perspective** (Subjective, Objective, Universal, Reflexive)

Describes a range of observable presentation styles in relation to system content and/or activity.

**Subjective** \- A personal or first person subjective perspective is suggested by content that is presented without interactive function

**Objective** \- An impersonal or third person objective perspective is suggested by content that includes at least selective control within the domain

**Universal** \- A universal or God's eye view perspective is suggested by interactive function that implies knowledge of the whole domain

**Reflexive** \- Describes functionality that enables an operator to access representations of past content or activity via a memory function

Operator identity

The operator identity variable describes a range of functionality in the relationship of an operator to the functional characteristics of the representation of the operator's unique identity on the system. The possible values for the operator identity code are personal real, impersonal, constructive and multiple. A personal real value describes a system that requires the operator's real life identity. Some examples are work, education or commerce related systems that require an operator to explicitly identify themselves to access and utilize the system. For example, at the commercial site www.amazon.com user transaction records can be used to recommend books for purchase, and the transactions and underlying user profile include specific personal information about the operator, such as their home address and credit card information.

An impersonal value for the operator identity variable describes interaction without an explicit personal identity in a program, such as in an interactive fiction or non-personalized news site that doesn't make use of the user's personal identity. A value of constructive designates that the operator may construct and assume a personal identity in the context of the system, and that identity may be anonymous or personally identifying. This may also be seen in current practice in the context of a non-fiction interaction, such as a chat room or electronic mail system, where viewers use anonymous handles but interact as real people using the assumed identities.

In a system based on community profiling techniques, personal information is often used in an anonymous way to drive the content generation, so that the user's preferences identify the operator as a unique individual, but information about the operator's real life identity is not used. A value of multiple describes functionality that enables an operator to construct and manage multiple identities in engaging the system. The constructive features of the operator identity can be described in terms of the technical interaction codes for more descriptive detail. Table 3-7 below shows the values and definitions of the operator identity code.

Table 3-7. The values and definitions of the operator identity code.

**Operator Identity** (Personal real, Impersonal, Constructive, Multiple)

Describes a range of functionality in the relationship of an operator to representations of their unique identity on the system.

**Personal real** \- The operator is identified uniquely and personally with real life identifying information on the system

**Impersonal** \- The operator is not identified uniquely on the system

**Constructive** \- A set of identifying characteristics may be configured or constructed by the operator allowing anonymous interaction

**Multiple** \- Describes functionality that allows multiple identities to be selected, configured and managed during an interactive session

Operator presence

Finally, the operator presence variable describes a range of functionality in a relationship of the operator to the representation of the operator's presence on the system. A value of configurative describes functionality that enables an operator to manipulate evidence of personal presence recorded in the memory function of a system. In a USENET news group, for example, comments posted are not available for deletion or modification by the typical user and thus would be assigned a value of static.

A value of generative describes systems that record and utilize operator activity in the system. For example, a system that employs user profiling techniques might use information generated by an operator's web page selections in the dynamic generation of output. An example is the community profiling systems used in current practice at some commerce enabled websites, such as the music retailer Cdnow.com. When an operator enters the site, their past purchasing activity and demographic profile are compared with the purchasing and demographics of other visitors to the site. The site will then dynamically generate a list of recommended music based on purchases made by other visitors with a matching demographic. An (often erroneous) assumption may be made that the visitor may be interested in the music purchased by other users with similar demographic characteristics. Table 3-8 below shows the values and definitions of the operator presence code.

Table 3-8. The values and definitions of the operator presence code.

**Operator presence** (Static, Configurative, Generative)

Describes a range of functionality in a relationship of an operator to representations of their presence on the system

**Static** \- Evidence of presence may not be manipulated by an operator

**Configurative** \- Describes functionality that enables an operator to manipulate memory function evidence of personal presence available to others on the system

**Generative** \- Describes a system that generates a memory function trace based on operator engagement with the system
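The generative memory process described for community profiling sites may be sketched as follows. All names and data here are hypothetical, and the matching rule (exact demographic equality) is a deliberate simplification of the weighted matching a commercial system would employ.

```python
def recommend(visitor, purchases, profiles):
    """Recommend items purchased by other visitors who share the
    visitor's demographic label but which the visitor has not bought."""
    own = purchases.get(visitor, set())
    demographic = profiles[visitor]
    recommended = set()
    for other, items in purchases.items():
        if other != visitor and profiles[other] == demographic:
            recommended |= items - own
    return recommended

# Hypothetical data: visitor "a" shares a demographic with "b" only.
profiles = {"a": "25-34", "b": "25-34", "c": "45-54"}
purchases = {"a": {"album1"}, "b": {"album1", "album2"}, "c": {"album3"}}
print(recommend("a", purchases, profiles))  # → {'album2'}
```

Even this minimal sketch makes the (often erroneous) assumption noted above visible in code: shared demographics stand in for shared taste.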

The complete list of variables, value ranges and definitions of the interaction structure is presented in Appendix A.

Procedures

The following section lists the procedures employed to conduct the study according to the methods described and to guide the analyses and interpretations as framed by the study goals and focus questions.

_Step 1:_ The literature on the technical features and educational applications of the Knowledge Integration Environment (KIE) (Bell et al., 1996) and Cognitive Flexibility Hypertexts (CFH) (Spiro & Jehng, 1990; Jacobsen & Spiro, 1995) is formally appraised for patterns of interest.

_Step 2:_ Identified codes are compared and contrasted with codes from the proposed interaction model.

_Step 3:_ Borrowed codes are identified and their domain of origin is traced.

_Step 4:_ Social and cultural origins of the borrowed codes are traced and potential paradigmatic meanings and transformations of the borrowed codes in the new domain are identified.

_Step 5:_ New patterns and codes are identified in the data and examined for potential social, cultural and paradigmatic meanings.

_Step 6:_ The implications are interpreted in terms of a problem posing process that seeks to problematize participant knowledge of common sense interface constructions in digital media programs in order to foster the development of critical autonomy for participants and practitioners.

Data

This analysis of collaborative knowledge environments employs, as data from current practice, the specific examples from the Knowledge Integration Environment (Bell, 1997), called the KIE, and the Cognitive Flexibility Hypertexts (Spiro & Jehng, 1990; Jacobsen, 1995), CFH, introduced in the literature review. This data set may be considered a representative sample and was chosen because of the exhaustive range of typical and advanced interactive features that it contains. A large amount of reference material was examined but left out of the study in order to simplify the presentation. Propp's (1968) analyses showed a great deal of repetition of functions in the tales that he analyzed, which can theoretically justify the use of a limited sample size. I propose a similar result in this analysis where my research has shown that the functions of interaction in digital media are also repeated across content and interface domains. The small sample set allows a focus on the quality of the analysis rather than the quantity of the material.

Pilot study discussion

An ongoing preliminary pilot study was conducted in the course of assembling the methods and procedures required for the conduct of the investigation. Some of the refinements and confirmations suggested by the pilot study are discussed briefly here. With regard to the study methods, adjustments to the selection and design of the variables and value elements of the interaction structure were suggested to accommodate both the data under examination as well as the emergence of focal points for the interpretation of results according to the media education framework. The application of the interaction structure as a guide in the analyses led to refinements in the selection and organization of the complementary methods, such as the analyses of code sets and structural relations, that are to be applied. For instance, the use of the syntagmatic analyses at the higher level activity structure was determined after applying the lower level interaction syntax codes to the data.

In terms of the validation and confirmation of the interaction structure, an identified correspondence between the established aims of the media education framework and the methods applied in the study serves to informally confirm the focus of the current design. For instance, the analysis of formal structure of the media education framework is addressed in the technical and conventional representation codes of the interaction structure. Although the study is concerned with programs within a single format in the domain of digital media, an informal comparison with telephony and television applications has demonstrated that the descriptive function of the interaction structure may be applied across networked electronic communications mediums. Additionally, the framework was validated informally by a small set of developers and analysts who commented on the selection of codes proposed.

Finally, the scope of this study reflects the convergence of learning theory and technology that have been assembled in these examples of the collaborative knowledge environment format. While this format may not yet be widely adopted in current practice, this study suggests that critical knowledge will need to be applied in order to fully understand and appreciate the sophisticated technologies being proposed for educational practice.

Definition of terms

The following terms are defined for clarity. Each term is followed by a list of equivalent terms in italics that may be used in place of the original term at times.

**Activity analysis.** _User profiling. Personalization._ Embedded computational analysis approach to discern activities undertaken in an interactive system.

**Authoring.** _Publishing._ Category of interactive functionality that describes the viewer's ability to create and publish Web data such as text, graphics, sound, programming, etc.

**Codes.** _Syntax. Patterns_. Repeated communication conventions that can be identified and analyzed for social and cultural meaning in a communication.

**Cognitive hypermedia.** A hypermedia design that employs formalized representations of knowledge and cognition at the level of the interface. Examples described include Cognitive Flexibility Hypertexts and Knowledge Integration Environments.

**Collaborative knowledge space.** _Collaborative knowledge environment. Knowledge integration environment._ Networked hypermedia computer system that combines content analysis, activity analysis, cognitive hypermedia and Internet collaboration.

**Communication.** Category of interactive functionality that describes the viewer's ability to conduct interpersonal communications.

**Computable structure.** The empirical, computable aspects of the user profile or activity model employed in algorithmic approaches to intelligent systems such as user profiling for electronic commerce and activity analysis for collaborative knowledge spaces.

**Content analysis.** _Semantic analysis_. Algorithmic approach to the analysis of textual documents. Includes lexical analysis and semantic interpretations to extract, rank and relate thematic concepts from a corpus of texts.

**Digital media.** Term coined to represent any networked electronic communications system that encodes information in digital form at any point in the transmission process. Intended to include a range of systems including those that utilize analog devices such as telephone voice mail systems.

**Exploration.** _Navigation._ Category of interactive functionality that describes the viewer's ability to navigate, search and select from available information in a non-linear manner.

**Hypermedia.** _Hypertext._ The combination of multimedia with hypertext-like selection and navigation has been termed hypermedia.

**Interaction.** _Interactive. Interactivity._ Complex concept spanning a range of activities related to functionality that allows human to computer and human to human communication. Unambiguous use requires the identification of interaction categories.

**Internet hypermedia.** Term coined to represent the entire domain of Web applications as well as specialized applications that combine the capabilities of hypermedia with the communication capabilities of the Internet. Includes computer applications that utilize the Internet but not necessarily through a web browser.

**Internet/TV.** _Set-top box. Net-top box. PC/TV._ New class of system combining television reception with Internet access. Typically uses a television for display and an infrared remote for input. WebTV is an example of an Internet/TV system in current practice.

**Operator.** _User. Viewer. Audience. Learner. Participant._ Refers to the individual or group that is receiving, interacting with or in any way participating in the electronic communication.

**Text.** _Program. Object._ Refers to the object of study in a structural or semiotic analysis.

**Textual machine.** _Digital media textual machine._ A social-technical construct introduced by Aarseth (1997) and employed here to describe interaction from the user perspective in terms of observable functional descriptive characteristics of the system.

**User profiling.** _Personalization._ Computational approach employed in computer applications to model individuals generically and specifically for dynamic interface and content generation in the course of interaction.

**Web browser.** _Browser. Client._ Computer program used to access data published on the Internet using the Web standard.

**Website.** _Web page. Web application._ Term describing the site of Web data originating on a Web server. May range from a simple static page of textual data on a single server to a complex application program that spans multiple servers in remote locations.

**World Wide Web.** _Web. Website. Program._ Internet standard that applies hypermedia capabilities to data residing on computers on the Internet.

Chapter Four: Results

This chapter presents the data resulting from the application of the analytical methods described to the Knowledge Integration Environment (KIE) and Cognitive Flexibility Hypertext (CFH) programs. Two main sections are presented here, the structural relations and the code set analyses. The data were gathered from an examination of published papers (listed in Chapter Three: Data) that include detailed descriptions of the functionality and use of the programs, as well as from screenshots and demonstration interfaces that are publicly available on the World Wide Web. The KIE project has recently been renamed the Web-based Inquiry Science Environment (WISE), and demonstration screens and current information are available on the web at http://wise.berkeley.edu. This analysis will retain the KIE title for clarity.

Structural analyses of the KIE and CFH

This section includes three aspects of the analysis. First, the following discussion introduces Cognitive Flexibility Hypertexts and the basic aspects of Cognitive Flexibility Theory (CFT) that are employed in their design. Next, the KIE program is introduced and described. Three major stages of activity are then described in a syntagmatic analysis before the interaction structure is employed to guide the identification of important codes for analysis.

Description of the CFH program

Cognitive Flexibility Hypertexts are hypermedia texts designed specifically to address the unique problems of advanced knowledge acquisition in domains that may be considered complex, ill-structured, and that benefit from explicit support for multiple knowledge representations and their associated conceptual and case based perspectives (Spiro and Jehng, 1990). The metaphor of a criss-crossed landscape is employed to motivate hypertext designs that support the nonlinear traversal of domain specific content from multiple contextual and thematic perspectives with the goal of promoting the flexible schema assembly required for mastery in conceptually complex domains and the transfer of knowledge acquired to new cases. This approach is used to form the basis for a general theory of learning, instruction and knowledge representation which is described in the following statement:

One _learns_ by criss-crossing conceptual landscapes; _instruction_ involves the provision of learning materials that channel multidimensional landscape explorations under the active initiative of the learner (as well as providing expert _guidance_ and commentary to help the learner derive maximum benefit from his or her explorations); and _knowledge representations_ reflect the criss-crossing that occurred during learning. By criss-crossing topical/conceptual landscapes, highly interconnected, web-like knowledge structures are built that permit greater flexibility in the ways that knowledge can potentially be assembled for use in comprehension or problem solving (ibid., p. 170).

In Cognitive Flexibility Theory wide scope schemas are the organizational or conceptual schemas, themes or concepts defined to formally represent knowledge in complex domains. The schemas, which are constructed from multiple expert sources, are presented as a collection of approaches to understanding the underlying case content. Thus, in a collaborative effort researchers would each put forth their themes, supporting case evidence and expert commentary as a complete schema for understanding a subset of the underlying content domain. In this approach, learning and knowledge transfer are facilitated by having a large number of wide scope schemas available and by enabling learners to understand and employ the multiple theoretical or case perspectives and methods represented in a flexible manner. However, wide scope schemas at this level are considered inadequate as abstractions or generalizations to represent the substantial across-case variability found in complex knowledge domains; therefore, the finer grained structures of the case and the mini-case are employed to further structure and organize the evidence.

Cases and mini-cases are the terms used to represent arbitrary divisions in the domain content that provide evidence for the claims presented in wide scope schemas. Cases represent the actual evidence that can be observed in the underlying content. Cases are the complex phenomena that comprise ill-structured domains and as such they resist one dimensional and oversimplified analysis and description. Because multiple wide scope schemas may present varied approaches and insights to understanding the same case, cases are considered open to interpretation from a variety of investigative perspectives. While a case may represent what happens in the world, it is still considered suitably complex to require further discrimination to avoid intact case based reasoning (which is considered problematic) and to present cognitively tractable yet suitably complex sub sections of the evidence for analysis (ibid., p. 185).

A mini-case is defined as a short scene that retains some of the conceptual complexity of the case. The mini-case is the shortest useful sequence defined, and to avoid over-simplification it must retain a substantial degree of the complexity of the case from which it is drawn. The CFT research refers to it as a "microcosm" or case miniature as opposed to a separate case compartment. The hypertext design suggests the inclusion of a large number of mini-cases to provide learners with as many discrete examples of evidence and analysis as possible. This approach is intended to provide experience consolidation at the level of the mini case and to avoid an over reliance on prototype cases, which are considered problematic. Also, the claim is made that flexible, situation specific schema assembly is facilitated through the availability of a much larger range of potential precedent case assemblies for use in understanding new situations.

Description of the KIE program

In the SenseMaker module of the KIE, researchers or educators use a frames model to represent knowledge from a pedagogical framework called Scaffolded Knowledge Integration (Bell and Davis, 1996). This approach uses real world situations and students' own knowledge in the effort to stimulate curiosity and encourage student testing, revision, reformulation and expansion of their existing scientific knowledge. This program is designed to model and support scientific thinking and argumentation, and it is used by the researchers or educators to formally organize and model the knowledge of expert or historical scientific arguments. The goal of making knowledge visible is supported by the use of a web based hypermedia design that seeks to mimic the mental schemas and cognitive processes of an expert engaged in scientific inquiry.

In this effort the authors employ configurable templates called frames to isolate and order key concepts and evidence and make their relationships and interconnections explicit through the use of hypermedia linking and organization techniques. Concept frames may be nested within frames and individual pieces of evidence are noted by dots which are colored to represent a range of possible rating evaluations that may be assigned. The exploration of scientific concepts from multiple perspectives is claimed to be facilitated through the use of this interface for organizing evidence from the Internet and other sources into conceptual categories. Instructional scaffolding may be supported by providing initial frames to students and students may collaborate to reflect and compare different group approaches to the interpretations required in an investigation.

In the KIE, individual and group authoring is enabled through the provision of tools that allow students to build and share their own hypermedia representations of scientific arguments. Student interpretations of scientific problems are argued and supported through the presentation of supporting research, evidence and data. The use of the same configurable template employed to model expert knowledge guides student practices of knowledge construction in this phase. The collaborative exchange and evaluation of arguments is enabled through the ability to web publish student arguments for discussion. Learners explore and evaluate the arguments of others in their class before engaging in a group process to bring the various perspectives together for discussion. A stated goal of the process is that, "Particular conceptual and epistemological ideas of the groups are made visible and can easily become productive topics of conversation" (Bell, 1997). In the educational research undertaken to study the KIE, empirical studies claim to demonstrate that as learners engage in discussion and debate of the claims and evidence presented, cognitive reflection practices are promoted and encouraged.

Description of identified activity structures

A high level activity analysis of the CFH and KIE programs reveals a shared syntagmatic code: a linear ordering of three major stages of activity, namely the exploration, authoring and debate stages. An exploration stage comes first, where participants explore and critique the hypermedia content present in the system. In the KIE program the content consists of scientific theories and claims structured as expert argument and evidence models that are presented to the students. In the CFH study this is termed the reading stage, where students engage the formal knowledge representations of multiple wide scope schemas, cases and mini cases used to organize and present the information from multiple expert perspectives (Jacobson and Spiro, 1995). Students were instructed to read the mini cases and the commentaries and make note of the applicable themes for each mini case section. Participant activity at this stage is primarily explorative as students are directed to engage the hypermedia information system to become familiarized with the existing content that has been presented.

Next is an authoring stage which requires student construction of arguments and evidence to address novel cases or controversial questions in the field. In the KIE the students create concept frames, assemble and annotate evidence, and organize the arguments and interpretations that support their view. Constructive functionality for the student creation of hypermedia documents at this stage is enabled through access to standard productivity applications and to web browser based authoring features in the KIE software. Authoring templates based on the knowledge representations employed are used to guide student construction of hypermedia content in this stage. Individual or small group arguments may then be published for interpretation and debate by the wider group.

The next CFH activity stage, in contrast to the KIE's authoring stage, sought only to have students apply various wide scope schemas and assemble case specific knowledge for application to novel cases in the domain. Thus, a "study" stage was defined to investigate learning effects from "thematic criss-crossing," which involved having students reread different mini cases in the context of multiple wide scope schemas. Student activity included a Theme Identification exercise which had students identify applicable themes for a mini case; these were then compared for accuracy to a master list of themes that the student could use to study. In both the hypertext and activity designs at this stage, students were encouraged to traverse the conceptual map from multiple theoretical and contextual perspectives.

In the debate stage, Internet based and face to face group collaboration are employed to allow the comparison of the different premises and conclusions represented by the students. In the KIE research, group discussions are proposed to provide peer feedback for cognitive reflection and to encourage the social interaction required to respond to the issues and questions raised by the group. Conceptual perspective taking may also be scaffolded by asking students to defend positions other than their own. In the KIE SpeakEasy program, student opinions and discussions may be summarized in an argument map that may be published publicly for examination by other groups or classes. The multimedia interface supports text, graphics, sound and video and is claimed to stimulate discussion and reflection (Bell and Davis, 1996). In the CFH the multiple perspectives and interpretations that constitute the knowledge domain are compared and debated in the group. It should be noted that in both programs studied, while Internet based group discussion and debate may be enabled, these are primarily conducted in face to face settings within an individual classroom. In the next section, the data are presented using the interaction structure to guide the analysis.

Interaction structure analysis of the KIE

In this section the data on the KIE program are presented using the specific variables and values of the interaction structure described in Chapter Three. For each variable, a brief definition is repeated and an observable, representative aspect of the program is cited to support the values selected for description. At the end of this section, a table is presented to organize the results.

The interface function is defined to describe a range of observable functionality in a relationship between an operator and the content of a system. In the KIE program, interface function values of static, selective, configurative, constructive and autonomous may be observed. The use of a browser based graphic user interface is cited to support the static, selective, and configurative values and the integration of a collection of productivity software such as word processing, graphics and multimedia programs is cited to support the constructive value. Interface function is also termed autonomous in the KIE as hidden algorithmic analysis processes are employed in the automatic generation and presentation of content elements to the operator. A simple form of activity analysis is employed to provide prompts and guidance for students as they work to complete their projects. Activity prompts are project specific and focus on detailed considerations for individual activities while self monitoring prompts are designed to be generic across all projects and provide planning and reflection guidance as students proceed through the activity stages. These prompts may be sought out through the help system or may be triggered automatically by the learner's selection of particular content.
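
The prompting behavior described above may be sketched, purely for illustration, as a simple lookup keyed to content selection. The prompt texts and names below are hypothetical and are not drawn from the KIE software.

```python
# Hypothetical sketch of activity-prompt triggering: activity prompts are
# keyed to specific content, while self monitoring prompts are generic
# across all projects and cycle with the activity stages.
ACTIVITY_PROMPTS = {
    "evidence_frame": "Consider how this evidence supports your claim.",
    "argument_map": "Check that each claim links to at least one piece of evidence.",
}
SELF_MONITORING_PROMPTS = [
    "What is your plan for the next stage?",
    "Review what you have completed so far.",
]

def prompts_for(selection, stage_index):
    """Return any activity prompt triggered by a learner's content
    selection, plus the generic self monitoring prompt for the stage."""
    triggered = []
    if selection in ACTIVITY_PROMPTS:
        triggered.append(ACTIVITY_PROMPTS[selection])
    triggered.append(
        SELF_MONITORING_PROMPTS[stage_index % len(SELF_MONITORING_PROMPTS)])
    return triggered

print(prompts_for("evidence_frame", 0))
```

The sketch captures only the surface behavior reported in the published descriptions: project specific prompts attach to particular content, while generic prompts accompany every stage.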

The metaface function describes a range of functionality in a relationship between an operator and the interface level of a system. In the KIE program, metaface function values of configurative and constructive may be observed. The use of typical WIMP style interface features that allow the user to organize and configure the display of interface elements such as tool palettes and content windows is cited to support the value of configurative. A value of constructive describes functionality that allows the operator to create and publish content with interface elements for other users of the system. The web authoring capabilities which are provided in the SenseMaker notebook tool of the KIE are cited to support the constructive value. In this program, students may go beyond the creation of local content from productivity tools to the publishing of Internet hypertext interfaces with their content.

The memory function of the interaction structure refers to the existence and ownership of a memory mechanism in the system. In the KIE, both system and operator memory mechanisms may be described. For students, a simple activity check list which may be used to monitor progress through the project is cited. Also, the constructive functionality of the notebook and the bookmark management features of the web browser may be used in an informal fashion to create and manage topically based memory traces. A system owned mechanism is not discussed but may be positively identified due to the web based architecture of the program.

A consideration of the reality function requires an appraisal of the abstractions of reality present in the underlying system content and representations. In the KIE, the reality function has values of digitized and modeled. The use of digital video evidence is cited to support the digitized value and the use of concept frames based on cognitive science knowledge models is cited to support the value of modeled. The next section describes the KIE program in terms of the remaining conventional representational codes from the interaction structure, namely operator function, perspective, presence and identity.

The operator function is defined to describe a range of functionality in a relationship between an operator and the proposed purposes of the textual machine. Operator function in the KIE program exhibits explorative, constructive and communicative capabilities in its features. The hypertext exploration and authoring capabilities are cited to support the explorative and constructive values. The Internet publishing and asynchronous information sharing capabilities that allow students to collaborate within and between groups are cited to support the value of communicative.

Operator perspective is defined to describe a range of presentation styles that may be observed with respect to representations of system content or activity. In the KIE, values for the operator perspective variable are subjective, objective, universal and reflexive. The hypertext interface is cited to support the subjective and objective values which refer to the individual information nodes and the presence of choices for selection respectively. A universal perspective within the domain may be provided by indexed search capabilities or an index to the other group projects in the class, but this is not discussed. The presence of a reflexive perspective is supported by the ability to access manually constructed memory representations or the published content of the groups.

Operator identity is defined to describe a range of functionality in a relationship of an operator to the representations of their unique identity on the system. In the KIE, the value is personal real and the use of personally identifying login names is cited to support the value. Because the operator has no functionality relating to representations of their presence on the system, an operator presence value of static is assigned. A summary of the structural analyses of the KIE program is presented in the next section.

Summary of the structural analyses of the KIE

To summarize, the results of the structural analyses of the KIE program have been presented as follows. First, a description of the stated goals and techniques of the approach were presented. Then, a syntagmatic analysis was used to articulate the sequence of activities employed in the classroom. Finally, the KIE program was characterized according to the variables and values of the interaction structure. The summary is presented here in terms (and in order) of the structural relations examined, namely syntagmatic, paradigmatic and analogical.

The high level syntagmatic analysis revealed a temporal structure to the activities employed in the KIE research. Three stages were identified, namely exploration, authoring and debate stages. These were employed to guide the student work within the time based constraints of classroom study. Additionally, an activity module named Mildred was employed to guide student work and to help students decide what activities to pursue within each of the stages.

The paradigmatic analyses were performed within the interaction structure analysis which included a range of nominal values (or paradigm of choices) for each variable considered. Of the two programs considered, the KIE demonstrated a much richer set of interactive features due to its emphasis on constructive and collaborative functionality, both for content and for the development of interfaces that could be shared with other students or groups. In at least half of the variable categories the KIE program was assigned the complete range of possible values. Table 4-1 below lists the complete analysis of the KIE according to the variables and values of the interaction structure.

Table 4-1. The interaction structure coded for the KIE program

Codes and Values

**Interface function** \- static, selective, configurative, constructive, autonomous

**Metaface function** \- configurative, constructive

**Memory function** \- system, operator (individual)

**Reality function** \- digitized, modeled

**Operator function** \- explorative, constructive, communicative (universal)

**Operator perspective** \- subjective, objective, universal, reflexive

**Operator identity** \- personal real

**Operator presence** \- static
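
For illustration, the coding in Table 4-1 may be expressed as a data structure that supports simple queries over the assigned values. The code is the author's sketch, not part of the KIE software; only the variable and value names are taken from the table.

```python
# Table 4-1 rendered as a data structure: each interaction-structure
# variable maps to the set of values coded for the KIE program.
KIE_CODING = {
    "interface_function": {"static", "selective", "configurative",
                           "constructive", "autonomous"},
    "metaface_function": {"configurative", "constructive"},
    "memory_function": {"system", "operator"},
    "reality_function": {"digitized", "modeled"},
    "operator_function": {"explorative", "constructive", "communicative"},
    "operator_perspective": {"subjective", "objective",
                             "universal", "reflexive"},
    "operator_identity": {"personal real"},
    "operator_presence": {"static"},
}

def has_value(coding, variable, value):
    """Check whether a program's coding assigns a given value to a variable."""
    return value in coding.get(variable, set())

print(has_value(KIE_CODING, "interface_function", "autonomous"))  # True
```

Expressing the coding this way makes the paradigmatic character of the analysis explicit: each variable selects a subset from a fixed paradigm of possible values, and codings for different programs can be compared variable by variable.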

Finally, with regard to the identified analogical relations, the KIE program was shown to have transformed concepts and constructs from cognitive science discourses into the hypermedia design. To achieve the stated goal of making knowledge visible, conceptual frames were used to organize scientific theories and evidence. This approach uses configurable templates called frames to isolate and order key concepts and evidence and make their relationships and interconnections explicit through the use of hypermedia linking and organization techniques. In the next section, the interaction structure analysis of the CFH program is presented.

Interaction structure analysis of the CFH

In this section the data on the CFH program are presented using the specific variables and values of the interaction structure described in Chapter Three. For each variable, a brief definition is repeated and an observable, representative aspect of the program is cited to support the values selected for description. At the end of this section, a table is presented to organize the results.

The interface function is defined to describe a range of observable functionality in a relationship between an operator and the content. In the CFH program, interface function values of static, selective and configurative may be observed. The use of a graphic hypertext user interface is cited to support the static, selective, and configurative values. The metaface function describes the range of functionality in a relationship of an operator to the interface level. In the CFH program, metaface function is observed to be static due to the absence of any authoring functionality in the program.

The memory function of the interaction structure refers to the existence and ownership of a memory mechanism in the system. In the CFH, neither system nor operator memory mechanisms are described. A consideration of the reality function requires an appraisal of the abstractions of reality present in the underlying system content and representations. In the CFH, the reality function has a value of modeled. The use of cognitive flexibility theory based knowledge models in the hypertext and activity designs is cited to support the value of modeled.

The operator function is defined to describe a range of functionality in a relationship between an operator and the proposed purposes of the textual machine. Operator function in the CFH program exhibits explorative capabilities in its features. The hypertext exploration features are cited to support the explorative value. The operator perspective code attempts to articulate a range of content presentation styles in the system. In the CFH, values for the operator perspective variable are termed subjective, objective, and universal. The hypertext interface is cited to support the subjective and objective values which refer to the individual information nodes and the presence of choices for selection respectively. A universal perspective within the domain is provided through the use of an index to the themes and cases.

Operator identity is defined to describe a range of functionality in a relationship of an operator to representations of their unique identity on the system. In the CFH, the value is impersonal because the operator is not identified uniquely on the system. Because the operator has no functionality relating to representations of their presence on the system, an operator presence value of static is assigned.

Summary of the structural analyses of the CFH

To summarize, the results of the structural analyses of the CFH program have been presented as follows. First, a description of the stated goals and techniques of the approach were presented. Then, a syntagmatic analysis was used to articulate the sequence of activities employed in the classroom use of the program. Finally, the CFH program was characterized according to the variables and values of the interaction structure. The summary is presented here in terms (and in order) of the structural relations examined, namely syntagmatic, paradigmatic and analogical.

The high level syntagmatic analysis revealed a temporal structure to the activities employed in the CFH research. The three stages identified were similar to those in the KIE but were named the reading, study and debate stages. These were employed to guide the student work within the time based constraints of classroom study. The reading and study stages are both primarily exploratory, with the students first familiarizing themselves with the content from one or two major perspectives and then proceeding to examine the content more deeply by exploring and comparing additional perspectives and material.

The paradigmatic analyses were performed within the interaction structure analysis which included a range of nominal values (or paradigm of choices) for each variable considered. As described above, the CFH program did not provide the rich constructive and communicative features of the KIE but instead focused on the presentation and hyper-linking of theory and evidence from multiple expert perspectives. Because of the restricted feature set of the CFH, many of the coded variables were assigned only a partial range of possible values. Table 4-2 below lists the complete analysis of the CFH according to the variables and values of the interaction structure.

Table 4-2. The interaction structure coded for the CFH program

Codes and Values

**Interface function** \- static, selective, configurative

**Metaface function** \- static

**Memory function** \- none

**Reality function** \- modeled

**Operator function** \- explorative

**Operator perspective** \- subjective, objective, universal

**Operator identity** \- impersonal

**Operator presence** \- static

Finally, with regard to the identified analogical relations, the CFH program was shown to have transformed concepts and constructs from philosophical theories of knowledge as well as from cognitive science discourses into the hypermedia design. Cognitive Flexibility Hypertexts are hypermedia texts designed specifically to address the unique problems of advanced knowledge acquisition in domains that may be considered complex, ill-structured, and that benefit from explicit support for multiple knowledge representations. In this approach, the concept of a landscape of knowledge is employed to motivate hypertext designs that support the nonlinear traversal of domain specific content from multiple contextual and thematic perspectives. The organizing concepts of themes, cases and mini cases are translated into formal hypermedia design structures to isolate and order theoretical concepts and their supporting information. These knowledge models are used to make relationships and interconnections explicit through the use of hypermedia linking and organization techniques. In the next section the code set analysis of the study data is presented.

Code set analyses of the KIE and CFH

Code set theory (Metz, 1974) suggests the attempt to identify borrowed codes in a program and to articulate the domains (or discourses) where they have been previously used or have even originated. When a program can be shown to have borrowed or appropriated codes, then identifying the domain of origin and tracing the social and cultural meaning from the previous use may inform the analysis of the code in the current usage. Culturally derived, shared and unique codes may be distinguished, and if the codes appear to be unique to the medium then a new grammar may be indicated (DeVaney, 1991). An analysis may also seek to highlight the presence of old codes in a new context in order to examine the transformations undergone and the underlying assumptions and implications that may be identified.

Domains identified in the KIE and CFH programs

The following section presents the data in terms of the code set domains identified in the analysis of interaction conducted on the CFH and KIE programs. Each domain identified includes a brief description of the aspect of the program that supports the identification as well as an initial description to identify and define the originating domain of use. The domains identified are related to interaction and are considered in two groups, technical and social. The technical interaction domains identified include Internet hypermedia interface approaches as well as visualization and personalization technologies. The social interaction domains include social-cultural approaches and practices that have been identified such as constructivist learning theory and practices of professional inquiry from scientific and scholarly research. Figure 4-1 below shows some of the domains identified which share codes with the CFH and KIE programs.

Technical interaction domains

The technical interaction domains identified include the use of WIMP (windows, icons, menus and pointers) style graphical user interfaces in both programs. A web browser based interface is employed in the KIE and an Apple HyperCard interface is used in the CFH program. This code set domain is termed Internet hypermedia in this study to describe the use of graphical hypermedia content as well as access to Internet based functionality.

HyperCard is a graphical hypermedia programming environment first introduced by Apple Computer in 1987. HyperCard (Apple Computer, 1989) is an example of a hypermedia programming environment that allows an author to combine multimedia content, such as images and sound, with hypertext style linking. These multimedia and hypertext capabilities preceded NCSA's (National Center for Supercomputing Applications) Mosaic web browser, which was introduced in 1993 and combined a hypermedia interface with access to HTML content via the Internet based HTTP protocol used on the World Wide Web. In this type of interface, specific text or graphic images may be denoted as "hotspots" which, when selected by the user, trigger the activation of programming or the display of other content. On the Internet's World Wide Web, the hypermedia content or interactive functionality may reside anywhere on the global Internet, thus the term Internet hypermedia. A web browser program is used to read the information, and the navigation style is often termed browsing or surfing to describe this visually exploratory and selective style of interaction.

Visualization technologies are defined as a domain to describe the use of graphic representations of knowledge and of the relationships between individual information nodes. In the KIE, graphically denoted concept frames are used to organize and present scientific knowledge in the SenseMaker program. In the CFH, graphic theme lists are used to organize the related links that connect the themes to the mini case data in the program. The domain is also defined to highlight the trend in communication towards the visual representation of information that might otherwise be represented in other modes, such as written or spoken language. From the spatial organization of the hypertext interface and knowledge structures to the graphical and video based evidence presentations, both the KIE and CFH programs exhibit an extensive reliance on visual modes of representation.

Personalization technologies are identified as a technical interaction domain to describe the use of algorithmic approaches to individual users in the system. Examples include programming that seeks to automatically guide and augment user activities in the effort to more effectively accommodate individual users and tasks. Personalization technologies may also be offered to the user in the form of configurable controls that allow the user to receive information from a provided selection of available information sources. In the KIE, activity prompts are employed to suggest subsequent activities based on an appraisal of current progress in the project. Hints or suggestions provided for the user are selected by an analysis that seeks to intelligently identify the most relevant hint given the user's activity in the system. In the CFH, no personalization technologies were identified. Table 4-3 below lists the technical interaction domains identified and supporting examples from the KIE and CFH programs.

Table 4-3. Identified technical interaction code domains and examples

_Code domain_ - Internet hypermedia; _KIE program_ - Web browser based WIMP style GUI interface with interactive graphics in Java; _CFH program_ - WIMP style GUI interface in Apple HyperCard for Macintosh

_Code domain_ - Visualization technologies; _KIE program_ - Graphical concept frames to organize and represent knowledge; _CFH program_ - Graphical theme list to organize and present themes and links to relevant mini cases

_Code domain_ - Personalization technologies; _KIE program_ - Automated activity prompts and online "hints"; _CFH program_ - No evidence identified
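The kind of rule-based prompting described for the KIE above can be sketched in a few lines. This is only an illustrative sketch: the activity names, thresholds and hint texts below are invented for the example and are not taken from the KIE program itself.

```python
# Illustrative sketch of rule-based activity prompting, loosely modeled on
# the kind of hint logic described for the KIE. All activity names,
# thresholds and messages here are hypothetical.

def select_hint(progress):
    """Return the most relevant hint for a user's current project state.

    `progress` maps activity names to counts of completed items.
    Rules are checked in priority order; the first match wins.
    """
    rules = [
        (lambda p: p.get("evidence_collected", 0) == 0,
         "Start by gathering evidence from the provided sources."),
        (lambda p: p.get("evidence_collected", 0) < 3,
         "Try collecting a few more pieces of evidence before arguing."),
        (lambda p: p.get("claims_made", 0) == 0,
         "You have evidence -- now link it to a claim."),
        (lambda p: p.get("peer_comments", 0) == 0,
         "Consider responding to a classmate's argument in the debate."),
    ]
    for condition, hint in rules:
        if condition(progress):
            return hint
    return "Review your argument and prepare for the debate stage."

print(select_hint({"evidence_collected": 1}))
print(select_hint({"evidence_collected": 5, "claims_made": 2,
                   "peer_comments": 1}))
```

Even this toy version makes visible a point developed later in the study: the system's appraisal of "progress" is an empirical model of the user built into the program, invisible at the interface.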

Social interaction domains

The social interaction domains identified include theory based constructivist psychology and learning theory approaches as well as social practices from scientific and scholarly inquiry. Codes from constructivist psychology discourses are identified in the use of formal models of human knowledge and cognitive processes in both the KIE and the CFH programs. The KIE program seeks to make knowledge visible and to do so employs a schematic model of expert knowledge using concept frames to organize important ideas and their supporting evidence. The CFH program design is based on Cognitive Flexibility Theory, which proposes the use of schema, case and mini case objects to represent knowledge in complex, ill-structured domains. These theoretical concepts are used to inform the design and organization of objects employed at the interface level in the programs.

The activities proposed in the design of both programs may be identified in the research on learning theory. In the KIE, a theoretical framework named Scaffolded Knowledge Integration (SKI) (Bell, 1997) is used to guide the activities proposed in the program. For instance, the goal of providing social supports is one of the four main components of SKI and it is embodied directly in the debate stage activities of the KIE. In the CFH, the case based approach and thematic criss-crossed traversal of the knowledge domain directly reflects a stated goal of Cognitive Flexibility Theory, which is to view the problem domain from multiple expert perspectives.

Finally, both programs seek to parallel the social practices of domain experts in their conduct of professional research and inquiry. This is motivated by the stated goal of constructivist learning to provide authentic experiences based on real world practices. In the KIE, the practices of scientific research are modeled in the selection and arrangement of investigative and communicative activities undertaken by the students. In the CFH, scholarly inquiry in the humanities is modeled by highlighting the inter-connected and sometimes conflicting and contradictory perspectives that may be present in the apprehension of complex domains of knowledge. Table 4-4 below lists the social interaction domains identified and supporting examples from the KIE and CFH programs.

Table 4-4. Identified social interaction code domains and examples

_Code domain_ - Constructivist psychology; _KIE program_ - Schematic expert knowledge models used in concept frames; _CFH program_ - Complex knowledge modeled using schemas, cases and mini cases

_Code domain_ - Learning theory; _KIE program_ - Debate phase activities from Scaffolded Knowledge Integration framework; _CFH program_ - Thematic criss crossing and case based exploration from Cognitive Flexibility Theory

_Code domain_ - Professional inquiry; _KIE program_ - Investigation and communication activities from scientific research; _CFH program_ - Exploration and integration of multiple expert perspectives from scholarly inquiry

Description and trace of selected domains

In this section, a selection of the originating domains is described, including the interested institutions and organizations with authority behind each domain and the current or historical practices surrounding their use. The Internet hypermedia, constructivist psychology and learning theory domains are considered together in the analysis as constructivist technology. Next, efforts in computer personalization and their current usage in popular commercial web applications are described. Finally, visualization techniques are examined and some of their original applications in military intelligence efforts are described.

Code set: Constructivist technology

Computer based learning environments that combine concepts from constructivist psychology discourses with proposed outcomes and activities from constructivist learning theory frameworks in their design are here termed constructivist technology approaches. These programs have been shown to share codes from scientifically oriented educational research such as cognitive science and behavioral psychology as well as the technical communications architecture and commercial applications of the Internet. The origins of this research have been traced to the traditional troika of interested institutions and organizations with authority identified in historical examinations of educational technology, namely government, military and corporate interests (DeVaney, 1998; Popkewitz & Shutkin, 1994).

The research investigated in this study declares funding support from the U.S. government- and military-supported National Science Foundation. The following quote describes the mission of the agency as established originally by the National Science Foundation Act of 1950: "To promote the progress of science; to advance the national health, prosperity, and welfare; and to secure the national defense" (NSF website, 2000). The KIE website, which is now called WISE (http://wise.berkeley.edu), displays an NSF icon and states $1.1 million of financial support over three years from NSF grant number 9805420. The CFH research cited claims grant support from the NSF as well as from the Army Research Institute Office of Basic Research (Jacobsen and Spiro, 1995).

Code set: Personalization technology

Personalization technologies include systems with "intelligent" interactive capabilities that are designed to empirically model individual participants and their interaction with the system. Research has shown that there is a historic tradition of authoritative institutions and organizations that have been described as seeking predictive knowledge through scientific approaches to the study of individuals and groups in education, work, and consumer contexts (DeVaney, 1998; Popkewitz & Shutkin, 1994; Richardson, 1998; Wilson, 1988; Zuboff, 1988).

Currently, it is quite popular on the Internet to deploy classes of web applications known as recommender systems and personalization systems (Resnick, 1997). These systems use sophisticated algorithms, particularly from statistical analysis and artificial intelligence research, to profile viewers and compare them to others within a community of users, and they may combine individualized, configurable interfaces with automated content filtering to offer personalization features. These systems identify participant characteristics and behaviors individually, constructing personal and/or communal profiles depending on the application. A common current technique for identifying individual viewers works by storing an identifying data string, called a "cookie", on the visitor's system (Resnick, 1997). This personal profile data may then be read and processed in subsequent visits to a website.
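The cookie mechanism just described can be made concrete with a minimal sketch using Python's standard library. The field name `visitor_id` is invented for illustration; the sketch only shows the round trip of an identifying string between server and browser.

```python
# A minimal sketch of the "cookie" mechanism: a server issues an
# identifying string in a Set-Cookie header; on later requests the browser
# sends it back, letting the site associate the visit with a stored profile.
from http.cookies import SimpleCookie
import uuid

# First visit: the server generates an identifier and sets it.
outgoing = SimpleCookie()
visitor_id = uuid.uuid4().hex
outgoing["visitor_id"] = visitor_id
set_cookie_header = outgoing.output(header="Set-Cookie:")

# Subsequent visit: the browser echoes the cookie back in its request
# headers, and the server parses it to recognize the returning visitor.
incoming = SimpleCookie()
incoming.load(f"visitor_id={visitor_id}")
print(incoming["visitor_id"].value == visitor_id)
```

Note that nothing in this exchange is visible at the interface level: the identification happens entirely in protocol headers, which is precisely why the practice can proceed without participant awareness.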

Currently, the underlying technology of the Internet is seen as offering unparalleled empirical opportunities and is being rapidly developed as a hidden analytical architecture. Newhagen and Rafaeli (1997) note, "The inviting empiricism inherent in Net behavior. Not only does it occur on a computer, communication on the Net leaves tracks to an extent unmatched by that in any other context - the content is easily observable, recorded, and copied. Participant demography and behaviors of consumption, choice, attention, reaction, learning, and so forth, are widely captured and logged" (p. 1). In many applications these activities may take place without any notification to the participant, who may be unaware of the extent of the analytical efforts applied to their activity on the system. These techniques are largely unknown outside of the technical community but must be examined and understood in educational communities that are charged with critically appraising this technology.

This discussion continues by describing applications of personalization technology in current practice on the Internet and proposed for government mandated digital television initiatives. While this technology is presented in terms of convenience and efficiency for the users, this analysis identifies practices that include economically motivated behavioral manipulation (Wilson, 1988) and the stratification or denial of services to consumers based on an analysis of consumption practices (Berst, 1998).

The technology of the Internet is also being integrated with advanced television technology such as the government mandated Digital TV initiative. At the ATVEF website (www.atvef.org), a consortium of technology companies and marketers presents whitepapers describing the incorporation of open standard Internet protocols for digital television delivery. This will allow corporate media interests to implement the underlying analytical architecture developed for corporate Internet applications in the much larger public television space. Internet Service Providers, who provide the connection points for consumer Internet connections, are also beginning to use data mining techniques to analyze their customers' surfing patterns on the Web (Babcock, 1998). ISPs are merely following the lead of telecommunications companies, who have been analyzing customer behavior for years in attempts to understand how their systems are used in practice.

In commercial applications of personalization technologies, convenience and efficiency are offered in exchange for the opportunity to collect personal preference, characteristics and behavior information for use by corporate interests. These profiling approaches are termed the "Informational commodity" by Wilson (1988) and represent lucrative financial caches when they are exploited for commercial purposes. Social filtering techniques, which cluster participants according to demographic characteristics and suggest purchases that others in the category have made, are widely applied in electronic commerce applications on the Internet.
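The social filtering approach described above can be sketched in a few lines of code. This is a deliberately simplified stand-in for the statistical techniques actually deployed in commerce; the user names and items are invented for the example.

```python
# Toy illustration of social filtering: a user is matched to others with
# overlapping purchase histories, and items bought by the most similar
# users are suggested. Real recommender systems use far more sophisticated
# statistical models; all data here is invented.

def recommend(target, purchases):
    """Suggest items for `target` bought by users with similar histories."""
    mine = purchases[target]
    scores = {}
    for user, items in purchases.items():
        if user == target:
            continue
        overlap = len(mine & items)          # shared purchases = similarity
        for item in items - mine:            # items the target lacks
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

purchases = {
    "alice": {"modem", "router"},
    "bob":   {"modem", "router", "webcam"},
    "carol": {"modem", "printer"},
}
print(recommend("alice", purchases))  # webcam ranks above printer
```

Even in this toy form, the "informational commodity" is visible: the recommendation is computed entirely from the recorded behavior of the community of users, not from anything the target user deliberately disclosed.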

For an example of current corporate practice on the Internet, a popular digital audio jukebox program was recently found to be clandestinely uploading consumer music selections to the corporate website whenever users connected to the Internet (Robinson, 1999). The following quote from the report by the New York Times describes the extensive information collected by the program. "According to Richard M. Smith, an independent Internet security consultant from Brookline, Mass., who discovered RealJukebox's monitoring functions, each time the program is started on a computer connected to the Internet, it sends in the following information to the company: the number of songs stored on the user's hard drive; the kind of file formats -- RealAudio or MP3 -- the songs are stored in; the quality level of the recordings; the user's preferred music genre, and the type of portable music player, if any, that the user has connected to the computer" (p. 1). While the company claimed it intended to use the information to provide value to the consumers, the monitoring was done in this case without their knowledge or consent.

Wilson's (1988) survey of interactive media for the home describes the Behavior Scan application developed to combine targetable media and supermarket product scanning (p. 41). In special cable television markets, where advertising could be targeted to specific groups of homes, the Behavior Scan program provided a mechanism to correlate daily supermarket purchase data with advertising exposure data. This allowed the creation of experimental and control groups for advertising approaches focused on provoking action within a limited time frame, and let marketers study the shopping habits of the test groups based on their purchase activity at the supermarket.

Established corporate practices include stratification of services provided based on customer demographics or consumption behavior (including denial of service) as well as algorithmic cultivation of customers based on consumption patterns (Berst, 1998). Automated campaign management programs may be employed to analyze behavior and dynamically construct and deliver incentive programs designed to effect changes in customer behavior. For example, telephone and credit card companies routinely deny service when usage patterns appear to deviate from the calculated norms of the customer profile. When customer profiles are sold and shared among corporate partners, recombinant information practices such as triangulated identification may be employed to develop more detailed profiles than individual consumers may have agreed to disclose voluntarily (Budnitz, 1999).
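The "recombinant" profile-building practice noted above can be illustrated with a minimal sketch: records shared between corporate partners are joined on a common identifier, yielding a combined profile more detailed than either party collected alone. The field names and records below are invented for the example.

```python
# Simplified sketch of triangulated identification: joining records from
# multiple data sources on a shared identifier. All data is hypothetical.

def merge_profiles(*sources, key="email"):
    """Join records from multiple data sources on a shared identifier."""
    combined = {}
    for source in sources:
        for record in source:
            profile = combined.setdefault(record[key], {})
            profile.update(record)  # each source fills in more fields
    return combined

retailer = [{"email": "a@x.com", "purchases": ["camera"]}]
telecom  = [{"email": "a@x.com", "calling_region": "midwest"}]
profiles = merge_profiles(retailer, telecom)
print(profiles["a@x.com"])
# one identity now carries both purchase history and calling data
```

The consumer may have consented to each disclosure separately, but never to the combined profile, which is the crux of the concern Budnitz (1999) raises.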

These approaches are clearly based on the stimulus response model of behaviorism and appear to signal the widespread adoption of cybernetic marketing technologies in which both the stimulus and the response are in electronic form and available for automated processing. This attempt to close the cybernetic loop at the level of the interface reflects trends in marketing toward becoming more efficient, accountable, intensive and manipulative. Wilson (1988) suggests that, "This ability to correlate stimulus to response at the group level - however the group is defined - suggests that improved techniques of social control are on the horizon" (p. 45). Whether this conclusion holds remains to be seen, but the range of interested institutions and organizations with authority that have been identified, and the activities observed in current practice, certainly prescribe an investigative interest and an educational response from the media education perspective.

Code set: Visualization technology

Visualizing large scale information space content and activity is a stated goal of military intelligence and corporate interests (Hetzler et al., 1998). Content analysis techniques describe an important class of automated approaches to the construction of visual representations of textual content and are described here as a novel and important application of visualization technologies to the generation of alternative interface representations. Content analysis technology seeks to synthesize the contents of a specified collection of documents to produce intuitive representations of massive amounts of textual data.

Using lexical analysis techniques from linguistic theory, the algorithms employed perform an extraction, ranking and organization of thematic concepts from the raw textual data to create visual representations that highlight themes and the relationships between the concepts represented, while allowing a drill down capability directly into the underlying documents. This U.S. intelligence-sponsored research (Cartia, 1998) has resulted in commercial products aimed at companies and organizations that seek to visualize the entire corpus of structured and unstructured information that makes up their enterprise.
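The extraction-and-ranking step described above can be made concrete with a bare-bones sketch. Production content analysis systems use genuine lexical and statistical models; here, frequent content words simply stand in for "themes", and the sample documents are invented (on the Y2K topic, echoing the Cartia map discussed below in this section's terms).

```python
# Bare-bones sketch of thematic concept extraction and ranking: frequent
# content words across a document collection are treated as candidate
# "themes". The stopword list and sample documents are illustrative only.
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "to", "and", "in", "is", "on", "for"}

def rank_themes(documents, top_n=3):
    """Rank candidate theme words across a document collection."""
    counts = Counter()
    for doc in documents:
        words = re.findall(r"[a-z']+", doc.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

docs = [
    "The failure of embedded systems in the year 2000 rollover",
    "Testing embedded systems for rollover failure",
    "Rollover remediation for mainframe systems",
]
print(rank_themes(docs))
```

A system of this kind has no access to meaning, only to distributional patterns, which is worth keeping in view when such maps are presented as representations of "knowledge".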

The approach employs reductive computational linguistic and statistical techniques, such as correspondence analysis, to reduce the characteristics of a complex information space, which may comprise hundreds of dimensions, to two or three dimensions (Greenacre, 1993). This enables the generation of visual representations of the content and the thematic or conceptual relationships that may be found among the entire corpus of documents analyzed (called the docuverse). These approaches seek to reduce information search times by providing semantic search capabilities, as well as providing automatic and intuitive methods for synthesizing and representing the knowledge of the docuverse (Verge Software, 1999).

Figure 4-2 below shows a graphical map created by the Cartia corporation of a USENET newsgroup on the Y2K topic. The map represents the automatically analyzed content of over 3,000 message postings and the interface provided by the Themescape product (Cartia, 1998) allows users to zoom in on topic areas, read and respond to individual postings and search and share the results of semantic searches within the docuverse. What is termed a "cartographic interface" employs a visual representation model where elevation peaks signify high frequencies of documents with similar thematic concepts and concepts in valleys identify less frequently represented topics in the corpus of documents.

Here a military application of content analysis and user profiling technologies is briefly described. According to a discussion of the development history on the Cartia (1998) website, the information visualization technology was originally developed for use by the U.S. intelligence community, which, in the course of its surveillance of electronic communications, recognized the need to find new ways to handle the analysis of massive amounts of communications data.

Visualization technology was employed to analyze Iraqi message traffic prior to the military occupation of Kuwait. In visualizing large quantities of message content and activity, analysts were able to detect visual changes in messaging patterns that signaled the beginning of the Iraqi military actions prior to their actual occurrence. Thus, the visualization was used to successfully predict the future behavior of the Iraqi forces in a way that, "Took only a fraction of the time typically required for deductions of this magnitude" (Cartia, 1998). This example of a technical analysis of communications content and activity in the domain of interest serves as evidence of the application of these technologies for the analysis and prediction of future behavior.

Summary of the code set analyses

The code set analyses presented above identified social and technical domains of borrowed codes from previous media technologies and applications. The technical interaction domains identified include Internet hypermedia interface approaches as well as visualization and personalization technologies. The social interaction domains include identified social-cultural approaches and practices, such as constructivist learning theory and the professional inquiry practices of scientific and scholarly research.

Three domains were described and traced in more detail to identify interested organizations and to provide examples of documented applications of similar technology from the literature. Constructivist technology approaches were defined and described as well as visualization and personalization technologies. Government, military and corporate interests and applications were identified in the literature regarding the development and use of these technologies.

Summary of the results

In this chapter the results of the study have been presented in two main sections. First, the data from the application of the structural analyses are presented. This section includes the program descriptions, activity analyses and the results from an application of the interaction structure to the study data. In the second main section, the results of the code set analyses are presented and the domains that share codes with the CFH and KIE programs are identified, defined and traced for the analysis. Interested institutions and organizations, such as the U.S. government, military and education departments, are identified and a selection of current or historic applications is described. In the next chapter, conclusions are presented and significant implications for educational practice and participation are proposed.

Get Smart Fast

An analysis of Internet based collaborative knowledge environments for critical digital media autonomy
Chapter Five: Conclusions and implications of the findings

This chapter presents and organizes the findings in terms of the two main goals of the study. The first study goal of developing a grammar of digital media interaction is examined in a discussion of the use and limitations of the model. A summary of the formal structure of the collaborative knowledge environment format is presented followed by a discussion of the implications of the findings for learning. The second study goal seeks to propose a framework for critical digital media practice and this goal is addressed in a discussion of implications for digital media theory, research and educational practice that includes analysis and design components. Finally, a framework of critical questions is proposed to inform problem posing processes and the development of critical autonomy for practitioners and participants in educational practice.

Study goal #1: A grammar of digital media interaction

In this section a summary of the interaction framework is presented, beginning with a discussion of how the analytical structures and codes may be organized and considered in relation to the concept of social subjectivity, and of the limitations that must be considered in the effort to interpret implications for learners or participants. Next, a summary of the formal structure of the study data is presented, with a description of the generic domains that share codes with what is termed the collaborative knowledge environment format (CKE). The interaction model for the CKE format is described generically here to include a description of the information flow architectures and the underlying logics that support the discussion of implications for learning. Finally, some educational research that bears on implications for participant learning is discussed in terms of the formal knowledge representations and interface designs that were identified in the study data.

Organization and limitations of the interaction framework

This section presents the interaction framework in relation to current media education and social-cultural analysis perspectives. By describing how this organization is conceived and what was learned in the conduct of this study, a summary discussion of the strengths and limitations of the model is presented.

To achieve the goal of developing a digital media interaction model that facilitates social and cultural analysis projects, it is important to articulate how an analysis of observable textual structure relates to a consideration of implications for participants. To properly apply the interaction model, an investigator must recognize a split between the observable textual structure and an actively constructed sense of self in the mind of the participant, which is called the social subject or subjectivity. Subjectivity is an extremely active and contested area in media research and, as described in the literature review, is an important area of inquiry in educational communications. Fiske (1987) states that a concern of the cultural approach, "Is the sense that various cultures make of 'the individual,' and the sense of self that we, as individuals, experience. This constructed sense of the individual in a network of social relations is what is referred to as 'the subject' " (p. 48).

Fiske goes on to describe how our subjectivity, "Is not inherent in our individuality, our difference from other people, rather it is the product of the various social agencies to which we are subject, and thus is what we share with others. These social agencies are so numerous and interact with each other and with the social experience or history of each of us in so many different ways that the theory does not lead to a reductionist, conformist view of society" (ibid., p. 49). This requires that any approach to the interpretation of meaning must be considered primarily in terms of the reader and their active socially constructed subjectivity at the time of the reading, and that the results of a textual analysis may outline only a carefully constrained range of possible interpretations and implications.

Therefore, this study will discuss implications for learners in a way that points to possible issues that participants should be aware of to inform their own settings and agendas, without overspecifying the significance of the identified textual characteristics. Figure 5-1 below diagrams a relationship between the textual structure and social subjectivity, showing the hierarchy of textual interaction codes and their relation to the operator's social subjectivity in interacting with a digital media textual machine. Sense making is here referred to as the interpretation of meaning (IM), and from both the textual and social subject perspectives it may be considered in terms of the organizing levels of ideology, representation and reality in an analysis.

Thus, this study proposes that social-cultural investigations of interaction in digital media environments may benefit from a careful articulation of three simultaneously realized spontaneous subject constructions. These include the observable textual structure, an actively constructed social subject in the minds of the participants and the invisible, empirically constructed computable structure. This study proposes the term "digital media textual machine" which is defined to include the observable textual structure and the invisible computable structure which may be realized in the underlying technical architecture and programming of the system. The potential political and cultural implications of the computable structure are explored in the section on the implications for participants and practitioners presented later in this chapter.

It is also proposed here that the codes of the interaction framework identified in this study may be considered in terms of an operator function model and an operator identity profile. This grouping of the codes helps to discern a higher level identity profile from the lower level syntax codes of the function model. As an organized scheme for analysis, the identity profile may provide enough analytical detail for certain applications that do not require the additional detail of the operator function model. This overlap between the structures is intended to provide configuration flexibility and ease of use of the interaction framework for the widest possible range of investigations.

Finally, while the study initially attempted to employ the interaction structure in isolation, it became clear that a consideration of the additional structures from the framework was necessary to provide an expanded context for the investigation. In the analysis that follows, two perspectives are employed. First, a higher level perspective is defined to include additional structures that allow a separate consideration of representations at the interface level, of content, and of activity within the system. While certain codes of the model itself, such as the reality and memory functions, point to important external considerations, the structures of the wider framework enable the consideration of information organization and flow that is presented below. Then an analysis from within the codes of the interaction structure is presented which seeks to illuminate potential implications for learning. In the next section, a summary of the formal structure of the collaborative knowledge environment format is presented.

Summary of the formal structure of the CKE format

This section will summarize for discussion some of the identified formal structural aspects of the study data. This includes a definition of a generic interaction format discerned from the programs examined and also describes the primary information and participant organization structures.

Code set domains of the CKE format

From the analysis of the study data, an interaction format model may be proposed to describe any number of programs from current or future practice that share the same or related code domains. The term Collaborative Knowledge Environment format (CKE) is used here to describe this interaction format model. The CKE format is defined to include the technical interaction domains of Internet hypermedia, content analysis and activity analysis, as well as the social-scientific domains of knowledge, cognition and collaboration models. While the study data demonstrated codes from each of these domains to some degree, it is proposed here that future scientific, educational and corporate applications will continue to exhibit integration of codes from these domains. Figure 5-2 below shows some of the domains that share codes with the CKE format and identifies some of the originating institutions and applications that were identified.

Information flow and participant organization

Information flow and participant organization are often considered in terms of two widely acknowledged structural approaches, namely networked or hierarchical structures (Smith et al., 1997). For example, from an information perspective these terms are used to contrast the popular hierarchical category index listing approach of Yahoo with the flattened network approach of the Cartia (1998) content map presented in Chapter four. In typical design practice, one approach may be favored but combined with aspects of the other. This discussion considers information organization from two perspectives, first from within the codes of the interaction structure and then from a higher level that includes the user relationships to representations of interface, content, interaction and activity within the system.

In the graphical hypermedia interface designs employed at the level of the interface, a spatial logic may be identified in the collection and organization of visual elements and their relation in the display. The primary information organization structure identified is a combination of hierarchical and networked organization. For example, in the KIE program the knowledge representation metaphors of organizing schemas, relational links and semantic networks of schemas and links employ a networked organization, while a hierarchical organization may be identified in the activity models and task lists that are employed to guide the participants.
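The contrast between networked and hierarchical organization can be sketched as minimal data structures. This is an illustrative sketch only; the schema and task names are hypothetical and are not drawn from the KIE program itself.

```python
# Networked organization: schemas linked associatively, as a directed graph.
semantic_network = {
    "light":  ["energy", "heat"],
    "energy": ["heat", "light"],
    "heat":   ["energy"],
}

# Hierarchical organization: an activity model as a nested task tree,
# where each task maps to a list of subtask nodes.
activity_model = {
    "investigate light": [
        {"gather evidence": []},
        {"compare claims": [
            {"rate evidence": []},
        ]},
    ],
}

def count_links(graph):
    """Count the directed associative links in the networked structure."""
    return sum(len(targets) for targets in graph.values())

def depth(node):
    """Depth of the hierarchical task structure (a dict of subtask lists)."""
    if not node:
        return 0
    return 1 + max((depth(child) for children in node.values()
                    for child in children), default=0)
```

The graph has no privileged root and may contain cycles (note that "light" and "energy" link to each other), while the tree imposes a fixed top-down ordering, which is the structural distinction the design combination described above trades on.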

Participant organization at this level may be described as networked. In the constructivist technology research the network architecture employed is claimed to embody an underlying logic that allows groups to produce knowledge through their collaborative activities (Knobel et al., 1998). This is in contrast to the information and participant organization structures that may be discerned when considering the higher level perspective that includes a consideration of content, interaction, interface and activity as described below.

When this perspective is employed, a hierarchical structure may be observed which is here termed the infomediary model. In both the KIE and CFH programs, content and interactive functionality are largely aggregated through the development efforts and partnering arrangements of the infomediary (in this case the educational researchers) before being presented to the user at the interface level. In Figure 5-3 below, a locus of control for key aspects of the user experience and the structure of the information flow can be observed to reside with the infomediary. Participant organization may then be described by considering the various stakeholder positions in relation to the information flow.

Participant organization must also be described as hierarchical in the study data from this perspective. In the programs examined, instructors and experts have access to more elaborate interfaces with features that allow them to monitor and evaluate student work or add and modify content for learning. In the CFH, students were limited to interacting with the content provided and were not able to access the wider Internet or create content as the students in the KIE could. Thus, a hierarchical organization may be described that places the educators and developers above the students in terms of the content, representations and interactive capabilities available to them. The discussion of potential implications for learners will therefore be considered first in terms of the interaction framework in the next section and second, in terms of the higher level infomediary model in the discussion of a framework for digital media autonomy.
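The hierarchical participant organization described above can be expressed as a set of role-based capabilities. This sketch is hypothetical; the role and capability names are illustrative and are not drawn from the KIE or CFH programs.

```python
# Each role is granted a set of capabilities; roles higher in the
# hierarchy strictly contain the capabilities of those below them.
CAPABILITIES = {
    "developer":  {"explore", "create", "publish", "monitor", "configure"},
    "instructor": {"explore", "create", "publish", "monitor"},
    "student":    {"explore"},
}

def can(role, capability):
    """Return True if the given role is granted the capability."""
    return capability in CAPABILITIES.get(role, set())

# The hierarchy as a subset chain: everything a student may do, an
# instructor may also do, and likewise for the developer.
assert CAPABILITIES["student"] <= CAPABILITIES["instructor"]
assert CAPABILITIES["instructor"] <= CAPABILITIES["developer"]
```

The subset assertions make the locus of control explicit: the differences in interfaces and features observed in the study data amount to differences in which capability sets are exposed to which participants.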

Implications for learning with collaborative knowledge environment programs

This section presents a discussion of implications for learning with CKE programs from the perspective of the interaction structure. The discussion focuses on two structural forms identified in the analysis: the use of hypertext information organization, and the use of what is termed here a cognitive hypermedia approach that incorporates knowledge and cognition models in the interface design. In this analysis, positive implications are balanced with critical knowledge from the literature on learning with hypermedia, based on the identification of similar formal structures in the study data. This is not meant to imply that a causal relationship exists, but only to illuminate the positive claims with a consideration of potential limitations, constraints or underlying assumptions that may impact learning and the development of new knowledge in systems that employ these formal structures.

In considering the hypermedia information structure from a visual perspective, multiple visual signs are displayed simultaneously, including text, images, hypertext links and interface control elements such as graphical buttons and menu icons. Kress (1998) describes this visual logic as, "That of the co-presence of elements and their relation: the logic of the simultaneous expression of a number of related elements" (p. 69). Such an approach is claimed in the literature to enhance student autonomy by placing the learner in control of the content selected and the sequence of viewing, which is said to provide the learner with the opportunity to pursue content relevant to their own knowledge and interests (Knobel et al., 1998). Johnson-Eilola (1998) describes a de-centered, game like quality of such spatial designs that, "Can be navigated and negotiated from a simultaneous, surface perspective that does not attempt to find single facts or linear structures but has learned to process information along parallel lines without relying on a single focal point or goal" (p. 204).

The range of interactive opportunity simultaneously available through the hypertext interface, together with the lack of sequential structure, may be described from the user perspective as constructing an open-ended problem domain that supports the pursuit of emergent goals through a self constructed order. Reports in the literature suggest that this design approach may contribute to the perceived autonomy of the learner, who may exhibit more commitment and enthusiasm given their greater power and freedom to control topics, activities and sequences in a hypermedia system (Knobel et al., 1998). In the programs from the study data, the direct manipulation interface is largely open to use in sequences defined by the individual learners, paralleling authentic computer environments used by scientists and researchers in practice.

But this lack of sequential structure may also be problematic for users who may not correctly interpret the information represented in the interface or who may not understand the shorter interactive sequences required to complete higher level tasks using the interface. In Smith's review of hypertext usability (Smith et al., 1997), studies are cited that found users understood words and categories in the interface to have meanings other than those intended by the author, and that in many cases users describe the information they are seeking in terms that differ from those used in the system. The review also found that user preferences for networked, hierarchical or combination information designs varied in relation to the task currently faced by the user and their previous knowledge of the information system. The summarized research showed that, "Exploratory tasks are best supported by a network or combination information structure, whilst search tasks are best supported by a hierarchical information structure" (ibid., p. 69). Detrimental effects on task performance were also reported when the opposite information structure was employed.

Regarding the limitations and implications in the use of formal information structures in computer interfaces, research has found that while a given interface representation, "May give the user a powerful way of thinking about the domain, it may also restrict the user's flexibility to think about the domain in different ways" (Hutchins, 1986). The predominance of rational logical representations that may be observed in computer programs is partially attributable to their underlying formal abstract logic, a sequential and procedural binary logic of programming embodied in the architecture of the microprocessor. Graphical user interface designs typically employ a range of rational formalisms in their construction, from the spatial organization of command languages in drop down menus to the representations of interface functionality embodied in the interface metaphors and menu choices employed (Laurel, 1991; Shneiderman, 1992). In this view, a locus of control over interface representations can be seen to reside with the author of the system and the choices made must inevitably enable or constrain participants based on the degree of cultural experience and knowledge that is shared.

Authoring functionality in hypermedia is widely claimed to promote learning in the constructivist research by engaging students in authentic knowledge construction activities (Lehrer, 1991). But these claims are contradicted by empirical research reviews that have found that many students do not use the structural and functional features of hypermedia in a constructive manner (Tergan, 1997). For example, in research on learning with the INTERMEDIA hypermedia system that includes functionality to enable both the exploration and construction of content, following links instead of creating links was the predominant learning style engaged in by students (ibid., p. 264).

In Shipman's (1996) review of interface formalisms it was found that people do not easily accept new authoring modes and tend to resort to authoring practices that are familiar from previous experience (p. 3). This coincides with Tergan's findings that students who are more familiar with interactive features will benefit from their use and that powerful features are not spontaneously used by inexperienced users (Tergan, 1997, p. 267). These reviews suggest that most students need explicit modeling and scaffolding support, and benefit most from greater experience in using hypermedia for constructive learning. The finding that students with high reasoning skills benefit most from the self-organization of hypermedia documents suggests that experience with the cultural practice of rational logical thinking, and social factors that influence the degree of previous experience with computers, may be privileged as factors for success in classroom environments that employ these technologies.

Hypertext design that attempts to formally incorporate models and processes from constructivist and cognitive psychology research is here termed cognitive hypermedia. This analysis has identified the discourses of behavioral psychology and cognitive science as originating domains for the knowledge representations employed in the program data. These formal structures are based on cognitive science and constructivist technology discourses that collapse the metaphoric distinctions between the conceptual objects and the human mental phenomena they propose to represent. The combination of constructivism and psychological approaches in education research has been termed a "constructivist psychology of cognition" (Popkewitz & Shutkin, 1994, p. 25). In this approach to making knowledge visible, learner beliefs, knowledge, and behaviors may become subject to explicit evaluation and modification. Shutkin describes the metaphoric collapse of the language used to describe the technology of hypermedia into the technical discourse from the constructivist technology approach from a critical educational perspective when he states:

Hypermedia provides a union with both mastery learning, a behavioral learning strategy, and schematic mapping, a practice in cognitive psychology....Hypermedia is being applied as a tool to organize student knowledge and behavior. Hypermedia application is seen as a technology for assessing student schematic organization and for developing instructional materials based on schema theory. These practices are to facilitate a reorganization of students' existing semantic frameworks to those defined by scientific discourses associated with technology. (Popkewitz & Shutkin, 1994, p. 30).

Thus, a subtle form of behaviorism may be identified in the transformation of the concepts and objects of the constructivist psychology discourses into the formal structures of Internet hypermedia for education. Regarding this transformation, recent critical reviews of the empirical research have cast doubt on claims regarding the efficacy of hypermedia designs based on knowledge and cognition models for learning. Tergan's review (1997) highlights two major critical issues: first, that the web-like structures of hypertext are not of the same order of complexity as human semantic knowledge structures; and second, that the hypertext model of knowledge and memory is associative and static and therefore may be said to resemble a passive mind (p. 261). His review of empirical research on learning with hypermedia showed that a nonlinear structuring of subject matter did not improve comprehension and retention when compared to subject matter presented as linear text, and that the quality of the goal directed activity appeared to have more influence on learning than the structural characteristics of the learning materials.

Another implication of the use of formal knowledge representations in hypermedia design is that the experience of problem solving may also then be described using formal terms. For instance, in the Cognitive Flexibility Theory, problem solving is said to involve dynamic and flexible schema assembly as required to interpret new case evidence in complex, ill-structured domains (Spiro, 1990). In the Knowledge Integration Environment research inadequate, pre-instruction student cognitive structures are described in terms of a "Repertoire of models... learners have relevant pieces of knowledge and intuitions about a topic that may not initially be well connected" (Bell, 1997, p. 4). Instructional objectives can then be described as a restructuring of cognitive structures and organizing schemas; a re-mapping process designed to reconfigure learner cognitive maps to more closely resemble those of scientific experts. These statements appear to confirm the subtle behavioral characteristics of the approach suggested by Shutkin above.

Cognition formalizations have also been described as problematic in the research due to the requirement that they be abstracted from the situated contexts of their origin. In Shipman's (1996) "Formality Considered Harmful" a survey of the problems found in the use of formalisms in computer interfaces, the research documented the problems associated with making aspects of human cognitive or social practices explicit, particularly the implicit or tacit knowledge of expert and social practice. The problem of formalizing expert cognition during a task is described as follows, "When such introspection becomes necessary to produce and apply a formal representation during a task it necessarily interrupts the task, structures and changes it" (p. 8). Suchman (1987) also discerned the separation of the abstract reconstruction or prescription of plans from the particular circumstances and situated actions that actually occur in practice by confirming that specific contextual and experiential aspects of knowledge are absent from the necessarily reductive attempts to reason about any actual course of events after or before they occur.

The application of cognitive plans has been described by Suchman (1987), Streibel (1989) and others as useful only in the context of resources for situated actions, thus recognizing the unique requirements and rich improvisation of practice and the generic and limited nature of pre-conceived plans in affecting real world situations, which are socially and contextually dependent. Socially and culturally informed perspectives exist which suggest that the application of educational computing environments must be relegated to the role of resources for authentic activities that are conceived and enacted in the social ecology of the classroom (Salomon, 1998).

This suggests that even the most sophisticated and expert approaches to the reconstruction of expert knowledge must be considered as suffering from the oversimplification inherent in an approach that is decontextualized and divorced from the situation in which it originally occurred. From a cultural perspective, a rational logical conception of knowledge and cognition is considered to be a western construct (DeVaney, 1998). Such a perspective is widely understood in the research to be reductive with respect to important aspects of cognition such as the recognition of paradox, uncertainty, imprecision, ambiguity, and a diversity of learning styles and intelligences. And while the links in hypermedia networks are associative, the diversity of connection types considered in human cognition by Salomon (1998) includes, "Associational, correlational, causal, part-whole, and other logical, emotional and even subconscious connections" (p. 5). Thus, it is clear that many crucial but largely unmeasurable aspects of human knowledge are easily structured out in the development and use of the formal representations of knowledge as they are employed in cognitive hypermedia.

To summarize, it is proposed that the following conclusions should appear reasonable given the previous discussion:

Interaction in collaborative knowledge environments may be considered linguistically in the effort to illuminate issues of social privilege, cultural literacy, language choice and interpretive differences for authors and receivers.

Efforts in Internet hypermedia for education should be considered as an evolutionary extension of the Western tradition of rational logical representations of knowledge and communication and not as a revolutionary approach that by definition embraces and transcends the unspecified diversity of social and cultural knowledge and practices.

The formal approaches to knowledge and cognition of cognitive hypermedia may resonate most with learners who have experience with the cultural practices of the rational logical tradition while also constraining the generation of new knowledge in ways that reflect the characteristics of that tradition.

The next section represents an effort to interpret these findings in terms that foster the development of critical autonomy for participants and practitioners. To do so, the higher level perspective of the identified infomediary model will be employed in a discussion of a proposed framework that may be used to problematize common sense interface constructions from the user perspective.

Study goal #2: A framework for digital media autonomy

In this section the study goal of proposing a framework for digital media autonomy is addressed in two main sections. First, the codes of the interaction model are considered in a discussion of critical issues and questions for practitioners and participants to foster the development of critical autonomy with respect to the information systems of current and future practice. The second section presents a discussion of implications for digital media analysis and design to inform educational initiatives from the democratic media education perspective. A democratic approach to analysis methods and emerging design structures for collaborative knowledge environments is presented in this section.

This discussion attempts to present an initial consideration of the study findings in terms of the educational goals and pedagogical processes of the media education framework that were introduced in Chapter one (p. 10). To inform this perspective, an attempt is made to expand accepted approaches that have evolved in the domain of traditional media to accommodate the specific structures and characteristics identified in the analysis of digital media. Much of the discussion presented here is open ended and is meant to illuminate interesting areas for future research and development.

Critical questions for participants and practitioners

The following discussion is divided into two sections that begin with a consideration of the codes of the interaction framework and concludes with a discussion of potential implications for educational practice. In each section critical questions for practitioners and participants are proposed that may be used in the effort to problematize common sense interface constructions for the specific setting and agendas of participants.

The codes of the interaction framework are intended to provide descriptive detail regarding observable characteristics of system content and interactive functionality from the perspective of the user. A consideration of the operator function, interface and metaface variables allows an analyst or user to consider the range and scope of interactivity on the system in terms of the widely available features of Internet hypermedia. For example, in the CFH program operator function is explorative and content construction is limited to expert authors. This could be described as a limitation in terms of constructivist learning theory, which requires that students engage in constructive and collaborative activities for successful learning.

Various degrees of interactivity may be discerned through a consideration of these codes and the limitations of systems that are designed to provide less than a full range of interactive functionality may have implications for specific settings, audiences and objectives. The operator perspective code may also be considered at this point to describe the quality of the content or interactive functionality of the system. For example, in the KIE program there is a balance between presented content, which is termed subjective, and functionality that allows a user to construct content and communicate in the system. In some settings, a system that serves primarily to present pre-existing content may be considered overly deterministic and perceived as constraining in the development of emergent goals and new concepts for the user. Some important questions for participants may include:

Are exploration, construction and collaboration equally enabled or is one functional aspect emphasized over the others, and why is this so?

Are the characteristics of the interactive features and content unduly deterministic in shaping the use of the system or do they provide an open-ended environment that supports an unspecified range of possible applications?

Is content exploration and creation limited to a closed system and proprietary formats or is the content and interactive functionality compatible with and connected to the Internet?

Does the constructive functionality include the ability to produce and publish interfaces for other local participants, external groups or globally on the Internet?

Are collaboration features an important part of the program or are participants isolated in their interaction with the system?

A consideration of the memory function and the operator identity and presence codes may be important from practical and political perspectives for the user. In practice, the memory function may include content and activity logging analysis for personally identifiable users and groups in a system. Personalization and content analysis techniques may be combined to include the generation of automatic activity based perspectives by maintaining, "A thread of snapshots of a knowledge worker's perspective on a document space" (Hayashi et al., 1998, p. 1). In this view of an organizational hypermedia system, everyday activities are viewed as continuous hypertext authoring processes.

From a practical perspective for the user, the use of automatically generated memory representations has been shown to help users avoid common navigation and orientation problems in large hypertexts (Chen, 1996). From a political perspective in work or educational contexts, such activity profiling may be used to serve a monitoring and evaluation function which was found to result in some organizations in a social practice called "anticipatory conformity" (Zuboff, 1988, p. 346). The values of the operator presence and identity codes seek to describe the degree of user control in relation to the system memory and help to identify a locus of control with respect to those features.

For instance, a system that protects or allows the user to manipulate evidence of presence in a significant way could be described as providing anonymity to the user. This may also be the case in systems that provide the user with the ability to construct and manipulate multiple identities. The desire for privacy on the Internet has resulted in a number of new technical approaches and commercial offerings to provide this type of feature. Thus, it is proposed here that an examination of the ownership, features and user control of the memory function may provide important information for the articulation of social and cultural perspectives that may underlie the construction of the digital media textual machine. Some critical questions related to the memory function and the related profile information produced may include:

What are the features and characteristics of the computable structure as constructed in this system?

What applications and features exist that generate and maintain memory traces of my activity in the system?

Are these applications and features owned and controlled by me or by the system owner?

Am I being identified uniquely and personally or anonymously and in aggregate?

How are my knowledge and activities in the system being represented in this profile?

Are these representations presented as a tool to assist my use of the system or as a record for surveillance or evaluation of my performance that is not available to me?

For what purposes will this profile be employed now or in the future?

Who owns my profile, who may access the contents, and what are my rights to the information and representations contained therein?
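The activity logging and profiling behind these questions can be sketched minimally as follows. The event schema, field names and action types here are hypothetical stand-ins, not the logging format of any system examined in the study.

```python
from collections import Counter

# A minimal activity log: each event records who did what, and where.
activity_log = [
    {"user": "student_7", "action": "view",    "node": "light-evidence"},
    {"user": "student_7", "action": "view",    "node": "heat-evidence"},
    {"user": "student_7", "action": "link",    "node": "light-evidence"},
    {"user": "student_7", "action": "comment", "node": "heat-evidence"},
]

def build_profile(log, user):
    """Aggregate a user's logged events into a simple activity profile."""
    events = [e for e in log if e["user"] == user]
    return {
        "actions": Counter(e["action"] for e in events),
        "nodes":   sorted({e["node"] for e in events}),
    }

profile = build_profile(activity_log, "student_7")
```

Even this toy profile illustrates the political stakes raised above: the same aggregation that helps a user navigate (which nodes have I visited?) also yields an evaluative record (how often does this student merely view rather than link or comment?), and ownership of the `build_profile` step determines who gets to ask which question.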

The reality function describes important characteristics of the underlying digital content. Values of digitized or modeled were presented and a consideration of each may be important from the user perspective. When considering digitized content such as the video evidence of scientific experiments that was available to the students of the KIE, two levels of technical reduction must be considered in addition to the choices made by the photographer when evaluating such an image and its relation to reality. First, the evidence must be considered in terms of the initial manipulation inherent in the selection and framing of reality for photographic purposes (Fiske, 1982). This important level should be examined according to accepted perspectives on the social and cultural dimensions of television and video, but these are addressed elsewhere (DeVaney, 1991; Ellsworth, 1987, 1990; Fiske, 1987; Seiter, 1992).

Next, the imagery undergoes a second transformation that is also reductive, according to the technical properties of digitization undertaken at the time of the capture and conversion into digital form for use in the system. Information from the analog video is discarded in that process, and the choices made in digitizing and compressing the image for use in the system may impact how "reality" finally appears to the user and how it is perceived. For example, Reeves and Nass (1996) found that qualitative technical appearances were linked to viewers' social perceptions of the speakers in video media. In their experiments, video speakers whose audio was slightly out of sync were perceived less favorably than speakers with perfectly synced audio (p. 214). To investigate and expose the reductive processes of video capture and compression, the following questions may be proposed:

How does the video appear qualitatively (in terms of frame rate, image size and image quality) relative to television or camcorder production standards?

Are the audio and video synchronized to a satisfying degree?

What techniques were performed in the recording, capture and compression of this video and how did that process affect the quality of the results?

What aspects or elements of the phenomena that were recorded are transformed, minimized or privileged based on the qualitative appearance and sound of the video?

How do the qualitative results of the digitization process impact our interpretation of the actors, objects, or events represented?

Modeled content also presents unique interpretive considerations for an analyst or end user. A common critique of computer simulations which require a series of models and algorithms in the construction of their interface representations is that the underlying decisions made by the programmer are hidden from the user and are thus said to constitute a hidden layer of abstraction that may present a socially or politically biased representation embedded in the creation of the program (Bowers, 1988).

At the ideological level the underlying assumptions and beliefs implicated in this hidden layer have been described as constructing a technological worldview in many educational computing environments (ibid.). Damarin (1991) presents a cultural view on this process in acknowledging that, "The power relationship between the individual and the computer is dependent, at least in part, upon the individual's understanding of the modeling process and upon her or his beliefs concerning the relationship of the computer models to reality" (p. 343). Exploring this power relationship by investigating the underlying assumptions of a given representation may offer important opportunities for participants that seek to more fully understand the constructions presented.

For example, in the construction of a visual representation of a corpus of documents, a reductive process is required to collapse the topic space from hundreds of dimensions to just a few for visual mapping or graphing (Aarseth, 1997). In this process, fixed algorithms for lexical analysis are applied to the texts, and the thematic concepts and topics are automatically extracted, appraised and weighted for the construction of the visual representation. Information in a document that does not meet the criteria established algorithmically in the semantic analysis phase may be minimized or discarded from the resulting representation. To investigate this hidden layer the following questions may be appropriate:

What mathematical theories, methods and calculations are used to generate this modeled representation?

What are the known limitations or controversies regarding the application of these theories and methods as applied to the actor, object or process (X) being modeled?

What characteristics of the phenomenon in question (X) are privileged and which are minimized in this reductive process?

In a consideration of the set of characteristics used to describe and analyze X and the identified interests, whose perspective and interests are privileged and whose are minimized?

Who benefits by representing X this way and what underlying biases or worldviews may be identified?
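The reductive mapping process interrogated by these questions can be illustrated with a toy example: a tiny term-document matrix collapsed to two dimensions with a truncated singular value decomposition. The corpus, the raw-count weighting and the choice of SVD are hypothetical stand-ins for whatever lexical analysis a real content-mapping system such as Cartia's employs.

```python
import numpy as np

docs = [
    "light energy heat",
    "light waves color",
    "heat energy temperature",
]

# Lexical analysis: build a vocabulary and a raw term-count matrix
# (one row per document, one column per term).
vocab = sorted({word for doc in docs for word in doc.split()})
counts = np.array([[doc.split().count(w) for w in vocab] for doc in docs],
                  dtype=float)

# Reduction: keep only the two strongest singular directions, so each
# document becomes a single (x, y) point suitable for a visual map.
# Variation that falls outside these directions is simply discarded.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
points_2d = U[:, :2] * S[:2]   # shape: (number of documents, 2)
```

The "hidden layer" discussed above is visible even here: the split-on-whitespace tokenization, the raw-count weighting and the rank-2 truncation are all fixed authorial choices that determine which thematic distinctions survive into the final picture.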

Finally, after examining individual aspects of a program in an analysis phase, interpretation and judgments should be considered in terms of a separate synthesis phase that attempts to unify the analytical knowledge from a range of higher level perspectives (Belland et al., 1991). It is proposed that the user or educator faced with interpreting the overall quality or usefulness of such a system must attempt to balance the positively or negatively perceived attributes in terms of the specifics of their setting and agenda. Here, important higher level questions are discussed in terms of social, cultural and political perspectives.

At a social level, user concerns may include the potential ability to learn existing knowledge, confidently demonstrate learning in formal evaluations and successfully apply the knowledge to new problems in the domain. If the preferences and learning styles of the user differ substantially from the privileged communication modes or representational approaches employed in the system, they may find themselves at a disadvantage for the purposes of learning and evaluation. For instance, it is documented that some learners prefer or have greater experience and aptitude with information presented visually rather than as spoken or textual content (Knobel et al., 1998). Given a diverse and non-reductive conception of learner knowledge and experience, it may be proposed that the widest possible range of modes and representations would provide the most benefit to the largest possible audience.

To help articulate this delicate relationship the following questions are proposed:

How do the representations of knowledge, interaction, activity and interface relate or correspond to my previous experiences in learning and communicating?

How is the digital media textual machine constructed relative to the visual and knowledge representations employed at the level of the interface and how does this relate to my experiences in learning and communicating?

How do the representations and perspectives employed in the system influence the potential meanings that may be experienced by an audience?

From cultural and political perspectives, larger questions emerge when we look beyond the issues of how and what students learn to consider what constitutes knowledge and whose interests it serves. It is important that these questions be addressed by participants lest they suffer from a lack of knowledge regarding the growing technological sophistication of computing environments for education, entertainment and work. It should also be regarded as problematic to allow the complexity of our systems, or our devotion to them, to grow beyond our capacity to understand them in a critical way. Thus, it is important that the questions at this level attempt to discern and contribute to an examination of the political and cultural dimensions of these powerful artifacts. Important questions at this level may include:

Who are the interested individuals, groups, institutions or organizations with authority that may be identified in the development, production, distribution or use of this system?

How are knowledge, communication and collaboration represented in this system?

What historical, cultural and political perspectives and controversies exist with regards to these representations of knowledge, communication and collaboration?

What cultural knowledge, perspectives and practices are privileged in this system and whose interests are served?

Are there social or cultural groups locally or globally whose participation is structured out of this system and what responsibilities and privileges are implied by my participation?

Implications for educational practice

This brief discussion of the implications of collaborative knowledge environment programs for educational practice proceeds by identifying aspects of the approach that may present significant challenges outside the confines of the research setting. First, the systems examined here may be described as technologically sophisticated, economically expensive, and methodologically complex. While some positive learning outcomes are described in the studies examined, they come at the expense of a technological architecture that is cast as a foundational requirement of the approach. In education, this requires casting the social learning space, along with its participants and practitioners, into a networked hypermedia environment. So while the small scale, heavily funded and resource intensive research settings examined in this study may have positively impacted student learning, these successes would not be guaranteed in any attempt to scale across a larger educational landscape.

Second, in both of the research projects examined, content and technical experts outside of the classroom were required to develop and maintain the systems. This requirement has implications for teachers and their classroom practice. In any system that shifts expertise and control in the work environment away from one group, the potential for _intensification_ and _deskilling_ may exist (Apple and Jungck, 1998). These qualities are growing in the daily lives of many teachers worldwide, lives that Michael Apple describes as "... becoming ever more controlled, ever more subject to administrative logics that seek to tighten the reins on the processes of teaching and curriculum" (ibid., p. 133). The social and cultural impact that these systems would bring to school and classroom practice certainly deserves exploration from the perspectives of teachers and the labor process of education. In both studies, the role of teachers was not considered in a primary way due to the major focus on student learning.

Finally, another potential implication regarding the wide scale use of such systems is that concerns about cost, efficiency and evaluation may lead to an expansion of the digital distribution and central management features in such systems. Large scale CKE systems could be conceived for the delivery and evaluation of large portions of the current curriculum. For teachers, this could lead to a further erosion of control over curriculum development and classroom practice. From the learner perspective an intensification of the school experience could be conceived in the evaluation of learning at the cognitive level through the automatic analysis of activity in hypermedia information systems.

In light of the conclusions of this study, this should be considered an inequitable approach given the large variability in previous computer experience and rational, logical thinking skills that must be presumed across diverse student populations, and even within culturally homogeneous segments. Recent research reviews on learning with hypermedia find that the quality of the goal directed activity matters more than the form or structure of the educational materials, which suggests that the underlying motives, assumptions and implications need to be examined (Tergan, 1997). Ultimately, educators should be aware that calls for this type of system may have more to do with currently popular mandates for specified curricular content, systems management approaches to education and competency based testing than with student learning.

Implications for digital media theory and design

This study proposed that classical communication approaches such as structuralism and semiotics may continue to provide useful methods and perspectives for the analysis of emerging cultural forms, provided that they are carefully applied and that their limitations are understood and taken into consideration in the formulation of the conclusions. In this study the conclusions were formulated in terms that may inform efforts from the democratic media education framework to foster the development of critical autonomy with respect to the information systems of current and future practice. By using the methods to develop critical questions and processes for participants to investigate the significance of their own participation in their own settings and agendas, it is proposed that the static and overstated shortcomings of these modern era methods may be minimized.

As a necessary initial approach to new cultural forms such as the Internet based intelligent systems that are emerging in current practice, it is proposed that the models presented here may provide a needed systematic descriptive approach to dynamic and highly complex, multi-component objects. In this study, the model was developed generically in an effort to find a core set of codes that will demonstrate sustained relevance and stability over time and across emerging technologies such as handheld devices or immersive virtual reality interfaces. It is hoped that future research may build from this basic foundation and that a detailed descriptive model of digital media interaction from a user perspective will contribute enduring stability and understanding in a rapidly changing technical area. While the results of the analysis are proposed as non-reductive conclusions, it is hoped that the simplification inherent in the codes of the interaction structure can offer useful constructs for analytical efforts across the domain of networked electronic communications for analysts from a range of interested disciplines and theoretical perspectives.

In considering the results of this study in terms of implications for the design of digital media, two significant considerations stand out. First is the emergence of technical design architectures that appear to parallel some important aspects of the structures defined in the interaction framework. Second is the emergence of Internet programs that employ dynamic, distributed peer to peer networks that effectively flatten the hierarchical information organization identified in currently popular infomediary models. FreeNet (http://freenet.sourceforge.net) and Gnutella (http://gnutella.wego.com) are two new Internet applications that have been explicitly designed to mitigate some of the potential social and cultural implications discussed in this study. Both programs can be considered as alternative architectures that run on top of the Internet but in parallel with the World Wide Web.

FreeNet, an Internet based network architecture designed to make it technically impossible to censor information, "has no centralized administrative infrastructure of domain name servers (DNS) and IP addresses that can be used to track users" (Kahney, 2000, p. 1). In the Gnutella architecture, search queries and responses are distributed among the current members of the network at the time of the search, and both queries and results are sent without personally identifying information regarding the user. In a recent article describing the new architecture, Shirky (2000) states that this technology "points the way to a networking architecture that re-invents the PC as a hybrid client/server while relegating the center of the Internet — where all the action has been recently — to nothing but brokering connections" (p. 1). Thus, in both schemes content and functionality are distributed, and the machines of individual participants play an expanded technical role in the management and dissemination of information. The following discussion will briefly address these developments from the user perspective by contrasting structural aspects of the hierarchical infomediary model with a distributed peer to peer model which is here termed the intelligent knowledge net model.
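The decentralized search scheme that Gnutella popularized can be sketched in miniature: a query is flooded from peer to peer with a time-to-live, there is no central index, and matching results are gathered from whichever peers the query reaches. This is a hedged, simplified sketch of the flooding idea only; the peer names, file names and topology are invented, and the real protocol's message formats and routing details are not modeled.

```python
class Peer:
    """A node in a Gnutella-style network: queries are flooded to
    neighbours rather than sent to a central server."""
    def __init__(self, name, files):
        self.name = name
        self.files = set(files)
        self.neighbours = []

    def connect(self, other):
        # connections are symmetric peer to peer links
        self.neighbours.append(other)
        other.neighbours.append(self)

    def search(self, term, ttl=3, seen=None):
        """Flood the query with a time-to-live, collecting file names
        that match; visited peers are tracked to avoid loops."""
        seen = seen if seen is not None else set()
        if ttl == 0 or id(self) in seen:
            return set()
        seen.add(id(self))
        hits = {f for f in self.files if term in f}
        for peer in self.neighbours:
            hits |= peer.search(term, ttl - 1, seen)
        return hits

a = Peer("a", ["intro.txt"])
b = Peer("b", ["maps.txt"])
c = Peer("c", ["maps-2.txt"])
a.connect(b)
b.connect(c)
found = a.search("maps")   # results arrive from peers a never contacted directly
```

The design point the sketch makes visible is the one noted above: no machine at the "center" holds the index, so there is no single administrative point at which queries can be logged, censored or tied to a user identity.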

In **the infomediary model**, a hierarchical structure may be observed in the relationships among the representations of interface, content, interaction and user activity within the system (see Figure 5-3 above). Content and interactive functionality are aggregated through the market based partnering arrangements of the infomediary before being branded and presented to the user at the interface level. Thus, a locus of control for key aspects of the user experience and the structure of the information flow can be observed to reside with the infomediary. In this study, two important implications of this hierarchical model for the end user have been identified:

The user must engage the content, interaction and activity representations published by the infomediary at the interface level. Thus, the user is subject to the explicit technical, social and cultural codes of expression and representation that are presented.

Information flows through the content and interactive function partners of the infomediary to a privately controlled interface for the user. In entering a private system, the user may become a subject in an economic model that delivers user presence as a product to advertising sponsors. In order to extract economic value, sophisticated user profiling and dynamic behavioral manipulation techniques may be deployed.

In contrast, an **intelligent knowledge net model** situates Internet users at the center of a dynamic, distributed, peer to peer network of self-aware content and interactive functionality. In this model, a network organization of stakeholder alliances may be predicted based on the emergence of non-proprietary software standards and freely available, trustworthy public information technologies. As communities of interest develop domain specific descriptive schema languages and deploy open information model applications and distributed content sharing technologies on the public Internet, the locus of control for choices that impact the material reality of the user experience shifts to the end user. In this model, the two implications from the infomediary model described above are seen to shift as follows:

Open information models at the content and interaction levels allow the user to determine a range of device and representation approaches at the interface level. Information may be translated, filtered, and interpreted to suit the unique linguistic and knowledge representation requirements of the user. For instance, as depicted in Figure 5-4 below, a "knowledge browser" application may employ a "cartographic interface representation" to allow the user to visually analyze, evaluate and access the universe of offerings. Potential approaches to reading and communicating are expanded and new opportunities may be enabled for underprivileged stakeholders.

Descriptions of, and requests for, information are published on the public network and flow in a model similar to Internet newsgroup message distribution. Users participate by entering a public information space and exchanging content through dynamically established, temporary peer to peer connections that allow them both to serve information to and to receive information from other participants. In this scheme, users may potentially retain control over the generation and ownership of their identity and activity profiles. This information may then be personally analyzed, or exchanged in the marketplace at the discretion and to the benefit of the user.
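The user-held activity profile described in the second implication can be sketched as a trace that lives on the user's own machine and is released only by explicit consent. This is a hypothetical illustration of the locus-of-control shift, not the API of any deployed system; the class and method names are invented for the example.

```python
class LocalProfile:
    """A user-owned activity trace: recorded locally, never reported
    to a central infomediary, released only at the user's discretion."""

    def __init__(self):
        self._events = []          # stays on the user's machine

    def record(self, event):
        """Log an activity event locally."""
        self._events.append(event)

    def export(self, consent=False):
        """Release the trace only with explicit consent; without it,
        a requester learns nothing about the user's activity."""
        return list(self._events) if consent else []

profile = LocalProfile()
profile.record("searched: knowledge maps")
profile.record("read: open schema primer")

withheld = profile.export()              # no consent: nothing leaves
shared = profile.export(consent=True)    # user chooses to exchange it
```

Contrast this with the infomediary model above, where the equivalent trace is generated inside a privately controlled system and its exchange value accrues to the infomediary rather than to the user.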

Thus, it appears that the separation of interface, content and interaction at a technical level may lead to application features that ultimately mitigate two of the major concerns identified in the programs examined in this study: the location of the locus of control for the generation of the interface representations presented, and the ownership and control of the memory traces generated through use of the system. In the next section, a brief summary and final comments are presented.

Summary and final comments

In this chapter implications of the results were discussed in terms of the two goals of the study. First, the digital media interaction framework was discussed in terms of the application and limitations of the model. By interpreting the results in conjunction with the research on hypermedia organization and learning theory, a range of potential implications may be proposed which can be used to inform the specific settings and agendas of participants and practitioners. Implications for learning include linguistic considerations that favor learners who have more experience with hypermedia organization and tools. There are also possible constraints identified in the literature that suggest that formal structure may restrict ways of thinking and understanding in the problem domain.

Next, a summary of the formal structure of the Collaborative Knowledge Environment interaction format was presented with a discussion of the information organization and flow that was identified. Information flow was found to be hierarchical in the CFH program versus a network organization identified in the KIE program. The constructive and communicative functionality of the KIE served to open the flow of information in the program to sources that include students and other alternative information publishers from the wider Internet. In the CFH, the range of expert perspectives was selected and configured by the researchers and no facility for student contributions was available.

The second study goal proposed the development of a framework for digital media autonomy by examining implications and critical questions for practitioners and participants. The interaction model was employed in an approach that seeks to foster autonomy by establishing a process for problematizing common sense interface constructions. Critical questions are proposed to foster the development of problem posing inquiry processes that may be employed across educational settings and agendas as well as for a range of networked electronic communications systems of current and future practice. Structural qualities that should ideally be recognized and desired by participants include interactive functionality that supports emergent goals as well as the development and exchange of a wide range of alternative knowledge and information representations. Future efforts include developing the critical framework for digital media education efforts across specific applications that are found in current practice in education, entertainment, consumer and work domains.

In the discussion of implications for digital media theory and practice, elements from a social-cultural perspective are described as emerging in new technological developments on the Internet. This implies that technological approaches may evolve to mitigate some of the areas of concern identified in this study and also suggests that the trend in technical design to include social constructs and considerations may continue. Ultimately, it is proposed that socially oriented designers and theorists may continue to benefit from a synergistic relationship that privileges democratic and emancipatory characteristics of popular social systems such as accessibility, transparency, equity, flexibility and stability in the design of information systems.

References

Aarseth, Espen J. (1997). _Cybertext: Perspectives on Ergodic Literature_. Baltimore, MD: The Johns Hopkins University Press.

Andersen, Peter Bogh. (1990). _A Theory of Computer Semiotics_. New York: Cambridge University Press.

Andersen, Peter Bogh. (1993). A semiotic approach to programming. In Andersen, P. and Holmvquist, B. (Eds.), _The Computer as Medium_. New York: Cambridge University Press.

Apple Computer, Inc. (1989). _HyperCard Stack Design Guidelines._ Menlo Park, California: Addison-Wesley Publishing Company, Inc.

Apple, Michael., Beane, J. A. (1995). The Case for Democratic Schools. In Apple, M. and Beane, J. (Eds.), _Democratic Schools._ Association for Supervision and Curriculum Development.

Apple, Michael., Jungck, Susan. (1998). "You don't have to be a teacher to teach this unit": Teaching, Technology, and Control in the Classroom. In Bromley, H., Apple, M.W. (Eds.). _Education/Technology/Power: Educational Computing as a Social Practice_. Albany, NY: State University of New York Press.

Arijon, Daniel. (1976). _Grammar of the Film Language_. Los Angeles, CA: Silman-James Press.

Arnold, Michael. (1995). The semiotics of Logo. _The Journal of Educational Computing Research_. 12(3) 205-218.

Babcock, Charles. (1998). ISP's Weigh the Pros and Cons of Datamining. _Jesse Berst's Anchordesk_. [on-line serial] Available: http://www.anchordesk.com/a/adt0406ba/3263.

Bakeman, Roger., Gottman, John M. (1997). _Observing interaction: an introduction to sequential analysis_. New York: Cambridge University Press.

Bakeman, R. & Quera V. (1995). _Analyzing interaction: Sequential analysis with SDIS and GSEQ_. New York: Cambridge University Press.

Banks, J. A. (1998). The Lives and Values of Researchers: Implications for Educating Citizens in a Multicultural Society. _Educational Researcher_ _27_ (7).

Barthes, R. (1957). _Mythologies_. New York: Hill and Wang.

Barthes, R. (1977). _Image - Music - Text_. New York: Hill and Wang.

Bell, P. (1997). Using argument representations to make thinking visible for individuals and groups. In R. Hall, N. Miyake, & N. Enyedy (Eds.), _Proceedings of CSCL '97: The Second International Conference on Computer Support for Collaborative Learning._ 10-19. Toronto: University of Toronto Press.

Bell, P., Davis, E. (1996). Designing an activity in the Knowledge Integration Environment. _Proceedings of the American Educational Research Association conference, USA, 1996_.

Belland, J. C., Duncan, J. K., Deckman, M. (1991). Criticism as Methodology for Research in Educational Technology. In Hlynka, D., Belland, J. (Eds.). _Paradigms Regained: The Uses of Illuminative, Semiotic and Post-Modern Criticism as Modes of Inquiry in Educational Technology._ New Jersey: Educational Technology Publications.

Berners-Lee, T., Cailliau, R., Luotonen, A., Nielson, H. F., Secret, A. (1995). World-Wide Web. In Baecker, R. M., Grudin, J., Buxton, W. A. S., and Greenberg, S. (Eds.), _Readings in Human Computer Interaction: Toward the Year 2000_. San Francisco, CA: Morgan Kaufmann Publishers.

Bernstein, M. (1998). Patterns of Hypertext. _Proceedings of the ACM Hypertext 98_. Association of Computing Machinery.

Berst, J. (1998). E-businesses may be discriminating against you. _Jesse Berst's Anchordesk_. [on-line serial] Available: http://www.anchordesk.com/a/adt0406ba/3263.

Biocca, Frank., Levy, Mark R. (1995a). Virtual Reality as a Communication System. _Communication in the Age of Virtual Reality_. Biocca, F. and Levy, M. (Eds.), New Jersey: Lawrence Erlbaum Associates, Inc.

Biocca, Frank., Levy, Mark R. (1995b). Communication Applications of Virtual Reality. _Communication in the Age of Virtual Reality._ Biocca, F. and Levy, M. (Eds.), New Jersey: Lawrence Erlbaum Associates, Inc.

Bolter, Jay D. (1991). _Writing Space: The Computer, Hypertext and the History of Writing._ New Jersey: Lawrence Erlbaum Associates, Inc.

Bowers, C. A. (1988). _The Cultural Dimensions of Educational Computing_. New York: Teachers College Press.

Brandt, Per Aage. (1993). Meaning and the machine: Toward a semiotics of interaction. Andersen, P. B., Holmqvist, B., Jensen, J. F. (Eds.), _The Computer as Medium_. New York: Cambridge University Press.

Brin, David. (1996). Excerpt from The Transparent Society. _Wired magazine_. [on-line serial] Available: http://www.wired.com/wired/archive/4.12/fftransparent.html

Bromley, H. (1998). Data-Driven democracy? Social Assessment of Educational Computing. In Bromley, H., Apple, M.W. (Eds.). _Education/ Technology/ Power: Educational Computing as a Social Practice_. Albany, NY: State University of New York Press.

Brown, J.S., Collins, A., Duguid, P. (1989). Situated Cognition and the Culture of Learning. _Educational Researcher_. _1_.

Buckingham, D. (1991). Teaching about the media. In Lusted, D. (Ed.), _The Media Studies Book: A Guide For Teachers_. New York: Routledge.

Budnitz, M. E. (1999). Industry self-regulation of Internet privacy: The sound of one hand clapping. _Computers, Freedom and Privacy 1999 conference paper_. [on-line] Available: http://www.cfp99.org/program/papers/budnitz.htm

Butler, Jeremy G. (1994). _Television: Critical Methods and Applications._ Belmont, CA: Wadsworth Publishing Company, Inc.

Cartia. (1998). Themescape product information. _Cartia company website._ [on-line]. Available: http://www.cartia.com

Chen, C., Rada, R. (1996). Interacting with hypertext: A meta-analysis of experimental studies. _Human-Computer Interaction. 11_. 125-156. New Jersey: Lawrence Erlbaum Associates.

Collins, A., Brown, J. S., Newman, S. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing and mathematics. In Resnick, L. (Ed.), _Knowing, Learning and Instruction: Essays in Honor of Robert Glaser._ Hillsdale, NJ: Lawrence Erlbaum Associates.

Damarin, S. (1991). Recontextualizing computers in education: A response to Streibel. In Hlynka, D., Belland, J. (Eds.). _Paradigms Regained: The Uses of Illuminative, Semiotic and Post-Modern Criticism as Modes of Inquiry in Educational Technology._ New Jersey: Educational Technology Publications.

Danesi, M., Perron, P. (Eds.). (1999). _Analyzing Cultures: An Introduction and Handbook_. Bloomington, IN: Indiana University Press.

DeVaney, Ann. (1991). A Grammar of Educational Television. In Hlynka, D., and Belland, J. (Eds.), _Paradigms Regained: The Uses of Illuminative, Semiotic and Post-Modern Criticism as Modes of Inquiry in Educational Technology_. New Jersey: Educational Technology Publications.

DeVaney, Ann. (Ed.). (1994). _Watching Channel One: The Convergence of Students, Technology and Private Business_. Albany, NY: State University of New York Press.

DeVaney, Ann. (1998). Can and Need Educational Technology Become a Postmodern Enterprise? _Theory Into Practice_. _Winter_.

Dewey, J. (1902). _The Child and the Curriculum_. Chicago, IL: University of Chicago Press.

Eco, Umberto. (1976). _A Theory of Semiotics._ Indianapolis, IN: Indiana University Press.

Eisenstein, Sergei. (1949). _Film Form: Essays in Film Theory_. New York: Harcourt Brace Jovanovich Publishers.

Ellsworth, E. (1987). Media interpretation as a social and political act. _Journal of Visual Literacy_, 8(2), 27-38.

Ellsworth, E. (1990). Educational films against critical pedagogy. In E. Ellsworth and M. Whatley (Eds.) _The Ideology of Images in Educational Media_. New York: Teachers College Press.

Fiske, John. (1982). _Introduction to Communication Studies_. New York: Routledge.

Fiske, John. (1987). _Television Culture_. New York: Routledge.

Freire, P. (1970). _Pedagogy of the Oppressed_. New York: The Seabury Press.

Greenacre, Michael J. (1993). _Correspondence analysis in practice_. London: London Academic Press.

Grint, Keith. (1992). Sniffers, lurkers, actor networkers: Computer mediated communications as a technical fix. In Benyon, J. and Mackay, H. (Eds.), _Technological Literacy and the Curriculum._ London: The Falmer Press.

Guba, Egon., Lincoln, Yvonne. (1989). _Fourth Generation Evaluation_. London: Sage publications.

Hasle, P. (1993). Logic grammar and the triadic sign relation. In Andersen, P. B., Holmqvist, B., and Jensen, J. (Eds.), _The Computer as Medium_. New York: Cambridge University Press.

Hawkes, Terrence. (1977). _Structuralism and Semiotics_. Berkeley, CA: University of California Press.

Hayashi, K., Nomura, T., Hazama, T., Takeoka, M., Hashimoto, S. and Gumundson, S. (1998). Temporally-threaded workspace: A model for providing activity-based perspectives on document spaces. _In Proceedings of the ACM Hypertext 98 conference_.

Heim, M. (1993). _The Metaphysics of Virtual Reality._ New York: Oxford University Press.

Hetzler, B., Whitney, P., Martucci, L., Thomas, J. (1998). Multi-faceted Insight Through Interoperable Visual Information Analysis Paradigms. In _Proceedings of IEEE Symposium on Information Visualization_ , InfoVis '98, October 19-20, 1998, Research Triangle Park, North Carolina. pp.137-144. [on-line]. Available: http://multimedia.pnl.gov:2080/infoviz/insight.pdf

Hlynka, Denis. (1989). Applying semiotic theory to educational technology. _Educational Communications and Technology conference_. Dallas, Tx.

Hlynka, D., Belland, J. (Eds.). (1991). _Paradigms Regained: The Uses of Illuminative, Semiotic and Post-Modern Criticism as Modes of Inquiry in Educational Technology_. New Jersey: Educational Technology Publications.

Hutchins, E., Hollan, J., Norman, D. (1986). Direct Manipulation Interfaces. In Norman, D., Draper, S. (Eds.), _User Centered System Design_. Hillsdale, NJ: Lawrence Erlbaum Associates.

Jacobson, M. J. and Spiro, R. J. (1995). Hypertext learning environments, cognitive flexibility, and the transfer of complex knowledge: An empirical investigation. _Journal of Educational Computing Research_ _12 (4)._ 301-33.

Johnson-Eilola, J. (1998). Living on the surface: Learning in the age of global communication networks. In Snyder, I. (Ed.), _Page to Screen: Taking Literacy into the Electronic Era_. New York : Routledge.

Jones, S. (1998). Studying the net: Intricacies and issues. In Jones, S. (Ed.), _Doing Internet Research: Critical Issues and Methods for Examining the Net._ London: Sage Publications.

Joyce, Michael. (1995). _Of Two Minds: Hypertext Pedagogy and Poetics_. Ann Arbor, MI: The University of Michigan Press.

Joyce, Michael. (1998). New stories for new readers: Contour, coherence and constructive hypertext. In Snyder, I. (Ed.) _Page to Screen: Taking Literacy into the Electronic Era_. New York : Routledge.

Kahney, L. (2000). Alternative Net Protects Pirates. _Wired News._ March 8, 2000. [on-line]. Available: http://www.wired.com/news/technology/0,1282,34768,00.html

Kjorup, S. (1977). Film as a meeting place for multiple codes. In D. Perkins and B. Leandar (Eds.), _The Arts and Cognition_. Baltimore, MD: Johns Hopkins University Press.

Knobel, M., Lankshear, C., Honan, E., Crawford, J. (1998). The wired world of second language education. In Snyder, I. (Ed.) _Page to Screen: Taking Literacy into the Electronic Era_. New York: Routledge.

Korac, Nada. (1988). Functional, cognitive and semiotic factors in the development of audiovisual comprehension. _Educational Communications and Technology Journal_. _36(2)_. 67-91.

Landow, George P. (1997). _Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology_. Baltimore, MD: The Johns Hopkins University Press.

Laurel, B. (1991). _Computers as Theater_. New York: Addison-Wesley Publishing Co.

Laurel, B. (Ed.). (1990). _The Art of Human-Computer Interface Design_. New York: Addison-Wesley Publishing Co.

Lee, J. Y. (1996). Charting the codes of cyberspace: A rhetoric of electronic mail. In Strate, L., Jacobsen, R., and Gibson, S. R. (Eds.), _Communication and Cyberspace: Social Interaction in an Electronic Environment_. Cresskill, NJ: Hampton Press, Inc.

Lehrer, Richard. (1991). Authors of Knowledge: Patterns of Hypermedia Design. _Paper presented at the annual meeting of the American Educational Research Association 1991_. Chicago, IL.

Levonen, Jarmo J. (1995). Semiotics of interactive and manipulative graphics in computer learning environments. _Machine-Mediated Learning_. 5(3&4). 177-184. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Ma, Yan. (1996). A semiotic analysis of icons on the world wide web. In Griffin, R. (Ed.), _Eyes on the Future: Converging Images, Ideas, and Instruction_. Blacksburg, VA: International Visual Literacy Association.

Magnusson, P. C. (1975). Student rights and the misuses of psychological knowledge. In Apple, M., and Haubrich, V. (Eds.), _Schooling and the Rights of Children_. Berkeley, CA: McCutchan Publishing Corporation.

Masterman, L. & Mariet, F. (1994). _Media education in 1990s Europe: A teachers' guide._ The Netherlands: Council of Europe Press and Croton; New York: Manhattan Publishing Co.

Metz, Christian. (1974). _Film Language: A Semiotics of the Cinema_. Chicago: University of Chicago Press.

Monaco, J. (1981). _How to Read a Film: The Art, Technology, Language, History, and Theory of Film and Media_. New York: Oxford University Press.

Monke, L. (1997). Computers in education: The web and the plow. [on-line] Available: http://www.grinnell.edu/individuals/MONKE/online_doc.html#Plow.

Moore, B. (1991). Teaching about the media. In Lusted, D. (Ed.), _The Media Studies Book: A Guide For Teachers_. New York: Routledge.

Morrow, James. (1975). Toward a grammar of media. _Media and Methods_. _10_.

Muffoletto, R., Knupfer, N. (Eds.). (1994). _Computers in Education: Social, Political and Historical Perspectives_. Cresskill, NJ: Hampton Press.

Newhagen, J., Rafaeli, S. (1997). Why communications researchers should study the Internet: A dialogue. _Journal of Computer-Mediated Communication_, 3(4). [on-line serial] Available: http://www.ascusc.org/jcmc/vol1/issue4/rafaeli.html

NSF website. (2000). National Science Foundation website. [on-line]. Available: http://www.nsf.gov

Oppenheimer, T. (1997). The computer delusion. _The Atlantic Monthly_. _7._ [on-line]. Available: http://theatlantic.com/issues/97july/computer.htm

Pettit, Philip. (1975). _The Concept of Structuralism: A Critical Analysis_. Berkeley, CA: University of California Press.

Popkewitz, T., Shutkin, D. (1994). Social science, social movements, and the production of educational technology in the U.S. In Muffoletto, R. and Knupfer, N. (Eds.), _Computers in Education: Social, Political and Historical Perspectives_. Cresskill, NJ: Hampton Press.

Propp, V. (1968). _Morphology of the Folktale_. Indianapolis, IN: American Folklore Society and Indiana University.

Reeves, B., Nass, C. (1996). _The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places._ Cambridge University Press.

Resnick, Lauren B. (Ed.). (1989). _Knowing, Learning, and Instruction: Essays in honor of Robert Glaser_. New York: Lawrence Erlbaum Associates.

Resnick, Paul. (Ed.), (1997). Special section: Recommender systems. _Communications of the ACM_. _40(2)_.

Richardson, F. C., Rogers, A., McCarroll, J. (1998). Toward a dialogical self. _American Behavioral Scientist_. _41(4)_. 496-515. London: Sage Publications Inc.

Robinson, S. (1999). CD software is said to monitor users' listening habits. _New York Times_. Nov. 1, 1999. [on-line] Available: http://www.nytimes.com/library/tech/99/11/biztech/articles/01real.html

Salomon, Gavriel. (1998). Technology's promises and dangers in a psychological and educational context. _Theory Into Practice_. _Winter_.

Sanderson, Penelope, Fisher, Carolanne. (1994). Introduction to this special issue on exploratory sequential data analysis. _Human-Computer Interaction_. _9_. 247-250. New York: Lawrence Erlbaum Associates, Inc.

Sartorius, P. J. (1975). Social-psychological concepts and the rights of children. In Apple, M., and Haubrich, V. (Eds.), _Schooling and the Rights of Children_. Berkeley, CA: McCutchan Publishing Corporation.

Saussure, F. (1966). _Course in General Linguistics_. New York: McGraw-Hill.

Scholes, R. (1982). _Semiotics and Interpretation_. New Haven, CT: Yale University Press.

Seiter, E. (1992). Semiotics, structuralism and television. In Allen, R. (Ed.), _Channels of Discourse, Reassembled_ : _Television and Contemporary Criticism_. Chapel Hill, NC: University of North Carolina Press.

Shirky, C. (2000). Content shifts to the edges. _Business 2.0._ April 2000. [on-line]. Available: http://www.business2.com/articles/2000/04/content/break_3.html.

Shipman, F. M., Marshall, C. C. (1996). Formality considered harmful: Experiences, emerging themes, and directions. _Xerox Palo Alto Research Center_ _website_. [on-line] Available: http://bush.cs.tamu.edu:80/%7Eshipman/formality-paper/harmful.htm.

Shneiderman, Ben. (1992). _Designing the User Interface: Strategies for Effective Human-Computer Interaction_. New York: Addison-Wesley.

Shneiderman, Ben. (1995). A taxonomy and rule base for the selection of interaction styles. In Baecker, R. M., Grudin, J., Buxton, W. A. S., and Greenberg, S. (Eds.), _Readings in Human Computer Interaction: Toward the Year 2000_. San Francisco, CA: Morgan Kaufmann Publishers.

Shor, I., Freire, P. (1987). _A Pedagogy for Liberation: Dialogues on Transforming Education_. Westport, CT: Bergin and Garvey Publishers, Inc.

Smith, D., Keep, R. (1988). Computer software as text: Developments in the evaluation of computer based educational media and materials. _Aspects of Educational Technology XXI: Designing New Systems and Technologies for Learning_. Nichols Publishing Company.

Smith, P. A., Newman, I. A., Parks, L. M. (1997). Virtual hierarchies and virtual networks: Some lessons from hypermedia usability research applied to the World Wide Web. _The International Journal of Human-Computer Studies_. _47_. 67-95.

Spiro, R.J., Jehng, J.C. (1990). Cognitive flexibility and hypertext: Theory and technology for the nonlinear and multidimensional traversal of complex subject matter. In Nix, D. and Spiro, R. (Eds.), _Cognition, Education, and Multimedia: Exploring Ideas in High Technology_. New York: Lawrence Erlbaum Associates.

Sterne, J. (1998). Thinking the Internet: Cultural studies versus the millennium. In Jones, S. (Ed.), _Doing Internet Research: Critical Issues and Methods for Examining the Net._ London: Sage Publications.

Steuer, Jonathan. (1992). Defining virtual reality: Dimensions determining telepresence. _Journal of Communication_. _42(4)_.

Streibel, Michael. (1989). Instructional plans and situated learning: The challenge of Suchman's theory of situated action for instructional designers and instructional systems. In _Proceedings of Selected Research Papers presented at the Annual Meeting of the Association for Educational Communications and Technology_. _1989 Dallas, TX_. ED 308 844.

Streibel, Michael. (1991). A critical analysis of computers in the classroom. In Hlynka, D., Belland, J. (Eds.). _Paradigms Regained: The Uses of Illuminative, Semiotic and Post-Modern Criticism as Modes of Inquiry in Educational Technology._ New Jersey: Educational Technology Publications.

Suchman, L. (1987). _Plans and Situated Actions: The Problem of Human / Machine Communication_. New York: Cambridge University Press.

Sudweeks, F., Simoff, S. J. (1998). Complementary explorative data analysis: The reconciliation of quantitative and qualitative principles. In Jones, S. (Ed.), _Doing Internet Research: Critical Issues and Methods for Examining the Net._ London: Sage Publications.

Taylor, Robert P. (Ed.). (1980). _The Computer in the School: Tutor, Tool, Tutee_. New York: Teachers College Press.

Tergan, S.O. (1997). Misleading theoretical assumptions in hypertext / hypermedia research. _Journal of Educational Multimedia and Hypermedia_. _6(3/4)_. 257-283.

Tucker, Susan A., Dempsey, John V. (1991). Semiotic criteria for evaluating instructional hypermedia. _Paper presented at the annual meeting of the American Educational Research Association 1991_. Chicago, IL.

Tyner, K. (1998). _Literacy in a Digital World: Teaching and Learning in the Age of Information._ New York: Lawrence Erlbaum Associates, Inc.

U.S. Dept. of Education. (1996). Getting America's Students Ready for the 21st Century: Meeting the Technology Literacy Challenge. _U.S. Dept. of Education website._ [on-line]. Available: http://www.ed.gov/Technology/Plan/NatTechPlan/execsum.htm.

Verge Software. (1999). Verge Insight Product Information website. [on-line]. Available: http://www.vergesoft.com

Wilson, K. G. (1988). _Technologies of Control: The New Interactive Media for the Home_. Madison, WI: The University of Wisconsin Press.

Winn, William D., et al. (1995). Semiotics and the design of objects, actions and interactions in virtual environments. Presented at the symposium, _Semiotics and cognition: Issues in the symbolic design of learning environments_. Annual meeting of the American Educational Research Association, San Francisco, CA.

Zettl, Herbert. (1990). _Applied Media Aesthetics: Sight Sound Motion_. Belmont, CA: Wadsworth, Inc.

Zuboff, S. (1988). _In the Age of the Smart Machine: The Future of Work and Power_. New York: Basic Books Inc.

Appendix A: Interaction structure variable and value definitions

Interaction structure: Syntax (Technical representational codes)

**Interface function** (Static, Selective, Configurative, Constructive, Autonomous)

The interface function describes a range of functionality in a relationship between an operator and the content of a system.

**Static** \- Content is fixed within the system

**Selective** \- Content may be selected by a simple mechanism

**Configurative** \- Content is fixed but access is configurable

**Constructive** \- Content creation is enabled, syntax and scope describe range

**Autonomous** \- A hidden algorithmic process is employed to generate the content

**Metaface function** (Static, Configurative, Constructive, Autonomous)

Describes a range of functionality in a relationship between an operator and the interface level of a system.

**Static** \- Interface functions and elements are fixed

**Configurative** \- Interface functions and elements are selectable and configurable

**Constructive** \- Interface functions and elements may be created, syntax scope describes range

**Autonomous** \- A hidden algorithmic process is employed to generate interface elements

**Memory function** (None, System, Operator)

Describes the existence and ownership of a memory mechanism for a given system. Memory content characteristics may be described in terms of the interface and metaface variables.

**None** \- No memory mechanism is available for the system

**System** \- A memory mechanism is available for the system owner

**Operator** \- A memory mechanism is available for the operator

**Reality function** (Modeled, Digitized)

Describes a consideration of the relationship between the content of the system and the reality of the actors, objects, or processes represented. Not an exhaustive list.

**Modeled** \- Describes a representation based on a formal or quantitative model

**Digitized** \- Describes a representation based on a conversion from analog to digital form
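The technical representational code variables above form a closed vocabulary of values. As an illustration only (the study applies these codes by hand; the class names and value strings below are my own rendering, not part of the coding instrument), the taxonomy could be sketched as Python enumerations:

```python
from enum import Enum


class InterfaceFunction(Enum):
    """Relationship between an operator and the content of a system."""
    STATIC = "static"                # content is fixed within the system
    SELECTIVE = "selective"          # content may be selected by a simple mechanism
    CONFIGURATIVE = "configurative"  # content is fixed but access is configurable
    CONSTRUCTIVE = "constructive"    # content creation is enabled
    AUTONOMOUS = "autonomous"        # a hidden algorithmic process generates content


class MemoryFunction(Enum):
    """Existence and ownership of a memory mechanism for a system."""
    NONE = "none"          # no memory mechanism is available
    SYSTEM = "system"      # memory is available to the system owner
    OPERATOR = "operator"  # memory is available to the operator


# The metaface function reuses a subset of the interface vocabulary
METAFACE_VALUES = {
    InterfaceFunction.STATIC,
    InterfaceFunction.CONFIGURATIVE,
    InterfaceFunction.CONSTRUCTIVE,
    InterfaceFunction.AUTONOMOUS,
}
```

Representing the values as enumerations rather than free-form strings makes the closed nature of each variable explicit: a coder can only assign one of the defined values.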

Interaction structure: Conventional representational codes

**Operator function** (Explorative, Constructive, Communicative, Collaborative)

Describes a range of functionality in a relationship between an operator and the proposed purposes of a textual machine.

**Explorative** \- Describes functionality that enables an operator to interact with the various content and interface elements present in a system

**Constructive** \- Describes functionality that enables an operator to create and publish permanent content or interface elements on the system

**Communicative** \- Describes functionality that enables synchronous or asynchronous communication with other operators

**Collaborative** \- Describes functionality that enables synchronous control of remote systems

**Operator perspective** (Subjective, Objective, Universal, Reflexive)

Describes a range of observable presentation styles in relation to system content and / or activity.

**Subjective** \- A personal or first person subjective perspective is suggested by content that is presented without interactive function

**Objective** \- An impersonal or third person objective perspective is suggested by content that includes at least selective control within the domain

**Universal** \- A universal or God's eye view perspective is suggested by interactive function that implies knowledge of the whole domain.

**Reflexive** \- Describes functionality that enables an operator to access representations of past content or activity via a memory function

**Operator Identity** (Personal real, Impersonal, Constructive, Multiple)

Describes a range of functionality in the relationship of an operator to representations of their unique identity on the system.

**Personal real** \- The operator is identified uniquely and personally with real life identifying information on the system

**Impersonal** \- The operator is not identified uniquely on the system

**Constructive** \- A set of identifying characteristics may be configured or constructed by the operator allowing anonymous interaction

**Multiple** \- Describes functionality that allows multiple identities to be selected, configured and managed during an interactive session

**Operator presence** (Static, Configurative, Generative)

Describes a range of functionality in a relationship of an operator to representations of their presence on the system.

**Static** \- Evidence of presence may not be manipulated by an operator

**Configurative** \- Describes functionality that enables an operator to manipulate memory function evidence of personal presence available to others on the system

**Generative** \- Describes a system that generates a memory function trace based on operator engagement with the system

Interaction structure: Variable scope

**Syntax scope:** Describes the scope of the application of a variable or value as individual, group or universal within the defined domain.

**Individual** \- Variable applies to an individual operator

**Group** \- Variable applies to a defined group within the system

**Universal** \- Variable applies to all operators system wide or within the defined domain

Interaction structure: Variable dynamics

**Syntax dynamics:** Describes the dynamics of a variable in the context of its application as static or dynamic, indicating whether the value is fixed or may be manipulated during use of the system.

**Static** \- Value of the variable is fixed for a given session

**Dynamic** \- Value of the variable is changeable within the course of a session
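Scope and dynamics qualify how any coded value applies within a domain. A hypothetical sketch of a coded value carrying both qualifiers (the `CodedValue` class and field names are mine, introduced only to show how the qualifiers attach to a value):

```python
from dataclasses import dataclass

SCOPES = ("individual", "group", "universal")
DYNAMICS = ("static", "dynamic")


@dataclass(frozen=True)
class CodedValue:
    """A single coded value together with its scope and dynamics qualifiers."""
    variable: str   # e.g. "memory function"
    value: str      # e.g. "operator"
    scope: str      # individual operator, defined group, or system-wide
    dynamics: str   # static (fixed per session) or dynamic (changeable in-session)

    def __post_init__(self):
        # Reject qualifiers outside the defined vocabularies
        if self.scope not in SCOPES:
            raise ValueError(f"unknown scope: {self.scope}")
        if self.dynamics not in DYNAMICS:
            raise ValueError(f"unknown dynamics: {self.dynamics}")


# Example: an operator-owned memory function coded at individual scope,
# fixed for the duration of a session
mem = CodedValue("memory function", "operator", "individual", "static")
```

The frozen dataclass mirrors the coding practice: once a value is assigned for a session of analysis, its qualifiers travel with it and are not rewritten.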

Appendix B: Interaction structure coded for the study data

The interaction structure coded for the KIE program

**Interface function** \- static, selective, configurative, constructive, autonomous

**Metaface function** \- configurative, constructive

**Memory function** \- system, operator (individual)

**Reality function** \- digitized, modeled

**Operator function** \- explorative, constructive, communicative (universal)

**Operator perspective** \- subjective, objective, universal, reflexive

**Operator identity** \- personal real

**Operator presence** \- static

The interaction structure coded for the CFH program

**Interface function** \- static, selective, configurative

**Metaface function** \- static

**Memory function** \- none

**Reality function** \- modeled

**Operator function** \- explorative

**Operator perspective** \- subjective, objective, universal

**Operator identity** \- impersonal

**Operator presence** \- static
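The two codings above can also be compared mechanically. A minimal sketch (the variable keys and set representation are my own; "none" for the CFH memory function is rendered as an empty set):

```python
# Coded interaction structure values from Appendix B, one set per variable
kie = {
    "interface": {"static", "selective", "configurative", "constructive", "autonomous"},
    "metaface": {"configurative", "constructive"},
    "memory": {"system", "operator"},
    "reality": {"digitized", "modeled"},
    "operator_function": {"explorative", "constructive", "communicative"},
    "operator_perspective": {"subjective", "objective", "universal", "reflexive"},
    "operator_identity": {"personal real"},
    "operator_presence": {"static"},
}

cfh = {
    "interface": {"static", "selective", "configurative"},
    "metaface": {"static"},
    "memory": set(),  # "none": no memory mechanism was coded for CFH
    "reality": {"modeled"},
    "operator_function": {"explorative"},
    "operator_perspective": {"subjective", "objective", "universal"},
    "operator_identity": {"impersonal"},
    "operator_presence": {"static"},
}

# Values coded for KIE but absent from CFH, per variable
kie_only = {var: kie[var] - cfh[var] for var in kie}
```

Set difference surfaces the structural contrast directly: the KIE program's constructive, communicative, and memory-backed reflexive functions have no counterparts in the CFH coding.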

Identified technical interaction code domains and examples

_Code domain -_ Internet hypermedia _; KIE program -_ Web browser based WIMP style GUI interface with interactive graphics in Java; _CFH program -_ WIMP style GUI interface in Apple HyperCard for Macintosh

_Code domain -_ Visualization technologies; _KIE program -_ Graphical concept frames to organize and represent knowledge _; CFH program -_ Graphical theme list to organize and present themes and links to relevant mini cases

_Code domain -_ Personalization technologies; _KIE program -_ Automated activity prompts and online "hints"; _CFH program -_ no evidence

Identified social interaction code domains and examples

_Code domain -_ Constructivist psychology; _KIE program -_ Schematic expert knowledge models used in concept frames _; CFH program -_ Complex knowledge modeled using schemas, cases and mini cases

_Code domain -_ Learning theory; _KIE program -_ Debate phase activities from Scaffolded Knowledge Integration framework _; CFH program -_ Thematic criss-crossing and case based exploration from Cognitive Flexibility Theory

_Code domain -_ Professional inquiry; _KIE program -_ Investigation and communication activities from scientific research _; CFH program -_ Exploration and integration of multiple expert perspectives from scholarly inquiry

