The world is increasingly dividing along digital lines that follow regional borders; these lines can also be seen to divide cultural groups [4]. The divide manifests itself as inequality in the use of information and communication technology (ICT). There has been intensive research on the new economy, working culture, and leisure time that modern ICT brings [4]. These studies focus on how technology affects society and culture, and how the two interact. Still, there is a notable lack of research on how culture and society affect technological decisions in ICT. A popular view states that technology is central to social change, developing independently of society [9].
On the contrary, technological decisions on different levels are often made not solely because of the characteristics of the technology itself, but also because of economic, political, ideological, or cultural motives (for example, [9]). For instance, several motivations can be attributed to the development and use of GNU/Linux. Arguably, GNU/Linux is advanced (a technical motivation), free of initial investment (an economic motivation), rooted in hacker ethics and the free software movement (ideological and social motivations), and sometimes a carrier of a cultural or political message (for example, IMPI Linux in South Africa and RedFlag Linux in China, which have their roots in concerns about "digital colonialism" and in government support for an independent operating system, respectively).
It is a common view that science and technology are different spheres [9]. Contrary to this belief, technology has contributed as much to science as vice versa; consider the great dependence of science on the computer [9]. For example, computational mathematics, chemistry, and physics have led to new, important findings. Adding a societal component from this dialectic standpoint, we suggest an approach toward ICT production that:
CS is a young and diverse discipline. Its roots lie in mathematics (theory), engineering (design), and natural sciences (modeling) [6]. Some aspects of computing, such as organizing information, counting, and sorting, have existed for millennia [1, 11]. Even the word "algorithm" was derived from the name of the noted Persian mathematician Mohammed ibn-Musa al-Khwarizmi (770–840 A.D.). The theories and tools for storing and manipulating information are also diverse, and some of them ancient [1]. However, CS was not identified as a discipline until the advent of the modern, stored-program computer [6]. In the early years of CS, the military and intelligence agencies were major contributors to the development of computing technology, followed later by economics, other sciences, and businesses [2, 11].
The juxtaposition of the two aspects of computing, scientific theory and practical goals, has been a driving force in the development of CS [6]. Scientific work is often thought of as pure research, the fruits of which are then applied to technology [9]. There is tension on different levels of this confrontation: the aims of industry and academia differ, their methods differ, even their languages and values differ. The tension between pure and applied can also be seen as tension between abstract and concrete, between universal and particular, between global and local, and between academia and industry. Despite claims that they are inseparable [6], these two sides of computing are sometimes thought of as different spheres.
In the 1970s several electronics hobbyist groups brought the computer out of the laboratory, making it available to the public [9]. When computers spread through Western societies, the new users were not devoted to technology, and more demands arose for better usability [2]. The focus of research on human-computer interaction began shifting away from programmer-computer interaction during the 1980s and 1990s [2]. However, the early architects of CS and computing technology remained predominantly Western, middle- and upper-class males (although there were some female and non-Western pioneers). CS was born and raised in the Western world, shaped by and responding to the varying needs of Western society. As ICT increasingly pervades non-Western cultures, will more demands arise for cultural usability?
Throughout time, people have felt that science is something more than human. Science is thought of as an extreme form of knowledge driven by a scientific paradigm: the set of assumptions, concepts, values, and practices that constitutes a way of viewing reality [8]. However, as eternal as scientific paradigms seem, they are, arguably, still merely agreements among members of the scientific community [8]. Scientific research, even research that aims at paradigm articulation, does not aim at unexpected novelty; when successful, it finds none [8].
The positivist paradigm is the dominant view of CS. According to the positivist viewpoint, reality is seen as tangible, static, universal, and driven by immutable natural laws. Recently, along with growing interdisciplinarity, CS has become more open to different paradigms: witness the diversity in ACM special interest groups. If we dismiss some rigid definitions of CS, we can claim this diversification brings out one of the strengths of CS: its inherent ability to fit into interdisciplinary studies.
The positivist viewpoint leads easily to technological determinism, the view that technology is separate from the outside world, developing independently, following its own autonomous logic, and then having effects on society [9]. The determinist viewpoint is that, in the long run, what matters is intrinsic technical efficiency: the intrinsically best technology will ultimately triumph, whatever local contingencies affect particular developments. However, the technology that is best from one point of view is not necessarily best from another. For example, in the mid-1990s the technologically superior operating system OS/2 Warp lost the market to Windows 95 for non-technological reasons. Technological determinism focuses our minds on how to adapt to technological change, not how to shape it [9]. The idea of adapting to technological change implicitly carries the idea of control and the controlled, whereas the idea of shaping technology carries with it the notion of participatory technology. In adapting, the pure and the applied are separated; in shaping, the abstract and the concrete interweave.
In the computer scientists' discourse, technological determinism seems to be strongly represented. It reduces the view of computer technology to a simple pure-applied juxtaposition. It is easier to model this unidimensional view and to manage its trends than to model and manage a complex system with several interacting actors. Moreover, we produce science that we afterward experience as something other than a human product. Scientists discover truths rather than formulate them. In the minds of some people, models never meant as laws in the strict sense, such as Moore's, Rock's, or Wirth's Laws, sometimes attain the status of natural laws.
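The point about Moore's Law can be made concrete in a few lines of code. The sketch below (our own illustration, not from the original text) writes the "law" as what it actually is: a compound-growth extrapolation from an assumed doubling period, with nothing law-like about it.

```python
def moores_law(transistors_start: float, years: float,
               doubling_period: float = 2.0) -> float:
    """Extrapolate a transistor count assuming a doubling every
    `doubling_period` years.

    Note that this is an empirical trend, not a natural law: the
    formula merely compounds an assumed growth rate, and changing
    `doubling_period` changes the "law" entirely.
    """
    return transistors_start * 2 ** (years / doubling_period)

# Starting from 2,300 transistors (the Intel 4004 of 1971), a strict
# two-year doubling projects the count 30 years forward.
projected_2001 = moores_law(2300, years=30)
```

The fragility of the projection, hidden behind the word "law," is exactly the deterministic slippage the text describes.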
However, technological determinism and a simple science-industry juxtaposition are clearly deficient, for they ignore some major actors in the development of technology. From the unidimensional viewpoint, systems are modeled as organizations, while in reality they are more like organisms. The development of modern-day ICT is affected by the technology producers, academic circles, influential social and institutional actors, and diverse user groups. These systems cannot be investigated without understanding the complexity within them, and without identifying and understanding individual actors within them. Currently the technological view is often locked in the organizational model, where the entire system serves a single goal, and the key words are efficiency and predictability, even in innovation projects! However, technological development also includes nontechnical and nonscientific participants, such as governments, institutions, private corporations, and all the end users, all driving their own interests. The organism model is already visible in open source development: the goals and development threads are multiple, the key words are diversity and interaction, and this offers room for innovation and creativity.
If computational ideas are shaped by their creators' values, appreciations, ideologies, beliefs, or aesthetics, they may vary between cultures. Representations of current ideas may vary, and people with different cultural backgrounds may have novel ideas that differ from those of the northwestern cultures into which CS is currently locked (for example, [7]). For example, the Hindu, Chinese, and Japanese cultures have greater acceptance of fuzzy logic than Western science has [12]. The concept of culture must be understood broadly, each person being an intersection of numerous, partially overlapping, cultures. Furthermore, culture is not a set of static taxonomies, but an ongoing, adaptive process. We believe the thesis of social construction of reality [3] also holds true in CS. Science is not constructed by scientists alone, but is negotiated among several agents. Technological systems are socially produced, and social production is culturally informed.
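The mention of fuzzy logic above can be illustrated with a minimal sketch. The membership function and its parameters below are our own hypothetical example, not from [12]; they show the core idea that distinguishes fuzzy logic from classical two-valued logic: truth comes in degrees between 0 and 1.

```python
def warm_membership(temp_c: float) -> float:
    """Degree (0..1) to which a temperature counts as 'warm'.

    A triangular membership function: fully warm at 25 degrees C,
    not warm at all below 15 or above 35. The thresholds here are
    arbitrary illustrative choices. Values strictly between 0 and 1
    are partial truths, which classical logic does not admit.
    """
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10
    return (35 - temp_c) / 10

# Fuzzy AND and OR are conventionally the min and max of degrees.
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)
```

Whether graded truth feels natural or suspect is, as the text suggests, partly a cultural matter rather than a purely technical one.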
ICT is not a value per se, but only becomes a value when it responds to the needs of a particular group. There is no simple juxtaposition between technology and needs, but rather a complex system of different parties with different motivations in a constant process of negotiation. From the social constructionist point of view, computational artifacts and theory are social products; they are created, institutionalized, and made into tradition, into mainstream mental models, by people in the societies in which they emerge. However well artifacts and theory may work in the society where they were created, problems arise when they are taken into cultures where people are not familiar with the knowledge system that the artifacts and theory embody. For example, consider a procedure as simple as boiling water before drinking it; teaching this habit failed in rural Peru because the innovation was incompatible with local beliefs and past experiences [10]. The case of computers and computer software is more complex: Does the high prevalence of (pirated) mainstream software in many developing countries imply a good cultural fit, a lack of choices, a hegemonic position of software giants, all of these, or something else?
From the fact that ICT is not culturally neutral also arises the power and potential of ICT systems: relevance. First, in our approach to ICT production, modern ICT tools are not detached from other technologies; because complete systems are bound to and based on the design decisions of pre-existing tools [9], they must be relevant to the existing infrastructure. ICT can be implemented in highly variable situations, as long as the local infrastructure (electricity, phone lines, or OSI layers) is known. Second, ICT systems must be relevant to local needs. Technologies that are not advantageous from the viewpoint of their users are not easily taken into use, no matter how great their objective advantage [10].
Third, ICT systems must be relevant to the local users. Systems that are difficult to use are adopted more slowly than those easy to use, or they may be rejected altogether. The more new skills and understandings a technology requires the adopter to develop, the more slowly it is adopted, compared to a technology that links to knowledge the user already has [10]. Fourth, ICT systems must be relevant to the local culture and society. The structure of a social system may facilitate or impede the diffusion of technologies [10]. For example, the adoption of family planning and contraceptives differ greatly in different social systems [10]. The adoption of an incompatible innovation often requires prior adoption of a new value system, which is a relatively slow process [10].
Aspects of knowledge and learning (generation of knowledge, its social and intellectual organization, and its diffusion) are usually studied in isolation from one another, and they are identified with disciplines labeled cognition, epistemology, history, sociology, and education [5]. In our approach, we look at knowledge in a multidimensional way. This approach, aimed at really understanding computing, focuses on the interconnections between components of knowledge, skills, and values. We computer scientists need to trace the historical and societal constructions of the computational practices of different cultural groups. We must understand the philosophical framework of computational and technological concepts in the circumstances in which they emerge. We need to become acquainted with the needs and problems of the society in question. We must examine the relationships between language, society, arts, tools, artifacts, and computing technology. We must rethink how to teach the use and development of technological tools in culturally relevant ways. And if we really wish to understand our own discipline, we also need to apply all these to our own societies.
Computing technology is an active force for social change, but also a dynamic subject of change. In this dialectical relationship, ICT professionals are agents of change. As such, we will need to watch out for extremes on either side: universalism or particularism. Universal and particular are both important; common "mainstream mental models" are useful in constructing a fast track for building technology skills at large, and contextualization is useful if we want to ease the unnecessary cognitive overhead that current Western bias causes.
Regardless of which side one leans toward, understanding local culture and knowledge is essential. The importance of local understanding goes without saying for those who concentrate on contextualization. In addition, those who aim at universals must understand the implications, cultural interpretations, and societal interaction of technology in sufficiently many cultures, or else the universals are merely universals for their architects. That is, it is daring to call anything that has been studied in only one part of the world "universal."
Universal and particular complement and support each other, and both are imperative. If Western science and pragmatic tools continue to penetrate developing countries, the question is: which elements can we make culturally fair? Clearly, universal standards such as TCP/IP, HTTP, GSM, and CDMA enable communications and computing regardless of any particular culture, but since even these technological standards are suited to different settings (for example, in efficiency, bandwidth, and range) and because companies promote specific technologies, it is difficult to distinguish technological arguments from other kinds of motivations. Again, taking into account clearly relevant social or cultural factors, such as language, reading habits, metaphors, or analogies, would also benefit local users. However, not everything can or should be contextualized.
In order to facilitate the observation of local computational practices, there should be a common taxonomy of computational concepts. Our definition of ethnocomputing refers to the local systems of computational knowledge, starting from their very basic ideas and advancing to more sophisticated concepts. Ethnocomputing refers to local points of entry to:
For example, the quipu of the Incas demonstrate complex data structures (in the form of knotted, colored cords), algorithms for manipulating them, and existing tools and applications for them (see [1]). With local ethnocomputing, all the different cultural groups can contribute to the development of a better universal understanding of the different aspects of computing. People with different cultural backgrounds have different tacit knowledge, and this knowledge may work to widen the understanding of the theory of computing. Usability and technology today are built on metaphors and analogies that may not exist, or may have different connotations, outside the Western world. The best-known examples of this are the North American mailbox with a flag showing new mail, and the folder icon for a directory; mailboxes and folders differ greatly around the world, and these metaphors are not universally understood. Some technological and algorithmic terminology, such as "master/slave" and "divide and conquer," may also rouse different feelings among different people. If we become more aware of different expressions of ethnocomputing, we can produce technologies more relevant to different cultures. The case of CS education is similar: instead of confronting the students' knowledge and identity, we can shape CS education to utilize the knowledge the students already have.
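The quipu example is concrete enough to sketch in code. According to the Aschers [1], quipu record numbers in base 10: each cord carries clusters of knots, one cluster per decimal digit, with the highest power of ten nearest the main cord. The sketch below (function names and list representation are our own, not from [1]) encodes and decodes a cord under that scheme.

```python
def encode_quipu_cord(value: int) -> list[int]:
    """Represent a nonnegative integer as knot clusters on a quipu cord.

    Each entry is the knot count of one cluster, most significant
    decimal digit first (nearest the main cord); 0 stands for an
    empty gap where a cluster would otherwise sit.
    """
    if value == 0:
        return [0]
    clusters = []
    while value > 0:
        clusters.append(value % 10)
        value //= 10
    return list(reversed(clusters))

def decode_quipu_cord(clusters: list[int]) -> int:
    """Read a cord's knot clusters back into an integer."""
    value = 0
    for knots in clusters:
        value = value * 10 + knots
    return value
```

A positional number system realized in knotted cords rather than written digits is precisely the kind of local point of entry to computational concepts the text argues for.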
Ethnocomputing does not take computing out of the center of CS. Instead, it rests on the principles of computing and aims to widen the current perspective by adding complementary points of view on technology, science, and society. These different dimensions are not necessarily at odds with one another but can support each other. The challenge is how to create a smoothly working combination of disciplines. Such a combination should benefit people in the form of intuitive technologies and ICT education with minimal cognitive overhead. It should benefit societies by allowing technological development without undermining local cultures or traditions, supporting local identity rather than eroding it. The ICT industry should benefit from better user satisfaction and larger markets. Finally, this new perspective should benefit CS in the form of different points of view on old concepts, or even novel concepts. Researchers and users from developing countries would be able to bring in new resources, fresh viewpoints, and novel innovations.
1. Ascher, M. and Ascher, R. Mathematics of the Incas: Code of the Quipu. Dover, 1981.
2. Baecker, R.M., Grudin, J., Buxton, W., and Greenberg, S. A historical and intellectual perspective. In Readings in Human-Computer Interaction: Toward the Year 2000, 2nd Ed. Morgan Kaufmann Publishers, San Francisco, 1995, 3–48.
3. Berger, P.L. and Luckmann, T. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Allen Lane, 1966.
4. Castells, M. The Internet Galaxy: Reflections on the Internet, Business, and Society. Oxford University Press, NY, 2001.
5. D'Ambrosio, U. Ethnomathematics: Challenging Eurocentrism in Mathematics Education. A.B. Powell, and M. Frankenstein, Eds. State University of New York Press, 1997.
6. Denning, P.J. (chair), Comer, D.E., Gries, D., Mulder, M.C., Tucker, A., Turner, A.J., and Young, P.R. Computing as a discipline. Comm. ACM 32, 1 (Jan. 1989), 9–23.
7. Eglash, R. African Fractals: Modern Computing and Indigenous Design. Rutgers University Press, New Brunswick, NJ, 1999.
8. Kuhn, T. The Structure of Scientific Revolutions, 3rd Ed. The University of Chicago Press, 1962.
9. MacKenzie, D. and Wajcman, J., Eds. The Social Shaping of Technology, 2nd Ed. Open University Press, England, 1999.
10. Rogers, E.M. Diffusion of Innovations, 5th Ed. Free Press, 2003.
11. Williams, M.R. A History of Computing Technology. Prentice-Hall, 1985.
12. Zadeh, L.A. Interview: Coping with the imprecision of the real world. Comm. ACM 27, 4 (Apr. 1984), 304–311.
©2006 ACM 0001-0782/06/0100 $5.00