

THE MYTH OF INFORMATION TECHNOLOGY

AMAZING WONDERS OF BIOLOGICAL COMMUNICATION

DR. RAJKUMAR MD FRCPath

Smashwords Edition

Copyright © 2014 Dr. Rajkumar Chetty MD FRCPath

License Notes: This ebook is licensed for your personal enjoyment only. This ebook may not be re-sold or given away to other people. If you would like to share this ebook with another person, please purchase an additional copy for each person you share it with. If you're reading this book and did not purchase it, or it was not purchased for your use only, then you should return to Smashwords.com and purchase your own copy. Thank you for respecting the hard work of this author.

Ebook formatting by www.ebooklaunch.com

**TABLE OF CONTENTS**

1. Information needs of complex systems

2. Cyber warfare of the biological kind

3. Managing Information overload

4. Encoding information

5. Biological information - diverse and unique

6. Information Capture

7. Information cables

8. Personalised communication

9. Signal amplification

10. Information Processing

11. Information-based control systems in your body

AUTHOR PROFILE

Dr. Rajkumar is a medical doctor who pursued his higher medical education in the United Kingdom. He specialised in Biochemistry and Pharmaceutical Medicine and currently works as the Head of Laboratory Services in a hospital. He is a British national, though by birth he hails from India.

Dr. Rajkumar is a Fellow of the Royal College of Pathologists, United Kingdom. He is also a Fellow of the Faculty of Pharmaceutical Medicine of the Royal College of Physicians, UK, and an accredited specialist in Clinical Chemistry and Laboratory Medicine registered with the European Federation of Clinical Chemists.

Dr. Rajkumar has more than 25 years of professional experience in hospital practice, teaching, and research gained in India, the United Kingdom, Saudi Arabia and the UAE.

He has a deep-rooted interest in scientific philosophy and revels in thinking, writing and talking about profound worldly phenomena and their scientific basis.

SYNOPSIS

The need for communication is not restricted to our social life. There is an amazing degree of highly complex information exchange going on between the trillions of cells that constitute your body. The information complexity of an organism increases as the number of cells in the 'cellular society' increases. I suppose this is like the difference between the set-up needed for communications in a small office and that of a multinational company with offices dotted around the globe! The human body contains roughly 10,000 billion cells. In terms of absolute numbers, our body has roughly 1,500 times more cells than there are humans on this planet. The sheer size of the cellular population within our body demands an infinitely more complicated communication system to meet the task of regulating cellular metabolism and growth processes.

Telecommunication has transformed human society. But 'biological telecommunication', in the form of the nervous system, evolved hundreds of millions of years ago to meet a natural need for communication between the tens of thousands of cells in early multi-cellular life forms. A distance of 5-6 feet (the typical height of a human being), or 150-180 centimetres, is an enormous distance on the cellular scale if you want to link up a nerve to a target cell. An average cell is about 10-20 microns in diameter; a micron is a millionth of a metre. A distance of 1.5 to 1.8 metres from head to foot is therefore 150,000-180,000 times the dimension of an average cell. On the human scale, a distance of 180,000 body-lengths would be roughly 300,000 metres, which is about 300 kilometres! In other words, nerves can 'telecommunicate' over the equivalent of 300 km!
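Purely as an illustration, the scale arithmetic above can be checked in a few lines of Python (the figures are the rounded estimates used in the text):

```python
# Back-of-the-envelope scale comparison between a cell and the human body,
# using the rounded figures from the text.
cell_diameter_m = 10e-6   # an average cell: ~10 microns
body_height_m = 1.7       # typical human height: ~1.7 metres

# How many cell diameters fit head to foot?
scale_factor = body_height_m / cell_diameter_m   # ~170,000

# Apply the same factor to a human body to get the 'telecom' distance.
human_equivalent_km = scale_factor * body_height_m / 1000

print(f"scale factor: {scale_factor:,.0f}")
print(f"human-scale equivalent: about {human_equivalent_km:,.0f} km")
```

With a 20-micron cell the factor halves, which is why the text quotes a range rather than a single figure.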

More importantly, a nerve is incredibly similar, in every minute detail, to a modern information cable in terms of structure, organisation and, of course, function! A central conducting core made of numerous cables (axons, in the case of nerves), an insulating layer (the myelin sheath) and a protective coating (a fibrous layer called the epineurium): the parallel may be a bit too much for the uninformed reader, because it breaks the myth that we alone can innovate.

The information highway in our nervous system (the spinal cord and the branching nerves) is no less complicated than the highly advanced telecommunication systems in our society. Information captured by sensory receptors all around the body travels through the spinal cord towards the brain, passing through several routing junctions, filters, decision points and code transfers.

I have traced the evolutionary development of the animal nervous system and compared it with the evolution of our computer networks, to show how incredibly similar the processes are that led to the inter-linking of information-processing structures as a way to enhance processing power. Neural networks in the brain are a 'biological internet' that preceded ours by millions of years. In fact, computer scientists are trying desperately to understand information processing by neural networks in order to design better computers and artificial intelligence!

Unlike a silicon computer, the brain is capable of running many applications at the same time, as if it were a parallel processor. Moreover, neurones can be linked in such a distributed network as to generate cognitive capacities not possible with one or a few neurones. It is nature's way of doing distributed supercomputing.

Life forms are carbon-based 'computers'. Silicon, used in real computers, sits directly below carbon on the periodic table, and the two elements are strikingly similar in their properties. It is uncanny that nature and human technology have homed in on nearly identical atomic choices for their processors!

The brain contains about 100 billion nerve cells that control the rest of the cellular population in the body by receiving and sending information via the nerves. Each neurone, on average, forms links with about 1,000 other neurones. Since there are 100 billion neurones in the human brain, the total number of nerve-to-nerve linkages is about 100,000 billion! A synapse is the interface between two neurones. In the human forebrain, the most complex information-processing structure in the known universe, a single neurone can be linked to 40,000 other neurones! The figure of 100,000 billion nerve-to-nerve links is frighteningly large. No wonder the human brain is the most exquisite and powerful information processor known! And it still weighs just about a kilogram! How lovely it would be if you could have laptops this lightweight!
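The bookkeeping behind that figure of 100,000 billion links can be sketched in a couple of lines (using the averaged numbers quoted in the text):

```python
# Rough synapse bookkeeping for the human brain, from the text's figures.
neurons = 100e9            # ~100 billion neurones
links_per_neuron = 1_000   # ~1,000 links per neurone, on average

total_links = neurons * links_per_neuron   # 1e14, i.e. 100,000 billion

print(f"total nerve-to-nerve links: {total_links:,.0f}")
print(f"that is {total_links / 1e9:,.0f} billion synaptic connections")
```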

Information is perhaps a fundamental property of all systems in the universe, both physical and biological. This book argues that information determines the sophistication, size and order of any complex biological, physical or social system. Bio-systems and social systems necessarily revert to a primitive, unregulated and disorderly state, and to a smaller size, in the absence of the ability to exchange information between their constituent units. A striking example of this is the correlation between the amount of DNA held by an organism and the number of cell types it has. Through the differential decoding of a subset of genes by each of these cell types, the total information held by the DNA is managed in a way that would be impossible for a single cell, and we call this tissue differentiation. This is perhaps like managing all of human knowledge through professional specialisation, wherein no single person is expected to know it all.

In order for the human body to orchestrate its life processes, its billions of cells need first to communicate among themselves. Life systems, including humans, have evolved highly sophisticated means of communication because their inter-cellular communication needs are as complicated as, if not more complicated than, those of our human society. The truth is that our modern information technology is yet to achieve the sophistication seen in the biological kingdom! The glory of the modern information revolution may be misplaced and undeserved!

A gamete, for example, is an information capsule in every sense of the word. A sperm and an ovum may be tiny single cells, but they are packed with all the information needed to form a human being and run it for seven or eight decades! The gametes are a condensed form of programmed genetic information, ready for transfer, and can carry that information down generation after generation. The heritable genetic changes accumulated in the parental lineage are recorded here. Gametes are the links in the evolutionary chain of information transmission.

Cells 'talk' to each other using a number of 'molecular languages'. Each language is a chemical molecule capable of conveying a specific message. Receptors on target cell surfaces actually 'decode' the message. The communication molecules are 'encrypted' messages between two types of communicating cells.

The cellular information-transduction machinery generally consists of one of many types of primary messenger acting as the information source, a unique cell-surface receptor linked to custom-built second messengers, and internal effector molecules which convert the message into cellular action. This organisation fits amazingly well with Claude Shannon's famous information theory, which he published in 1948 as a framework for electrical signal transfer.

By and large, our body gets by without major upheavals in the cellular society. But now and then, such as during disease, starvation or injury, there is mayhem. All hell breaks loose, and it is a Herculean task for the body's cells to keep order. Of course, as you can imagine, there are grades of severity in a crisis. It can be a simple skin abrasion or a bleed, or it can be major organ failure. The degree of cellular communication needed will vary, but one thing is for sure: the information-transfer strategies employed by cells are truly amazing.

Information generates order, and this makes information networks the Achilles' heel of complex systems when it comes to maintaining that order.

An information-dependent complex system, such as a large organism like a human being, is vulnerable to attacks on its information networks, attacks not unlike the information hacking we have seen in our own society. The very fact that information plays a central role in sustaining the system makes its critical information networks a target for deliberate attack. This is an unexpected dimension to the problem of information management. Anti-information tactics are common in biology as a means of predation or defence. This is an exciting revelation, because information hacking was hitherto considered a human social problem! I have shown with numerous examples that there is as much information hacking inside bio-systems as there is in our society, and I have argued that sabotage of information systems by cyber-criminals and cyber-terrorists is not unique to our information-dependent society. Organisms, both single-celled and multi-cellular, can be shown to engage in anti-information tactics as a means of defence or predation.

I guess that, as a species, we humans are more vulnerable than most smaller organisms with respect to information failures within the body. Because of the sheer size of our cellular communication needs, we could face a rapid breakdown of cellular functions should our cellular information networks fail. The same cannot be said of smaller organisms like bacteria and viruses. They are small and independent, and far less vulnerable to information crashes than we are.

The human brain is the information processor for your body, and it holds data vital for our survival. The brain is not immune to assaults on the integrity of its 'memory databases' and neuronal processors. It can be damaged by lack of power (oxygen), by viruses (such as the encephalitis and meningitis viruses), by prions (the agents of BSE) and by bacteria. I have shown, with abundant scientific support, how microbes can play havoc with the other cellular information networks within our body. Just as computer viruses destroy information in our computers, real microbes like bacteria and viruses strategically attack the information networks that link the cells of our body.

Sabotage of information systems is now one of the hot topics of modern society, with the emergence of e-commerce and the near-total dependence of humans on cyberspace for their communication needs. What I have shown in this book is that hacking is prevalent in the biological kingdom too! Microbes (bacteria and viruses) and cancer cells are known to do it. I have detailed how infectious bacteria and viruses attack huge organisms like humans and animals, striking exactly at the information systems to get the better of us! They are hackers to the core, having been in operation for millions of years! I can't help thinking of the way computer viruses now paralyse our computers and our ability to use our own cyber-information. Biological information hacking is as much a reality as your cyber-terrorists!

Information from the outside world streams in constantly. When a person is sleeping, visual information may be blocked, but the other sensory modalities work as briskly as when you are awake. When a person is awake, the brain is constantly tuned to its various 'information channels'. Apart from the eye and the ear, there are many other routes through which information from the external world can reach our brains. There are 11 major types of sensory receptor cells, which constantly inform the brain about what is happening in the environment. These receptor cells are the 'information windows' through which the brain captures information. They are the equivalent of sleuths carrying out intelligence operations. They can also be viewed as the interface between you and the environment.

Typically, information about the external world concerns people, weather and place. In terms of survival value, most of the time the external information is dull, uninteresting and of no significance. Danger situations warrant heightened sensory information capture and the initiation of an emergency response from the cells concerned. '999' calls from cells that detect the danger get priority attention, and a co-ordinated series of actions is initiated by a number of cell types.

Our body doesn't deal only with external information. There is a whole lot of internal housekeeping to be attended to, which requires continuous monitoring of the internal environment. A large number of internal receptor cells relay information about the internal environment of the body. Such information is never allowed to reach the conscious part of the brain, because there are no direct neural connections from these receptors to the thinking part of the brain.

The other reason, which from a biological perspective is the more important one, is that such chemical information does not need conscious thinking, and so there is no advantage in clogging the information-transfer pathways with it. The unconscious senses include special receptors for body temperature, blood pressure, the pH of the body fluids, nutrient concentrations and so on.

The sense receptors that pick up information about these chemical states of the body are fully automated to communicate it to specialised regions at the base of the brain. These basal regions contain cells that are not involved in conscious processing of the information. They are chemical automatons that can compute an appropriate response based on the chemical inputs.

When a person is in a coma, none of these internal information-capture mechanisms stops working, which is also the case when someone loses consciousness. But all external information capture comes to a standstill. At the point of death, both internal and external information capture fail. I always wonder whether death is the cause or the effect here.

Information privacy is a very important concept in biological systems, as in our own society. Sensory information capture is like walking into a supermarket and picking only the things you want. We have a variety of information-capture tools at our disposal to gather vital information about the environment we live in: visual sensory receptors, auditory receptors, and other receptors responsible for touch, pain and so on. What is incredible is that diverse information-capture tools like sense receptors, TV antennae, radio aerials and insect antennae are all fundamentally similar in the way they are designed.

Organisms have evolved 'personalised' sensory systems to selectively pick out the information they want. Dogs can hear sounds in the ultrasonic range (above 20,000 hertz), which we cannot; indeed, most animals cannot hear them. Any stimulus in this portion of the information spectrum is accessible only to the dogs. Dogs have an extremely sensitive sense of smell too; they can pick out the faintest of odours, which is why we use them for crime detection and hunting. Insects use scent molecules, called pheromones, to communicate with potential mates. Bees and migrating birds seem to have the capacity to sense the magnetic field of the earth to help them navigate; most other animals are not capable of this feat. Man speaks hundreds of different languages and dialects. For me, anything not spoken in English, Tamil or Kannada is gibberish! People could discuss the most sensitive military secrets sitting right next to me!

I view artificial gadgets like UV spectrophotometers, infrared cameras, radio telescopes and X-ray machines as natural appendages to our sensory system. They enhance our information-acquisition capacities. We are trying to access information we are not biologically entitled to. Can we call it sensory greed? Man goes about accessing all kinds of information his ancestors never did, and then cries out for help. The information avalanche will ultimately get the better of him, I guess.

Another fundamental issue in the world of information transfer is the task of encoding. How do you encode information in a simple scheme? Just what does it take for something to 'hold' information?

All of us know what binary language is: a series of 0s and 1s whose arrangement is used by computer programmers to encode information. Current flowing is arbitrarily '1', no current is '0', and the switching of the current flow is carried out by what is called a transistor.

Arranging '0' and '1' in designed sequences encodes information, just as the letters of the English alphabet do in your common language. It is a point to ponder why man cannot think of any means of encoding information other than re-arranging monomeric alphabets, whether they are the letters of a common language or the binary alphabet. Is it the ultimate solution to the problem of information generation? (Why do we need anything better than this?)

Biological systems apply the same principle to encode information. The use of 'alphabets' to imprint biological information has been around for some 3,500 million years on this planet (while your binary language is not even 50 years old!). The language of DNA has only four alphabets, namely adenine (A), guanine (G), thymine (T) and cytosine (C). This is what I call a quaternary language! Boy, what a 'language' it is! It has been in use for billions of years (and is still going strong) and has produced unimaginable, endless variety of expression. One could even argue that proteins are an information-rich language using about 20 amino-acid alphabets! Another similarity between DNA and a language, apart from the use of alphabets, is the tendency to evolve over time. Everyone knows that languages 'evolve' and give rise to dialects and newer languages, much like the evolution of a biological species.
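A standard way to compare these 'languages' is the information carried by one symbol: an alphabet of n symbols carries log2(n) bits per symbol. The short sketch below applies this to the alphabets discussed in the text (the grouping is mine, purely for illustration):

```python
import math

# Bits of information carried by a single symbol of each 'language'.
# An alphabet of size n carries log2(n) bits per symbol.
alphabets = {
    "binary (computers)": 2,
    "DNA (A, G, T, C)": 4,
    "protein (20 amino acids)": 20,
    "English (26 letters)": 26,
}

for name, size in alphabets.items():
    print(f"{name}: {math.log2(size):.2f} bits per symbol")
```

So each DNA base carries exactly two bits, and each amino acid in a protein carries a little over four.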

If you look at protein synthesis, codons are read as single units even though they are composed of three bases. It is like a byte. A byte is 8 bits together; a bit is a 0 or a 1. Early computers often worked on numbers of 8 bits, which came to be called a byte, and even modern computers, working on 32 or 64 bits at a time, still use the byte as the unit of memory. I find it incredible that DNA's use of monomeric bases as units of information is further enhanced by reading three bases at a time, in the form of codons, much as bits are grouped into bytes. There is a codon for every amino acid, and the codons have the same meaning whether the organism is a horse, a man or a lion. It is a universal language code, like the ASCII code we use in our computers for text.
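The codon-as-byte idea can be illustrated with a toy translator. The dictionary below covers only a handful of the 64 codons of the standard genetic code; a full table works exactly the same way:

```python
# A small fragment of the universal genetic code: DNA codon -> amino acid.
# (Only 5 of the 64 codons are shown, for illustration.)
CODON_TABLE = {
    "ATG": "Met",   # methionine, the start codon
    "TTT": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "TAA": "STOP",  # a stop codon
}

def translate(dna: str) -> list:
    """Read a DNA string three bases at a time, like bytes, until STOP."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE[dna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCAAATAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```

The same string of codons would yield the same amino acids in a horse, a man or a lion, which is the universality the text describes.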

I have also shown in the manuscript how life systems have other ingenious ways of encoding information. The clue to understanding these messages lies in understanding the coding convention used by the sender and the recipient, which has to be actively learnt. What I have done at the outset is to explore the ways and means by which information is packed into symbols, messages, languages, the ASCII code used in computers, DNA (an infinitely information-rich quaternary language with just 4 nucleotide alphabets), proteins (with 20 amino acids as alphabets used in a variety of sequences to generate information), computers (a binary convention with just 2 alphabets, 0 and 1), ISBN library codes, the Universal Product Codes (UPC)/European Article Numbering (EAN) used in supermarkets, cryptic wartime messages like the Enigma code used by the Nazis, Morse code, code flags, bird song, animal sounds and primitive communication tools like drum beats, smoke and fire. I have also addressed the fascinating world of cell-to-cell communication using hormones, neurotransmitters, growth factors and the like, which are all biological information codes in my view. I have shown in the manuscript that encoding conventions in information transfer are based on some recurrent themes, irrespective of the nature of the systems concerned.

The concept of biochemical tinkering is a strikingly impressive way in which information is generated and coded in biology: cells modify simple precursors like amino acids and fatty acids to generate molecules of high informational content. As a specialist biochemist, I feel well qualified to explore this concept of biochemical tinkering for the purpose of encoding information. Classical examples include the amino acid tyrosine, which can be converted to thyroid hormones (by a very simple iodination reaction in thyroid cells), to adrenaline and noradrenaline (via a short sequence of hydroxylation, decarboxylation and methylation steps in the adrenal glands) and to dopamine (by simple removal of a carboxyl group from its hydroxylated derivative). With such simple additions and removals of chemical groups, information-less molecules like tyrosine, and many others, can be packed with biochemical information. You can form vital hormones and neurotransmitters this way from raw materials that have no information content whatsoever.

A number of information-rich molecules, such as hormones, growth factors, neurotransmitters and cytokines, carry vital information, and because only the target cells have the respective receptors, there is specificity and privacy in inter-cellular communication. These receptors act like 'molecular aerials' and are proteins by nature. Again, your genes code for these receptors. Some 20,000-25,000 genes provide the 'molecular software' for running the information-intensive metabolic programs inside us!

A cell can be viewed as a factory where a number of things are going on at different geographical locations within the cell. Many programs run in a cell at any given time. While one part of the cell is engaged in glycogen degradation, other parts could be proceeding with preparations for cell division, cholesterol synthesis, amino acid synthesis, amino acid catabolism, energy generation, control of ionic fluxes, protein transport, gene decoding and so on. In many cases the product, or the intermediates, of one pathway enter another. Biochemical reactions proceed in different directions, their diversions determined by molecular influences acting as logic gates. It is a biochemical network.

Unlike a silicon computer, a cell is capable of running many applications at the same time, as if it were a parallel processor. It is indeed a microprocessor, in my opinion, and it integrates inputs like any silicon computer or neural network. A silicon computer has transistors to regulate information flow in specified, programmed directions by controlling the current flow. A cell has equivalents of transistors in the form of enzymes, which can be switched between active and inactive states quickly by a number of mechanisms: attachment or removal of a chemical group, cleavage of a part of the enzyme, control of raw-material availability, or compartmentalisation of the reactants so that they do not encounter each other. This is in addition to the gene-decoding controls.
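The enzyme-as-transistor analogy can be sketched as a toy switch. This is purely illustrative (real enzyme regulation involves all the mechanisms the paragraph lists), but the example enzyme is real: glycogen phosphorylase genuinely is switched on by phosphorylation.

```python
from typing import Optional

# A toy model of an enzyme as a biochemical 'transistor': the reaction
# 'current' flows only when the switch is on.
class Enzyme:
    def __init__(self, name: str):
        self.name = name
        self.active = False            # like a transistor with no gate signal

    def phosphorylate(self) -> None:   # one common covalent on-switch
        self.active = True

    def dephosphorylate(self) -> None:
        self.active = False

    def catalyse(self, substrate: str) -> Optional[str]:
        # The reaction proceeds only in the active state.
        return f"{substrate}-product" if self.active else None

enzyme = Enzyme("glycogen phosphorylase")
print(enzyme.catalyse("glycogen"))     # None: switch is off, no 'current'
enzyme.phosphorylate()
print(enzyme.catalyse("glycogen"))     # glycogen-product: switch is on
```

Chaining such switches, with the product of one enzyme gating the next, gives something very like the logic-gate network described above.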

Biological systems typically have devices such as eyes, ears, nose, mouth, heart, lungs, brain, kidneys, liver, muscles, bones and reproductive organs. They are also capable of running diverse applications such as eating, digestion, breathing, heartbeat and circulation, excretion, locomotion, reproduction, ageing and immunity. In computer language, bio-systems also run processes such as muscular contraction, nerve stimulation from the brain, thinking in the brain, balance and coordination from the cerebellum, vision, memory, hearing, internal sensory information capture, energy generation and so on. To run an application, many processes need to run in the background.

I guess the DNA ultimately holds not only the blueprint for constructing all the body's devices but also the blueprint for running the physiological applications and processes, along with the biochemical and neurological processes behind them. It is truly an operating system par excellence. The fact that DNA is present in all forms of life (except RNA viruses and prions) means that it is a universally successful operating system. And just as a successful computer operating system can be upgraded, rather than the computer dismantled every time you want to improve its operations, living forms can be upgraded too: the upgrades come in the form of evolutionary changes.

Steve Burbeck, an independent IT consultant, has authored a paper, '_Complexity and the Evolution of Computing: Biological Principles for Managing Evolving Systems_', which is available on his homepage. The theme of the paper is that computers collaborate on the Internet much the way cells collaborate in multi-cellular organisms. He argues that computer scientists should learn from the way multi-cellular systems have evolved their communication tools. The basis of his argument is that the Internet and the Web today involve hundreds of millions of networked computers, and there ought to be a better way of enabling communication between them. He calls it multi-cellular computing. The challenges of communication and collaboration between networked computers are similar to those between cells in a multi-cellular organism.

# 1. INFORMATION NEEDS OF COMPLEX SYSTEMS

It is amazing how quickly things can change when it comes to the information revolution. Time can look a little irrelevant in its history. The World Wide Web is only about a quarter of a century old. Can you believe that? The Internet is not that old either! Look at what we can do with these things today!

Now every one of us almost takes the information revolution for granted, and we expect more and more. We are nothing short of arrogant when we feel irritated by even a few seconds' delay for the computer to load the page we wanted or fetch the information we needed from the web. We curse the computer for being slow. The truth is that the information you got from your search after a few seconds' delay would have taken days, probably months, to gather even as recently as a few decades ago. But we don't remember that now. As a doctoral student I remember painstakingly collecting research articles, a task that took me weeks; now I can do it at the click of a mouse! We can Google almost anything and get the answer in seconds. Our lives have become so dependent on information that we crave it all the time. Not only do we want the information, we want it almost instantaneously. I am sure all of us would agree that anything more than 2-3 seconds after Googling puts us in impatient mode, ready to moan about the 'slow and inefficient' (!?) computers they make these days or the poor internet service!

Until 1944 we did not even know that DNA was the medium of information transfer between successive generations of organisms, and it wasn't until 1953 that we knew exactly how. Watson and Crick's model of DNA structure explained how DNA could encode information. That was barely 60 years ago.

Man has fought innumerable wars since time immemorial, including two world wars, bothering so much about military information while knowing nothing about how we manage our biological information. Contrast this with the fact that the human species has been around for hundreds of thousands of years, and life forms for over 3 billion years, yet nobody knew DNA was the seat of biological information until the 1950s!

Man is information-hungry, both on the inside and on the outside. And so is every type of life form, from single-celled bacteria to huge multi-cellular conglomerates like elephants and sharks! We think our information needs are confined to our social lives. But little do we realise that the constituent cells within multi-cellular life forms generate incredible amounts of 'information traffic' 24/7. Communication may be an art, or just a technology, in your social world, but it is a pure survival tool in the biological world. If cells cannot 'talk' to each other, there is utter chaos within the cellular society we call a multi-cellular life system.

The cells inside us depend on as much information, and on equally complex methods of information transfer, as the society we live in. I would not hesitate even a fraction of a second to say that biological information is infinitely more varied and sophisticated than our technological information.

There is a buzz of information traffic inside your body even when you are sleeping. In the absence of inter-cellular information exchange, all life processes within your body would rapidly come to a standstill, a state consistent with death.

The human body contains somewhere between 10,000 and 100,000 billion cells. The total number of human beings on earth is only about 7 billion! In terms of absolute numbers, our body has roughly 1,500-15,000 times more cells than there are humans on this planet. The sheer size of the cellular population within our body demands an infinitely complicated communication system to meet the task of regulating cellular metabolism and growth processes. Just to give you an idea of the enormity of cellular numbers in our body, I should mention a couple of examples.

Take the case of the red blood cells in your body. There are billions of them in our blood, and they have a life span of only about 120 days. New red cells replace the dead ones and, at any given time, the total red-cell population is tightly controlled. It has been estimated that about 200 billion red cells are destroyed every day, only to be replaced by the same number! That is about thirty times the human population! Two hundred billion cells per day works out to a turnover rate of over 2 million cells per second!
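The per-second turnover figure follows directly from the daily one, as a quick sketch shows (using the text's estimate of 200 billion cells per day):

```python
# Red-cell turnover, from the figure quoted in the text.
cells_per_day = 200e9              # ~200 billion red cells replaced daily
seconds_per_day = 24 * 60 * 60     # 86,400 seconds

cells_per_second = cells_per_day / seconds_per_day

print(f"turnover: about {cells_per_second / 1e6:.1f} million cells per second")
```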

The human brain is the information processor for your body. It contains about 100 billion nerve cells that control the rest of the cellular population in the body by receiving and sending information via the nerves and synaptic connections. As you can see, this number is roughly 15 times the entire human population. So many cells just to process the information within the human body! In addition to the actual nerve cells, the nervous system contains supporting cells called _glia_, which aid the function of the nervous system. It is said that there are about 1,000 to 5,000 billion glial cells in our brain; in other words, there are 10-50 glial cells for every nerve cell.

The cellular communication network within your body comprises different modalities falling broadly within two categories - wired and wireless. Intuitively, people readily think of the brain and the nervous system when it comes to wired biological information exchange. Of course there is an abundance of information exchange by means of nerves and axons, the equivalent of the wires and cables we see in our modern technology. Your nervous system looks and functions in a manner strikingly similar to the telecommunication cables and networks in our society. We will discuss the incredible similarities between nerves and modern-day telecommunication cables in chapter 4.

It is a less well-known fact that, inside your body, communication is not always wired. People may immediately think of DNA when I mention non-wired information transfer. But biological information is not just a matter of propagating the species by transmitting genetic information. The everyday life of a cell requires the continuous use of the information contained in the DNA to activate the metabolic programs necessary for running cellular operations.

Many other types of cellular information transfer and processing go on inside your body besides the genetic kind. Though I deal with biological information transfer through the agency of DNA later on, my intention here is to bring out the wonders of cell-cell communication through informational molecules, and to emphasise the mechanisms by which molecular information is deciphered by the cells to accomplish day-to-day metabolism. There is very little out there in the popular press on this topic. It is a pity that the extensive cell-cell communication that happens outside the spheres of the nervous system and DNA does not get the attention it truly deserves.

The number of cells in your body is dauntingly high. Recall what I said a little while ago about the total number of cells in your body, and in your brain in particular. There are so many types of cells in your body with distinct functional capabilities, with millions of cells of each type. Imagine what would happen if your body cells acted as if they were single-celled organisms. There would be absolute chaos, each cell doing its own thing. That is the beauty of multi-cellular life forms. The cells in a multi-cellular life form obey rules of cooperation, and the most fundamental requirement for this to happen is the ability of these cells to communicate with each other. In most instances the cooperative behaviour of the constituent cells is imposed by the regulatory actions of hormones, neurotransmitters and other molecules secreted by the brain centres and endocrine glands.

Hormones and neurotransmitters are molecules made by special glands and nerve cells. Their purpose is to 'carry' a signal from the command centre to the target cells within your body. The command cells make these 'message molecules' as and when needed, depending on the prevailing situation in the body. There are various sensor mechanisms involved in this decision, which means there is another level of information capture happening even prior to that. Once the information molecules are out, the target cells respond to the information contained in these molecular messages and elicit appropriate biological actions. The only situation in which cells in your body behave like single-celled independent organisms, with little regard for the rest of the cellular society, is cancer.

Microbes are unicellular life forms. They apparently account for three-fourths of the biomass on this planet. Why are they so successful? Does it pay to be single? The bacteria may not be lonely creatures after all. It has been claimed that microbes such as bacteria also live in cooperation with fellow microbes in a loosely knit, federal organisation. 'Cityscapes' like ours have existed on earth for billions of years, built and populated by plain, humble bacteria, such as _E.coli_ and _Salmonella_, wrote Andy Coghlan in an issue of New Scientist some years ago. He calls them 'slime cities'. More properly, they are known as biofilms or mucilages, and we see them everywhere: they are visible to the naked eye as slippery green coatings in water pipes and kitchens.

Bill Costerton of Montana State University in Bozeman estimates that 99% of all the planet's bacteria live in biofilm communities. Slime cities are the real ecosystems for bacteria, yet hardly anybody had studied them before. Biofilm researchers have used a powerful technique called confocal scanning laser microscopy since the 1990s. Its great advantage is that biofilms can be magnified as they are; it is not necessary to dry them out or treat them chemically before microscopic analysis, as conventional microscopic techniques require. Laboratory studies of bacteria have traditionally relied on growing them in Petri dishes as a single layer, in total isolation from other species of bacteria. In other words, we have been looking at them in artificial, unrealistic situations. Confocal microscopy enables spying on biofilms as if viewing a city from a satellite.

Studies at Costerton's lab have shown that biofilms are permeated at all levels by a network of channels through which water, bacterial garbage, nutrients, enzymes, metabolites and oxygen travel to and fro. The origin of these connecting channels is a mystery the researchers have yet to solve. Whether they are the handiwork of the bacteria, or of water pushing through weak spots in the slime structure, remains to be answered. It appears that bacteria are not the only inhabitants of the slime city. Fungi, algae and protozoa add a cosmopolitan look to it. Protozoa have been shown to hunt bacteria just as large animals hunt down smaller prey!

Bill Costerton thinks that biofilms are less like colonies of self-serving automatons and more like the cells and tissues of multi-cellular organisms. There is communication, cell co-operation, cell specialisation and a basic circulatory system, as in plants or animals. For example, he says, the biofilms in the cow's rumen contain five types of bacteria necessary to digest cellulose in a co-operative manner.

The inhabitants of these slime cities communicate by means of chemicals. Once inside this social network, the individual bacteria no longer need tough, individual cell walls. They are therefore made to shut down synthesis of the proteins needed to make their own cell walls. This makes it difficult to kill them using antibiotics and chlorine-based disinfectants, a big problem for health care providers: these antibacterial agents target the cell wall, and what can you do if there is no cell wall? In return, the dwellers of these cityscapes have to contribute to making the molecules needed to build the cityscape more firmly. It is the trade-off between individual and collective needs that governs all good cities and societies!

This is very similar to the organisation and regulation of your own body. Individual body cells and tissues collectively cooperate under the influence of regulatory molecules. There is mutual benefit for the body cells in this scheme of things. The cells, of course, have to divide the labour and get something back in return, just as in the microbial biofilms. Communication is at the very heart of this cooperative behaviour.

An organism is a 'society' of cells of various types. The human body, for example, contains literally hundreds of cell types. It has been estimated that there are about 256 cell types in the human body, each type represented in millions to billions, all in a network. Even though the major organs (brain, kidney, liver, lungs and so on) are only a few in number, each of these organs has multiple cell types within it. The complexity of an organism increases as the number of cells in the 'cellular society' increases. There are interactions of a more complex nature between cells than you would expect in a simple, small organism. A bacterium has just a single cell; it cannot compare with multi-cellular organisms in terms of functional capacities. But it is true that a larger organism has to work quite hard to regulate the functions of all its constituent cells and bring order to a huge cellular community.

For example, if you look at a small company with only a few employees, the management issues are relatively simple; compared to a large, multinational corporation, the issues to deal with are proportionately fewer. If a message needs to be conveyed to the employees, it might even be possible to drop into their offices individually and let them know. But imagine doing that in a huge corporation with offices situated across the country, or even beyond its borders. You then need a more powerful communication set-up, don't you?

A human settlement that existed in pre-historic times may have managed with a few rules. It must have been very easy for them to communicate with each other because of the small size of the group. For the sake of comparison, look at modern human society. There are millions of people living in cities spread over distances measured in hundreds and thousands of kilometres. This requires not only better communication methods but also far more complicated sets of rules and regulations to bring about social order. When organisms remained unicellular, their communication needs were similar to those of the early, pre-historic settlements. There was no great need for complex communication strategies. As organisms became more complex, their increasing size and increasing number of cells demanded a long-distance form of communication to bring coherence to their function. That is when the nervous system evolved as a means of telecommunication in the biological world. I will come back to this issue as we go along.

Advances in our modern information technologies are directly responsible for the ever-increasing size of big industrial and commercial organisations. If you take a look at the business section of our national daily newspapers, you will realise that not a day passes without a hostile, or non-hostile, takeover of a business corporation by a competitor. Quite often, the 'prey' organisations are huge enterprises in their own right. Obviously, the 'predator' company that takes over their management has to be bigger still. The merged organisations are in many cases so large that they can dwarf the gross domestic products of some nations! Their operations span the globe, with offices dotted across hundreds of countries. The sheer volume of information that has to be managed is daunting. But this has not deterred organisations from the practice.

I cannot imagine this happening in the early to mid-1900s, or earlier, because there was no way a company could have managed the necessary information flow. Business houses of the 19th century, and before, were local enterprises that were small and just sufficient to fulfil the needs of the local community. I can compare them to small prokaryotic organisms, or maybe small multi-cellular organisms. Their communication needs were small, though it is true that their productivity was also small. They had to be less sophisticated. The same holds good for early, pre-historic human settlements as well. Probably, all of them represent stages in the development of complex systems.

What is the limit to the growth of biological organisms, societies and business organisations? I am sure this is directly determined by the information management capacities of the systems involved. A system will collapse at the point it fails to transfer information to its constituent units, because it cannot then survive as a whole. We are fascinated by talking and reading about the dinosaurs. They have provided meaty themes for movies, TV serials and scores of books, and people never seem to run out of steam discussing them. The BBC TV series _'Walking with Dinosaurs'_ attracted enormous interest some years ago; an estimated 12 million people are said to have watched it. There have been quite a few films made about dinosaurs too. From the perspective of information needs, did the dinosaur meet the challenge or not?

A dinosaur must have had huge information transfer needs. It was so large an organism that it must have had an incredible number of cells in its body, and those cells had to be regulated to function in a meaningful manner. Just like any other organism's, the dinosaur's metabolism must have depended on inter-cellular co-operation. We have no idea of the actual number of cells in a dinosaur's body, but we do know that its brain was not proportionately big. Naturally, its information processing capacities must have been less than ideal.

Perhaps a multi-cellular organism of this size was an evolutionary mistake, because it came far ahead of the better brain capacity that was to evolve tens of millions of years later in a species called man. There have been endless theories on why dinosaurs vanished from the face of the planet. I wonder if some failure of their information machinery, due to an external or internal insult, underlay their extinction.

An individual cell's function within an organism is influenced by a number of other cells, some near and some far. All cells are dependent on each other for their own survival. They need to be able to respond to signals from other cells, and this intercellular communication is an integral component of the metabolic reactions within the cell. There are a number of ways in which cells can communicate with each other. Cells have evolved not only informational molecules but also means of capturing the signals encrypted in them. The ability to capture information is as important as encoding information, just as in your real world. What is the point of speaking a language if no one can understand it? The target cells for each of these types of information-rich molecules have dedicated information-capture tools called receptors (see chapter 3). These receptors are like molecular aerials. Their job is to decode the messages coming in the form of a hormone, a growth factor or something else.

Communication between cells can be short range, such as when it happens between two neighbouring cells. Most often, though, it is a long-distance call. Hormones represent biological signals between two or more cell types. These signals are transported down the blood stream to target cells situated far away. For example, the pituitary gland, one of the most crucial control centres of your body, is situated near the base of the brain. It controls a number of body functions by secreting informational molecules called hormones.

Each pituitary hormone represents one or more specific biological programs to be initiated by the target cell(s). One of the targets is the reproductive organ, situated in your lower abdomen, two to three feet away from the base of the brain where the pituitary gland sits. The hormones have to travel by the blood stream over this 'huge' distance. A distance of two feet, or 60 centimetres, is enormous considering cellular dimensions. An average cell is about 10-20 microns in diameter; a micron is a millionth of a metre. A distance of 0.6 metres is 60,000 times the dimension of an average cell. On the human scale, a distance of 60,000 times a human body would be roughly 100,000 metres, which is about 100 kilometres! Cells rely on molecular signals floating around in the blood 'sea' to communicate with cells so far away, in a form of wireless transmission! Can you expect it to be any more specific than throwing your package into the sea and expecting it to reach the precise destination? Will all cells access the signal indiscriminately? You will be surprised to learn that the cells accomplish a level of specificity in communication matching our own! I will return to these issues in a later chapter. For the moment it suffices to reiterate the point I have already made: cellular informational molecules like hormones have corresponding cell-surface receptors (also molecules) situated only on the target cells and absent from all others. This brings about the incredible specificity and privacy in biological communication.

For wired communication this specificity is already built in through nervous connections. Even then, going by cellular dimensions, the distance from the top of your head to your foot could be anything from 1.5 to 2.0 metres. A distance of 2.0 metres translates to 200,000 times the length of the average cell. In terms of human dimensions, this is equivalent to 200,000 times the length of a human being, which would be roughly 300,000 to 400,000 metres, or 300-400 kilometres! Our body therefore has the ability to transfer information across the cellular society over a distance equivalent to some 400 kilometres on the human scale! That is the power of biological telecommunication for you!
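The human-scale conversions in the last two paragraphs follow a single recipe: divide the biological distance by a cell's diameter, then multiply by a human's height. A short Python sketch, using the text's assumed cell diameter of 10 microns and a representative height of 1.7 metres, makes the arithmetic explicit:

```python
def human_scale_km(distance_m, cell_diameter_m=10e-6, human_height_m=1.7):
    """Re-express a biological distance, counted in cell lengths,
    as the equivalent distance on the human scale, in kilometres."""
    cell_lengths = distance_m / cell_diameter_m
    return cell_lengths * human_height_m / 1000.0

# Pituitary-to-gonad hormone journey: ~0.6 m through the bloodstream.
print(round(human_scale_km(0.6)))   # ~100 km, as in the text

# Head-to-toe nerve 'wiring': ~2.0 m, for humans 1.5-2.0 m tall.
print(round(human_scale_km(2.0, human_height_m=1.5)),
      round(human_scale_km(2.0, human_height_m=2.0)))  # 300 and 400 km
```

Varying the assumed cell diameter (the text allows 10-20 microns) halves or restores these figures, but the conclusion stands: on our own scale, hormones and nerve impulses routinely cover hundreds of kilometres.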

Social information can reside in lasting forms like books, pictures, inscriptions and magnetic discs. But when you are talking about biological information, there has to be a finite limit to its longevity. In a living organism, information is in constant use. An organism is considered dead when the information it contains is no longer being used; the longevity of biological information is thus directly linked to its use.

A gamete, for example, is an information capsule in every sense of the word. A sperm and an ovum may be tiny single cells, but they are packed with all the information needed to form a human being. They are a condensed form of genetic information ready for transfer and can carry the information down generation after generation. All genetic modifications undergone by the parent organism are recorded here. Gametes are the links in the evolutionary chain of information transmission. A seed does the same job for the plant. A seed holds all the information necessary for a plant to grow. Just drop a seed onto your garden soil. By doing so you activate the information programs encapsulated in the seed! Miraculously, a tiny plant grows out of it. It is one of nature's true wonders.

Some biological systems can suspend genetic information access for indefinite periods of time. Bacteria are known to go into a state of dormancy if conditions are not favourable for growth; microbiologists call it sporulation. The bacteria cover themselves with a thick outer coat to protect themselves against harsh conditions in the surroundings. A bacterial spore is not actively alive, but it is not dead either. This is a rather intriguing statement. The spore still retains the information to spring to life when external conditions change for the better. If water and nutrients are provided in its immediate environment, the spore comes alive. Gene programs are quickly activated to return it to the state of life.

The kinds of bacteria that normally form such spores belong to the genera _Bacillus_ and _Clostridium_. Some members of these genera are deadly pathogens. _Bacillus anthracis_ can cause the disease called anthrax in cattle and humans. It is the same anthrax that caused such a scare in the US following reports of terrorist attempts to spread it through postal mail.

_Bacillus cereus_ can cause food poisoning in humans. _Clostridium tetani_ can cause the deadly disease called tetanus. _Clostridium perfringens_ causes gas gangrene. The spores of these bacteria are resistant to heat, chemicals and radiation that can kill normal bacteria. They can resist boiling temperatures for several hours! Their outer coat is purposely so dehydrated that it becomes resistant to heat. To really get rid of them, you need to autoclave them at 121 degrees Celsius, at 15 pounds per square inch of pressure, for at least 15 minutes. It is a big problem decontaminating food, dressings and wounds contaminated with these spores! Spore-forming bacteria are thus a menace to the medical community: they cause really serious diseases and are difficult to control by the routine aseptic procedures in our hospitals.

What would you call a bacterial spore? Is it alive or dead? Is it just information frozen in time? Obviously, the fact that the spore can spring back to life means the information was kept in a usable form. I find it difficult to believe an organism can protect its information content for such long periods in an unused form. We always think biological information is dynamic, but in some cases at least it looks like it can be static too!

Spores of microbes, found in fossils thousands of years old, have been brought back to life! It is common knowledge that spores can remain alive for hundreds of years. Recently, researchers brought back to life a bacterial life form that lived on our planet before the dinosaurs did! Russell Vreeland and his colleagues at West Chester University in Pennsylvania isolated the ancient bacterium from the Salado salt formation in New Mexico (Nature, vol 407, p897). These salt crystals are believed to have formed 250 million years ago, and tiny spores of _Bacillus permians_ seem to have been trapped in a cavity in the salt 560 metres below the surface. The spores were liberated from hibernation by exposure to growth medium and were observed to grow into rod-shaped bacteria! A bacterium retaining the information necessary for life in a state of dormancy for millions of years! I am fascinated to think about the nature of the information held by the bacteria in the spore form. Is it some kind of information in suspended animation?

Sometimes life forms go into an extremely slow rate of metabolism, almost to the point of appearing non-alive. Brine shrimp embryos, of the genus _Artemia_, form cysts that play dead for years before springing into adult life. This is a 300-year-old mystery, and scientists had speculated that these organisms stop their metabolism completely. But how then can you call them living? Aren't living organisms expected to continuously expend free energy? Now there seems to be an explanation in hand. James Clegg of the Bodega Marine Laboratory in California has detected metabolic activity in the baby _Artemia_ that is extremely small compared to other life forms, giving these crustaceans the slowest metabolic rate ever measured. It is calculated that they must be spending just a ten-thousandth of a calorie per year for every milligram of shrimp weight!

In December 1999, the journal _Science_ carried a report of a startling discovery: life forms 12,000 feet below the East Antarctic ice cap! A team led by Dr. David Karl, of the University of Hawaii, found the bacteria isolated from the ice cores to be alive, possibly undisturbed for millions of years! A second team, headed by Dr. John Priscu of Montana State University, found the bacteria too, but could not show that they were still alive. They concluded that, if the bacteria were alive, they were growing so slowly that their growth rates could not be detected. Dr. Priscu agrees that, if they are indeed alive, they are in some kind of 'maintenance mode' of metabolism!

At this point, I also have to point out an intriguing bacterial ability: bacteria can make use of the DNA of dead members of their own kind, effectively prolonging the 'life' of information. When bacteria die as a result of harsh environmental conditions, their DNA is liberated into the environment. The cell wall that held the DNA inside the bacterial cell dissolves after death, and the DNA is let out as a free, lifeless molecule. Surprisingly, this DNA is no longer part of a living system but still retains its information, as if it were a book or a document. It still 'lends' itself to information capture, if only a living system can act on it. Living bacteria can access this free DNA and pick up a bit of it by a gene exchange mechanism, much as our cells exchange genes during meiotic cell division! In microbiological jargon, this is called transformation. Colloquially, I do not know whether I can call it 'DNA scavenging'! It is precisely this ability of bacteria that is exploited in genetic engineering, when you entice bacteria to take up a foreign gene, usually one of pharmacological importance. How ingenious of man to exploit a natural process that has been going on for hundreds of millions of years!

Now let us move into the world of viruses. Many viruses have barely a few genes in them. That is all they have to qualify as information for their survival. Naturally, they have to rely on strategies like 'information parasitism'. They live using information held by host organisms like us. Some viruses, such as retroviruses, physically integrate with our genomes because they do not have enough biological information for independent existence!

It is indeed incredible that there are tiny viruses which infect even bacteria: parasites of the parasites! They are called bacteriophages. Phages are used in genetic engineering as carriers of foreign genes precisely for their ability to infect bacteria, which then start using these genes to make products for us as if they were 'invisible factories'!

Most viruses do not have the infrastructure to make proteins on their own. With only a handful of genes in all, how can a virus go through the complicated process of protein synthesis? It is like a company so small, with only limited capital and workforce, trying to run a full manufacturing facility! Protein synthesis requires many molecular structures of the cell, which obviously do not exist inside the virus particle. But the virus has to convert the information held in its genes into protein form for its own survival. Its information content may be small, but the virus has to use it anyway; otherwise, it cannot live. Viruses rely on the host cell's protein-synthesis machinery to convert their gene information into protein form. This is like outsourcing: they rely on the manufacturing capacity of another organism for their needs. They do it, unfortunately, in a subversive manner. They cunningly take over the protein-synthesis system of the host cell, so that it is no longer possible for our cells to make our kinds of proteins. Only viral proteins are made!

The way a virus infects a host cell like ours is interesting. A virus typically has a protein envelope surrounding its DNA or RNA core. This protein envelope is used for binding to human or animal cells. Once bound, its genetic material is literally injected into the host cell! This viral information throws the whole information network of the host into disarray! As I said earlier, I cannot help thinking of the way computer viruses paralyse our computers and our ability to use our own cyber-information.

The virus makes hundreds of copies of itself and can spread to other host cells in the body, starting the process of infection and inactivation all over again. Invariably, the virus also spreads from person to person, ravaging the whole community. It is unbelievable that a tiny creature, without enough biological information for independent existence, can inactivate the information systems of our mighty cells. If you had any doubts about the 'information parasitic' abilities of viruses, recall the terror they unleash time and again in the form of AIDS, influenza and Ebola outbreaks, with the potential to wipe out a substantial part of the human population.

There is an exotic theory for the origin of viruses, which are so incapable of independent existence. They are believed to be fragments of the DNA of larger organisms, formed in the course of DNA transfers and exchanges. These fragments acquired the status of 'living organisms' in the course of time. It is an incredible hypothesis that cannot be proved in the laboratory. The reason the theory came into being is the surprising ability of many types of viruses to integrate into host DNA. Why do they have this affinity for the DNA of humans or other higher organisms?

It is incredible that many viruses use molecules on their outer coats which mimic human molecules for their illegal entry into the human cell. The viruses use these 'humanoid' molecules to bind the receptors which human cells use for binding their own molecules. For example, the Vaccinia virus, a poxvirus, carries a molecule on its surface resembling human Epidermal Growth Factor. Our cells use this growth factor as a communication molecule, to stimulate the growth of cells. Because the surface of the virus bears a humanoid version of the growth factor, our body cells cannot distinguish it from the real thing. The end result is that the cell receptors meant for the human growth factor bind and internalise the virus inadvertently! The virus gains illegal entry into the cells, like using a forged passport! The point is: where did the virus get hold of our genes? It could have obtained them from our own cells. This could have happened if normal gene exchanges produced fragments of DNA containing these genes or parts of them. This is the basis for the hypothesis of the origin of viruses as human DNA fragments that spilled over, 'DNA crumbs' like breadcrumbs!

There are 'organisms' called viroids, which are nothing more than a fragment of RNA; a viroid can be called no more than a naked molecule of nucleic acid. But they, too, are believed to be a form of life. Viroids defy a definition of what constitutes life! Prions, infectious agents made only of protein, with no DNA or RNA, are another baffling form of biological information. The agent connected with mad cow disease is of this type. There was a time when the existence of prions was questioned, but that is no longer a debate: they are now accepted as real infectious entities. The discoverer of prions has actually been awarded the Nobel Prize for his work. Mad cow disease (BSE) and its human counterpart, variant CJD, are a big public health menace in the western world. Countries in Europe and elsewhere have taken international steps to regulate meat imports simply to avoid the risk of prions being transmitted across borders. There were restrictions on selling meat on the bone and on visits to farms, and compensation was even paid out to farmers who culled their animal stock in suspected areas of transmission.

The point is that very tiny organisms, with little or negligible amounts of information held inside them, can create medical and public health havoc. The other way of looking at this is that organisms can parasitise information held by others just to survive. Unfortunately, the outcome of information parasitism is often of huge public health importance to us, though the lowly organisms were only trying to survive.

It is worth pondering why these tiny organisms stay tiny despite hundreds of millions of years of evolution. Given their rapid rate of multiplication, the bacteria at least could have advanced to a more sophisticated state of organisation. Why have they not done so? They remain single-celled, though they may live in a community-based organisation, as seen in biofilms. Are there disadvantages to becoming multi-cellular? Or are the demands of information management putting hurdles in the way?

When it comes to human society, we have now transformed the communication landscape beyond recognition. When man lived in small groups, as in the pre-historic past, there was no great need for communication. We were like the tiny single-celled, unicellular organisms. All that was required was probably a hearty shout that everybody in the community could hear. This was no longer possible when the numbers grew into hundreds and thousands. Maintaining order amongst the growing numbers needed ways of communicating with people even when they were far away. Sound, light, smoke and horse-borne messengers proved useless when societies expanded across continents and the numbers grew into millions. Fortunately, the concept of telecommunications came into being around the turn of the 20th century. Cables carrying sound messages connected speakers and receivers separated by huge distances. The telephone and the telegraph were set to transform human society.

But people living many hundreds of years ago were not so fortunate. In the sixth century B.C., in Persia, slaves with loud voices were put atop tall towers to shout messages from tower to tower. About 30 messages a day could be transmitted that way. On the battlefield, a little secrecy could be maintained: warriors would pass orders by word of mouth down the line, forming a 'live telephone'. Julius Caesar's _Commentaries on the Gallic War_ describes this practice. We take so many things for granted nowadays when it comes to information transfer. We do not realise for a moment that things were not so easy long ago.

Beating drums was a method of information transfer practised by the native peoples of America and Africa. Each tribe used drums of its own design. Duty operators manned the drums round the clock: at any time of day a message could come in from a neighbouring village, and it would be passed on immediately by whoever was on duty. For many centuries, sound signalling was in operation in Africa. As late as the turn of the twentieth century, during the colonial wars, the drum telegraph was used by Africans to convey messages about European troop movements. Messages could cover a distance of up to 300 km a day.

Over a long period in history, man used sound signals as a means of communication. Horns, trumpets and bells have been used like the drums. After the invention of gunpowder, it became possible to use the sound of a rifle or cannon shot for the same purpose. People living in Moscow in olden times used bells to convey the message that a fire had broken out. One of the biggest drawbacks of sound signalling was that it travelled quite slowly and over limited distances. As human societies expanded, we needed speed in everything.

Light caught our attention as a potential signalling mechanism. Signal fires had the advantage of speed. Aeschylus, in his _Agamemnon_, tells us how King Agamemnon, who led the Greek troops in the Trojan War, promised his wife Clytemnestra he would let her know at once when Troy had fallen and the war was over. Men sent to the tops of mountains, and to the islands between Asia Minor and Greece, had to set up signal fires. Eight signalling posts covered a total distance of 550 km from Mount Ida, near Troy, to Mount Arachnaion, not far from the citadel of Mycenae. Clytemnestra knew that Troy had fallen when she saw a bright fire one morning.

Horses were used to carry messages by the 14th century in some countries. Stations were set up all along the route, where the riders would change horses. A distance of 150-200 km could easily be covered in a day. By the end of the 17th century, Russia had over 3,200 horse-relay stations and about 3,700 horses to run the relay mail service.

The _Pony Express_ was a postal service linking the eastern United States with the Far West, started in 1860. At the time, no railway went farther west than the Mississippi and Missouri rivers, and all mail for the West had to travel by stagecoach on a long and slow route. The Pony Express used a shorter route, starting at St. Joseph and running across the salt deserts of Nevada and Utah. A series of relay stations, where horses were kept, was set up between 16 and 24 kilometres apart all along the route, and riders were stationed about 120 km apart. The riders carried the mail between relay stations, changing horses at each station in no more than two minutes. From St. Joseph to Sacramento, California, the mail took only nine days, saving two weeks compared with the mail coach route. It was unfortunate for the Pony Express (but not for mankind) that the electric telegraph line across the U.S. was completed at about that time, enabling messages to be sent in minutes instead of days. The Pony Express ended 18 months after its launch.

We have since had advances in communication that would have been considered impossible not long ago. Technology makes it possible to dial somebody thousands of miles away and talk to them instantaneously despite the physical separation. Every time I do that, I am amazed! E-mail, fax, the videophone and the like have transformed the way we communicate. Information transfer is now as vital to human society as food. It is difficult to imagine what man would do without his communication tools.

# 2. CYBER WARFARE OF THE BIOLOGICAL KIND

We are all now only too aware of the Y2K millennium bug. At the turn of the millennium (the year 2000), fear spread across the world that our computers would recognise the year not as 2000 but as 1900! This was because we were accustomed to entering the year by its last two digits. It was therefore thought that computers would produce big anomalies in the chronology of records, creating unexpected problems. Much was written about a looming global crisis: a breakdown of the information networks enabling information transfer in areas such as food distribution, air traffic control, health care, power generation and finance. There was global panic about the effects of such a failure of our information networks.

The mania reached epic proportions when rumours spread that even air travel was risky, as planes would fall out of the sky because of errors in their onboard computers! I remember reading a newspaper article about a man who was learning to hunt with a bow and arrow for fear that transport chaos would bring a food crisis. He gave an interview to the newspaper, proud of himself for possessing such foresight and planning ability! Billions of pounds and millions of man-hours were spent all over the world fixing the problem. Even then, there were fears that some computers somewhere in the world could still be faulty and could cause costly breaks in the information chains. We have grown so dependent on our information technologies that it is no wonder the millennium bug consumed such a large effort from human society as a whole.
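The arithmetic at the heart of the bug can be shown in a few lines. This is a hypothetical toy written for illustration, not code from any real legacy system: it simply shows how storing only the last two digits of the year breaks calculations at the century boundary.

```python
# A toy illustration of the Y2K problem: years stored as two digits.
# (Hypothetical example, not taken from any real legacy system.)

def two_digit_age(birth_yy, current_yy):
    """Age computed naively from two-digit years, century lost."""
    return current_yy - birth_yy

# Someone born in 1965 ("65"), checked in 1999 ("99"): correct.
print(two_digit_age(65, 99))   # 34

# The same person checked in 2000 ("00"): the century is gone,
# and the 'age' comes out as a nonsensical negative number.
print(two_digit_age(65, 0))    # -65
```

The fixes applied worldwide amounted to either widening the field to four digits or picking a 'pivot year' to guess the missing century.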

Computer viruses will cause serious problems for human society in the years to come. As more and more people use the Internet, the reach of a computer virus widens. The famous _Melissa_ virus, written and distributed by a man in the U.S., was capable, once inside a PC, of automatically e-mailing itself to the first 50 people in the victim's Microsoft Outlook address book. Within days, _Melissa_ had infected a million PCs!

The anti-virus software company _Sophos_ warned that a sudden influx of new Internet users could only escalate computer virus attacks. The Internet Storm Center (ISC), a programme of the SANS Institute, is a cooperative cyber-threat monitoring system that tracks the daily level of malicious activity on the Internet using traffic-light-style 'Green, Yellow, Orange and Red' alert levels. A few years ago it was estimated that, on average, an unpatched Windows PC connected to the Internet would survive for about 20 minutes before being compromised by malicious software. In 2006, the number of new Windows viruses and worms was estimated to be in the region of 5,000 to 20,000.

Perhaps this is the price we have to pay for our total dependence on information technology. Maybe it is worth the trouble, given the vast improvements in our lifestyle over the last couple of decades. But we have to be ready for more threats to the integrity of our information-hungry society. Computer viruses threaten our networks from time to time, and this is perhaps one reason organisations still rely on paper for storing information despite advances in computer technology. A survey conducted in Western Europe many years ago by the independent research company MORI, commissioned by Iomega, then the world's largest provider of data storage devices, estimated that 57,500 companies could face total collapse if they lost the information stored in their computers. This could cost up to £69 billion in lost revenues. Only 14% of the companies in the U.K., France and Germany would report no financial loss if they lost all their computer data. I am not sure this is still the case, considering that the survey was done many years ago; I am sure these organisations would by now have some sort of backup files to keep their data secure.

We also face a totally different form of threat to our ways of life, given the increasingly information-dependent nature of our society. We are becoming targets of attack from people who think they can paralyse our society by hitting where it hurts. What can be a better target than our information networks?

Many years ago, military research groups in Russia and the US were believed to be racing to produce a _'logic bomb'_: a computer virus that could be planted in an enemy computer network and then activated at will, scrambling its systems. The US worried that China, Iraq and Libya might be developing similar worm viruses too. These could be unleashed to disrupt cash transfers in the banking industry, shut down electricity and water supplies, or devastate other networks, including weapons-management systems. Pentagon planners favoured cyber-warfare as a bloodless alternative to conventional methods of warfare. Military strategists began to think the time had come to use keyboards, as much as bombs, to demolish the enemy. In the late 1990s, during the U.S. military initiative against Yugoslavia, an all-out cyber attack on Serb military targets and civilian services was considered. But lawyers for the US Defense Department warned the government that such an attack could violate the ethics of war and leave the US open to war crimes charges, because civilians too would be affected by the cyber-attack!

The world has already seen such assaults on communication systems. In June 1997, hackers proved they could cripple the US electricity grids during a cyber war-game exercise organised by the Department of Defense. July 1998 saw a group calling itself the Portuguese Hackers Against Indonesian Tyranny break into and erase a number of Indonesian government and related websites. When the Chinese embassy was bombed by mistake by NATO during the Kosovo conflict, the Department of Energy and other US government websites were hacked. And in London, the financial capital of the world, we saw up to 10,000 cyber attacks against the City's financial industry by activists demonstrating against third-world debt.

In February 2000, the world saw a new form of crime in which computer vandals attacked popular websites such as Yahoo, Amazon.com, Buy.com, eBay and even the CNN media site. These sites were crippled by thousands of junk messages and bogus visits that prevented legitimate users from gaining access, effectively denying the service these firms are expected to provide.

A survey conducted in 1998 showed that 129 of 520 large American corporations had reported such vandalism to varying degrees. Attacks like these had happened over the preceding years, but never with such coordinated intensity. The FBI now runs the National Infrastructure Protection Center to monitor computer threats across the country. A senior official at the centre said that automated hacker tools are widely available on the Internet and that even a 15-year-old could launch these attacks! Though the attacks in the U.S. did not result in real devastation, they are believed to have cast doubt on the future of e-commerce.

In January 1999, a group of hackers broke into the computer systems of at least 12 multinational companies and managed to steal confidential files. They demanded ransom payments for the safe return of the data. Amazingly, these files were said to contain customers' credit card details and even corporate trade secrets! The criminals threatened to post the card details on the Internet for anyone to make use of. A spokesman for _Visa_ confirmed that the company had been hacked early that year, with intruders gaining access to its information systems, potentially to crash them. _Visa_ handles about a trillion pounds of business every year from 800 million customers, and one can imagine the extent of the loss if its systems crashed!

Your vulnerability to such cyber-attacks depends directly on the degree of inter-connectedness of your information systems. The Yugoslavian computer network, for example, was unsophisticated to say the least, and the rudimentary nature of its information systems would have made it less vulnerable to cyber-attack. I guess over-dependence on information networks has a negative side too: it can make a system collapse with minimal effort.

I cannot imagine for a moment this happening to human society before the 1900s. The world was not this connected then. Now the world is affectionately called a global village, because we can chat on the Internet with people known and unknown while sipping a drink, just as we would in a public house in our local community. National borders are no longer barriers. Distance is a non-entity. Our world is now one huge whole.

Economic changes in Far Eastern countries trigger violent reactions in the Western financial nerve centres. A volatile political situation in a faraway country is followed very carefully to see how it might affect us. An earthquake in Japan is seen by people in Africa and America in less than ten minutes after it starts. Forest fires in one country are known about immediately and analysed for their impact on global climate. In olden days, people talked about the existence of faraway lands, and the different peoples living in them, as if they were legends rather than facts, because they had no way of knowing whether they were true; their belief stemmed from the oral accounts of travellers who had been there. Today, we see events anywhere in the world almost instantaneously, as they happen. In a way, these media serve the role of sensory receptors in your body: they capture information and relay it to the users.

I guess that, as a species, we humans are more vulnerable than most smaller organisms to information failures within the body. Because of the sheer scale of our cellular communication needs, we would face a rapid breakdown of cellular functions should our cellular information networks fail. The same cannot be said of smaller organisms like bacteria and viruses. They are small and independent, and less vulnerable to information crashes than we are.

From the point of view of our own individual body, are there any threats to our ability to perceive and process personal information? The brain is the most important part of the body because it holds and controls data vital to our survival, and it controls the body's most important functions through its nervous connections. But the brain is not immune to assaults on the integrity of its 'memory databases'.

Meningitis is a real inflammation of the brain coverings caused by a real bug, not a cyber-bug. Encephalitis is real inflammation of the brain itself, again due to microbes. Both viruses and bacteria can cause the brain to inflame, affecting its functions, and both are deadly diseases. We read occasional reports of isolated episodes of meningitis in children. The fear is what would happen if there were a massive outbreak.

In a single affected person we can lose the data controlled by that brain, but there is danger to society as a whole if the disease spreads across the community. If large sections of the population are affected, it could be a significant threat to the knowledge held by society as a whole. I wonder if these organisms are waging a cyber-warfare of their own against us.

Bacteria, viruses and fungi are known to cause disease by affecting a number of different tissues and organs. Usually, any given organism will preferentially affect one type of organ or tissue. For example, the hepatitis viruses, which cause jaundice, attack the liver. The AIDS virus mainly affects a class of white blood cells. And there are organisms that attack our brain, the central control centre: these are the organisms that cause the meningitis and encephalitis I mentioned in the previous paragraph.

Poliomyelitis, too, is a disease caused by a virus, one that affects the nerve cells in your spinal cord, causing them to fail. The spinal cord conducts impulses from the brain to the muscles, enabling commands to pass from the brain cells to the muscle cells. Unless this happens, the muscles cannot be made to contract; the result is loss of movement for the organism. In essence, the poliovirus interferes with the transfer of information between the brain cells and the muscle cells. I feel this is nothing but disruption of information by a real virus targeting humans, no different from the computer viruses we all have to deal with.

All of us know about the BSE crisis. Mad cow disease, bovine spongiform encephalopathy, is caused not by a virus but by a prion, and the crisis cost Britain billions of pounds. Millions of cows were slaughtered, all because we were scared of an invisible agent that threatened to infect human brains too and cause a breakdown of our nervous system. In other words, we were scared of the BSE bug as much as of our computer bugs, for its ability to paralyse our brain's information networks.

Another category of biological information hacking involves the venoms and toxins found in nature. We all fear them and avoid them if we can. If you look at the poisons and venoms secreted by life forms, you realise how much biological warfare these life forms employ to ensure their survival.

The primary reason they produce poisons and venoms in the first place is to kill or paralyse their victims, usually competing life forms that threaten their own existence. It is illuminating to examine the mechanisms by which poisons and venoms act: I see them as pure information-hacking tools of the highest order. Venoms are produced by animals as hunting or defence tools, often delivered by injection through a specialised organ. A poison, on the other hand, is produced by animals or plants and is harmful when touched or consumed.

Snakes are the most widely known animals to actively use venoms to kill their prey. Snake venoms contain a variety of protein-degrading toxins (proteases) and also enzymes (nucleases) that can degrade DNA! The effects of protein degradation are widespread and varied, depending on which host protein is degraded. For example, degradation of skin and connective tissue proteins in the victim, including man, enables the venom to spread through the body, while breakdown of the proteins that control blood clotting results in haemorrhage and loss of life. What is even more amazing is that snake venoms target information systems. Cobra venom can block the acetylcholine receptor, which means that signal transfer through acetylcholine at the neuromuscular junction cannot happen. The result is muscular paralysis on a scale otherwise achieved only by high doses of curare. Snake venoms also target ATP, the energy currency inside cells, by means of enzymes called ATPases that degrade it. The effect is that the prey cannot use its energy fuels, and so it becomes paralysed.

Australian funnel-web spiders are very aggressive, nocturnal creatures, regarded as the world's most dangerous spiders. Their venom can kill a human being in 15 minutes. Spider venoms are generally neurotoxins, lethal to humans and insects, and they target ion channels and receptors involved in important physiological processes. The lethal toxins of the Australian funnel-web spider, called _atracotoxins_, target voltage-gated sodium and calcium channels in the somatic and autonomic nervous systems, interfering with the conduction of nerve impulses. In general, most venomous substances are neurotoxins: the aim is to paralyse the prey so that it succumbs without a fight.

Puffer fish, and related species such as the porcupine fish, carry a toxin called _tetrodotoxin_. It is a nerve toxin that blocks nerve transmission through its effect on voltage-gated sodium channels. The toxin is used as a defensive agent to ward off predation, or even as a predatory venom, and it has been isolated from widely differing animal species, including worms. It is about 100 times more poisonous than potassium cyanide. The skin and organs such as the testes may contain enough toxin to paralyse even the diaphragm, by interfering with neuromuscular transmission, and produce respiratory failure, whereas the flesh of the puffer fish may not be so toxic. A dose may not be fatal yet still leave a person in a state of near-death for many days, all while remaining conscious.

It is interesting to note that the local anaesthetic drugs used in surgical practice work very much like tetrodotoxin. They too block sodium ion channels and interfere with neural transmission. In the presence of a local anaesthetic agent, the pain receptors are unable to relay the pain sensation because of this sodium channel block. With no pain signal coming from the area, the brain perceives no pain, even though the surgeon may be cutting and stitching your skin!

_Saxitoxin_ is a toxin typically acquired by eating shellfish contaminated by 'red tides', or algal blooms. The toxicity takes effect rapidly, leading to respiratory collapse and death. The precise molecular mechanism is blockade of nerve impulses by preventing the passage of sodium ions, as described above for the other toxins.

Snake, bee, scorpion and sea anemone toxins are also known to block voltage-dependent potassium channels in the central nervous system and the cardiovascular system. These toxins interfere with the conduction of nerve impulses in the brain and heart, paralysing and killing their subjects. _Dendrotoxins_, small proteins isolated from mamba snake venom, block potassium channels in the nerve cells of their victims, paralysing them.

Even microbial toxins are highly targeted information-hacking tools. Their targets are often the DNA-RNA-protein information cascade: they interfere with the flow of information from DNA to RNA and, subsequently, to the individual proteins. They kill by affecting the decoding of genetic information. The commonest motif in microbial hacking is to hit the protein-synthesis machinery. Suppose we compare the protein-synthesis apparatus of a bacterial cell to a manufacturing plant. This plant produces many different proteins, comparable to the gadgets and devices of our technological world. The proteins made by the bacterium in its protein-synthesis apparatus, consisting of ribosomes and messenger RNA, are vitally important for the bacterium to run its own internal technology. The best way to hit the bacterium is to block this process, which prevents the information held in its DNA from being converted into the final user form, the proteins.
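The cascade, and what happens when a toxin halts the ribosome step, can be sketched as a toy program. The tiny codon table and the `ribosome_active` switch below are illustrative simplifications of my own, not a model of any particular toxin:

```python
# A toy model of the DNA -> RNA -> protein information cascade,
# and of a 'toxin' that halts the ribosome step. Purely illustrative.

# A small subset of the real genetic code, enough for the demo.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "AAA": "Lys", "UAA": "STOP"}

def transcribe(dna):
    """DNA coding strand -> messenger RNA (T becomes U)."""
    return dna.replace("T", "U")

def translate(mrna, ribosome_active=True):
    """Ribosome reads codons three at a time; a toxin can stop it."""
    if not ribosome_active:
        return []          # protein synthesis halted: the 'plant' shuts down
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3])
        if amino == "STOP" or amino is None:
            break
        protein.append(amino)
    return protein

mrna = transcribe("ATGTTTAAATAA")
print(translate(mrna))                         # ['Met', 'Phe', 'Lys']
print(translate(mrna, ribosome_active=False))  # []
```

The second call is the whole strategy of toxins like those described below: leave the DNA intact but make the information in it unreadable.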

Fungi are known to produce a number of antibacterial substances that block this conversion of information from DNA to RNA to proteins. The idea is warfare between fungi and bacteria within an ecological niche, to ward off competition. In other words, the fungi are able to hit the bacterial information systems to kill them.

The funny thing is that we humans, mighty humans, have unashamedly learnt to exploit these antibacterial substances of the fungi and other microbes for medical use! Many of our antibiotics are borrowed microbial hacking weapons!

Some of the bacteria that cause troubling human diseases, such as tetanus, gangrene, diphtheria, dysentery and cholera, use information-hacking tools of their own to cause these diseases.

Microbial toxins illustrate this well. Diphtheria toxin inactivates elongation factor-2 by ADP-ribosylating it, halting protein synthesis in tissues such as the throat, heart and nerves. Dysentery caused by _Shigella dysenteriae_ is due to shiga toxin (Stx), an N-glycosidase that depurinates the 28S rRNA of the 60S ribosomal subunit, halting protein synthesis. α-sarcin, a fungal toxin, cleaves the large ribosomal RNA, and ricin, from castor beans, is an N-glycosidase that inactivates ribosomes by removing a single adenine from the large-subunit RNA, blocking translation.

Sometimes, even microbes produce toxins that target the nervous system, much like the animal venoms described above. Botulism is due to a neurotoxin released by _Clostridium botulinum_ that blocks the release of acetylcholine by proteolytically cleaving _synaptobrevin_ at the synaptic junction, causing flaccid muscle paralysis. _Clostridium tetani_, the causative agent of tetanus, produces a toxin called _tetanospasmin_, which acts at the synaptic level by blocking the release of inhibitory neurotransmitters, leaving the excitatory neurotransmitters unopposed. As you can see, in both cases there is interference in the transmission of signals between nerves and muscles; the outcome is improper muscular control.

I guess the above examples amply illustrate one point, if anything: you are not alone in the information-hacking world! Venoms mostly target ion channels in nerves and muscles, or interfere with the neuromuscular transmission of electrical impulses. The outcome is the same in all cases: the muscles cannot work, and the result is paralysis or death.

Pharmaceutical companies have also realised that there is much to be gained by deliberately targeting cellular information pathways as a cure for disease. Amazingly, by some estimates two-thirds of all experimental drugs in the discovery stage, across all companies, are based on cellular information pathways! Quite a few medicines currently in use are likewise based on modulation of cellular information. Even the simple antibiotics we use are really 'information-hacking tools' made by fungi and other organisms against bacteria, which we have exploited for the benefit of humanity: substances made naturally to ward off competition from bacteria for nutrients. We use them for the same purpose of killing bacteria; the only difference is that the bacteria we are trying to hit are the ones that invade our body. The 'intellectual property' actually belongs to the fungus! This, in my view, is ample testament to the information-hungry nature of complex life forms like us.

So far we have looked at some interesting aspects of information management within complex systems: the need for an incredible communication network as a system grows in size and number, and the importance of distance and specificity in communication. Most interestingly, preserving information against external threats is a challenge within biological systems just as much as within our cyber systems.

# 3. MANAGING INFORMATION OVERLOAD

Systems constantly need to 'know'. This craving for information has made them evolve ever newer means of acquiring, transferring and filtering information. As complex systems become more and more dependent on information, they have an absolute need to handle it efficiently. Everybody has been talking about information overload and information addiction since the Internet came about. I am sure man will learn to be very selective about what he lets in. Our brain does that: it has ways of filtering out irrelevant information. The deluge of incoming information has to pass through a 'filtering gate' in our brains, which turns away useless information! I am sure man, as a social system, will learn to do the same. He has to. He will.

As individuals, we crave information all the time. We read newspapers, watch TV, or simply gossip in order to know. Most of the time we end up taking in rubbish, though we find a valuable piece of information every now and then. Man today has so much information available to him that he is probably at breaking point. People who lived in the past had very little access to information, and what they had came from some form of direct experience. It is said that throughout their entire lives they would have been exposed to only about as much information as we find in a single Sunday edition of our newspapers!

Modern man has shown a peculiar tendency to take in information even through artificial means, which adds to the information load. Naturally, we see the world through a narrow window of the electromagnetic spectrum, between wavelengths of roughly 400 and 700 nanometres. Our world is perceived as it appears within this narrow band. If we relied on natural vision alone, we could never know how our world looks in other parts of the spectrum: the ultraviolet rays, gamma rays, X-rays and radio waves. It is true that we cannot see the micro- and macro-worlds as they appear in these portions of the spectrum with our natural eyes. But artificial gadgets give us the ability to perceive the world as it looks in these portions of the electromagnetic spectrum too.

Our UV spectrophotometers help us capture information from the molecular world. A great deal of biochemical research involves studying the behaviour of molecules with spectrophotometers. Light of a wavelength in the UV range is shone through a solution of the molecules to be studied. The amount of light the molecules absorb at that wavelength depends on the identity of the molecule and on its concentration. Our eyes lack the sensitivity to pick up such signals from molecules, but our UV spectrophotometers compensate for that.
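The principle these instruments exploit is the Beer-Lambert law: absorbance A equals ε·l·c, the molar absorptivity times the light-path length times the concentration. A minimal sketch, using the well-known absorptivity of NADH at 340 nm; the measured absorbance value is illustrative:

```python
# Beer-Lambert law: A = epsilon * l * c, where epsilon is the molar
# absorptivity (L mol^-1 cm^-1), l the path length (cm), and c the
# concentration (mol/L). Spectrophotometers invert it to find c.

def concentration_from_absorbance(absorbance, epsilon, path_cm=1.0):
    """Recover concentration c from a measured absorbance A."""
    return absorbance / (epsilon * path_cm)

# Example: NADH absorbs at 340 nm with epsilon ~ 6220 L mol^-1 cm^-1.
# An illustrative measured absorbance of 0.311 in a 1 cm cuvette:
c = concentration_from_absorbance(0.311, 6220)
print(f"{c:.2e} mol/L")  # about 5.0e-05 mol/L
```

This linear relationship between absorbance and concentration is what makes the instrument an information-capture device: an invisible molecular property is converted into a number we can read.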

We use radio telescopes to study faraway planets and stars that we cannot see with our natural eyes. X-rays let us see through the body because they penetrate it. Infrared cameras help us peep into the world even in the dark. Microscopes let us see tiny things invisible to the naked eye. I view these artificial gadgets as appendages to our sensory system: they enhance our information-acquisition capacities. We are unique in the living world in that we access data through both natural and artificial means. Why are we so greedy to capture such enormous quantities of information? We are trying to access information we are not biologically entitled to. Aren't we?

I outlined in the first chapter the extent of the data avalanches waiting to smother us. Are we overdoing it? I do not know. Perhaps our unfettered drive towards ever more sensory and extra-sensory data acquisition is the result of the confidence offered by the superior information-processing abilities our computer networks and other data-management tools have made possible.

Moving on to another dimension of information management, both physical and biological, there is a risk of over-accumulation of information over time. How do systems deal with this? The longer a system has existed, the more information it is likely to have accessed and retained. It is like the difference in wisdom between an elderly person and a child, or, at the workplace, between a veteran and a new starter. From the biological perspective, I suppose the evolution of organisms is really a kind of accumulation of survival-related information.

Knowledge is digested information. Human society has accumulated so much knowledge over the millennia, and the trend has turned steeply upward in the last few hundred years. At the stage where we are now, it is impossible for any single human being to know every bit of human knowledge that is out there. In fact, some human endeavours actually require very powerful supercomputers to process the data, let alone a single human brain. I guess human society itself is a 'network of minds' with immense processing power. Advances in science, literature and philosophy are ample testament to this. The point I am trying to make here is that human society does not expect us to become masters of all human knowledge before we begin our working careers. That expectation would mean a human being spending the whole of his or her lifetime learning and training, probably without any hope of completing the 'training' before death. If that were the case, how would society ensure that people actually do any useful work? That is where functional specialisation comes in.

Functional specialisation is really all about efficient information management. Human knowledge can be broken down into distinct functional areas or skills. To name a few, we have the skills of a doctor, engineer, lawyer, accountant, hairdresser, businessman, scientist, craftsman etc. Even these divisions are too broad. For example, a doctor cannot know everything about all medical specialities. That is why we have medical specialisation into numerous branches of medicine like cardiology, nephrology, pathology etc. The same can be said of other professions as well. The advantage of this professional specialisation is that human resources can be used effectively. Of what use is it if every single human being is a doctor? Or if all of us are lawyers? Who will do all the other work? Really, nobody forces you to specialise in one of these functional areas. You do it by choice. That is what everyone else does. We all think about career choice at the time of leaving school, before we embark on higher studies, which is when the actual professional differentiation happens.

Functional specialisation, in my view, is a natural outcome within any system whose information content has outgrown individual capabilities. By differential sharing of the collective information, a system ensures that the constituent entities can collectively tap into the whole. We can actually see this in every sphere of complex systems, including life forms, and not just humanity as a whole. Before we go into the case of life forms, let us look at a system such as a commercial company. People who work in a company have job titles, which clearly indicate their respective skill groups. If the owner of the company is doing everything by himself, then it is probably a small village shop or something like that. Or he could be self-employed. There may not really be much information in this setting, so one person is able to manage. But if the company employs tens of thousands of people with offices everywhere, then you start seeing job titles and a clear demarcation of roles. This is because you would expect to see vast amounts of information exchanged between employees. Such a big company would also be engaging in complex tasks requiring skilled workers.

I feel that our body is a complex system infinitely more knowledge-based than any other known system in the universe. It would be impossible for just one, or even a few, cell types to accomplish the operations of your body. It would be like trying to run huge corporations like General Motors, Microsoft or Pfizer with just a few skill groups.

The trillions of cells in our body fall into many different structural and functional types. The cell types are all functionally and structurally different from one another. Biologists call this tissue differentiation, and it happens quite early during embryonic development. All cells in our body have identical genetic material and therefore the same information content. The information is held in the form of 23 pairs of blocks called chromosomes. Each block has hundreds to thousands of discrete units of information called genes. Given the enormous size of our information content, the human species can accomplish tasks not matched by any other form of life. What, then, underlies the structural and functional differences between cells? The answer lies in the selective, cell-specific expression of the genes in the different cell types. Each gene codes for a specific molecule, or at least part of one. No cell in our body can use all of the 30,000 gene programs. It is not possible. Perhaps it would involve a degree of complexity of information processing that even 3 billion years of evolution has not yet sorted out. Cells have learnt to use this information in a selective, cell-specific manner to minimise the enormity of the task of handling this amount of information.

Molecular biologists call the use of the information coded by a gene 'gene expression'. Many genes in our cells are not used at all. In other words, the information lies there unused, and these are therefore unexpressed genes. Perhaps I can compare it to your use of a library. The library may be large and may contain thousands of books. Do you read all of them? You are quite selective in what you read. Aren't you?

Genes are generally named after the products they code for. But, for simplicity, let us call them by numbers 1 to 30,000. A muscle cell will have a set of genes expressed in it that is unique to muscle cells. Let us arbitrarily call the muscle-specific genes numbers 7, 57, 92, 1908 and so on. Those muscle-specific genes will not be expressed in a nerve cell, a liver cell or a bone cell. Similarly, a kidney cell will allow a unique set of genes to be expressed in it; this set will be expressed only in kidney cells and in no other cell type. Their gene numbers might be, let us say, 45, 76, 809, 2735 and so on. I have to make it clear that I have used these numbers purely for the sake of explaining the concept. No genes are actually numbered this way; they are normally named after the products they code for. This cell-specific expression of genes is exquisitely controlled by factors that allow differential flow of genetic information within the same organism. In other words, cell types 'co-operate' by each helping to decode a manageable amount of the DNA. This is called tissue differentiation, a process that occurs early in embryonic development. The sum total of the efforts of all cell types put together enables complete utilisation of the vast amount of information held by the organism.
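The idea of identical genomes with cell-specific expression can be sketched in a few lines of Python. The gene numbers below are the invented, purely illustrative ones from the paragraph above, not real gene identifiers:

```python
# Every cell carries the full genome, but each cell type
# "expresses" only its own subset of genes.

GENOME = set(range(1, 30001))  # all 30,000 hypothetical gene numbers

# Invented, cell-type-specific gene subsets (as in the text)
EXPRESSED = {
    "muscle": {7, 57, 92, 1908},
    "kidney": {45, 76, 809, 2735},
}

def can_use(cell_type: str, gene: int) -> bool:
    """A differentiated cell only decodes genes on its own list."""
    return gene in EXPRESSED[cell_type]

print(can_use("muscle", 57))   # True
print(can_use("kidney", 57))   # False
print(57 in GENOME)            # True: the gene is present in every cell
```

The point the sketch makes is that differentiation is a restriction on access, not on content: every cell holds the whole `GENOME`, but only its own list is ever decoded.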

We all know that all humans are alike until they finish school. There is nothing different between us. All kids finishing school can choose what they want to be in their lives. There is nothing that prevents them from becoming a doctor, a lawyer or an accountant. It is your choice, and you train for it. Almost all of us stay in the chosen profession for the whole of our working lives. Don't we? Changing careers late in life is too much hassle. Cells are not allowed to do that either. They stay differentiated all their lives. A whole host of control mechanisms keeps cells from using gene programs they are not allowed to use. The only exception is when they turn cancerous.

It is estimated that there are 256 cell types within the human body. I am not sure how many functional skill groups exist within human society. Could there be a reason why there are 256 cell types in our body and not 1000 or 2000? Similarly, if someone counts x human skill groups, why are there x skill groups and not y?

Stuart Kauffman, an expert on complex systems, has worked out that there is a relationship between the number of cell types in a life form and the quantity of DNA in it. Smaller organisms with less DNA have, unsurprisingly, fewer cell types. If you have more DNA, then you have more types of cells. He proposed that the number of cell types increases as the square root of the number of genes. A log-log plot correlating the amount of DNA with the number of cell types in various organisms (bacteria, algae, yeasts, sponges, jellyfish, annelid worms, and man) follows a straight line with a slope of about 0.5, consistent with the square-root relationship.
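Kauffman's square-root rule is easy to check with a few lines of Python. The gene counts below are round illustrative figures, not measured values:

```python
import math

# Kauffman's proposed rule: number of cell types ~ sqrt(number of genes).
organisms = {"bacterium": 4000, "yeast": 6000, "human": 30000}

for name, genes in organisms.items():
    print(f"{name}: ~{math.isqrt(genes)} predicted cell types")

# For a human with 30,000 genes the square root is about 173;
# with the older estimate of 100,000 genes, about 316.
print(math.isqrt(30000), math.isqrt(100000))  # 173 316
```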

Basically, this finding raises the question of why it should be so. Stuart Kauffman considered that the number of cell types actually represents what he called 'state cycles'. If you looked at an organism with 30,000 to 40,000 genes, like the human species, what would be the state of affairs if all of these genes were active in every cell? The products of these genes would likely all enter into the network of gene interactions. Kauffman used an analogy in which the nodes in a network are light bulbs. The two extremes of this network would be the situation where all nodes are 'on' and the state where all of them are 'off'. In between these two extreme states you can have variable numbers of these 'bulbs' on or off. Going back to the gene network, this refers to the possibility of having varying numbers of genes turned on or off. If there are N bulbs, since each bulb can exist in an on or off state, the number of possible states of the whole system is 2^N. So, if there were a total of 100 bulbs (genes), then you would have a frighteningly large 2^100 different states.

Kauffman studied the behaviour of such large networks in computer models. The team looked for stable patterns in the network by applying simple rules such as the introduction of feedback (whether a bulb can be turned on or off depends on the state of the bulbs it is connected to) and the application of Boolean logic (if a particular bulb is connected to two other bulbs, it might be 'on' only if both of those bulbs are 'on' (an AND operation), or 'on' if either of the two bulbs is 'on' (an OR operation), and so on). The expectation was that if you start with some randomly chosen state, with some light bulbs 'on' and some 'off', the Boolean rules may allow the system to settle into a repeating pattern, following its tail endlessly around the same cycle of states, called a state cycle. If there is only one input per bulb (gene), then irrespective of what rules you have in the model, nothing interesting happens. If the number of connections per node increases beyond 2, chaos reigns. But interesting things happen at the edge of chaos. With two inputs per node and feedback as an ingredient, each state cycle has a length roughly equal to the square root of the number of nodes. Going back to the 2^100 possibilities, a system with that number of states can be shown to settle into a state cycle with just 10 steps in it (10 being the square root of 100). What about a system where the number of nodes is 30,000 (i.e. the number of genes in the human genome)? Even in such a system, the number of steps can be shown to be about 173. If you went by the earlier estimates of the number of genes in a human genome (i.e. 100,000 genes), the number is still only about 317. Kauffman proposed that these state cycles act like attractors, pulling a random system into a stable state that is not sensitive to minor disturbances.
Whatever the initial number of potential states in a system, there would be a tendency to settle down into a stable cycle, repeatedly visiting the same states in the same regular order. With 100,000 different nodes there would be only about 317 different attractors. With 30,000 nodes, there are about 173 different attractors. If the number of nodes (genes) were anywhere between 30,000 and 100,000, what would the number of attractors be? Does the figure of 256 sound all right?

Kauffman suspected that each of these attractors, or state cycles, could actually represent the cell types we see in organisms. In these state cycles, specific genes are turned 'on' and 'off'. The organism settles down into a finite number of such state cycles (or cell types) depending on the number of genes it has. This is where we get the square-root relationship between the number of genes and the number of cell types in an organism. In other words, in a network of tens of thousands of genes interacting with one another according to the rules of Boolean logic, there are only a couple of hundred state cycles that act as attractors, a figure close to the 256 cell types mentioned earlier.
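A minimal sketch of a random Boolean network of the kind Kauffman studied can be written in Python. Each 'gene' reads two others (K = 2, the orderly regime) and applies a randomly chosen Boolean rule; iterating from a random start, the system eventually revisits a state it has seen before, and the gap between the two visits is the length of the state cycle. This is an illustrative toy, not Kauffman's actual simulation code:

```python
import random

random.seed(1)
N, K = 16, 2  # 16 "genes", each reading K = 2 inputs

# Each node gets K randomly chosen input nodes and a random Boolean rule,
# i.e. a lookup table mapping the two input values to the node's next value.
inputs = [random.sample(range(N), K) for _ in range(N)]
rules = [{(a, b): random.randint(0, 1) for a in (0, 1) for b in (0, 1)}
         for _ in range(N)]

def step(state):
    """Update every node synchronously from its two inputs."""
    return tuple(rules[i][(state[inputs[i][0]], state[inputs[i][1]])]
                 for i in range(N))

# Run from a random start until a state repeats: the repeat marks a state cycle.
state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1
cycle_length = t - seen[state]
print("state-cycle length:", cycle_length)
```

With K = 2 the cycles found this way are typically tiny compared with the 2^16 possible states of the network, which is exactly the 'order for free' that Kauffman describes.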

I guess Kauffman's stable state could be a physicochemical expression of the limitation of what a single cell can achieve. It is not difficult to see why this happens if you look at it from the functional angle. Just imagine that you are working in a small family firm. Your boss is the owner of the firm, and apart from the owner there is only you. This is fine for very small firms but, as the firm grows, your boss employs one or more additional people. The decision is taken because there is only a certain amount of work that you can physically do. Expecting more than this maximum slows down the whole business, because you start prioritising and delaying things. Similarly, the number of functional or structural capacities an individual cell has is determined by how many of the 30,000 genes that cell helps to decode for the organism as a whole. Let us assume that all cell types share equally the task of decoding the information in these 30,000 genes. This would mean that each cell type in your body would have the information content of roughly 100 genes to cope with. It is pure speculation that cells share this task equally, but I would not be surprised if they did, because I think the physical limitations of decoding and controlling genetic information beyond a critical limit would favour a restriction on this number. This fits in well with my hypothesis that information overgrowth leads to functional and structural specialisation, to meet the demand of utilising the total information through shared access.

Our own world has grown to the extent that we struggle to meet the challenges of the information explosion. We have been forever increasing the number of skill types in human professions so that specialised people can help tackle the information load. Just look at the number and types of educational courses available in universities. There are so many departments, each offering literally hundreds of courses. In effect, we are imitating the cellular strategy of differentiation and the consequent functional division of the work required for tapping into the vast ocean of information in front of us.

Just to give you an example there are many projects in the world where unimaginable amounts of information are generated on a routine basis.

Big projects planned for NASA's Earth Observing System will generate a terabyte (1000 billion bytes) of satellite data every day and up to 1000 terabytes every three years! The Large Hadron Collider at the European Laboratory for Particle Physics (CERN) in Switzerland will spew out 20 times as much data! Tony Reichhardt wrote in _Nature_ in the 10 June 1999 issue that the amount of data coming out of the collider will be equivalent to all the humans on the planet talking into 20 telephone lines simultaneously! Within 15 years of then, it will have produced the equivalent of 100,000 terabytes of data!

Robert Grossman of the University of Illinois at Chicago's National Centre for Data Mining says the amount of data man generates has increased by more than five orders of magnitude since the 1980s. Traditionally it was the high-energy physicists, astronomers and climate modellers who faced large amounts of data and therefore had access to supercomputers. Times are changing for modern biologists: they have now realised that they have to be bioinformatics experts too. Vast amounts of information are pouring into databases such as _GenBank_ and _SwissProt_, the DNA and protein databases, respectively. There are other DNA databases too. In the field of genomics, it is still not clear whether the databases are being allowed to bloat unnecessarily, because they are open to all expressed DNA sequences, which is a bit worrying. Are they all worth keeping? Secondly, should we keep records of every innocuous single nucleotide polymorphism in genes?

In 1998, the U.S. Vice-President Al Gore launched a plan for Federal agencies to carry out a 'Digital Earth' survey to create a virtual representation of our planet! This would involve modelling parts of the Earth such as the oceans and the atmosphere. Turning it into a workable model of the Earth would make programmers cringe with fear.

Scientists are gearing up for grand challenges in computational science. One such challenge is to design a virtual thermonuclear fusion reactor on a computer without having to build real prototypes, a problem likely to require 1 to 10 petaflops of computing power. A petaflop supercomputer is roughly 1000 times faster than today's teraflop (1000 billion floating-point operations per second) machines!

Ruzena Bajcsy, assistant director of the U.S. National Science Foundation's directorate for computer and information science and engineering, believes the massive computing power becoming available will help tackle dynamic complex systems like cells and the Earth, closing the chapter of Cartesian reductionism.

A typical PC today is capable of carrying out 40 billion calculations a second. As of October 2012, computer engineers at Oak Ridge National Laboratory in Tennessee, US, are hoping to have the fastest supercomputer in the world, with the combined power of 500,000 individual PCs. This supercomputer, named 'Titan', is expected to have the power to carry out 20,000 trillion calculations a second! This is about 4000 trillion calculations a second faster than the current fastest supercomputer, _Sequoia_, which is being used to simulate nuclear explosions. The 'Titan' supercomputer is expected to be used for jobs such as sequencing flu virus genomes and designing suitable flu vaccines. Another proposal is to use it to study climate change and understand the effects of greenhouse emissions.

The race for supercomputers, which was triggered by Seymour Cray in 1976, is still on. Though 'Titan' is aiming to be the fastest supercomputer, researchers in Cambridge, UK, are already working on another supercomputer that can work 150 times faster than 'Titan'. This means the processing speed will be between 2 million trillion and 3 million trillion calculations a second! This supercomputer, when ready, will be hooked up to the Square Kilometre Array, a giant radio telescope made up of thousands of radio dishes under construction across South Africa and Australia, and will collate and analyse the data captured by these dishes.

Driven by the need to deal with huge amounts of data, even commercial companies are working towards supercomputers. It is not only academic establishments that need them. The Tesco supermarket chain in the UK is already investing in a supercomputer, to be based in Watford, UK, to deal with its online and banking businesses. The idea is to study consumer behaviour using information extracted from customer loyalty cards and spending habits!

Information overload is a challenge for us. No one can be a jack of all trades any more. Now, man's biggest challenge of all time seems not to be the distance over which we communicate. Instead, what is he going to do to face up bravely to the tidal wave of data? Are we humans going to sink in it, or try to stay afloat? We are gearing up to face colossal data avalanches the like of which we have never seen.

# 4. ENCODING INFORMATION

If you take a look at our modern society, there is information exchange everywhere. People rely on information all the time. You utter a few words to someone you are talking to; those words transmit some information. You are driving your car and see some road signs; they convey information to you. Sitting in the lounge, you stare at the television; the visual images and sound signals from the TV mean something to you. A book that you are reading sends a steady stream of information to your brain. We are all swimming in a sea of information all the time.

Just what does it take for something to mean something? In other words, what does it take for something to 'hold' some information? In the same vein we can also ask the question - what does it take for someone or something to extract a piece of information from something?

For example, a word like 'atom' is only 4 letters long. But that is enough to contain some information. Will the same 4 letters mean the same if you rearrange them randomly in any order you like? Perhaps you can try 'mota', 'toma' etc. The letters are the same but the meaning is lost.

Try saying the word 'atom' to a kid. It is unlikely that you will manage to transfer the information normally associated with this word. The same applies if you try telling the word to someone who has never been taught it; you are not likely to meet with success. And if a Chinese or an Indian speaker came back to you with the equivalent of the word 'atom' in their native language, it would be your turn to give that blank look.

When you speak any language, information is conveyed using coding conventions known to both the sender and the receiver, conventions that are actively learnt. The learner needs to grasp the meaning of each word and the way to write or read it. In English we use 26 letters in various sequences and combinations to impart information in the form of words. A person who speaks English needs to know as many of these words as possible, and everybody who speaks English has to follow the same rules.

Stringing together a series of words into sentences can further enhance the information content of the words. These sentences can be joined to one another to add more complexity to the information content. We add symbols, numbers, images, graphs etc. to these sentences and paragraphs, and you get very complex scientific or technical material that sometimes requires decades of learning to grasp the coding conventions. Basically, the information content of something is proportional to the amount of coding convention employed. In fact, all of our social languages use agreed grammar, codes and meanings to convey predictable information. To me, languages are simply coding conventions like the ones mentioned in this paragraph. We are so used to languages that we do not look at them as codes at all. They have become a way of life.

Use of a common code for denoting entities is an accepted practice in our society. Our supermarkets use the European Article Numbering (EAN) or Universal Product Code (UPC) to keep track of the identity of the products sold and for stocktaking purposes. The libraries use a unique system of numbering the books. The publishers use the ISBN codes to identify a book. Computers use ASCII codes. As you will see later even biological systems use coded information transfer to accomplish communications between cells.
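The ISBN mentioned above is a nice example of a coding convention with built-in error checking. A 13-digit ISBN ends in a check digit chosen so that the weighted sum of all thirteen digits is a multiple of 10. A short Python sketch:

```python
def isbn13_check_digit(first12: str) -> int:
    """ISBN-13 rule: digits are weighted 1, 3, 1, 3, ...; the check
    digit is whatever makes the grand total a multiple of 10."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# First 12 digits of the ISBN 978-0-306-40615-7: the rule recovers the final 7.
print(isbn13_check_digit("978030640615"))  # 7
```

A scanner that mis-reads a single digit will almost always produce a number that fails this check, which is why such codes are trusted for stocktaking.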

In computers, the binary digits 0 and 1 represent numbers and letters using ASCII (the American Standard Code for Information Interchange). For example, A = 65, B = 66, C = 67, and so on. These numbers, representing the letters, can be written as a series of 0s and 1s according to the pre-agreed code (ASCII) and transmitted as such, as characteristic patterns of 0s and 1s in order. Computers store text documents using these codes. ASCII codes represent text in computers, communications equipment, and other devices that work with text. As said a short while ago, a number represents each language character; the number is the ASCII code corresponding to that character. For example, F = 70 and the lower-case letter d = 100. If you want a space between characters, it is represented in the ASCII code by 32. Upper-case characters, lower-case characters and punctuation marks all have their own codes.
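A couple of lines of Python make the ASCII-to-binary correspondence concrete:

```python
text = "AB d"

for ch in text:
    code = ord(ch)              # the ASCII number for the character
    bits = format(code, "08b")  # the same number as a pattern of 0s and 1s
    print(f"{ch!r}: {code} -> {bits}")

# 'A': 65 -> 01000001
# 'B': 66 -> 01000010
# ' ': 32 -> 00100000
# 'd': 100 -> 01100100
```

Each character becomes one fixed-width pattern of on/off signals, which is exactly what travels down a wire or sits in memory.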

A computer is a machine that does what you tell it to do, and you need to be able to tell the computer what to do in a form it will 'understand'. Just as you represent each letter by a number code, the functions you want the computer to perform are also coded. Computers store in their memory the definitions of what they will be asked to do. The memory in a computer is built of _registers_, each register holding a pattern of bits called a _word_. It is something like your own language vocabulary. Computers can have billions of these registers, each holding a 'word', with only one register accessed at a time. Registers are referred to as locations in memory, and each register has an _address_. The addresses are a pattern of bits by which you can access them. Boolean logic blocks decode the address and select the location for reading and writing.
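The register-and-address idea can be mimicked with a toy model in Python, where a dictionary plays the part of the memory and each key is an address. This is a deliberately simplified sketch, not how real hardware is built:

```python
# A toy model of addressed memory: each "register" lives at an
# address and holds one fixed-width word (here, 8 bits).
memory = {address: 0 for address in range(16)}  # 16 one-byte registers

def write(address: int, word: int) -> None:
    memory[address] = word & 0xFF  # keep the word within 8 bits

def read(address: int) -> int:
    return memory[address]

write(5, 200)
print(read(5))   # 200
print(read(3))   # 0 (never written)
```

In real hardware the dictionary lookup is done by the Boolean logic blocks mentioned above, which decode the address bits and enable exactly one register.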

If you take a look at the history of the human race, it is amazing to see the variety of means man has used to encode information. As early as 40,000 to 50,000 years ago, humans created cave paintings as a means of communicating thoughts and ideas. Probably they were attempts to represent the world around them, which in a way is nothing but encoding information. Every continent except Antarctica has sites where the rock art of primitive man can be seen. Africa has the most sites. The oldest known African rock art is in the south and is estimated to be 27,000 years old, but it is believed that some of the African paintings may be more than 40,000 years old.

It is believed that proper writing, as a means of communication, was not developed until around 5,500 years ago. The Sumerians, who lived in what is now Iraq, were the first to use such writing. They used a form of symbol and word carving on clay tablets: a wedge-shaped nib was pressed into soft clay, which was then baked to make the markings permanent. This form of writing is called cuneiform. Ideas were conveyed using symbols and words as the units of information.

The Egyptians also used writing at roughly this period. They used reeds, which grew along the banks of the Nile, to make both paper and pen. Papyrus was the first type of paper, made by the Egyptians around 3500 BC; parchment, made from dried animal skin, came later. The Egyptian system of writing is called hieroglyphics. Its coding convention was only deciphered in the 1820s, with the help of the Rosetta Stone, which had been found in 1799. There are probably still a number of ancient languages we cannot decipher, and therefore we do not have the ability to extract their information.

There was an interesting article in New Scientist (30 May 2009) titled 'Decoding antiquity' by Andrew Robinson, who is also the author of the book _Lost Languages: The Enigma of the World's Undeciphered Scripts_. He lists a few yet-to-be-deciphered ancient scripts from across the world: Etruscan (western Italy), Meroitic hieroglyphics (Sudan), Olmec, Zapotec and Isthmian (Central America), the Indus script and, finally, Proto-Elamite (western Iran). Proto-Elamite is the world's oldest undeciphered script, used around 3050 BC. This is almost as old as the oldest writing of all, the early cuneiform from Mesopotamia. The language behind the Proto-Elamite script is not known, and it is not certain whether it was a fully developed writing system. This again illustrates my point: if you do not know the coding system, a language is useless for information transmission. I suspect the information held in the written records of these ancient languages may never be released. It is locked away forever.

In the second century BC the Chinese invented the first true paper, the ancestor of the paper we use today. The earliest known printed book is the _Diamond Sutra_, probably printed in China in 868 AD. It was made by pressing carved wooden printing blocks, by hand, onto a roll of paper. Paper was only a medium on which information could be transmitted; it was a substrate and did not in any way affect the information content. But the role played by paper in data transfer can hardly be overstated. It gave man an easy and simple way of disseminating information. Pi Sheng in China invented moveable-type printing in about 1040. This involved printing whole pages by combining the letters, held together in a frame, and then pressing the frame onto paper. Johann Gutenberg is said to have invented the same method of printing independently in Germany in about 1440. Moveable type made the printing press possible, and this made it possible to produce books cheaply and quickly, aiding the transfer of knowledge and information more widely than ever before. In fact, this was the first mass communication tool in human civilisation. For all its advantages, paper still has to be carried physically over distances, which limits its use as a speedy method of information transfer.

If you look at recent history, within the last few centuries, human civilisation has used various means of transferring information depending on the circumstances. Between the 14th and 17th centuries, armed posts set up to guard Russia's southern borders used fire signalling at night and smoke signalling during the daytime. It was a quick method of communicating with other posts, but its limitation was the sparse amount of data that could be transferred this way: the information content of such a mechanism could at best be a 'yes' or a 'no'. The information content of drum beats or signal fires was enhanced by beating the drum in different ways, or by setting up one, two or three fires. Beating the drum faster, beating it vigorously and beating it lightly were some of the variations used.

_Stepan Razin_ is a novel by the Russian writer S. Zlobin, based on the peasant war in Russia in 1670. It describes how one haystack on fire meant there were a good many troops in the town; two haystacks meant there were not so many, but enough for the enemy to stand up against the attackers; and three haystacks meant there would be no resistance at all, and they could come in without a fight.

In the 1790s, a French engineer named Claude Chappe invented the mechanical semaphore system for sending messages quickly over long distances. The system consisted of hilltop telegraph stations, each with signalling arms that were moved into positions standing for different letters. This method speeded up communication by about a hundredfold compared with messengers on horseback. By the 1850s, France had a network of over 500 such telegraph stations.

Code flags were the best way for ships to communicate at sea before the invention of Radio. At the Battle of Trafalgar in 1805, Lord Nelson is said to have used the code flags to send his famous message, 'England expects that every man will do his duty' while aboard _HMS Victory_.

Electricity and magnetism were discovered by the late 18th century, and it was soon realised they could be used to send messages along wires connecting towns many hundreds of kilometres apart. The person sending the message turned the electricity on and off, and a receiving machine at the other end detected this. This 'on/off' switching, achieved by letting a current flow through a circuit or stopping it, is still the way the binary bits of computers encode information!

How could you send a message in English if your telegraph only allowed binary variations (such as a 'yes' or a 'no')? Aren't there 26 letters in English? This was accomplished in the early 19th century by using a separate wire for each letter. If a current was received along a wire, the letter it represented was pre-determined. Obviously, you had to have at least 26 such wires. The flow of current was detected at the receiving end by the small bubbles it formed in a container of liquid.

Samuel Morse was an American who helped humanity make a big leap in 1844 with his single-wire telegraph. He used the code named after him, the Morse code of dots and dashes, to represent different letters: each letter is made up of a series of dots and dashes. The letter 'a' is a dot followed by a dash; the letter 's' is three dots, and so on. How can you represent a dot and a dash electrically? After all, didn't Morse code use electricity? Wasn't it a form of telegraph? Yes, it was. The operator presses a switch for short and long periods to send a sequence of dots and dashes. When you press down the lever, a circuit is closed and an electric current flows along the wire to the receiver. At the receiving end, the current switching 'on' and 'off' makes a stylus draw a pattern of dots and dashes on a paper tape. The duration of the press encodes a dot or a dash. If you look carefully, it is the yes/no switch involved here too. The same old binary principle again, do you see?
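The binary switch behind Morse code can be sketched in a few lines of Python. The table below is just a small excerpt of the International Morse alphabet, and the `encode` helper is purely illustrative.

```python
# A small excerpt of the International Morse alphabet: each letter
# becomes a unique sequence of short (dot) and long (dash) presses.
MORSE = {'A': '.-', 'E': '.', 'O': '---', 'S': '...', 'T': '-'}

def encode(text):
    """Translate text into dot/dash groups, one group per letter."""
    return ' '.join(MORSE[ch] for ch in text.upper())

print(encode("SOS"))  # -> ... --- ...
```

Every dot and dash is ultimately just the lever held down for a short or a long moment: the same yes/no switch, stretched out in time.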

Advances in telegraph machines brought keyboards for typing messages and machines to print them. Pressing a key representing a letter lets a current flow in the same 'on'/'off' fashion of closing and opening the electrical circuitry. The advantage was that the sender could use the actual letters without composing odd-looking messages made of dots and dashes! By the end of the nineteenth century, every town in many countries had a telegraph office. For a fee, people could have a message sent to another office, from where it was delivered by hand. Telegraph cables were even laid under the seas, connecting countries around the world.

Telex stands for _tele_graph _ex_change. In the early 1930s, telex became available for sending written messages along telephone lines. The message typed at the sender's keyboard was immediately printed at the receiver's machine. The principle of the electrical circuit closing and opening each time a letter key is pressed operates here too. The basic mode remains the same, just done in a more convenient way.

Fax is now one of the most commonly used modes of instant communication over long distances, next only to the telephone and e-mail. It is a hybrid between a phone and a photocopier. It is surprising to note that even gadgets 'evolve', by mixing good features, just as an organism would evolve by combining the good genes of its parents. It is difficult to believe that the first practical fax machine was built in the early 1900s! Then why didn't it come into use sooner? Because electronics wasn't cheap then. By 1980, fax had soared in popularity alongside other electronic gadgets. How does fax work? A fax machine divides the page waiting to be transmitted into thousands of tiny squares. Each square is then scanned by light-sensitive elements to work out whether it is light or dark. This information is again of a 'yes/no' nature: either a square is dark or it is not. This 'binary' information is sent over telephone lines to the receiving fax machine, which has a row of heat-sensitive elements and heat-sensitive paper onto which the light and dark squares are 'printed', according to the pattern sensed by the sending machine.
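The scanning idea can be sketched as follows. This is a deliberately simplified illustration, not the actual fax transmission standard (real machines also compress the bit pattern before sending it).

```python
# A 'page' reduced to dark (1) and light (0) squares.
page = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
]

width = len(page[0])

# 'Transmit': flatten the grid into a stream of yes/no bits.
bitstream = [pixel for row in page for pixel in row]

# 'Print': the receiver rebuilds the rows from the stream.
received = [bitstream[i:i + width] for i in range(0, len(bitstream), width)]

assert received == page  # the pattern survives its binary journey
```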

Computers cannot think. They need to be told what to do and, more importantly, how to do it. Computer instructions are stored in machine language, which is directly interpreted by the machine. In simple software, each machine-language instruction is stored in a single 'word' of memory, and a sequence of instructions is stored in a block of sequentially numbered memory locations. Machine instructions produce predictable results brought about by coordinated sequences of gate changes, i.e. the opening and shutting of logic gates in a defined order. As programming knowledge improved, computer scientists began to treat sets of instructions as single units.

In a computer, the words stored in memory, like numbers and letters, are data that need to be acted upon. But not all words are like this. Other types of 'word' simply represent a sequence of operations to perform. We all know this kind of thing from our own lives. A word like 'deadline' in an office setting implies a series of actions, obvious to the employees concerned, that must be done to meet the project deadline. Some words registered in your own memory are descriptive: colour, taste, the names of objects. Others are action-oriented: run, read, eat. When someone says these words to you, it is clear what needs to be done and you initiate the action. A word like 'protocol' implies a series of rules and actions to follow, and every one of us knows quite a few words from the workplace or from social commitments that stand for pre-defined, exact sequences of action.

The number of binary bits in a word can be 8, 16, 32 or even 64, depending on the microprocessor in the computer. The difference between an 8, 16, 32 or 64-bit processor is the number of binary bits used to represent a word. Bits are rarely seen alone in computers; that would be almost like using individual letters in a human language. If you tried to encode a meaning for each letter of the English alphabet, you could only have 26 'words' in total. Just what can you convey or store with a 26-word vocabulary? That is why languages are built from words containing varying numbers of letters, which generates practically unlimited possibilities for encoding meaning. Computer scientists realised this too, and so created units larger than a single bit: bits are almost always bundled together into 8, 16, 32 or 64-bit collections. A byte is an 8-bit collection. With 8 bits in a byte, you can represent 256 values, ranging from 0 to 255.

0 = 00000000,

1 = 00000001,

2= 00000010 and so on.

A 16-bit processor uses 16 bits in a word and can represent 65,536 values, ranging from 0 to 65,535.

In a 16-bit processor:

0= 0000000000000000,

1= 0000000000000001

Obviously, you can expect more possibilities for 32 and 64-bit processors.
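The arithmetic behind these figures is simply powers of two, as this short illustrative sketch shows.

```python
# An n-bit word can represent 2 ** n distinct values.
values = {bits: 2 ** bits for bits in (8, 16, 32, 64)}
print(values[8])   # 256
print(values[16])  # 65536

# The same number (2) written out at two different word sizes:
print(format(2, '08b'))   # 00000010
print(format(2, '016b'))  # 0000000000000010
```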

Computer scientists have learnt to encode information using the 'opening' and 'closing' of electron flow: when electrons flow it is '1', and when they do not it is '0'. Seemingly, this is no big deal. But because we have also learnt to convert practically all other forms of information into digital format, the outcome is simply amazing. The intermediate step in this encoding, the transformation of non-digital information into digital format, uses pre-agreed conventions to translate information (such as letters and numbers) into series of 0s and 1s. Every letter, number, character and symbol can be represented by a unique sequence of 1s and 0s; there is a binary code for every such unit of information. All computer actions can be coded as well, so tasks such as adding, multiplying, writing, drawing or anything else you do with your computer have a code. For specialised functions, software is written as a series of actions, which are in turn coded. As you probably already suspect, the computer needs to interpret these command codes, which may be a direct step or may involve an intermediary. Most computers also run interpreters for executing various sorts of script or byte-code. If the right interpreter is available, a computer can be made to transform data into executable code, which is dangerous, because it can give access to the operating system and its components and allow misuse.
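One such pre-agreed convention is ASCII, which maps each character to a number and hence to a fixed bit pattern. The tiny helper below is illustrative only.

```python
# Translate text into its ASCII bit patterns, one 8-bit group per character.
def to_bits(text):
    return ' '.join(format(ord(ch), '08b') for ch in text)

print(to_bits('A'))   # 01000001  (ASCII code 65)
print(to_bits('Hi'))  # 01001000 01101001
```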

In the biological scenario, the code-interpreting equivalent can be seen in the world of viruses. The harmful viruses that infect humans can carry either RNA or DNA, and are called RNA or DNA viruses accordingly. The peculiarity is that these viruses can have their own RNA or DNA decoded, using host-cell machinery, and so survive. In the case of viruses with an RNA genome, the product can even be integrated into the human DNA! These are called retroviruses. When a retrovirus such as HIV infects a cell, an enzyme called reverse transcriptase copies the viral single-stranded RNA genome into double-stranded viral DNA. The viral DNA is then integrated into the host chromosomal DNA, which allows host cellular processes, such as transcription and translation, to reproduce the virus. The viral code gets executed by the human cellular operating system. Pharmaceutical companies have realised the importance of this step in HIV infection and have developed drugs that block reverse transcriptase's enzymatic function and prevent completion of the double-stranded viral DNA, thus preventing HIV from multiplying. These drugs are called reverse transcriptase inhibitors, and they are currently indispensable in the treatment of HIV patients.

Our own immune system normally has the ability to sense free RNA and/or DNA, as well as viral or bacterial antigens, as an indication that a virus or bacterium has entered the body. In healthy individuals this immune detection can activate mechanisms to mount a fight against the microbes, not unlike the way your anti-virus software works. Who gets infected with a biological virus, and which computer gets infected with a computer virus, depend on the preparedness of the systems concerned as well as on the ingenuity of the infecting agent! Another example of biological code transfer is the bacteriophages, which are basically viruses with the ability to infect bacteria. The viral RNA or DNA is injected into the bacterium, followed by 'execution' of the viral code by the host bacterium's operating system!

When you look at the biological world of microbes, plants and animals, it is clear that the cell-communication methods employed by life forms are infinitely more complex than what we see in our information technology. The needs of unicellular and multicellular life forms in terms of information capture and transmission are different and, as can be expected, can be orders of magnitude more complex. Life forms need to capture information from the extracellular and intracellular environment. Unlike humans, and certain other organisms with the advantage of a brain, most life forms deal with information trafficking by means of a sort of automated chemical sensor mechanism. For a bacterium it is the presence or absence of nutrients; for plants it could be the detection of light, water and/or nutrients. Even we humans do a lot of such chemical perception unconsciously. For example, when our blood glucose levels drop, our body cells are able to detect it. Following this detection, appropriate signals are sent to the brain to initiate the feeling of hunger, followed by other signals sent to other cell types to mobilise glucose from body stores and to start making glucose from other raw materials. Alternative fuels like fat are accessed in response to the signals. All this happens without you becoming conscious of the signal transfer involved. We also do a lot of other chemical perception of our own internal environment: levels of oxygen, carbon dioxide and salts, the pH of the blood, blood pressure and so on. The specific chemical receptor cells that enable us to 'perceive' these chemical entities are no different from unicellular life forms: they accomplish chemical information capture without the help of a brain.

The question I am raising now is: how is this chemical or non-chemical information represented by the capturing cells, which later become the 'senders' of the information? And how are the 'receiver' cells able to decipher information that has already been transformed and coded by the senders? It is very clear that biological information undergoes a 'change in form', the equivalent of a code. This coded representation of information, and the ability to decipher and respond to the codes, are hard-wired into life systems in the form of DNA blueprints and nervous connections, as the case may be, and involve learning and/or genetic inheritance.

In the case of organisms with higher sensory capacity, such as humans, there are other modalities, such as vision, hearing and smell, which help us capture external information. These sensations are handled by the brain's ability to process sensory information. Whether the information is chemical or not, whether the brain is involved or not, the most important question is how life forms manage to represent captured information in a form that can be decoded by cells without the information being lost or altered. The most fundamental requirement of information theory is accurate reproduction of the information by the sender. How can you represent information such as the level of a nutrient, a gas or a hormone, or the presence of a mate, prey or predator?!

In terms of accurate information representation, I can only consider visual information easy to understand, because intuitively you almost think of it as capturing an image with a camera. Whether the eye and brain actually use the same principles as a camera is a different story, but at least we can imagine the process as conceptually similar. That is not the case with other forms of biological information such as smell, sound, or superficial sensations like touch and pain. Just how can you 'code' smell or sound or touch or pain? The channels that carry these types of information faithfully retain them in transit. The messages are delivered to receivers, which may themselves be yet another carrier in the transmission chain; it is not uncommon in biology to have many layers of transmitting entities in information transfer. At each of these transmission stages there is a further step of coding. So, typically there may be something like 5-10 steps of coding in a biological information transfer before the message reaches the actual effector that acts on it. This again raises the question: what are the coding conventions that operate at the successive layers of the molecular communication chain?

Claude Shannon, in his seminal paper of 1948, defined information as 'a reduction in uncertainty' and formulated a mathematical theory of communication. The reduction in uncertainty refers to the gain in information about an entity brought about by reproducing at one point, either exactly or approximately, a message selected at another point. Reducing uncertainty about a system becomes vitally necessary when the system is capable of existing in alternative states. His information theory was developed to understand the transmission of electronic signals, but it has had an impact on biology as well, particularly with reference to DNA, because of the obvious relationship between DNA and biological information. What I am showing here is that Shannon's concepts are equally applicable to all aspects of biological information and exchange, not just DNA. In fact, just a few years after Shannon's famous papers were published, a symposium was apparently held on the topic of '_Information Theory in Biology_'. Biological applications of information theory at that time seem to have included topics like membrane phenomena, antigenic specificity, protein structure, morphogenesis, ageing, and the effects of radiation. Over the years, apart from applying the theory to DNA, researchers have also used it to understand sensory perception.
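Shannon's measure of information, entropy, can be computed in a few lines. A fair coin (two equally likely states) carries exactly one bit per outcome; a four-letter, equally likely alphabet, like the DNA bases discussed later, carries two bits. The sketch below is illustrative.

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a source with the given state probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))                # 1.0 bit  (fair coin)
print(entropy([0.25, 0.25, 0.25, 0.25])) # 2.0 bits (four equal states)
```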

The National Science Foundation in the US sponsored a workshop in 2008 titled '_Molecular Communications/Biological Communications Technology_', providing a forum for discussion of the interdisciplinary field of biological communications technology. The workshop brought together leading researchers from biology, chemistry, mathematics, nanotechnology, engineering and other fields to stimulate research towards understanding biological computing and communication processes, and towards using these insights to design and construct new computing and network systems. One of the five sessions addressed coding theory and channel capacity, where discussion centred on applying Shannon's information theory to biological systems and on how coding and decoding are done in biological systems.

Over the years people have come to accept that biological information should be amenable to mathematical understanding and could also provide clues and insights for making better communication and computing systems. Quite a few books have been written on these topics. Shannon's theory enables specification of the lowest information rate needed to faithfully represent a message source, and the highest rate of reliable message delivery under constraining conditions. Just as in engineering systems, life systems need to be able to represent, store, maintain, protect, transmit and replicate information. Engineering approaches to communication modelled on biological communication may well prove better.

What I have shown below is my account of how Shannon's principles can be adapted to biochemical signalling. Basically, a typical communication system, as described by Shannon, looks as shown here.

Information source (Message) - Encoder (Transmitter) - Signal (Channel) - Decoder (Receiver)

As you can see the above sequence of events is obvious in the case of any standard communication system in our everyday life. Can this sequence also work inside your body for biological information transfer too?

I think Shannon's model can work for information transmission inside life systems too. Let me show how.

In our body, and in any other life system, the information source is the DNA present inside the cell nucleus. DNA is the operating system of the cell: it holds a series of executable codes with the capability to 'operate' a number of vital cellular life processes. But before this can happen, the DNA information has to be interpreted by cellular machinery. This 'interpretation' is done by the transcription process (DNA to RNA) and later the translation process (RNA to protein). Indeed, some of the most exquisite controls are exerted on these interpretation steps. Which part of the DNA is to be executed, how much, when, for how long, in which cell type and in response to which signals is reasonably well understood now. But there are vast stretches of DNA code that are little understood. They are still like the undeciphered language scripts I mentioned earlier: we have yet to decipher the codes present in some 80% of the DNA, the 'dark matter' of biology.

In a figurative sense the DNA executable code is transformed by two successive levels of interpretation - first RNA and then proteins. Proteins actually execute the information contained in the DNA.

Proteins can be viewed as 'messages' in coded form, though many proteins are actually effectors too. Insulin is a protein: a message generated within pancreatic beta cells and sent to other cells telling them to start utilising glucose. In that sense insulin is a message. The message is 'encoded' in the form of an amino acid sequence. The 'channel' for its transmission is the bloodstream. The target cells 'decode' this message and 'receive' the signal; the decoding step involves a 'receptor' specific for the insulin message.

Receptors (decoders) are common in biology. Each cell that is a target for a 'biological message' has a dedicated cell-surface receptor working like an aerial or antenna. Living cells are one up on us in having a much more sophisticated system of decoding: the same message can be 'decoded' in multiple ways by different types of target cell, so one message can lead to multiple effects. The beauty is that the sum total of all the effects will be complementary in achieving a concerted biological function. This technological capability is yet to be seen in our modern communication systems.

The cell-surface receptors are themselves proteins. In many instances they are made of multiple types of protein subunit aggregated into a functional and structural unit. Binding of the 'message' on the outside of the cell activates conformational changes in the receptor, leading to the next stage of message transfer: generation of a second messenger such as cyclic AMP. In the majority of cases there is further involvement of a 'tertiary (third) messenger', such as calcium or an enzyme (a protein kinase), and the final biological effect is mediated by this calcium or protein kinase.
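One consequence of this messenger relay is amplification: a single bound hormone molecule can activate many second-messenger molecules, each of which can activate many kinase molecules. The per-stage factors in the sketch below are invented purely for illustration, not measured values.

```python
# Toy amplification cascade: each stage multiplies the number of
# active molecules (the factors here are illustrative only).
stages = [('receptor', 1), ('second messenger', 100), ('protein kinase', 100)]

signal = 1  # one hormone molecule bound at the cell surface
for name, factor in stages:
    signal *= factor
    print(f"after {name}: {signal} active molecules")
```

One molecule at the cell surface can thus end up as tens of thousands of active molecules inside the cell.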

In the case of the nervous system, the messages are either externally sourced (vision, sound, touch, smell) or internally accessed from subconscious 'sensations' of things like salt levels, nutrient levels and blood pressure. The messages are 'encoded' by dedicated sense receptors and transmitted as coded electrical impulses along nervous conduits. The signal is decoded by target cells using receptors that cascade the signal onward, through electrical impulses again or through neurotransmitters (molecules in the nervous system that carry information). There are many types of neurotransmitter, each of which is a message to unique cellular destinations, where it is decoded in the way I described above.

In summary, biological systems rely on a model of information transfer no different from what we see in our technological world. The concept of a source of information, encoding of the information in a message, a channel for transfer, and finally a decoder to receive the signal are all present in biology and technology.

Typically, the flow of information inside life forms is highly ordered: it flows down pre-determined paths. The first step in the biological information process may involve generation of a primary informational message, such as a neurotransmitter or an electrical impulse (in the case of a brain or heart signal), or a hormone or growth-factor messenger (in the case of gene-based internal information). These are just coded versions of the original message. The receiver cells decode them using hard-wired cell-surface receptors or synapses, and the signal is cascaded further down set paths, involving additional carriers of information such as secondary and tertiary messengers.

For the human body, a hormone such as testosterone or oestrogen is a biochemical code, like the insulin I mentioned before. The source of this information is the executable code present in the DNA as genes. There are of course many such 'biochemical codes': hormones, growth factors, neurotransmitters and so on. The immediate source of the testosterone and oestrogen codes is the relevant reproductive organ cell, with DNA of course being the ultimate source.

In a real-life setting, the human body makes these reproductive hormones to mediate reproductive function, as their names suggest. What external or internal stimulus led to their production? One could say it was sensory information arriving in the form of visual or auditory signals from a mate. How these sensory signals are captured and coded by the sensory receptors in the eye and ear is itself a long story: light and sound get converted into an electrical format at the level of the sensory receptors and nerves, and how this electrical format is 'interpreted' by the brain will be discussed later.

For the present discussion we will concentrate on what happens after this. The brain then sends out commands to the hypothalamus and pituitary gland cells. These two tiny organs sitting at the base of the brain are 'master glands' that control much of your body's function, secreting many types of hormone. In this instance, when instructed by the brain, these glands send biochemical signals to the testes or ovary, as the case may be. First the hypothalamus sends polypeptides such as gonadotrophin-releasing hormone down to the pituitary, which in turn secretes follicle-stimulating hormone and luteinising hormone. These are also proteins, acting as preludes in the sequence of steps required to initiate reproductive function; another way of looking at them is as additional controls required to regulate the costly reproductive function. They in turn activate the testes or ovary, which produce testosterone and oestrogen. The body typically has many control steps regulating complex functions: an action has to be preceded by the fulfilment of prior conditions.

The interesting aspect is that these biochemical codes work in every body; they are universal codes in that sense. I could collect these hormones and releasing hormones from one person, inject them into another person, and they would work! The medical use of testosterone supplements, and hormone replacement therapy for postmenopausal women, are based on this principle. Even more interestingly, the hormones of the pituitary and hypothalamus work even across animal species. Hormones like insulin and growth hormone obtained from animal sources have been used in medical practice, ample testament to the universality of biochemical codes in the biological world, no different from our ASCII codes and EAN codes!

For example, the final effect in the biological communication cascade in the case of sex hormones is preparation for reproductive activity and/or maintenance of the state of pregnancy. Environmental signals (mate) and internal body clocks largely regulate the reproductive process and if you carefully look at the communication behind this event you can scarcely recognise the link between the initiating signal (visual and sound signals from mate) and the final effect (pregnancy and a host of preparatory steps inside a woman's body). The flow of events happens down many steps, involving a large number of cell types and molecules, using successive levels of coding and decoding, and the beauty is that the information content is faithfully preserved at each of these steps. The integrity of the message is preserved despite multiple steps of code conversions!

I suppose it is no different from transmission of messages in our society. A piece of information that originates somewhere often percolates through multiple levels of transmission before it brings about an action. Depending on the importance and nature of this information the levels of information transfer may vary. If it was a chitchat you are having with a friend then I guess there is a form of direct communication. But, if it were some sort of official communication then you would probably see many recipients involved, and probably many layers of organisational hierarchy involved as well, before you received it. Even a simple letter you wrote actually goes through an organised communication chain if you pause to think about it.

If you look at how information is coded in nature you would notice that there are some other recurrent themes there. The principles are the same though the systems are diverse. A simple example would be that there may be hundreds of languages in the world but one recurrent motif in all of them is the use of monomeric alphabets as the unit of information. English has 26 of them and other languages have varying numbers.

I suppose written language came about nearly five thousand years ago; human beings may have used symbols and sound before that to encode information. In the last half-century or so we have seen another groundbreaking technical language, the binary language, make its appearance. The use of the binary alphabet (0 and 1) in the digital revolution is too well known to need elaboration.

Amazingly, even biological systems use 'alphabets' which is evident in the DNA and protein structure. DNA has 4 types of nucleotides used in varying sequences no different from the way we use letters in a language. Proteins use amino acids as the letter-equivalent, arranging varying sequences of them to form diverse functional and structural proteins. Just as in a language the exact sequences of nucleotides and amino acids encode information. It is a point to ponder why man, or even nature, can't think of some other means of encoding information other than re-arranging monomeric alphabets, whether it is the common language alphabets or the binary alphabets or the nucleotide alphabets. Is it the ultimate solution to the problem of information generation? Why do we need anything better than this?

The next recurrent motif is that coding conventions are needed not only for generating information but also for deciphering it. As I said above, the word 'atom' means an 'atom' to those who know the code; it means nothing to someone who does not. You can use any number of examples. I do not know Morse code, so a message I receive in Morse code is as good as junk to me. For that matter, if someone spoke to me in Japanese, or any other language I do not know, the speaker might as well be talking to a wall. The code 'testosterone' is only 'understood' by male reproductive organ cells, just as oestrogen is 'understood' only by female gonadal cells.

I am now going to move into different territory and dwell more on the nature of biological information handling and how it compares with our own technology. This is the main aim of this book, and I had to touch upon the various information-transfer strategies used by man over history just to be able to bring out the similarities and common principles. In life systems, information is associated with DNA (in every cell) and memory (in the brain only). In fact, one of the main aims of this book is to highlight those information-rich processes inside the miniature cosmos of a living cell.

I am sure all of us know about DNA as the carrier of hereditary information. I said a little while ago that, speaking in computer jargon, DNA uses a quaternary language to encode and transmit biological information. Richard Dawkins, in his book '_River Out of Eden_', says DNA is uncannily computer-like, and I think he is right. The use of 'alphabets' to imprint biological information has been around for some 3,500 million years on this planet! Just for comparison, man as we know him has been around for only the past 2-5 million years at most, and our binary language is less than a century old.

The language of DNA has four alphabets only, namely adenine (A), guanine (G), thymine (T) and cytosine (C). That is why I said it is a quaternary language.

In spite of the human genome sequencing projects, we are still grappling with the coding conventions in DNA and do not yet understand the meaning of much of the DNA sequence. The ENCODE project has identified 4 million gene 'switches' in the portion of DNA previously thought to be junk. These gene switches are now believed to play regulatory roles in the function of many genes; they are probably the executable codes for a number of cellular processes.

A DNA sequence of just 8 bases can be shuffled into 65,536 different arrangements. The sequence of bases carries information, and one can only wonder at the variation a cell can work out with sequences longer than that. Just remember that a typical gene ranges from hundreds to thousands of bases! These can be arranged in so many different sequences, and nature may well have produced most of the possible combinations before the most successful gene sequence was selected. We do find that a given biological function is mediated in different organisms by genes that differ to a small degree. They may be quite significantly similar, which indicates the relatedness of the organisms, while the differences support the view that there are many solutions to a common problem: each organism has arrived at its own genetic solution.
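The combinatorics is easy to check: an n-base stretch of DNA has 4 ** n possible sequences, as this short sketch shows.

```python
# Sequence space grows as 4 ** n for an n-base stretch of DNA.
for n in (1, 2, 8):
    print(n, 4 ** n)   # 8 bases already give 65,536 arrangements

# A modest 1,000-base gene: the count of its possible sequences
# is a number some 603 digits long.
print(len(str(4 ** 1000)))
```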

The unit of DNA is the gene, which conveys a biological meaning just as a sentence would. An organism is like a book written in the language of DNA, using sentences called genes. Using different combinations of these four nucleotide alphabets, like beads on a string, nature has encoded enough information to create approximately 30 million life forms. We have no idea how many became extinct even before man originated. In other words, nature has 'written' at least 30 million books, each of them 'printed' in millions, even trillions, of copies. So many of these 'books' have never gone out of print, which means these species are all still flourishing! What an incredible 'publisher' God is!

I can see some eyebrows being raised. I am sure some readers will think that comparing the language of DNA and evolution to an exercise in publishing is rubbish. However, I can't help it. What would a good publisher do? Leaving aside the quality of the writing, the publisher would at least like to have the material proofread, so that simple spelling errors can be avoided. Every time DNA is replicated, as happens during cell division, an identical copy of the original DNA is made. Who does the 'proofreading'? It is the enzyme that is responsible for duplicating DNA, called DNA polymerase. Believe it or not, after duplicating DNA, it literally scans through the newly formed DNA to correct any errors! It is unbelievable. If that is not a publishing concept, pray tell me what is!
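For readers who enjoy the computing metaphor, here is a toy sketch of that copy-then-proofread idea. It is not the actual biochemistry - the real enzyme checks base pairing as it synthesises, rather than comparing two finished strings, and the function name is mine - but it captures the publishing analogy:

```python
def proofread(template, draft):
    """Scan the newly made strand against the template and count the
    mismatches that would be excised and replaced.  Toy model only:
    DNA polymerase actually verifies base pairing as it goes, not a
    string comparison after the fact."""
    fixes = sum(1 for t, d in zip(template, draft) if t != d)
    corrected = template[:len(draft)]   # every mismatch replaced
    return corrected, fixes

# A draft copy with two 'spelling errors' relative to the original
corrected, errors_caught = proofread("AGTCAGTC", "AGTAAGTG")
print(corrected, errors_caught)   # AGTCAGTC 2
```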

Another similarity between DNA and a language, apart from the use of alphabets, is their tendency to evolve over time. Everyone knows that languages 'evolve' and give rise to dialects and newer languages. I am sure English is not spoken the way it was spoken and written a hundred years ago. People tend to borrow and mix words from other languages. I am sure English has plenty of Latin words in it. People living near the outer borders of a country, or a state, tend to speak rather differently from those living inland, because they have more chances of inter-mixing words from the neighbouring languages. I am sure most languages can be traced to a common ancestry. The whole scenario looks very similar to organismal evolution. DNA exchanges and mutations account for slow changes in the DNA of an organism, which ultimately branch the organism into a new species altogether.

I remember reading an article in Scientific American many years ago about the origin and diversity of languages. The author had used a kind of 'evolutionary tree' for languages, in the manner biology textbooks describe the evolution of species. For over two centuries, linguists have worked out the family trees of languages. The lineage of most of the 6,000 languages spoken today has been traced. It is amazing that human beings have come up with 6,000 forms of coding conventions (languages)!

I would be inclined to think that these 6,000 forms of language codes will not be radically different from each other. There have to be some common motifs and common principles between them.

Spanish, Italian and Portuguese are recognised as offspring of Latin, while English and Dutch are part of a family known as Germanic. Germanic, together with Latin, ancient Greek and six other language families, is part of a group of related tongues called Indo-European, which has 144 member languages. This kind of analysis of the structure and grammar of languages can help trace the history of language evolution up to 10,000 years. Johanna Nichols at the University of California, Berkeley, goes even one step further. Instead of just attempting to re-construct the family trees of languages, Nichols aims to identify and map grammatical building blocks. Languages that share these features may come from the same family because people borrowed linguistic elements from one another. By using this approach, Nichols hopes to trace the migration of ancient humans tens of thousands of years ago. She believes that languages change over time when migrating humans start mixing grammatical blocks of their own with that found in the new territory. By plotting the occurrence of different linguistic blocks, she is able to find some striking geographical patterns. She calls it linguistic geography and believes it can explain the movement and spread of ancient humans.

There has been some support for Nichols' work from the Chinese human genome diversity project that examined short stretches of DNA called microsatellites from 32 East Asian populations such as the Han, Wa, Hui, and Yao. When compared to other Asians, for example Japanese, Koreans and Taiwanese, and with a variety of African groups, the genetic tree constructed by the team that carried out the study indicated that early modern humans left eastern Africa and colonised Asia's southern coast before spreading into eastern Asia. In November 1999, a group of researchers at the University of Pavia in Italy described a group of related mitochondrial DNA variants, called Haplogroup M, found in Africa, Western India, Tibet and Mongolia. These findings showed that Southeast Asia could be the portal through which modern humans spread throughout Eurasia after leaving Africa.

What is to be noted is the way languages and genes co-evolve. Just as languages can be traced to their roots, the DNA of organisms can be studied too. Organisms and languages have apparently evolved at their own rates of growth, resulting in varying degrees of diversity. Nichols has studied the language stocks of the Northern hemisphere to see how often they branched to create new families, and how often they disappeared. What she finds is interesting. Some of the language stocks seem to produce a lot of offspring while most spawned only a few. Some even disappeared altogether. She even found that the rate of language growth differed from one stock to another.

Martin Nowak and his colleagues at the Institute for Advanced Study, Princeton, have reported in _Nature_ (Vol 404, 30 March 2000, p495-498) a mathematical model for the population dynamics of language evolution! Their objective was to explain how language could have arisen by Darwinian evolution. Recent work by this group and others suggests that minimising communication errors was a major selection pressure in the evolution of human language.

There can be no doubt that the information contained in DNA should be kept free of errors. But the evolutionary process favours occasional changes in this information content, due to mutations or gene exchanges. This is the basis of the diversity in the biological world that we see. It is believed that not all parts of DNA mutate at the same rate. There are some regions in the genome of organisms that show a higher rate of mutation, dubbed the 'hot spots' of DNA. These regions show a faster rate of change in their information content than others. If diversity is essential for survival, and if mutations are required to generate such diversity, perhaps you need active mutagenesis. Surprisingly, several DNA-mutating enzymes have been identified and named the DNA _mutases_, whose job appears to be generating changes in the DNA information content at optimum levels of _mutase_ action.

Mutations can open up new possibilities by enabling the origin of new information. They have been compared to the exploration of new solutions. Frequently, mutations are deleterious because they destroy the information content of DNA by scrambling the codes. Depending on the location of the change, the effects on the organism can range from disease to death. But it is true that mutations can sometimes result in beneficial improvements in the products coded by the genes. It is also true that some mutations are neutral, neither destroying information nor offering any extra benefit. The mutation rate of DNA, due to faulty copying, is about 1 error in a million cell divisions. The fidelity of DNA copying is extremely impressive. Perhaps this level of accuracy is supplemented by DNA repair enzymes whose job is to correct any changes in the DNA sequences. Changes in DNA information can also occur at localised points due to physical, chemical or biological damage. Physical damage, for example, could be due to ionising radiation.

A human cell has about 3 billion base pairs in its genome. In other words, it has 3 billion letters, which I call the DNA alphabets, namely adenine, guanine, cytosine, and thymine. A bit of trivia here - if you were to link up the DNA chains of all the cells in your body (i.e. the 3-billion-base-long chain present in each cell joined up with similar-length DNA chains in all the other cells in your body), then the estimated total length is 1.2 x 10^14 metres. This is the sort of distance that even light would take about 4.6 days to cover at the breakneck speed of 670,616,629 mph!!
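That figure is easy to check with a back-of-envelope calculation. The base spacing and cell count below are assumptions of mine (estimates of the human cell count vary widely), chosen to show how the arithmetic works:

```python
# Back-of-envelope check of the total-DNA-length figure.
BASE_SPACING_M = 0.34e-9      # ~0.34 nm rise per base pair in B-DNA
BP_PER_CELL = 6e9             # diploid base pairs per cell
CELL_COUNT = 6e13             # assumed human cell count (estimates vary)
LIGHT_SPEED_M_S = 299_792_458 # metres per second

total_length_m = BASE_SPACING_M * BP_PER_CELL * CELL_COUNT
days_for_light = total_length_m / LIGHT_SPEED_M_S / 86_400
print(f"{total_length_m:.1e} m; light needs {days_for_light:.1f} days")
```

With these assumptions the total comes out at roughly 1.2 x 10^14 metres and a light travel time of between four and five days, in line with the figures above.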

In terms of information content, the genetic code in humans is roughly 6 x 10^9 base pairs or 1.5 GBytes (1.5 billion Bytes). Only about 3% of this, i.e. 45 MBytes (45 million Bytes), is active. Really, compared to modern-day computers this is not much. But it does things even supercomputers with computing powers in the region of petaFLOPS (10^15 Floating Point Operations per Second) cannot do. A petaFLOPS is really 1,000 trillion FLOPS!! Supercomputers with this level of processing power are being used to understand how individual proteins fold after they are newly synthesised! '_Folding at Home_' is an example of a distributed computing project at Stanford University, which boasted about 8.5 PFLOPS of computing power as of May 2009! And there are thousands of types of proteins in your body cells!! How many supercomputers will you need then?
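The 1.5 GByte figure follows directly from the fact that a four-letter alphabet needs two bits per letter. A quick sketch of the arithmetic:

```python
import math

BITS_PER_BASE = math.log2(4)   # a 4-symbol alphabet needs 2 bits per symbol
GENOME_BP = 6e9                # base-pair count quoted above

total_bytes = GENOME_BP * BITS_PER_BASE / 8
active_bytes = total_bytes * 3 / 100    # the ~3% 'active' fraction
print(total_bytes / 1e9, "GBytes total;", active_bytes / 1e6, "MBytes active")
```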

We now know bits and pieces about the human DNA but do not know the entire context in which they control the information flow. We have so far been able to read the 'book of DNA' randomly, picking pages from the middle, end, left and right. We have not been able to read it from beginning to end in a continuous, meaningful manner.

Imagine buying a book from a shop and straight away starting to read from somewhere in the middle of the book! What is worse, we jump pages as we read along. We jump many pages forward and back and yet hope we will understand the story! Molecular biologists have been doing exactly this all this time, because of difficulties in reading the DNA book from the first page to the last in a continuous manner. That is why our understanding of the genetic control of biological processes is very sketchy. I should point out that even the fact that DNA carries the hereditary information was only established in 1944! Only in 1953 did Watson and Crick's model of DNA structure turn out to be the Rosetta Stone for deciphering the DNA language! Till then we had no clue!

Human genome sequencing projects are going to let us know for the first time the entire content of information in us. But, I am a bit sceptical whether we will be able to understand the full meaning of all we are going to read from it. It is completely a different ball game when it comes to understanding the meaning of DNA sequences as opposed to just reading it. I do not think for a minute that we have grasped all the 'coding' conventions used in the language of DNA yet.

Look at a child. It learns the alphabet and slowly learns to read out words in the proper way. That is what a child learns in the first few years of school. Give it a book with fairly complicated terminology. The child may still manage to read out the words correctly, following the rules of pronunciation. But it is a different matter whether the child really understands the information content of those words. Molecular biologists will be able to access the entire DNA 'book' but will only be reading it just as a child would, without knowing what they are reading. I am positively certain this state will continue for many years even after we have completely sequenced the human genome. We will be able to decipher the full meaning of DNA only as time goes along, just as the same child understands the meanings of complex words as it grows up! This is what Bioinformatics experts are doing now. They are using powerful computing methods to understand the patterns in biological languages. I know it all sounds crazy, but my worry is that we could fail to make real sense of the DNA if we cannot catch the tricks of reading it the way it should be read.

I can use some metaphors to explain why I think we will not be able even to read the DNA information in the correct way for many years at least, let alone understand it. Imagine studying a foreign language, say French. I am sure you will quickly be able to identify the alphabets and read words. But anyone will tell you that pronunciation, especially in European languages, is a different cup of tea. The word 'repertoire' should be pronounced as sounding like 'repertuva'. The word 'lingerie' should be pronounced as sounding like 'longerie'. I fail to understand why these words cannot be written in English the way they have to be pronounced. But if you want to convey information you have to follow the rules. As I said at the very start, information is all about coding conventions. There are quite a few examples that can be quoted even in English where the pronunciation is something that has to be actively learnt, even after you get the general rules. Take the word 'world'. We know how it is read. The word 'word' sounds similar. Look at the word 'woman'. Suddenly you find similarly spelled words pronounced differently. It is not pronounced as it is spelled, but as 'wuman'. The word 'word' should be pronounced as 'wurd' if this is the proper way.

There are tens of such discrepancies in a language that we use every day. The Italian dish lasagne is pronounced 'lasania', for reasons I am not clear about. Look at the word muscle. It is pronounced 'mussle'. But a similarly spelt word, oracle, is pronounced 'orakkle' and not 'orassle'. If you had to pronounce 'muscle' in the same manner as oracle, it would be 'muskle' and not 'mussle'. If you pronounced 'information' the way you would pronounce 'cation', it would be not 'informashion' but 'informatiyan'! A foreigner who learns the English language will be puzzled and could quite often make errors in conveying the right meaning. There can be quite serious failures in transmitting information if the words are not pronounced as they have to be. But we have learnt to agree on common ways of reading and understanding, and so there is usually no misunderstanding of information, because we have got the tricks under our belt.

Another very important point to be noted is that the same words and sentences can have very different meanings in different contexts. Adding more complexity, some words have more than one meaning. What I am trying to say is that it is not the end of the story if we sequence the genome. It will not reveal the entire information content all by itself. Just as pronunciations alter the potential for unhindered information transmission, I guess the DNA information has to be viewed in its whole context.

The products of many DNA sequences may be lacking in information content until they combine with the products of other DNA sequences. In the case of DNA, it is known that some multi-subunit and multi-component proteins rely on the expression of quite a few genes, whose products combine to form the final, complete, functional protein. I guess this is like the assembly of many words or sentences together into a paragraph that gives the full meaning. Many words are commonly used in different contexts, and the context changes the meaning. Similarly, the protein subunits formed by decoding genes form structural and/or functional associations with other proteins, where the final product is different in structure and function from the individual subunits.

For example, in mammals, Complex I, which catalyses the first step in the mitochondrial electron transport chain, consists of over 40 different protein units (polypeptides). There are quite a few such multi-subunit proteins in our cells. Protein-protein interactions are a hot area of research. It is not uncommon for tens of proteins to interact, which in a way is equivalent to the addition of numerous sentences to bring about a meaning. The function of a DNA sequence, in terms of its information content, can never be known until you know what the final permutations and combinations of the various sequences are going to be.

Most genes cannot code for whole biological programs. The products of many genes should work together to bring about cellular effects as said before. Most metabolic pathways in our cell require several enzymes that act on biochemical molecules, which are the equivalents of raw materials, to make products. For example, synthesis of an amino acid in our cells requires sequential operation of several enzymes. Each type of amino acid will require its own set of enzymes. Just one enzyme in this pathway is useless for accomplishing the task.

There are numerous situations where cells use gene products in a combinatorial fashion, generating new information and function. There are other examples, the classical one being the case of isoenzymes. Isoenzymes are multi-subunit enzymes, formed by differential combination of protein subunits of the same or different types. If you have 4 types of protein subunits, you can create different combinations by joining them in different ways. Can't you? A different gene codes for each subunit. The products of all these genes can be combined in different ways to create different forms of the same enzyme. What is the biological purpose of this set-up? The advantage of this differential use of protein subunits is the generation of subtle differences in the catalytic capacity of the enzyme. The basic function seems to be the same, but there are fine differences. It is observed that different organs have different isoenzyme forms. In other words, the same enzyme will be made differently in different organs simply by altering the combination of protein subunits. It has to be noted that this information generation occurs at a level later than the DNA decoding step. The DNA information remains the same, but how you use it generates an additional level of complexity and information. There are a number of situations within our body where this isoenzyme strategy is employed. You have isoenzymes in your body for Creatine Kinase, Lactate Dehydrogenase, Cytochrome P450, Glucokinase etc.
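If you ignore the arrangement of subunits within the complex, counting the possible isoenzyme forms is a standard 'combinations with repetition' problem. A sketch (the function name is mine; lactate dehydrogenase, with its two subunit types and four subunits per enzyme, is the classic worked example):

```python
from math import comb

def isoform_count(subunit_types, subunits_per_enzyme):
    """Distinct subunit compositions of a multi-subunit enzyme:
    combinations with repetition, C(n + k - 1, k).  Arrangement of
    subunits within the complex is ignored - a simplification."""
    n, k = subunit_types, subunits_per_enzyme
    return comb(n + k - 1, k)

# Lactate dehydrogenase: 2 subunit types (H and M), 4 subunits per enzyme
print(isoform_count(2, 4))   # 5 forms: H4, H3M, H2M2, HM3, M4
print(isoform_count(4, 4))   # with 4 subunit types, already 35 forms
```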

It is also known that the same DNA sequence can be differentially decoded. This usually happens at the level of messenger RNA. The same DNA stretch is read from different start points to come up with different messenger RNA transcripts. It is like trying to read a single sentence starting from different points. You could skip one or two words and read the rest of the sentence! Or you could read a sentence by skipping three words. This incredible way of generating additional information happens in the real biological world. Bacteria are known to do it. This is because they have so few genes, and this gives them a way of generating additional products by using the existing information in various ways! Amazingly, even eukaryotic life forms may be using this strategy! I am sure you will agree that this is a strategy not yet possible in our technological world!
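Reading the same stretch from different start points can be illustrated directly. The toy sequence below is made up; the point is simply that shifting the start by one base re-partitions the same letters into entirely different codons:

```python
def codons(seq, start):
    """Partition a DNA string into 3-base codons beginning at 'start'."""
    return [seq[i:i + 3] for i in range(start, len(seq) - 2, 3)]

gene = "ATGGCATTAGCC"   # made-up toy sequence
for frame in range(3):
    # Same letters, three entirely different codon series
    print(frame, codons(gene, frame))
```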

It appears also that much of the DNA is composed of families of repetitive sequences for which there is no clearly known function. Some of the repetitive copies are present in ridiculously high copy numbers. There are some which number 10 million copies per cell! One type of repetitive sequence, numbering up to 500,000 copies per cell, called the _Alu_ family of sequences, accounts for a staggering 5-6% of the human DNA. It is funny that we do not even know what it does for us. Intriguingly, repetitive DNA sequences are apparently capable of moving from one location to another on your DNA! They are probably members of the 'jumping genes', or mobile genetic elements. We still do not know the significance of this phenomenon of jumping genes. They are indeed found in a number of organisms, and the discovery of mobile genetic elements fetched a Nobel Prize for Barbara McClintock, their discoverer.
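The 5-6% figure is consistent with simple arithmetic, assuming a typical _Alu_ element length of roughly 300 bases (the length is my assumption; the copy number is the one quoted above):

```python
ALU_COPIES = 500_000     # copy number quoted in the text
ALU_LENGTH_BP = 300      # typical Alu element length (my assumption)
GENOME_BP = 3e9          # haploid human genome, base pairs

alu_fraction = ALU_COPIES * ALU_LENGTH_BP / GENOME_BP
print(f"{alu_fraction:.0%} of the genome")   # 5% of the genome
```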

Mobile genetic elements are quite fascinating entities. It is said that the drug-resistance genes of bacteria are also of this type. These jumping genes can make copies of the gene concerned and physically leave a copy at a different location within the cell, or even in other fellow organisms, while retaining the original copy at the original location, just as we send a fax to a distant receiver! Movement of genetic information in space! It is one of the most fascinating phenomena in molecular biology!

At least 35% of the human genome is made up of mobile DNA! It looks like there is heavy DNA traffic out there! When genes jump from one point to another, it may result in disruption of the DNA function at the recipient site. The sequence of the DNA is altered at the new site due to the insertion of the new copy. If mobile DNA activity were left unfettered, it could rapidly result in genomic disruption due to mutations and chromosome rearrangements.

A number of mechanisms have evolved to regulate this process. Mobilisation of DNA information is associated with high rates of karyotypic change in _Drosophila_ and yeast. There is little doubt that karyotypic changes can be associated with speciation. Together with high mutation rates, there is a possibility of rapid and large-scale changes in genome composition and architecture due to gene-jumping activity. I can only liken it, perhaps, to the 'patches' and 'updates' we get for our computers to add new executable codes. But the degree of contribution made by this shuffling of DNA information by the jumping genes to the evolution of organisms is still not known. It is likely to be considerably significant, given the fact that 35% of the human DNA is capable of jumping from point to point. How incredible is that?

The most important feature of the DNA information is it appears to be quite dynamic. It keeps changing quite rapidly. The change is not just due to the errors in copying i.e., mutations alone. A lot more active modification of the DNA information seems to be going on. Gene duplication to form families and the abundant copy generation in the case of repeat sequences seem to confirm this. In addition, the jumping genes physically move DNA information over distances within the genome.

Occasionally, we even have viral DNA getting incorporated into our human DNA! We have plenty of it even under normal circumstances. These include the viral oncogenes, which have the ability to cause cancer. They are silent in many cases but have the ability to hijack the human genome. Changes in DNA sequences are also acquired during the reproductive process. The exchange of bits of DNA between the father's and mother's chromosomes results in a new combination of sequences in the offspring. Given the extent of the changes in DNA likely to be brought about by all these known mechanisms, it is no wonder there is this amount of speciation on our earth.

I said earlier that our cells have about 30,000-40,000 genes. These genes are coded by the 3 billion DNA alphabets. If you consider each gene as a piece of vital biological information, then it takes 3 billion bases to code for all of them. Surprisingly, we appear to possess more DNA than necessary to code for this number of genes. An average gene's size has been calculated, and on that basis the DNA size necessary for 30,000 genes should actually be 10-15 times less than what we actually have. What is interesting is that an incredible 98% of all transcriptional output (i.e. DNA-to-RNA code conversion) does not code for any proteins. Only 2% of the RNA actually codes for proteins (by the translation step, which occurs in the ribosome). It seems like a lot of the executable code in DNA is not really executing anything. What an inefficient operating system this would seem to be! Intuitively, it seems unlikely that nature is so inefficient.
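The mismatch between gene count and genome size can be roughed out numerically. The average coding length per gene below is an assumption of mine, for illustration only:

```python
GENES = 30_000
AVG_CODING_BP = 1_500    # assumed average protein-coding length per gene
GENOME_BP = 3e9          # total genome size, base pairs

coding_fraction = GENES * AVG_CODING_BP / GENOME_BP
print(f"{coding_fraction:.1%}")   # 1.5%, the same order as the ~2% quoted
```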

John Mattick at the University of Queensland thinks that these non-protein-coding RNAs may not be non-functional. Instead they may represent an important development in the genetic operating system of the higher organisms, as opposed to the mainly protein-based systems of microbes. They may be involved in RNA-DNA, RNA-RNA and RNA-Protein interactions coordinating and modulating gene expression.

If you look at the number of coding genes (that is, the genes that are known to code for proteins) in bacteria and yeast, it is around 6,000. Humans have only about five-fold more than this, which is unexpected. The Human Genome Sequencing project has revised the previous estimate of the number of human coding genes from about 100,000 to just about 30,000. This has led to suggestions that the non-coding regions of DNA in humans, and in other eukaryotes, may play regulatory roles and may determine the complexity of the organism.

Another interesting observation is that humans and mice share about 99% of their protein coding genes in common! Can only 1% difference account for all the difference between 'Jerry' and us?

It is claimed that the phenotypic variation between individuals and species may be based largely on differences in non-protein-coding sequences. It has been reported that out of the 3 million sequence differences per haploid genome between individual humans, only 10,000 (0.3%) occur in protein-coding sequences, mostly as silent changes (at the third base of a codon). Mattick argues that the non-protein-coding RNAs would be involved in cellular communication networks that, at face value, may represent an enormous increase in network connectivity and functionality over the situation where system activity is regulated solely by metabolic and environmental state information. A non-coding RNA can exert a signal that allows activity at one locus to be connected to others in real time. This is possible when the non-coding RNA interacts with one or more other entities to bring about an effect at another location or level.

Even the fragmentation of the coding genes (the genes that code for proteins) into exons (the parts of the gene sequence that actually get represented in the final product, i.e. the protein) and introns (the non-coding portions of protein-coding genes that are removed during RNA processing) may have conferred an evolutionary advantage by allowing the possibility of alternative splicing. By controlling differential starts and ends of the gene transcript, one has the added advantage of multiple ways of altering the information content in the available DNA sequence. Being able to generate more information from a limited set of genes is clearly advantageous, and alternative splicing gives eukaryotes exactly this ability.

In order to enable all that has been said in the above paragraph there has to be a mechanism by which the cells can interpret the genome. This mechanism should allow us to read the genome correctly in space and time. What is even more intriguing is that even this interface mechanism is DNA-held. This interface structure has to be assembled first before the genome interpretation can happen.

If you look at the fertilised ovum, the zygote, the nuclear DNA is the product of paternal and maternal DNA exchanges. It will hold all the information needed to make the new life. The supplies that come with the cytoplasm provide the head start for this single cell. This cytoplasm, by and large, if not all, comes from the mother. The molecular machinery that is present in the cytoplasm includes the set of DNA-binding factors, which are capable of forming the application or user interface. Once this is set up, there is a sequential and parallel decoding of sections of DNA over time so that codes can be executed. Products formed in this process will successively control the next steps in the information flow. It is like a chain reaction. The importance of the cytoplasm in kick-starting nuclear decoding inside the embryo is well known.

Cells use DNA-binding proteins to decipher the genome. They constitute the user interface and the Application Program Interface. A few types of DNA-binding proteins exist, namely transcription factors (proteins that control DNA-to-RNA transcription by altering the readability of the concerned gene or stretch of DNA), DNA polymerase (the enzyme that actually copies the DNA code during cell division so that the daughter cell can get a copy of the DNA), RNA polymerase (an enzyme that converts DNA code to RNA code, which is central to the deciphering of the DNA information), and histones. These DNA-binding proteins have unique DNA-binding domains on their surface that allow them to dock onto the DNA.

Histones act as spools around which DNA is wound for the purpose of compacting it by about 40,000-fold. This compaction has an impact on any process, such as transcription, replication, repair or recombination, that requires physical access to DNA. In general, genes that are active have less bound histone, while inactive genes are highly associated with histones. Interestingly, histones can be chemically modified by the simple addition of acetyl, methyl or phosphate groups by the respective enzymes. These chemical additions reduce the affinity of the histone for DNA, allowing easier access for other DNA-binding proteins such as the polymerases and transcription factors. In other words, the deciphering and copying of the genome is controlled reversibly by these simple chemical modifications of amino acids, such as lysine, that are found in the histone proteins. Enzymes called acetylases catalyse the addition of the acetyl group; deacetylases do the reverse step. If acetylases enable gene transcription, deacetylases do the opposite. The affinity of the interaction between DNA and histones is largely electrostatic: histones are rich in positively charged amino acids such as lysine, which attract the negatively charged phosphate groups present in every nucleotide of DNA, and modifications such as acetylation neutralise those positive charges, weakening the attraction. This electrostatic interaction can be reversibly modified by adding or removing charged groups in a controlled manner. That is what happens inside the nucleus of the cell.

Histones are some of the most highly conserved proteins in animal biology. There are very few differences between histones across different species, indicating the highly preserved nature of this DNA interface. In the past, histones were thought of as simple packaging material that helped to compact the DNA, but that picture has now changed. They are now thought of as important proteins that determine DNA access. Pharmaceutical companies have realised that inhibiting the deacetylation enzymes can help to keep some important genes in the active state. Histone deacetylase inhibitors have been in use as mood stabilisers and anti-epileptics, and more recently are being studied for the treatment of neurodegenerative diseases like Alzheimer's disease and Huntington's chorea. Also, in recent years, there has been interest in the use of these compounds for cancer treatment.

Other DNA-binding proteins like transcription factors and polymerase enzymes are different from histones in one important aspect. They bind to DNA exactly at particular sequences. Each transcription factor binds to specific DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. They are able to do this by binding to the RNA polymerase enzyme responsible for transcription, either directly or through some mediator proteins. This will enable RNA polymerase to be positioned correctly at the right promoter region of the concerned gene that has to be transcribed.
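The 'exact sequence' binding described above is, computationally, just a motif search along the DNA string. A minimal sketch (the promoter string and the TATA-like motif are illustrative, not real data):

```python
def binding_sites(dna, motif):
    """Positions where a transcription-factor recognition motif
    occurs exactly in the DNA string."""
    return [i for i in range(len(dna) - len(motif) + 1)
            if dna[i:i + len(motif)] == motif]

promoter = "GGCTATAAAGGCCTATAAAT"   # made-up promoter region
print(binding_sites(promoter, "TATAAA"))   # [3, 13]
```

Real transcription factors tolerate some variation in the sites they bind, so practical motif searches score approximate matches rather than demanding exact ones.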

Alternatively, the transcription factor can bind the enzymes that modify histones, altering the accessibility of the DNA. The number of transcription factors found in an organism increases with genome size; larger genomes have more transcription factors per gene. The human genome contains approximately 2600 proteins that can bind DNA, most of which are presumed to function as transcription factors. In fact, approximately 10% of genes in the genome actually code for transcription factors, making them the largest single family of human proteins. This also indicates the exquisite degree of control exerted at the interface between the DNA and its users, rather like the user and application program interfaces of our computers. Most, if not all, genes have several binding sites in their sequences for several different transcription factors that control DNA reading. These transcription factors may be expressed with tissue specificity, meaning that only those cells expressing particular transcription factors can express certain genes. This may be the case, for example, for genes that control tissue differentiation. Some transcription factors are expressed only at certain time points, enabling DNA expression to be controlled in relation to time. Such a temporally controlled mechanism helps wherever genes are expected to be expressed at distinct time points; during embryogenesis, for example, some genes are selectively activated that are never activated in adult life.

Transcription factors also have peculiar structural characteristics that assist them in DNA binding, and it is possible to classify them into classes based on these DNA-binding motifs. For example, leucine zipper transcription factors have a leucine at every seventh position in their amino acid sequence. Zinc finger transcription factors have zinc ions bound to them. Some transcription factors have unique helical structures.

As one can imagine, mutations in these transcription factors can cause disease. Diseases as varied as diabetes, autoimmune diseases, speech disorders and cancer have been shown to result from mutations in different transcription factors. It is important to note that many transcription factors are tumour suppressors (genes that suppress cancer) or oncogenes (genes that cause cancer). These diseases are perhaps best viewed as errors in the DNA coding and decoding steps.

As gene expression is central to important processes like cell cycle control and intracellular signalling, it is not unexpected that cancer results when gene expression becomes disordered. For instance, a transcription factor that controls the expression of a tumour suppressor gene can become mutated. If that mutation results in a lack of expression of that gene, then there will be uncontrolled cell divisions and, consequently, cancer.

Similarly, genes that are prone to causing cancer (called oncogenes) are necessarily turned off in the normal state. The transcription factors that control the expression of an oncogene can become inappropriately activated by a mutation. This will turn on the oncogene, causing cancer. It is no wonder that approximately 10% of currently prescribed drugs directly target transcription factors. Classical examples are drugs such as Tamoxifen (used for breast cancer) and Bicalutamide (used for prostate cancer). Others include various types of anti-inflammatory drugs and anabolic steroids (drugs taken by body builders).

Continuing with our discussion of biological 'languages' and information encoding, let us look at another major concept in biological information handling. Proteins are biological languages too. Amino acids are the alphabet here, and there are about 20 of them in common use. Arrange them in different sequences and you get a whole variety of proteins, each with different capabilities. There are literally thousands of types of proteins in our body. I said that genes hold the blueprint for proteins. But proteins are made of a totally different type of molecule, namely amino acids, while the DNA is in the form of nucleotides. How is information in one form converted into another? As always, information transfer invariably involves some change of form, called transduction. It is a universal feature of all communication systems. Didn't we see how a number of information forms can be digitised if we represent units of non-digital information with pre-agreed binary codes?

The process of protein synthesis needs an intermediate step, involving another class of molecules called messenger RNA. These are also made of nucleotides, just like DNA. Their job is only to carry information from the DNA to the protein synthesis factories, the ribosomes. The messenger RNA is free of all the frills associated with DNA; it is pure information to be used by the ribosomes, reading three bases at a time. This is because the non-coding portions of the genes are removed and the coding portions alone are joined together. Every three bases in the messenger RNA are read as a unit called a codon. Each codon stands for an amino acid, and in the order of arrangement of these codons the amino acids are added one after another, like beads on a chain. This happens on the assembly lines of the ribosomes. Most codons are universal in that they are the same in all organisms. A codon always means one particular amino acid, and there is at least one codon for every amino acid. They have the same meaning whether in a horse, a man or a lion. It is a universal language code, like the ASCII code.
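This codon-by-codon reading can be sketched in a few lines of Python. The tiny codon table below covers only a handful of the 64 real codons (the assignments shown, such as AUG for methionine and UAA as a stop signal, are from the standard genetic code); the point is simply the mechanics of reading three letters at a time.

```python
# A tiny fragment of the (universal) genetic code: codon -> amino acid.
CODON_TABLE = {
    "AUG": "Met",   # methionine; also the usual 'start' signal
    "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "Stop",  # one of the three 'stop' codons
}

def translate(mrna):
    """Read messenger RNA three bases at a time, like a ribosome."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "Stop":
            break                      # stop codon: release the chain
        protein.append(amino_acid)     # add the next 'bead' to the chain
    return protein

print(translate("AUGUUUGGCAAAUAA"))    # ['Met', 'Phe', 'Gly', 'Lys']
```

The same table, and the same reading rule, would give the same protein whether the messenger RNA came from a horse, a man or a lion; that is what makes the code universal.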

Why should there be three bases in a codon? Why not two? If you had two bases in a codon, the number of possible codon types would have been 16. By having three bases per codon, the number of possible types is increased to 64. This enriches the codon repertoire of the cell. I find it incredible that the DNA's use of monomeric bases as units of information is further enhanced by reading three bases at a time, in the form of codons. It is exactly like the way we use bits and bytes in our computers. Bytes are more information-rich than bits, aren't they?
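The arithmetic behind this enrichment is easy to check: with four bases read in groups of n, there are 4 to the power n possible codons.

```python
# Four bases (A, U, G, C) read in groups of n give 4**n possible codons.
bases = 4
print(bases ** 2)  # 16 - too few for 20 amino acids plus control signals
print(bases ** 3)  # 64 - room for all 20 amino acids, with spare codons
```

The spare capacity of the 64-codon repertoire is what allows several different codons to stand for the same amino acid, a redundancy we will meet again shortly.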

What advantage is there for the organism in having an extra step between the information in the DNA and the protein? One reason is the fact that a gene contains a lot of nucleotides that are of no use. Why a gene has these unwanted bases intervening between the wanted ones is not clear. These bases do not ultimately code for any amino acids, because they are ignored and removed in the conversion of DNA to messenger RNA. The intermediate step of messenger RNA thus yields a neater version of the information in the DNA. Secondly, this step is like the machine-language interpretation that is required before executable code can be run.

I wonder if these unwanted bases, interspersed between coding segments of the genes, are like the carrier waves used for transmitting acoustic signals. A carrier wave is like a carrier pigeon: it does not change the message but simply carries the information from the transmitter to the receiver. A simple way of explaining this is to compare it to cargo trucks. These trucks enable the transfer of goods, and there is no functional relation between the cargo and the truck beyond that. Carrier waves are like that.

For example, radio waves broadcast from a radio station spread out in all directions. Different radio stations use carrier waves with different frequencies, which enables radio receivers to tell one station from another. The radio's tuning circuit picks out the carrier wave of the frequency you select by turning the knob. Then the carrier wave is removed to leave the original signal. The signal then reaches the amplifier, where it is amplified, and the loudspeaker turns the electrical signal back into sound. I know it sounds preposterous, but I really can't see any difference between this carrier wave and the non-coding bases in the DNA. Neither has any role in the information content as such, and both are removed at some stage in the deciphering of the signal. I know the molecular biologists are furrowing their brows as they read this, but they cannot refute my argument. It is the theme of using a carrier that they should note here.

Mutations can garble the codon sequences in genes. Depending on the type of mutation and the extent of the change in amino acids, the function of that protein may be reduced or lost. There is some degree of fault tolerance: a single change in the third base of a codon triplet can still code for the same amino acid as the one originally intended. Sometimes a change in one or more bases of a codon leads to the incorporation of a new amino acid at that point in the protein chain. Even then, it is difficult to say whether the function is fully lost. It is rare to see mutations that really lead to complete loss of a protein's function.
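This third-base fault tolerance is easy to illustrate. The four codons below really are the four standard codons for glycine; they differ only in their third base, so a mutation there is 'silent'.

```python
# All four glycine codons differ only in the third base; a mutation there
# is 'silent' because the codon still reads as the same amino acid.
GLYCINE_CODONS = {"GGU", "GGC", "GGA", "GGG"}

original, mutated = "GGU", "GGC"        # third-base substitution: U -> C
same_amino_acid = mutated in GLYCINE_CODONS
print(same_amino_acid)                  # True - the protein is unchanged
```

A mutation in the first or second base (say GGU to AGU) would fall outside this set and change the amino acid, which is why the third position is where most of the code's built-in redundancy lives.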

Going back to the analogy between a gene and a sentence, you will realise that it usually takes an extremely rare spelling error to make the sentence totally incomprehensible. Do you agree? In the question 'do you agree?', I cannot see what spelling error would totally spoil its meaning. I can try writing 'ds you agree?' Or maybe 'do yom agree?' Why not 'do you asree?' The last one sounds a bit too abnormal, but one could still easily understand what is being said.

I guess our brain works in a fault-tolerant manner too. It has much to do with the brain's way of perceiving things. But can this work inside the cells? What do they do? How do cells respond to simple errors? Quite often, mutated proteins function with reduced capacity; they may not fail totally. They are not like our computers, which cringe every time you make a simple mistake. If you had named a file 'do you agree?', try opening it with any one of the spelling mistakes above!

There are some types of mutations called deletions, which result from the removal of a stretch of DNA bases ranging from one to many. The gene becomes a bit short because part of the message has been deleted. Naturally, whether it will convey any meaning depends on the length of the deletion and the type of molecule it codes for. In the sentence 'do you agree?', if you remove the word 'agree', it reads as 'do you'. Now the meaning is unclear. In insertion-type mutations, additional bases are inserted into the gene; this kind of mutation can render the whole meaning absurd, because it shifts the reading frame of every codon that follows. Loss of the ability to use DNA information can be inherited, but it can also result from environmental factors like radiation or chemicals.

What have life forms evolved for such intercellular communication? They have evolved diverse communication molecules of various types: hormones, growth factors, neurotransmitters, cytokines, antibodies, prostaglandins and so on. In chemical terms they are equally diverse: proteins (long sequences of amino acids attached in a chain), small peptides (a small number of amino acids in a chain, not big enough to qualify as 'proteins'), amino acids (or modified amino acids), fatty substances like steroids, vitamin A-like substances called retinoids, and gases like nitric oxide. Specialised cell types within your body make each of these informational molecules from biochemical precursors. An exquisite level of control is exerted on their biosynthesis, based on input from other cell types within the body. The DNA in your body holds the blueprint for all of this. We will revisit this topic later, in the chapter on 'Information Processing'. For the moment I have focussed on how information is encoded downstream of DNA. DNA transcription and the subsequent step of translation convert genetic information held in nucleotides into proteins, where amino acids are the 'letters' of the language. Where do we go from there? How is this information, held in protein form, translated into biochemical actions and programs? This is an extremely important process in biological information.

Obviously, not all proteins made by cells are informational in nature. Many of them, like collagen, albumin and myosin, are purely structural and carry no informational value. Molecules like insulin and glucagon, on the other hand, are information-rich: they act as hormones inside your body, carrying information between cells. Not all informational molecules are proteins, though. Examples of non-protein informational molecules include adrenaline, thyroid hormones and cortisol (the stress hormone). How are they made? How does information come to be encoded in them? What is the link between the proteins made by gene translation and the synthesis of these non-protein informational molecules? The link is that the proteins formed directly by gene translation catalyse the steps necessary to form the non-protein informational molecules.

The truth is that DNA still holds the information for making them all. Unlike proteins, which are made straight from DNA (using amino acids as raw materials, of course), the non-protein informational molecules require the intermediary step of enzymes and some raw materials that need to be 'biochemically tinkered' with by those enzymes. These raw materials can be simple things like amino acids, cholesterol and fatty acids, and they are themselves devoid of any information.

It is incredible that simple, meaningless biochemical molecules can be 'tinkered' with in a series of steps to generate information-packed molecular messengers. Biochemists would call these biosynthetic pathways, and they are highly regulated processes. It is very much like an assembly line in a factory: you start with the relevant raw materials and dedicated workers fix the parts in a pre-determined order. The final product often bears no resemblance to the individual components and, interestingly, acquires a new capability.

In short, we can say that information coded in DNA can be accessed in three different ways:

Scenario 1 - Use of DNA for hereditary purposes, where DNA is simply duplicated and a copy passed on to a daughter cell. Some modification of the code occurs due to the meiotic exchange between chromosomes from father and mother. This is executable code transfer happening between single cells (gametes).

Scenario 2 - Transcription of DNA information into messenger RNA and then into proteins. The proteins formed can have informational content in themselves (as in the case of hormonal proteins), structural roles (like muscle and connective tissue proteins) or catalytic properties (like enzymes).

Scenario 3 - Enzymes, which are proteins made in scenario 2, can further act on biochemical precursors for energy generation (glucose or fat metabolism) or for making molecules with informational content (e.g. adrenaline, cortisol, dopamine). This process occurs step-wise, with the enzymes catalysing the addition or removal of simple chemical groups like phosphate, methyl, acetyl, hydroxyl or carboxyl. Most biochemical pathways involve simple 'tinkering' like this, and depending on the context the product has informational content.

For a biochemist, the reactions in scenario 3 are just a matter of specialised chemical reactions involving some enzymes and specialised cells. But viewed from the perspective of information, the whole thing seems amazingly different. As an introduction to what I am saying here, I would like to use an example.

Tyrosine is an amino acid that we obtain from our diet. On its own, tyrosine is nothing different from other amino acids and can do very little; I guess we can say that of most biological molecules. Yet tyrosine has defined biological roles, one of which is to contribute to information generation within the cell. The meaningless tyrosine molecule can be converted into a vital information carrier, the thyroid hormone, within the cells of the thyroid gland. Most incredibly, this requires just the addition of another equally meaningless entity, iodine, to tyrosine to form 'iodinated tyrosine'. Iodinated tyrosine is indeed the thyroid hormone! The result is a marked transformation of the information content of tyrosine: it is now able to stimulate the overall metabolism of virtually all body cells. Only the thyroid gland has this unique ability to join tyrosine and iodine in a controlled manner. No other cell in your body can do it, because the enzymes needed for this task are present only in the thyroid gland.

Similarly, for example, the important hormone adrenaline can be formed from a seemingly unrelated amino acid called phenylalanine. Phenylalanine has no meaning by itself, but adrenaline is packed with information. The difference between phenylalanine and adrenaline is three biochemical tinkering steps: one of hydroxyl group addition, one of carboxyl group removal and, finally, one of methyl group addition. These additions happen through the agency of enzymes, in successive steps as in a factory line, as shown in the figure. As you might expect, the enzymes necessary for these steps are present only in the tissue that can make adrenaline: the adrenal medulla, the inner part of a tiny gland that sits on top of each kidney. Interestingly, tyrosine itself can be formed from phenylalanine in a simple step in which a hydroxyl group is added to phenylalanine!

What you see in the schemes described above is what I call 'biochemical tinkering'. Simple precursor molecules are 'tinkered' with by molecular tools called enzymes. These enzymes are themselves proteins, made by translation of DNA! In effect, the enzymes and their actions introduce a third level of hierarchy into the process of information encoding inside the cells. This is needed in all instances where proteins cannot be the direct information carriers.

Before I move on, I want to point out another incredible dimension to the tyrosine story, one that is probably even more amazing than the ability of simple iodination to pack information into tyrosine.

One of the biggest findings in the area of cellular communication in the last couple of decades is the phenomenon of the addition of phosphate to amino acids (existing as part of various proteins) to bring about reversible cell signals. This addition of phosphate is called phosphorylation. Enzymes called kinases, under tight regulation by various cellular molecules, catalyse this simple biochemical step. Most interestingly, the attached phosphate moiety can also be removed easily. This de-phosphorylation is likewise under the regulatory control of enzymes and hormones, as the case may be.

The addition and removal of phosphate groups on tyrosine residues (present in different proteins) is a ubiquitous phenomenon in cellular biochemistry. What is its purpose? The simple answer is that this seemingly unconnected phosphate addition and removal can signal activation or deactivation for a wide variety of enzymes and other proteins in your cells. For example, an enzyme may be inactive until phosphate is added to the tyrosine residues present on it. In other words, the addition of phosphate confers activation status on the enzyme: you effectively turn the enzyme 'on' by attaching a phosphate to it. Similarly, removal of the same phosphate makes the enzyme inactive, turning it 'off'.
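To make the on/off idea concrete, here is a toy sketch in Python. It is a cartoon of the logic only, not real biochemistry, and the method names (kinase_acts, phosphatase_acts) are purely illustrative.

```python
# A toy model of phosphorylation as a reversible on/off switch.
class Enzyme:
    def __init__(self):
        self.phosphorylated = False  # starts in the 'off' state

    def kinase_acts(self):           # a kinase adds a phosphate -> 'on'
        self.phosphorylated = True

    def phosphatase_acts(self):      # a phosphatase removes it -> 'off'
        self.phosphorylated = False

    @property
    def active(self):
        return self.phosphorylated

e = Enzyme()
print(e.active)        # False - no phosphate, enzyme is off
e.kinase_acts()
print(e.active)        # True  - phosphate attached, enzyme is on
e.phosphatase_acts()
print(e.active)        # False - phosphate removed, enzyme is off again
```

Note that in many real enzymes the polarity is reversed (phosphorylation switches them off); the version above follows the example used in the text.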

This reversible 'on/off' signal effectively controls the action of the enzyme concerned. You can turn the enzyme on or shut it off by this mechanism. This is one of the most exciting and hottest areas of biochemical research in cell signalling. The take-home message for now is that the generation of cellular information in your body cells can be achieved by simple chemical steps like iodine and phosphate addition. Did you expect that?

Even the flow of information from the genes seems to be controlled by such simple tactics. In the case of DNA, the addition or removal of methyl or acetyl groups seems to determine the activity status of the genes. Methyl residues are usually attached to the cytosine bases in the DNA, while acetyl groups are added to the amino acids forming the histone proteins. Whether the information in a gene can be accessed or not is controlled by the reversible addition and removal of these simple moieties! These moieties have molecular weights under 100, while DNA is such a complex molecule that its molecular weight runs into millions of kilodaltons! The principle behind the use of methyl and acetyl groups in DNA function is that they provide simple signals for permanently or temporarily inactivating genes.

One of the incredible things about biological information generation and transmission is that simple chemical groups determine the information content, as in metabolically synthesised informational molecules like adrenaline, cortisol, histamine, dopamine and nitric oxide. Amazingly, even proteins with informational content are susceptible to chemical group additions, such as phosphate groups, which determine their activity status. This seems to be a recurrent motif in biochemical information. On this basis, I think we need to add a further scenario (scenario 4) to the world of biological information.

Scenario 4 - Reversible activation/deactivation of informational and non-informational molecules by means of simple chemical steps like addition/removal of groups like phosphate, methyl, acetyl etc.

What is even more interesting is that the principle of 'biochemical tinkering' in the generation and modulation of biological information applies to the brain as well. Information inside the brain resides in neuronal interactions through synaptic hardware. Memory is still a largely ill-understood phenomenon from the point of view of how it is encoded. But besides memory and long-term brain information, there is a whole lot of day-to-day, minute-by-minute neuronal interaction that happens by way of exchange of molecular information. Just like hormones and growth factors, there is a huge class of molecular information carriers called neurotransmitters. These are usually small molecules; typical examples are acetylcholine, GABA, dopamine and serotonin. They are formed inside cells of the nervous system, and their precursors are simple molecules that by themselves have little information content.

For example, GABA is formed from the amino acid glutamate. The biosynthetic step involves the removal of a carboxyl group from glutamate, and you get GABA with all its signalling power in the brain. Glutamate as such does not have this power, but the removal of three atoms (one carbon and two oxygens) from glutamate is enough to encode the information associated with GABA. GABA is a very important inhibitory neurotransmitter in our brain, balancing stimulatory signals against inhibitory ones.

The scenario is no different from what we saw in the case of information 'tinkering' in cellular metabolism. A diverse range of nervous system functions is brought about by neurotransmitters, and interestingly some of them control cognition and therefore affect the uptake of social and environmental information. It is interesting to see that a simple precursor occurring in your diet (the amino acid glutamate) is encoded with information that enables access to information at a different level. This is another exciting dimension of the biological information story.

Before I close this chapter, I thought I would spare some time on a totally different form of communication attempted by man: extending his information coding abilities to aliens! Man has been trying for quite some time now to communicate with intelligent civilisations out there; this effort is called the 'Search for Extraterrestrial Intelligence' (SETI). When you are trying to communicate with aliens, the problem is two-fold: what are you going to convey, and how are you going to convey it? The main problem is the task of encoding information in your message. That information should be something known to the alien in some way or other, and you should encode the message in such a manner that the alien will be able to decipher it.

Man launched Pioneer 10 and 11 in 1972 and 1973 respectively, carrying metal plaques engraved with pictures. The information in the pictures conveyed the position of our planet and what a male and a female of the human species look like. In 1977, the two Voyager spacecraft were launched, carrying a long-playing record containing music and sounds. The people involved realised that the chances of the spacecraft being intercepted, and the message deciphered, were remote, considering the distances involved.

Light or radio signals are more realistic for communication across interstellar space; light is fast. In 1974, the astronomer Frank Drake sent a radio message to outer space to mark the reopening of the Arecibo radio telescope in Puerto Rico. He had only enough time to display our knowledge of prime numbers, represent the structure of DNA and include a picture of a human being.

In the last couple of decades, radio transmissions have been used to send signals to people out there to see if they can read our messages and reply! Radio telescopes dedicated to the task of listening to replies from aliens have been stationed at observation sites.

The question now is: what is the information we are going to make the aliens understand? How are we going to encode this information? If you look at some of the information so far attempted, it includes prime numbers, DNA structure, and the picture of a human being. The assumption is these things might be fundamentally similar everywhere in the universe. We do not know if we are right. We may well be.

However, attempts have been made to create messages that can be decoded and understood by aliens who obviously have no knowledge of humans. Yvan Dutil, an analyst at the Defence Research Establishment Valcartier, near Quebec City, has done exactly this. His aim is to make the information readable by any intelligent being. His approach is the opposite of what cryptographers normally do to hide information in a message: he is an 'anti-cryptographer'!

On 24 May 1999, backed by the American firm _Encounter 2001_, Dutil's 4-hour message was transmitted by a Russian radio telescope. It is now beyond the orbit of Pluto, heading ever further out. Dutil used radio signals shifting back and forth between two frequencies 48 kilohertz apart; one frequency represents 'off' and the other 'on'. This binary language is used to draw pictures and symbols. The noteworthy point here is that we are using the same principle of binary again, even while trying to communicate with aliens! The next point is that we are hoping they will recognise binary as a means of information transfer. Deep inside, we feel any intelligent form of life would have arrived at this principle because it is, perhaps, a fundamental property of complex system communication.

Dutil anticipated that his signals might weaken as time passes on the voyage. He therefore introduced redundancy into the message, so that even if the recipient loses some of the data there is still a good chance they will be able to understand it. I pointed out redundancy in biological information a while ago. He uses patterns in his message that are repeated over and over again, so that the beings will be able to make out the idea.

For example, he draws a 'box' around each page of the message. The box gets distorted if bits of the message are lost, and the aliens can correct for a lost bit by realigning the sides of the box. He has introduced redundancy into the symbols too. Each symbol is a picture 7 bits high and 5 bits wide, representing a single concept: a number, an abstract idea such as temperature, or an object. Each symbol differs from every other symbol by at least 7 bits. He surmises that, by the basic theory of error correction, one should therefore be able to correct an error of up to three bits. The whole message can be reconstructed even if a tenth of it is lost.

Dutil relies on the assumption that pattern recognition will be fundamentally similar in aliens too, just as we do it so effortlessly. That is why he imparts a periodic quality to his messages by using the regular shape of the boxes. But assuming the aliens see the pages and symbols, how will they know our language? The plan is to teach it to them within the body of the message, allowing the message to decipher itself.

In the early days of cryptography, a message always had a predictable beginning. This was naive. Allied intelligence is known to have cracked the Enigma code during the Second World War partly because the Nazis had the habit of using the same few characters at the start of a message.

For anti-encryption, the beginning of a message should be predictable and known to the receiver. Hans Freudenthal, a Professor of Mathematics at the University of Utrecht, described in 1960 a language called _lingua cosmica_, or _Lincos_. _Lincos_ uses symbols defined by the symbols that come before them. So the first symbol necessarily has to be something that needs no definition; it has to be something universal.

Freudenthal believed that it has to be mathematics. Man learnt numbers before he invented the written form of language. The same numerical properties, derived from the same theorems using the same mathematical tools, have been used by successive civilisations, making mathematics the equivalent of a universal culture.

He therefore thinks aliens can count too. He uses the binary system to represent numbers, as well as a series of dots as strange numerals. He thinks the systems will reinforce each other and can be used interchangeably. He thinks the aliens should know prime numbers even if they have a primitive mathematical system. After representing numbers, the second page teaches how to add, followed by how to subtract, multiply, divide and exponentiate. Even the Pythagorean theorem and pi are included; an advanced civilisation should surely know them, because they are among the oldest mathematical entities known to man.

Physics, chemistry and the speed of light are some of the other things included, because there is a good chance they are universal properties of nature anywhere in the universe. A problem here is the usage of units. The hydrogen spectrum is included too, and the mass and charge of the proton, the electron and the hydrogen atom also have the potential to be universal properties. The structure and function of DNA is added to the message, as is a description of our understanding of the solar system.

Douglas Vakoch, a psychologist at the SETI Institute in Mountain View, California, and an expert on the problems of communicating with alien civilisations, thinks the chances of Dutil's message working are not guaranteed. He favours a more direct method of encoding, such as representing the hydrogen atom by transmitting the frequencies corresponding to its emission spectrum. He agrees, however, that combining this with Dutil's method could also work by offering redundancy.

The point I am trying to highlight is that redundancy is felt to be a key component of information transfer. I am sure this applies to all forms of cellular communication too. Redundancy in the information management of biological systems is well known; it prevents organs and metabolic programs from failing because of any isolated failure of one channel of information flow.

For example, if your body cells needed glucose as fuel, there are many ways you can get it. You can eat, to supply the glucose. Or, you can release glucose by breaking down the glycogen stored in your muscle and liver cells. Another option is to use other raw materials to make glucose by means of a separate metabolic program. The last resort is to use alternative fuels instead of glucose, like fats and ketones.
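As a playful aside, the body's fallback between fuel sources behaves rather like a program trying redundant channels in order. The sketch below is purely illustrative; the function names and the simulated 'fasting' failure are invented for the example:

```python
def glucose_from_diet():
    # Fails when no food has been eaten recently (simulated here).
    raise RuntimeError("fasting: no dietary glucose")

def glucose_from_glycogen():
    return "glucose (from glycogen stores)"

def glucose_from_gluconeogenesis():
    return "glucose (made from other raw materials)"

def alternative_fuels():
    return "fats and ketones"

# Redundant channels, tried in order; any single failure is tolerated.
channels = [glucose_from_diet, glucose_from_glycogen,
            glucose_from_gluconeogenesis, alternative_fuels]

def get_fuel():
    for channel in channels:
        try:
            return channel()
        except RuntimeError:
            continue  # fall back to the next channel
    raise RuntimeError("all channels failed")

print(get_fuel())  # the glycogen channel answers when diet fails
```

The system as a whole keeps working even though one channel has failed, which is exactly what redundancy buys.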

# 5. BIOLOGICAL INFORMATION - DIVERSE AND UNIQUE

It is time to dwell on some fascinating aspects of biological information. We all know that the blueprint for all biological information resides in the form of nucleotide bases in DNA. It is, however, not the actual end-form of the information, if you know what I mean. It is 'transduced' into an information form that relies on amino acids as units of information. One may ask why. Why could DNA itself not be used to transfer the information? This is a fundamentally important question that needs to be answered.

An attempt to answer the above question brings clarity to the understanding that DNA's information content is not used for hereditary purposes only. There are a whole lot of day-to-day operations inside our body that need access to gene programs held in the DNA. The answer also hinges on another question: is the purpose of DNA access single-cell use only (as in the case of a sperm, ovum or fertilised zygote), or multi-cellular communication of the kind seen in day-to-day metabolic operations in our body?

The answer to this question is obvious. We do have to access DNA information on a daily basis. There are lots of metabolic activities that require expression of many different genes. These metabolic activities may vary in duration and also in the time needed for onset. Some activities may need a rapid cellular response, whereas for certain others there is no harm even if the response arrives in hours or days. If cells depended on fresh rounds of gene expression every time a response was required, especially where a rapid response is needed, there would inevitably be a delay. For instance, one cannot wait hours for the cells to manufacture adrenaline in an emergency. You need adrenaline in a flash. What is the solution to this problem?

The solution is to have pre-formed gene products for such acute responses. In other words, the protein products of such acute genes can be made in certain quantities and kept in store for immediate use. They can be kept circulating in the blood stream and accessed by cells as soon as they are needed. When the supply runs out there can always be more synthesis. Most, if not all, vital informational molecules are circulating in our blood stream ready for use. The concentrations of these molecules are known with certainty in healthy subjects and can offer diagnostic information in situations where the levels are low or high. Doctors often order lab tests to measure the concentrations of many different biomolecules simply because they know that a normal human being will have well-defined normal ranges for these molecules.

Seen this way, the conversion of DNA information into protein form may not seem so unnecessary after all. From the perspective of information transfer it allows the cells to save time. In fact, it is known that for many genes messenger RNA transcripts are produced in advance and translated (into proteins) only when needed. This is similar to the way we stock food and other items in some quantity in our fridge. We do not go to the supermarket each time we need something. In this way a ready supply of messenger RNA acts like our own home stock of food and can save a bit of time. It should be faster than having to transcribe the gene from scratch. Even the supermarkets keep a certain amount of stock in each branch store. They do not order from suppliers only when the customers are already waiting at the till!

In fact, cells employ various strategies to bring speed to communication. As an analogy I can compare these approaches to the way our computers handle the issue of speed. A major task of an operating system is to manage memory. Each process should have enough memory to execute, and a processor can only access one memory location at a time.

The types of memory available in a computer range from disc storage (slowest) through RAM (main memory) to high-speed cache. High-speed cache allows fast access to memory, though in small quantities. Cache controllers predict which pieces of data the central processing unit will need next and pull them from main memory to speed up system performance.
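For the computing-minded reader, the disc/RAM/cache hierarchy can be caricatured in a few lines of Python. The latency figures are arbitrary illustrative units, not real hardware numbers, and data is simply promoted one tier on each use:

```python
# Hypothetical access costs in arbitrary units (not real hardware figures).
LATENCY = {"cache": 1, "ram": 100, "disc": 100_000}

class MemoryHierarchy:
    def __init__(self, disc):
        self.disc = dict(disc)   # slowest tier, holds everything
        self.ram = {}
        self.cache = {}

    def read(self, key):
        """Return (value, cost); promote the data one tier on each use."""
        if key in self.cache:
            return self.cache[key], LATENCY["cache"]
        if key in self.ram:
            self.cache[key] = self.ram[key]   # promote RAM -> cache
            return self.ram[key], LATENCY["ram"]
        value = self.disc[key]
        self.ram[key] = value                 # promote disc -> RAM
        return value, LATENCY["disc"]

mem = MemoryHierarchy({"insulin": "hormone data"})
_, first = mem.read("insulin")    # cold read: served from disc
_, second = mem.read("insulin")   # now served from RAM
_, third = mem.read("insulin")    # now served from cache
print(first, second, third)       # cost falls with each repeated access
```

Repeated access to the same item gets steadily cheaper, which is the whole rationale of caching.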

Relative to cell function, the main memory would be the genes. They are slow to transcribe, and it can take time for the products of gene expression to be formed. If you want insulin for some quick action it does not make much sense to start decoding the insulin gene and produce more of it in the ribosomal factories. Instead, it is much quicker to have some insulin floating around in the blood stream in a readily usable form. It is like having the high-speed cache in your computer's memory.

I suppose you can say the same for a number of other cell functions. Cells actually achieve high-speed cache-like immediate access to gene memory in a number of different ways. Sometimes it is a matter of having the messenger RNAs for specific proteins made in some quantity, saving the time needed for the gene-decoding step. Or it can be even better, as when already-formed proteins are reversibly modified by phosphorylation/dephosphorylation to quickly change their activity status. This is a far quicker way of achieving the required function, as it does not even require the protein-synthesis step. This is a unique capability in the biological information world. The proteins are already synthesised after the laborious gene-decoding steps, but are kept in an inactive form until required. The activation step requires the simple attachment of a chemical group like phosphate to one of the constituent amino acids, such as tyrosine, present on the protein. This changes the activity status of the protein reversibly. How ingenious!
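The phosphorylation switch can be caricatured in code: the expensive step (synthesis) happens once, and activity is thereafter toggled cheaply and reversibly. A toy sketch, with invented class and method names:

```python
class Protein:
    """A pre-formed protein whose activity is toggled by phosphorylation.

    The synthesis cost is paid once, up front; switching activity on
    and off afterwards is cheap and fully reversible.
    """
    def __init__(self, name):
        self.name = name
        self.phosphorylated = False   # stored in the inactive form

    @property
    def active(self):
        return self.phosphorylated

    def phosphorylate(self):          # a kinase attaches a phosphate group
        self.phosphorylated = True

    def dephosphorylate(self):        # a phosphatase removes it
        self.phosphorylated = False

enzyme = Protein("tyrosine-containing enzyme")
print(enzyme.active)      # False: synthesised but kept inactive
enzyme.phosphorylate()
print(enzyme.active)      # True: activated in one quick step
enzyme.dephosphorylate()
print(enzyme.active)      # False again: the switch is reversible
```

No new synthesis happens anywhere in the activation cycle, which is the point of the trick.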

It is worth pointing out at this stage that for the brain the storage capacity resides in memory. It is well known now that memory can be long-term or short-term. Long-term memory is said to result from lasting changes in synaptic connections, whereas short-term memory is not. Just as in a computer, our brain's memory is largely unused. The RAM in a computer is similar in that not all of its capacity is used all the time. You only use the amount of memory needed to run your required programs. In the case of the brain it is progressively more difficult to access very long-term memory, whereas short-term memory can be accessed with ease. It is almost as if short-term memory is the high-speed cache here.

In the nervous system, speed is of the essence. The last thing you want in neural communication is slow, signal-triggered synthesis of the neurotransmitters. How slow would the process be if neurotransmitters like adrenaline, dopamine or acetylcholine were formed only after the signal had arrived?

The difference between protein-based information carriers and non-protein-based carriers (like steroid hormones, and neurotransmitters like adrenaline and dopamine) is that the protein-based information molecules are formed directly by the gene-decoding (transcription) and protein-synthesis (translation) steps. For the non-protein-based carriers it is not like this. They need a further biochemical synthetic pathway in which additional steps are required to make them. The coordinated action of a series of enzymes is needed to manufacture them. These enzymes are proteins, which need decoding of the respective genes first. So you make a set of gene products (enzymes), which then act on some precursors to make the non-protein information carrier. There is obviously more complexity here.

A typical biosynthetic pathway for a neurotransmitter or a steroid hormone may involve 5-10 individual enzymes acting in concert. Each enzyme is a protein decoded from its respective gene. So, if you wanted adrenaline to be released, you would first have to decode these enzyme genes, and the enzymes would then need to act on the amino acid phenylalanine or tyrosine to produce adrenaline. Wouldn't it take forever? It would take hours at least. Instead, it makes sense for the nervous system to have pre-formed adrenaline stored in nerve endings. This is the high-speed cache equivalent.

Typically, neurotransmitters stored in nerve endings are held inside granules, which release the stored neurotransmitter at the right stimulus. The released neurotransmitter has to travel only about 100 nm to reach its target cell, a process that takes less than a millisecond. Interestingly, nerve signals using neurotransmitters can be controlled by modulating the quantity of the neurotransmitter that remains at the nerve ending. After a stimulus the neurotransmitters are usually removed from the vicinity of the nerve ending by a re-uptake process. The re-uptake confines the neurotransmitter inside the granules again, which not only arrests the signal but also enables re-use of the information at the next stimulus. An example of such a re-uptake mechanism can be seen with serotonin signalling. Where such re-uptake is not possible, the neurotransmitters can be enzymatically degraded, thereby destroying the information. This is seen in the case of transmitters like acetylcholine.
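The two modes of signal termination, reuptake for reuse versus enzymatic destruction, can be sketched as a toy model. All quantities and function names here are invented for illustration:

```python
def fire(granule_store, cleft, amount, terminate):
    """Release transmitter into the cleft, then terminate the signal."""
    released = min(amount, granule_store)
    granule_store -= released
    cleft += released
    return terminate(granule_store, cleft)

def reuptake(store, cleft):
    # e.g. serotonin: the transmitter returns to the granules for reuse
    return store + cleft, 0

def degrade(store, cleft):
    # e.g. acetylcholine: the transmitter is enzymatically destroyed
    return store, 0

store, cleft = fire(100, 0, 30, reuptake)
print(store, cleft)   # 100 0: the information is conserved for reuse

store, cleft = fire(100, 0, 30, degrade)
print(store, cleft)   # 70 0: the information is destroyed
```

Either way the cleft is emptied and the signal stops; the difference is whether the information molecule survives for the next stimulus.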

I suppose the speed of response expected determines the biological mechanisms behind the information transfer. Nervous systems evolved primarily for speed of response and electrical impulses can travel at the rate of 100 metres per second. So, one can expect an impulse travelling from one end of the body to the other in a fraction of a second via the nerves. At the business end, where this electrical impulse has to be converted to a chemical form, the release of neurotransmitters happens at the nerve endings (synaptic clefts or nerve-muscle junctions as the case may be).

The chemical form of the signal only has to travel a very short distance, typically about 100 nm. Because the chemical signal needs to act on a nearby target, there is a localised accumulation of the neurotransmitters, unlike the case of blood-borne hormones, which need to travel a long distance in the 'blood sea'. The hormones get diluted in the volume of blood present, and therefore you only see minuscule quantities of hormones circulating in ready-made form. Consequently, the cells that respond to hormonal signals need very sensitive receptors to decipher the information. This is not the case with neurotransmitters, because the business ends of the nerves are exposed to comparatively large quantities of them, and therefore the receiving ends can afford a lower affinity for these signals. The affinity refers to the tightness of binding between the neurotransmitter and its receptor. If the binding is too tight, terminating the signal becomes a problem. The loose affinity serves well in making neural signalling short and crisp.

Another point to be noted is that chemical signals in the form of neurotransmitters are generated from simple amino acids in most, if not all, cases. Glutamate (used as glutamate itself or converted to gamma-aminobutyric acid, GABA), glycine, tryptophan (for making 5-hydroxytryptamine, also known as serotonin) and tyrosine (for making adrenaline, noradrenaline and dopamine) are the most common amino acids used by our body to make vital neurotransmitters. The advantage of such small molecules in neural signalling is that they can be formed easily, and their small size allows their entry into and exit from tight inter-cellular locations within the nervous system. Unlike proteins, these small molecules do not need sophisticated intracellular signalling pathways or nuclear activation. Instead, the chemical transmitters alter the electrical activity and impulse generation within the nerve cells for forward conduction. I guess this is the fundamental difference between signalling within the brain and outside the brain. The nature of the primary messenger is different, and so are the second-messenger systems that carry the signal into the cells.

The above discussion also points to an important distinction one can make in terms of information, both biological and physical. That is, there are two fundamentally different types of information, namely executable code and non-executable data. DNA is executable code: it can program a cell. Proteins are non-executable in the sense that they do not reprogram the cell, but act as effectors for the executable code by selecting a biological program from existing capabilities. For example, insulin is a protein molecule. It holds the biological information necessary for cells to activate their carbohydrate and fat metabolism. But this is a biological function the cells are already capable of; insulin is required only to signal the start or end of the program, as the case may be, in cells that can respond to it by way of insulin receptors.
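The executable-code/data distinction can be made concrete with a small sketch: a hormone-like signal is mere data that selects from a cell's existing programs, while executable code adds a program that was not there before. The class and program names are invented for illustration:

```python
class Cell:
    def __init__(self):
        # Existing capabilities, fixed by the cell's "genome":
        self.programs = {
            "start_glucose_uptake": lambda: "glucose uptake ON",
            "stop_glucose_uptake":  lambda: "glucose uptake OFF",
        }

    def receive_signal(self, signal):
        """A hormone-like signal is just data: it can only select a
        program the cell already has; it cannot define a new one."""
        program = self.programs.get(signal)
        return program() if program else "signal ignored"

    def receive_code(self, name, code):
        """Executable code re-programs the cell: a new capability
        appears that was not there before (as with bacterial DNA)."""
        self.programs[name] = code

cell = Cell()
print(cell.receive_signal("start_glucose_uptake"))  # existing program runs
print(cell.receive_signal("resist_antibiotic"))     # ignored: no such program
cell.receive_code("resist_antibiotic", lambda: "antibiotic degraded")
print(cell.receive_signal("resist_antibiotic"))     # the cell is re-programmed
```

Data can at most switch existing behaviour on or off; code changes what behaviours exist, which is exactly why its transfer is so much more consequential.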

In other words, proteins bring about cell-specific responses and are therefore ideal for multi-cellular communication. The difference with DNA is that it works very well as the information carrier in single-cell exchange, such as that which happens between a sperm and an ovum, or in the case of bacteria. Essentially, it re-programs the genetic repertoire of the offspring by way of paternal and maternal gene exchange during meiosis. In a multi-cellular setting one cannot allow re-programming on a daily basis, because it would cause unpredictable effects.

Secondly, the DNA content is the same in all cells anyway. There is no need for transfer of DNA between body cells. The body cells differentially access the DNA content, which accounts for the structural and functional differences between cell types in the body.

The other implication of DNA as an information carrier for cell-cell communication in multi-cellular organisms is that it would alter the tissue-differentiation status of the cells concerned. Our body has a large number of cell types, each with unique functional or structural capabilities. The difference between cell functions is not due to differences in the information content of these cells, but to differential use of the common genetic content. It would amaze anybody to be told that all cells in our body have exactly the same genetic complement. They all receive the same amount and type of genes. The trick underlying the differences in function lies in the ability to switch on or off varying combinations of genes in different cells. If you looked at a muscle cell, it would have muscle-cell-specific genes active in it. A liver cell would have a different set of genes active, and so on. In the interests of the body as a whole it is vitally important for cells to maintain their individuality, because it enables specialisation of labour and also keeps gene decoding at manageable levels, compared with the cells having to decode all 30,000 or so available genes. DNA as a carrier of information between cells would ruin this set-up, because by nature DNA can fundamentally re-program the cell by making genes available for activation inside a cell where they should not be.

There are other problems with the use of DNA as a multi-cellular information carrier. The physical form of DNA is unwieldy for transfer between cells. Its long, chain-like structure, with no compact folding, is no good for the external surface of cells to interact with. You need something more compact, so that it can fit neatly into a cell-surface receptor. This is perfectly possible for proteins, thanks to their folding patterns and the associated stability. The receptor-recognition domains on protein surfaces are made of characteristic amino acids and, once bound to the receptor, can start the signalling process in unique ways. The problem with DNA is that it is a string of raw nucleotides, fundamentally unstable, susceptible to degradation by enzymes and, more importantly, in need of a complex process of decoding before its information content can be released. This decoding is possible only with the help of molecules and agents inside the nucleus, which are not available outside the cell. Imagine having DNA-decoding machinery on the cell surface! Another problem with between-cell transfer of raw DNA is that internalisation of DNA is very difficult with its thread-like structure. A number of informational molecules in our body must first be allowed into the cells before they can act, and this may not be feasible for DNA.

DNA, as said above, is fine for single-cell information management. Unlike multi-cellular life forms, microbes (both bacteria and viruses) rely on DNA for cell communication as well as for hereditary information transfer. This is fine for them because they are single-celled creatures. Conjugation is the process whereby bacterial DNA is actually injected into another bacterium. Bacteria use DNA to transfer important survival genes to fellow bacteria. Antibiotic-resistance genes are transferred between bacteria with DNA as the information carrier. These new genes provide executable code for the recipient bacteria, which helps them to degrade the antibiotics. Rather than starting from scratch, antibiotic-resistance genes transferred between bacteria act almost like the updates or patches we get for our software: you continue to use the same programs, with the added update. Antibiotic resistance is a menace for the medical community because of the rising number of resistant strains and the consequent severe infections. One may ask a hypothetical question as to why bacteria cannot use a protein or some other information carrier instead of DNA. The answer is that the intent here is a sustainable, population-level change in bacterial capability to deal with the antibiotic scourge. The best way to do this is to change the genetic make-up by re-programming. This then becomes available to bacterial offspring as well. A protein-based transfer of this capability would obviously not be transmitted to the next generation and would only be a temporary, short-term solution. For metabolic programs that is fine, but not for capabilities needing hereditary transmission.

This point illustrates another fascinating aspect of biological information. That is, the way life systems handle the information management of immediate issues is different from that of long-term ones. After all, the environment in which life systems live is never constant. This applies to the life system as a whole and its external environment, as well as to individual cells bathed in their internal environment. Cells need to come up with immediate responses to short-term environmental challenges. Their strategy is different for population-level, long-term problems. I suppose it is a bit like sending out internal correspondence to your office staff on handling day-to-day problems (short-term), as opposed to re-training the staff to add new capabilities to the company (long-term). You cannot do this re-training by way of quick e-mails or letters, can you? You need more formal approaches to transferring this knowledge. Once trained, the staff retain the new knowledge for long periods. They are re-programmed.

There have been medical research attempts to treat genetic diseases by means of gene therapy. These attempts are yet to bear fruit despite the initial excitement. In fact, when the human genome sequencing projects gave us the 'blueprint' of human life, everyone thought this would forever change medical science, because we would know the genetic basis of diseases and perhaps how to treat them all with gene therapy. The reality is that we are yet to see any of this. The problem is that gene therapies aim to transfer information in the form of DNA, as executable code, into somatic cells. I said earlier that only germ cells do that, during reproduction. Somatic cells do not do DNA communication. Therefore, we need 'carriers' that can take the 'information cargo' into the somatic cells.

Getting the DNA for the defective genes into the somatic cells is attempted by means of a carrier such as a virus, which has a natural ability to transfer DNA across cell membranes. The genes for 'upload' into human cells may be of various types, depending on the disease we are trying to cure. Adenosine deaminase was one such gene, attempted as a cure for patients whose deficiency of the enzyme led them to have frequent infections. There have been attempts to transfer genes associated with inherited metabolic diseases and even cancer. Genetic vaccines are being researched by pharmaceutical companies trying to deliver certain genes into cancer cells in the hope of preventing cancer growth. We hope to re-program cells using these gene-therapy approaches, but the fundamental problems are the unpredictable effects of introducing the genes and the low success rate of integrating the DNA into the genome.

Viruses are naturally capable of using DNA or RNA as a tool to infect host cells. The effects of a viral infection on the host can vary from the simple common cold to death (as with the AIDS, influenza or Ebola viruses). The injection of viral DNA and the consequent devastation of the host cell is a striking demonstration of executable code as a carrier of information.

Steve Burbeck, an independent IT consultant, has authored a paper, ' _Complexity and the Evolution of Computing: Biological Principles for Managing Evolving Systems_ ', which is available at his homepage. The theme of the paper is that computers collaborate on the Internet in much the way cells collaborate in multi-cellular organisms. He argues that computer scientists should learn from the way multi-cellular systems have evolved their communication tools. The basis of his argument is that the Internet/Web today involves hundreds of millions of computers, and there should be a better way of enabling communication between them. He calls it multi-cellular computing. The challenges of communication and collaboration between networked computers are similar to those between cells in a multi-cellular organism.

About 20 years ago only a few computers communicated directly with others, but now the situation is very different. Burbeck likens the evolution of multi-cellular computing from single-cell computing to the evolution of multi-cellular life forms from single-celled ones. I think that is a piece of beautiful thinking. He says that specialisation tends to work against executable-code transfer. These days computing machines have become specialised: you see PCs, PDAs, cell phones (today's cell phones contain computers more powerful than a very high-powered computer of 15-20 years ago), web cams, bar-code readers, wireless routers, microphones for VoIP telephony, credit-card readers and so on, all on the Net. Even household appliances have web interfaces, so that you can switch on your microwave or coffeemaker 10 minutes before you arrive home!

Burbeck claims that the complexity of the machines in the network and the complexity of their interactions provide ever-growing and unanticipated opportunities for hackers. Testimony to this, he points out, is the range of IT specialties that have sprung up in the last decade or so. Computing epidemiologists seek to identify new viral outbreaks before too much damage occurs; computing pathologists dissect new computer viruses and worms to see how they work. IT professionals spend a lot of resources updating virus-detection software, modifying firewalls, updating spam filters and so on. Burbeck rightly argues that this is much like the world of multi-cellular life, where viruses and bacteria constantly vie for access to multi-cellular organisms in order to exploit them. The multi-cellular organisms, in turn, evolve a number of immune mechanisms to identify the microbes and prevent them from causing damage.

Burbeck's argument is that the specialisation of computing machines is another reason why executable-code transfer (like DNA transfer in life systems) is taboo. Because each type of computing machine functions in a very different context, using different code, it simply makes little sense to base multi-cellular computing on transmitting code, says Burbeck. However, Microsoft's acknowledged ambition to have Windows functionality, Application Program Interfaces (APIs) and proprietary protocols in every server, PC, cell phone, PDA, MP3 player and game box is the very opposite of specialisation, and creates a 'Windows monoculture' in which all machines are similar. A rich ecology of third-party applications relies on the Windows marketplace. Burbeck feels this is a ripe entry point for worms and infections because, once you infect one machine, spread of the virus is easy. This is like community outbreaks of infections such as cholera, plague and AIDS. It is so much easier for the microbes to hit everybody. If many people in the community have the right immunity they can fight off the infections: this is called herd immunity. But such infections can effectively wipe out the majority of a population caught off guard. History is replete with such instances.

Specialisation is nevertheless inevitable, despite market forces, for simple technical reasons. Routers (specialised for input/output and queries), web servers (small transactions, throughput), cluster computers (parallel operation and more processing power), portable devices like PDAs, cell phones and MP3 players (low power and long battery life), and embedded devices such as those in cars (reliability, durability and precise real-time control) are all specialised computing machines. They have incompatible requirements and therefore need to be specialised. Secondly, they avoid excess generality, thereby reducing the cost and size of the software engineering needed for development and maintenance. Most importantly, because a specialised system supports fewer functions, it is less exposed to viruses, spyware and the like.

Burbeck says that general-purpose end-user PCs are the first to be infected with such viruses. Specialisation of machines discourages executable-code transfer for inter-machine communication, because each machine works in a very different context. But a 'Windows monoculture' provides a common context among machines, with its common APIs. This allows meaningful, but dangerous, executable-code transfers. Code transfer is, however, losing popularity, and a new trend of Service-Oriented Architectures (SOA) and Web services is gaining ground. The point of SOA is that specialisation is possible in multi-cellular computing, because a specialised computer can rely on other specialised computers for services it does not itself provide. This is identical to the set-up inside multi-cellular life forms, where not every cell needs to do everything. Or it may be more appropriate to compare them to biofilms, the federal organisation of bacteria living in a community.

To sum up, the take-home message is that multi-cellular interacting units in communication with each other have to distinguish between executable code and data. Executable code (in the case of life forms, DNA) is no good for transferring information between cells, but perfectly fine for single-unit information management.

In a way it is not surprising that biological information is carried by biochemical changes. How else could it be done? But what emerges from careful analysis is the striking fact that our cells use a hierarchical information cascade, starting obviously with DNA. The difference between information in DNA and non-DNA information in the cells is that the non-DNA information (such as protein and non-protein information molecules) has a short, limited life span. Non-DNA information does not have the longevity of DNA. This is a rather intriguing facet of biological information. Protein hormones are degraded after a period of time, which is a way of regulating their action and destroying the information they carry. What would be the point of continuous action of these hormones? A hormone like adrenaline, for example, has a half-life of just minutes, meaning it is degraded by enzymes dedicated to that purpose. Non-DNA information is 'perishable'. There are, of course, differences in the life spans of these molecular messages. Another way of looking at it is to consider them 'programmed to self-destruct'.
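The 'perishable' nature of such messages follows simple exponential decay. As a worked illustration (the two-minute half-life below is an assumed round figure for the calculation, not a measured value):

```python
def remaining(initial, half_life_min, elapsed_min):
    """Exponential decay: amount left after a given time,
    halving once per half-life."""
    return initial * 0.5 ** (elapsed_min / half_life_min)

# Starting from 100 units with an assumed 2-minute half-life:
for t in [0, 2, 4, 10]:
    print(t, "min:", remaining(100.0, 2.0, t), "units")
```

After five half-lives barely 3% of the message survives, which is why a signal with a half-life of minutes is effectively erased within a quarter of an hour.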

Neurotransmitters released at nerve endings are removed very quickly. Neurotransmitters are responsible for transmitting information across the synaptic gap between neurones. They are stored in synaptic vesicles. When action potentials are conducted down an axon, synaptic vesicles attach to the pre-synaptic membrane, break open and spill neurotransmitter into the synaptic cleft. Neurotransmitters in the cleft attach to postsynaptic receptor sites and trigger an action potential in the postsynaptic membrane. At the end of the stimulus, the transmitters detach from the receptors and are removed either by diffusion away from the site, by specific enzymatic degradation (e.g. acetylcholine and its breakdown enzyme, acetylcholinesterase) or by reuptake into vesicles (for reuse) at the synaptic membrane. The time span of action of these neurochemical signals can be only seconds, or even milliseconds! The signal duration can be prolonged if the life span of the neurotransmitter can be prolonged. Inhibition of the enzymes that degrade the neurotransmitters is one approach exploited by pharmaceutical companies. The other approach is to prevent reuptake of the neurotransmitter at the synaptic interface. By doing so, one prolongs the time the chemical signal stays at the point of information transmission. A medical exploitation of the latter approach can be seen with the serotonin-reuptake inhibitors used widely in medical practice for the treatment of depression. An example of the former approach is the monoamine-oxidase inhibitors, which prevent the degradation of neurotransmitters like noradrenaline and serotonin. Such drugs are again used for the treatment of depression.

Generally, water-soluble hormones are removed and destroyed within minutes of entering the blood stream. On the other hand, water-insoluble molecular signals (like steroid hormones and thyroid hormones) are long-lasting. Steroid hormones can persist for hours, and thyroid hormones can even last for days! The difference is that these water-insoluble signals are able to cross the oily membrane barrier and reach intracellular locations to transmit their information. While they are in the blood, they are kept protected from water by binding to carrier proteins. The carrier proteins have two roles: they keep the water-insoluble molecules compatible with the watery blood stream, and they protect the molecular signals from degradation by various enzymes in the blood.

It is not unknown in biology for an extracellular signal to have effects on a target cell that persist long after the signal has disappeared. For example, a protein kinase that is activated by a second messenger (calcium) to phosphorylate itself and other proteins can stay phosphorylated long after the calcium is gone. Another example of a transient signal producing long-lasting effects is seen during organ development and differentiation. A third example is the case of psychological effects produced in response to emotional events. The limbic system in the brain is a set of specialised neuronal groups that mediate emotional responses. The neuronal circuitry in the limbic system is a little unusual in that it can create 'reverberating circuits', whereby a stimulus from one neuron feeds into a sequence of synapses only to end in feed-forward stimulation of the originally firing neurons. This keeps the signal alive. The effect is a prolonged emotional response felt by the person long after the traumatic event has finished. Some of these changes persist for a lifetime.

Interestingly, cells can alter their sensitivity to an extracellular signal by means of desensitising techniques. If a signal persists for too long, the receptors can be down-regulated. That is, their numbers can be decreased by temporary sequestration underneath the cell membrane, or they can be actively degraded. Similarly, when the signal intensity is low (such as when levels of signal molecules are low), the number of receptors on the cell surface can be augmented in order to increase the sensitivity of the signal capture system. I guess this ability to differentially alter a system's ability to respond to signals is unique to biology.
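A minimal sketch of this desensitisation idea, with an invented adjustment rule and invented numbers (real receptor trafficking is far subtler):

```python
# Toy sketch of receptor up/down-regulation: the cell nudges its surface
# receptor count in the direction opposite to recent signal strength.
# The setpoint, step size and starting count are illustrative assumptions.

def adjust_receptors(receptors, signal, setpoint=1.0, step=100):
    """Down-regulate when the signal exceeds the setpoint, up-regulate when below."""
    if signal > setpoint:
        return max(0, receptors - step)   # sequester or degrade some receptors
    elif signal < setpoint:
        return receptors + step           # insert extra receptors
    return receptors

r = 1000
for _ in range(5):                        # a persistently strong signal...
    r = adjust_receptors(r, signal=2.0)
print(r)                                  # ...leaves fewer receptors than we started with
```

Run the same loop with a weak signal and the count climbs instead: the cell tunes its own sensitivity to match the signalling environment.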

In the world of biological information there appears to be a unique ability to locate information effortlessly. This is an absolute requirement for any information system. In the case of the human organism, the DNA operating system is very efficient. Decoding the information contained in the DNA of your cells is no simple task, yet every single time it is done without fail.

Each gene, I said, is like a meaningful sentence. Just as in a book, each sentence should have a clearly defined start and end. You are now reading my book. I will give you a small challenge. Can you locate and read out the sentence containing the word 'silicon'? I am sure you can. If the book has an index at the end, it is relatively easy. If not, maybe you can try to recall in which chapter this word was discussed and go straight to that chapter. Or, alternatively, you can try to associate the meaning of the word silicon with the concepts discussed and figure out the most likely location where this word would occur. As a last resort, you could even blindly flip the pages hoping your eye would meet the word somewhere.

Can you imagine how our cells find the exact location of a gene that has to be quickly decoded to make some vital molecules? Lacking the resourcefulness of the brain, how do the cells manage to do that? Your cell has to locate a single gene from a maze of 30,000 genes, scattered all over the nucleus. This is made even more complicated by the fact that these genes are intermingled, to an enormous degree, with repetitive DNA sequences of unknown function and are in close interaction with proteins called histones. The cell has an unenviable task ahead of it each time it has to find a gene.

Let us say you go to your library hoping to find a book that you are interested in. Depending on its size, the library may contain anything from thousands of books to hundreds of thousands; there are some with even millions of books. Naturally, the amount of information available increases with the size of the library. But no one can read all the books, can they? We restrict our interests to some chosen fields of speciality, so that we use only those books we require for practising our profession. In a way, we behave exactly like our cells. No cell uses all its genes, I said earlier.

Look at your collection of books in the living room. It is indeed a miniature library. Its content is usually limited to a particular area of interest, though often we read all kinds of general interest books too. The information content of the library in your living room is limited but directly suited to your interests and, therefore, you have read it all. I always think a single-celled organism, or a less complex organism, is like your own personal collection of books. It has a limited amount of information, but that information is very focussed in its overall content. In the case of such simple organisms, we find that they use all the information they possess. They use all their genes, unlike our human cells, which have everything but selectively pick only what they want.

Coming to the issue of locating your book of interest in a library, it is simplified by a number of mechanisms. For a start, the librarian will help you locate the book if you care to approach him. At least, he will tell you in which section of the library your book of interest will be found. To help users find books on their own, the sections of the library are usually well sign-posted. You know where to go. To make things even easier, all books are given a number. Interestingly, this system of numbering and filing books is practised in a universally similar manner; librarians all over the world are taught the same coding system. That is all fine for you. But have you ever given a thought as to how our cells locate their genes?
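The library analogy translates naturally into code. A minimal sketch (the titles are invented) contrasting a page-by-page scan with a catalogue lookup:

```python
# The library analogy in code: a linear scan examines every "book" until it
# finds the right one, whereas a catalogue (an index) jumps straight to it.
# The collection and titles here are invented for illustration.

books = [f"book-{i}" for i in range(100_000)]

def find_by_scan(title):
    """Linear scan: cost grows with the size of the collection."""
    for position, book in enumerate(books):
        if book == title:
            return position
    return -1

# Catalogue: one dictionary lookup, regardless of collection size.
catalogue = {title: position for position, title in enumerate(books)}

assert find_by_scan("book-99999") == catalogue["book-99999"] == 99999
```

Both methods find the book; the catalogue just finds it without reading every shelf, which is the essential difference between blind searching and signposted access.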

To make things hard, the DNA in your cell is not present as a loose and open information track. It has the physical appearance of a thread, wound neatly around a core of proteins in a manner identical to the way sewing thread is wound around a spindle. It is the perfect way to store a thread-like structure, be it your sewing thread or DNA. It saves space. It compacts the DNA on the order of 8,000-fold. From the point of view of locating the gene in question, searching for it randomly becomes an even harder task than looking for a needle in a haystack. Just how is the cell going to find what it wants, sifting through the other genes and unwanted DNA stretches like the repetitive sequences?

The DNA decoding enzyme has to scan the DNA at a staggering rate of 1000 bases a second! Because the DNA is wound around a core of proteins, the coiled DNA must be unwound to access specific locations. It has been estimated that the DNA often has to unwind so rapidly that its rate of revolution can exceed that of the wheels of a vehicle travelling at 40 miles an hour! Incredibly, the shear forces are prevented from being transmitted up or down the gene in question by enzymes that act like molecular scissors, snipping the DNA in a reversible fashion! In short, there is a localised unwinding of the DNA.
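The unwinding claim can be checked with back-of-the-envelope arithmetic. Assuming the double helix makes one full turn roughly every 10.5 base pairs and a typical car wheel is about 0.6 m in diameter (both standard ballpark figures, neither taken from the text):

```python
# Back-of-the-envelope check: revolutions per second of unwinding DNA versus
# the wheel of a car at 40 mph. Assumptions: ~10.5 base pairs per helical
# turn, a 0.6 m wheel diameter, 1000 bases read per second.

import math

BASES_PER_SECOND = 1000
BASES_PER_TURN = 10.5
dna_rev_per_sec = BASES_PER_SECOND / BASES_PER_TURN

wheel_diameter_m = 0.6
speed_m_per_s = 40 * 1609.34 / 3600                    # 40 mph in metres/second
wheel_rev_per_sec = speed_m_per_s / (math.pi * wheel_diameter_m)

print(f"DNA unwinding:       {dna_rev_per_sec:.0f} revolutions/second")
print(f"Car wheel at 40 mph: {wheel_rev_per_sec:.0f} revolutions/second")
```

Under these assumptions the DNA spins roughly ten times faster than the car wheel, which is consistent with the comparison quoted above.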

Decoding a gene is a highly regulated process. Information cannot be allowed to flow when there is no need. When there is a need, it has to flow fast. The problem is how to locate the gene of interest quickly and decode it. There are many thousands of other genes that have to be ignored while searching for the wanted gene. Believe it or not, there are at least 50 different types of proteins involved in helping the decoding enzyme find the right gene and decode it. Remember, these accessory proteins are themselves products of genes and had to be decoded first! It is a long, step-wise process of making the regulatory molecules first and then using them to find the genes. How did the cell find the genes for the regulatory molecules? My head is spinning.

I said a gene is a meaningful sentence. It has to have a start site and an end. The sentence itself is preceded by a characteristic sequence of bases that stands for a start sign. Similarly, the end of the gene is followed by another characteristic sequence of bases that indicates a stop sign. The start sign is usually preceded by further characteristic sequences that act as docking sites for messenger molecules to bind and activate the decoding of specific genes. Ultimately, the whole idea of decoding a gene is to provide the products coded by the gene for cellular use. Naturally, there should be a way for messenger molecules to activate the decoding process by binding directly to a site near the gene, marking it for decoding. The genes have to be sign-posted with biological signals. Indeed, that is what happens. Molecular guides and signs aid the decoding enzyme in its search for the gene: characteristic sequences preceding the gene act as signposts telling the cell where the gene is.
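The start-and-stop signposting can be illustrated with a toy scanner. The ATG and TAA signals below are the genetic code's real start and stop codons, but the sequence itself is invented:

```python
# Toy illustration of "signposted" genes: scan a DNA string for a start
# signal and read up to the following stop signal. ATG/TAA are the real
# start/stop codons; the sequence is made up for the example.

def find_gene(dna, start="ATG", stop="TAA"):
    """Return the stretch from the first start sign to the following stop sign."""
    begin = dna.find(start)
    if begin == -1:
        return None                               # no start sign: nothing to decode
    end = dna.find(stop, begin + len(start))
    if end == -1:
        return None                               # no stop sign after the start
    return dna[begin:end + len(stop)]

sequence = "GGCCTATGCGTACGTTAAGGC"
print(find_gene(sequence))   # ATGCGTACGTTAA
```

Everything before the start sign and after the stop sign is ignored, just as the cell ignores the DNA flanking a gene once the signposts have been read.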

One of the fascinating truths about the gene decoding process is that there is only one common DNA decoding enzyme for all 30,000 genes. How can one type of enzyme decode thousands of different genes? How does it know which gene's information is required at any given moment? The accessory proteins I mentioned a while ago help our cells accomplish this.

In the case of bacteria, there are molecules known as _sigma_ factors that confer gene specificity. Each gene program has a specific _sigma_ factor, which is really just another type of protein. This _sigma_ factor, upon binding to the DNA decoding enzyme, alters its specificity such that it acquires the ability to seek and bind the gene in question! If there are 20 gene programs in a bacterium, it will have 20 different _sigma_ factors but only one type of DNA decoding enzyme. In the case of eukaryotic cells there are many more genes and therefore many more accessory proteins. They are generally referred to as transcription factors, mentioned earlier in the book.
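The sigma-factor idea can be sketched as a simple lookup: one shared enzyme, many specificity factors. The factor names and gene pairings below are illustrative, not a real bacterial inventory:

```python
# Sketch of the sigma-factor idea: the same decoding enzyme is re-targeted
# by whichever specificity factor is bound to it. The pairings here are
# invented for illustration.

SIGMA_TARGETS = {
    "sigma-A": "housekeeping gene",
    "sigma-H": "heat-shock gene",
    "sigma-F": "flagellum gene",
}

def decode(sigma_factor):
    """One shared enzyme; the bound sigma factor decides which gene it finds."""
    target = SIGMA_TARGETS[sigma_factor]
    return f"decoding {target}"

print(decode("sigma-H"))   # decoding heat-shock gene
```

Note the design: the enzyme itself never changes; swapping the small factor is enough to redirect the whole machine to a different gene program.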

I said in the previous chapter that not all genes are active in all human cells. Information contained in many of the genes is shut off for all practical purposes. They cannot be accessed. They are 'classified information' for the 'eyes of authorised cells' only. This access seems to be determined by the ability of a cell to unwind and expose the genes from the spindle structure. If a gene is embedded deep inside, it has to be exposed some way. This is accomplished by enzymes, which cut the DNA in a localised, controlled manner to expose the underlying genes. This sounds like quite a crude way of controlling gene information flow, but experiments have shown this is what happens. Literally, the DNA is reversibly torn apart, and this ability to tear the DNA apart is largely responsible for determining whether a gene will be decoded or not. Whether a stretch of DNA is susceptible to the action of the DNA-cleaving enzymes is determined by the presence or absence of methyl groups attached to the cytosine 'alphabets' and of acetyl groups on the histone proteins.

I said cytosine is one of the DNA alphabets. It does not normally have a methyl group sticking out on top. When one is present, it is a signal for the cellular information machinery to leave that stretch alone. It is not meant for access. From a mechanistic point of view, a methyl group jutting out of a base is a sort of obstruction. It prevents the DNA-cleaving enzyme from getting near that stretch of DNA. After all, even enzymes need to position themselves comfortably before they can do their job!

In a nutshell, marking stretches of DNA with methyl groups singles them out as inaccessible information for some types of cells. The same stretch of DNA in another cell type will be free of methyl groups! Confidentiality of information achieved by manipulating simple chemical moieties! How ingenious! In fact, our cellular metabolism is full of such modifications of molecules. Each modification alters their information content and, consequently, their function too. I referred to this concept in the chapter 'Encoding information'.
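The 'keep out' marking can be sketched as a toy access check, with invented positions (real methylation patterns are vastly more intricate):

```python
# Toy model of methyl marks as "keep out" signs: a gene is readable only if
# no position within its span carries a mark. Positions are illustrative.

def is_accessible(gene_span, methylated_positions):
    """A gene is open for decoding only if its span carries no methyl marks."""
    start, end = gene_span
    return not any(start <= p < end for p in methylated_positions)

liver_cell_marks = {120, 450}    # this cell type has marked the region 100-200
skin_cell_marks = set()          # the same DNA, unmarked in this cell type

gene = (100, 200)
print(is_accessible(gene, liver_cell_marks))  # False: classified information
print(is_accessible(gene, skin_cell_marks))   # True: open for decoding
```

Same DNA, different marks, different access: the chemistry of a few small groups decides which cell may read which gene.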

Incredibly, cells can time their gene activation too. For example, during embryonic development, cells can turn 'on' the genes for crucial molecules that control further growth of the foetus. These molecules are called morphogens, or organisers. They are turned 'on' only during this narrow time window and never after! During the rest of the life span of the organism these genes are kept forever 'off'. How can cells control the activation of genes for a short period and keep them 'off' the rest of the time? The messages involved act transiently to produce long-lasting effects!

There are quite a few temporally regulated genes in our body. One example that springs to my mind is the 'programmed cell death' gene. Technically, biologists call this process apoptosis. All cells in our body are designed to self-destruct at the end of their life span. It is difficult to believe that cells are programmed machines, with self-contained information to turn 'on' their own liquidation at the end of their life span!

Cells in our body differ in their life span. Not all cells live the same length of time. Some cells live only a few days, whereas others live far longer. Cancer biologists are frantically searching for the mechanisms that activate these self-destruction programs, because that would give them a way of selectively activating the cell death genes in cancer cells, in the hope that the cancer cells would self-destruct without the need for any other treatment! It is one of the hottest areas of biomedical research.

# 6. INFORMATION CAPTURE

Communication is an art. At the same time, it is a pure survival tool. Life forms employ various strategies to sense the environment around them all the time. They have to do this in order to detect any changes in their surroundings and adjust accordingly. The acquired information is often transferred to their fellow members if it is of common survival value for the species as a whole. The question is where we draw the line as to what is needed and what is possible.

Information constantly reaches our brains in various sizes and forms. For example, we take in quite a bit of information by talking to people. Even idle and frivolous talk generates a lot of information that competes for memory storage space in the brain. A casual, aimless chat can feed you a lot of information about your friends, colleagues and what is generally happening in town. You may think it is of no value, but in reality it is precisely such information you use to evaluate your social environment. Information that alters your social behaviour is obtained from such gossip. More serious talk at your workplace adds a lot more information to your memory bank.

Information that reaches us as audio signals arrives as sound waves. The nerve carrying this information from the ear to the brain is called the acoustic nerve. It is made up of about 30,000 conducting fibres. The ear-brain system is comparatively slow and can handle up to 50,000 bits/sec.

We see the world during all our waking hours. We capture a hell of a lot of visual signals during this time. Most of them are of no direct consequence to us. Sometimes, we stare straight through some object or person. To somebody watching us as we stare blindly, it might appear we are looking at something. But our mind is a million miles away, and that blind stare is not consciously registering anything. Even during such times, our eyes are sending in visual signals. The eyes stop sending visual data only when we close them, such as when we go to sleep, or die. The rest of the time our eyes are capturing images like surveillance cameras. We pay no special attention to the images at normal times but can get incredibly interested if something is happening.

In today's literate world, we acquire most of our visual information as written material in the form of documents and other hypermedia texts. For a simple organism, all that matters is the visual location of food, mates and predators. For us it is not that simple. The optic nerves carry the information coming in as light from the eyes. We have an optic nerve for each eye, and each optic nerve has about 1.2 million nerve fibres. The eyes do the job of fax machines and television cameras for biosystems. The eye-brain system can process up to 5 million bits of information per second.
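Putting the chapter's own two figures side by side makes the contrast between the channels concrete:

```python
# Comparing the two channel capacities quoted in the text: the eye-brain
# system (~5,000,000 bits/s) versus the ear-brain system (~50,000 bits/s).

EAR_BITS_PER_SEC = 50_000
EYE_BITS_PER_SEC = 5_000_000

ratio = EYE_BITS_PER_SEC / EAR_BITS_PER_SEC
print(f"The visual channel carries about {ratio:.0f}x more data per second.")
```

By these figures the eye-brain channel has roughly a hundred times the capacity of the ear-brain channel.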

When a person is awake, the brain is constantly tuned to its various 'information channels'. Apart from the eye and the ear, there are many other routes through which information from the external world can reach our brains. There are 11 major types of sensory receptors, which constantly inform the brain about what is happening in the environment. These receptors are the 'information windows' through which the brain captures information. They are the equivalent of the sleuths who carry out intelligence operations for a government. There are receptors concerned with vision, hearing, smell, taste, touch-pressure, warmth, cold, pain, acceleration and movement. Another way of looking at the sensory system is to consider it the 'interface' between us and the world. We interact with the world through this interface.

In addition, a large number of receptors relay information about the internal environment of the body. Such information is never allowed to reach the conscious part of our brain because there are no direct neural connections from these receptors to the thinking part of the brain. The unconscious senses include special receptors for body temperature, blood pressure, the pH of the body fluids etc. The sense receptors that pick up information about these chemical states of the body are fully automated to communicate the information to specialised regions at the base of the brain. These basal regions of the brain are not involved in the conscious processing of information. They are chemical automatons.

When, for example, the pH or blood pressure or temperature of our body changes from their steady state levels, the receptors specific for detecting this change are stimulated. The stimulated receptors automatically transmit the chemical information to the nerve cells in the base of the brain. These nerve-control centres work like self-regulating structures and send appropriate commands to the target cells to initiate measures to re-establish the steady state. The conscious brain never comes into the picture.

Let us say the oxygen concentration in your blood drops. It is a state of emergency for the body. We cannot consciously calculate the oxygen concentration in our blood. If someone asks you what the oxygen concentration in your blood is, you do not know the answer. It is a different matter when it is measured with the blood gas analysers used in medical diagnostic labs. Measuring blood levels of oxygen and carbon dioxide is one of the essential diagnostic tests carried out in intensive care wards. But on your own, your brain's conscious space is not equipped either to sense the oxygen concentration or to do anything about it. It is left to dedicated chemical receptors to sense the oxygen levels, just as the medical lab analysers do, and relay the information to the appropriate nerve control centres without involving any conscious action of the brain. This centre despatches commands to the breathing control centre, at the base of the brain, to increase the rate of breathing to meet the demand for oxygen. Try holding your breath for a while and see your breathing rate quicken immediately afterwards. You can see the same thing happening after a bout of running. Your body is able to sense the change in your internal environment, in this case a change in oxygen concentration, and respond appropriately without bothering you.
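The breathing loop just described is a classic negative-feedback controller. Here is a minimal sketch with invented, non-physiological numbers, in which a sensor reads blood oxygen and the control centre raises the breathing rate when oxygen falls below a setpoint:

```python
# Minimal negative-feedback sketch of the breathing control loop.
# The setpoint, gains, intake and consumption figures are all illustrative
# assumptions, not physiology.

def breathing_rate(oxygen, setpoint=95.0, base_rate=15.0, gain=0.5):
    """Proportional control: breathe faster the further oxygen falls below target."""
    return base_rate + gain * (setpoint - oxygen)

def step(oxygen):
    """One loop iteration: sense, command, and let breathing restore oxygen."""
    rate = breathing_rate(oxygen)
    oxygen += 0.2 * rate - 3.0        # intake from breathing minus consumption
    return oxygen, rate

oxygen, rate = 85.0, 15.0             # low oxygen, say after a bout of running
for _ in range(50):
    oxygen, rate = step(oxygen)
print(f"oxygen settles near {oxygen:.1f}, breathing rate near {rate:.1f}")
```

The conscious brain never appears anywhere in the loop: sensing, command and correction all happen automatically, and the variable drifts back to its setpoint on its own.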

If the concentration of oxygen is low over a long period of time, then signals are also sent to the red blood cell production centres in the bone marrow to churn out more of them, because red blood cells are the oxygen tankers of our body. They carry the oxygen cargo to all the cells inside your body. An increase in red blood cell counts and/or an increase in blood haemoglobin concentration are natural adaptations of our body to the challenge of low oxygen availability in the environment. One would typically encounter such low oxygen availability in the air at high altitudes. People living at high altitudes are able to tolerate hypoxia better than people living at sea level. Ethiopia is one such high-altitude country: much of its highland plateau lies more than 2,000 m above sea level. Long-distance running in major athletics championships is generally dominated by Ethiopian runners. One explanation would be their natural tolerance of the falling oxygen concentrations that runners face after a period of running.

When we skip a meal, we feel hungry and distressed. There is a drop in our ability to function efficiently. We cannot concentrate, because our brain cells need glucose as fuel just as your vehicle needs petrol. If you do not find food fast, you become progressively worse. Most often, we avoid going without food for longer than a few hours. Sometimes, we are so busy with our work that we either do not have the time to eat or simply do not realise that it is time for our meal. The body cells view the absence of food molecules in the blood as a threat, similar to the way a nation would view a food crisis such as a famine.

The body has its own way of knowing when there is a food shortage. The capture of this information involves a group of detector cells in a part of the base of the brain called the circumventricular organs. They sense the drop in the level of glucose in your blood. It is a chemical signal that is captured. The body correlates the presence or absence of glucose with food intake because glucose comes along with food. When the glucose levels drop, the detector cells automatically start firing electrical signals, which reach the brain's feeding centre; together with the neighbouring satiety centre, this region regulates food intake in your body. Activation of the feeding centre sends out hunger signals. That is when you start feeling the urge to eat something. The longer you delay eating, the stronger the hunger pangs become. You are forced to seek food. A juicy burger and some fries from McDonald's replenish the glucose levels, and the hunger signals die down. You stop eating as you feel 'satiated'. If you observe your eating habits carefully, there is always a limit to what you eat. It is usually set by a sense of distension of the stomach, which again is sensed by mechanical receptors lining the inner walls of the stomach. Your satiety centre has a role too in stopping you from overeating. This shows that there is a reciprocal flow of information between sensory receptors and their receiver cells.

Blood pressure is another extremely important physiological variable. It keeps fluctuating all the time. When you find someone driving in front of you like a maniac, your blood pressure hits the roof. You want desperately to stop him and give him a piece of your mind. Yet your blood pressure is brought back under control in a few minutes, even though you make no specific effort. You have no idea how you come to feel calmer as your blood pressure normalises.

Blood vessels are like tubes that can stretch and contract. When your blood pressure rises, the vessels are stretched. This is sensed by stretch receptors situated in the walls of the major blood vessels. They have the property of firing electrical discharges when there is too much pressure and, consequently, mechanical stretching. A chemical molecule capable of detecting mechanical changes! This activates a series of mechanisms that help lower the pressure. Once the blood pressure is within normal limits, the receptors return to their basal, unstimulated state. This self-regulatory loop, operated by local information capture systems and effectors, goes awry in people who are hypertensive. They have to turn to blood pressure-lowering medications to do the job.

The blood pressure-lowering medications basically attack the biochemical steps that lead to high blood pressure. It could be a medicine that relaxes smooth muscle cells lining your arteries or could be one that enables you to excrete more salt so that your blood vessels do not retain a lot of water. Both measures ease the pressure inside your vascular system. There are other mechanisms by which blood pressure-lowering medications work as well.

One of the striking things about these unconscious senses is the fact that you were never consciously involved in the processes at all. If the brain had to devote its attention to these changes it would not be possible for it to do anything else. By automating these responses, our body spares the precious powers of the brain for something more important.

If you look at your own society, lower management structures handle a number of routine, predictable jobs. Local managers sort out problems in an organisation because they have direct access to prevailing situations. Maybe the boss at headquarters will come into the picture if there is a need. The Prime Minister may not be aware of how the city council in a particular city sorted out its civic problems. He cannot be bothered about what a company did to improve its business. He cannot afford to pay individual attention to each trivial development in the routine management of organisations within the country. The duties of sorting out administrative tasks in local regions are left to appointed councils and managers. They are able to respond to local changes more readily, based on the available information. In a way, it saves time too.

The next important aspect of these sense receptors is their ability to sense various forms of stimuli. There are different forms of receptors, each specialised in sensing one particular form of information. Information of a sensory nature comes in various forms. Some of it is chemical, like the taste sense, oxygen concentration, pH etc. The receptors meant for these need to be able to react to chemical signals. There are receptors like the touch receptors, pressure receptors and stretch receptors, which have to respond to mechanical changes. The receptors for temperature have to pick up information coming in the form of heat. These receptors must be incredibly versatile biological devices! They act as biosensors. They can transduce energy of various types into one common form - electrical.

The sensory receptors are transducers that can convert mechanical energy (touch-pressure), thermal energy, electromagnetic energy (light) and chemical energy (odour, taste, the oxygen content of blood) into the one form that the brain senses - electrical. They act as transducers of sensory information into a form suitable for conduction. Once at the receiving end, this information is converted back into the original form. Don't you realise that is precisely the way our modern communication tools work?

The receptors are literally the feelers of our body. They seek information. To suit their function, they are usually situated externally. In a way, you can imagine the sensory receptors as the antennae you see on an ant or some tiny insect. If you keep watching them, you find them probing their environment ceaselessly, apparently looking for information about food, their fellow members and their enemies. The tiny hair-like projections act in a manner similar to our radio or TV antennae. Even bacteria have such hair-like projections on the outer surface of their bodies, which help them in navigation and information capture. The bacteria have molecular chemical receptors on their outer membrane just as our cells have receptors. The bacteria use them to sense their food sources. It is indeed surprising that the receptor cells in all the sense organs of all animals on the earth are very similar in overall design. All of them have a tiny mobile hair or flagellum. The flagella of receptor cells are similar not only in design but also in function to our radio and TV antennae!

It is amazing. How do the receptors convert energy of various forms into electrical form before conduction? In most cutaneous sense organs, the receptors are specialised, histologically modified endings of sensory nerve fibres. In the complex sense organs concerned with vision, hearing, equilibrium and taste, they are specialised, separate cells with a synaptic connection to the nerve. A synaptic connection is a form of structural linkage between two excitable cells through which excitation can pass to the next cell. All nerve cells in our nervous system are connected this way.

Basically, there are two types of synapses. In the first type, the impulse is transmitted electrically across a physical connection between the cells: the action potentials in the sense receptors and nerves conduct between adjacent cells through gap junctions. This is a fast way of transmitting impulses. Such electrical synapses are found in visceral smooth muscle, heart muscle etc., where co-ordinated contraction of all the muscle cells must happen at once. In the second type of synapse, a chemical transmitter is released at the end of the synapse, which stimulates the next nerve cell.

I am going to try now to explain in simple terms the mechanism behind the ability of sense receptors to respond to sensory information, and how they transduce this sensory information of various types into a common electrical form. Cells in our body have a negative interior. Inside the outer cell membrane, the cell is electrically negative to the extent of 60-90 mV, depending on the cell type. The outside of the cell is positive. This state of electrical negativity is maintained at considerable energy expense to the body. It is achieved by controlling the movement of positively charged ions, namely sodium and potassium.

In a way, the maintenance of electrical polarity across the cell membrane is the very essence of one of the striking properties of living forms - excitability. Sodium is abundant outside the cell and can flow into the cell, driven by the electrical gradient. If equilibrium were allowed to set in, the cell's interior would become electrically neutral. The cell has a lot of negatively charged ions too, and you may wonder why they cannot move out to maintain electroneutrality. But most of them cannot exit the cell because they are bulky entities like proteins, amino acids and phosphates. It is therefore left to the positively charged ions, sodium and potassium, to do the job.

But in living systems, equilibrium is a bad word. All life processes work to prevent equilibrium from setting in. This applies to all metabolic reactions inside cells too. I said that sodium enters the cell from the outside. These ions are then made to return to their original locations by the active expenditure of precious energy! About 50-70% of all your food energy goes towards this apparently meaningless task of retrieving ions that move about according to electrical principles. Of what use is it?

There is more to it than you suspect. In simple terms, it is what gives life systems their ability to respond to stimuli, the hallmark of life. They call it the property of excitability. Life systems are excitable; non-living systems are not.

I view the property of excitability as the ability to respond to information. An organism, or a cell, responds to a stimulus in an excitable manner. This property is conferred by ionic movement in and out of the cell, enabling the intracellular electrical potential to vary between negative and positive. Keeping the cell electrically negative inside enables it to be switched, by appropriate mechanisms, from the negative to the positive state - two distinct states of existence. That is what you need for a transistor, isn't it? You can switch the system between the two possible states of 'open' or 'shut' by altering its state reversibly. The cell is excited when it becomes positive. It loses its excited state when it resumes its negative state.

This 'positive/negative' switching requires the movement of positively charged ions in and out of the cell. Obviously, the cell needs to activate a signal only when there is a stimulus. There is no point in keeping the receptors 'on' all the time. The receptors should be kept in the 'off', unstimulated state in order to be able to go to the 'on' state when necessary. The receptor cell's interior is deliberately kept negative by pumping out the positively charged sodium that has moved in. This is what I said uses up to 50-70% of our food energy. By returning to the negative state, the cell retains its ability to switch quickly to the positive state by altering sodium movement.

How does a stimulus such as touch, pressure or temperature alter the electrical state of the cell to switch it from negative to positive? The trick is in the membrane's permeability to sodium. When there is a stimulus, the conductivity of the membrane to sodium changes dramatically, increasing by about 5000-fold. There is plenty of sodium outside every cell of our body. Due to the sudden increase in sodium conductance, the cellular interior quickly becomes relatively positive. In the initial stage of this process, the membrane potential moves from -90 mV to about -65 mV or so. This is roughly the threshold level for the generation of an action potential in the nerve. Once this level of depolarisation is achieved, it is taken as an adequate stimulus for a full-blown action potential. The interior of the cell shoots to about +35 mV. This is a short-lived state. The cell reverts to the original resting potential of -90 mV fairly rapidly. The pump I mentioned earlier aids this by pumping the sodium out. If the stimulus persists, the cell's interior turns positive again. If the stimulus is strong, there could be a whole series of such 'negative' and 'positive' alternations. This is the same old 'yes/no' mechanism of depicting information.

I used the term 'action potential' a couple of paragraphs ago. What does it mean? The readers must by now have a fair idea of the reversible changes in the cell's internal electrical potential. The term 'action potential' refers to the explosive surge of the cell's positivity from -65 mV to +35 mV. Until the cell's negativity drops from -90 mV to -65 mV, there is no way of being certain whether this will result in a signal or not. But when it reaches the threshold level of -65 mV, there is a runaway generation of positivity until it peaks at +35 mV. The word 'action' probably symbolises the fact that the action of transmitting information has begun. Plotted on a graph, the sharp surge of the cell's positivity looks like a spike. That spike is what we call an action potential. If there are many succeeding action potentials, the picture looks very spiky.

The frequency of the action potentials can encode information. Each spike itself is all-or-none in size, so it is the spacing of the spikes that is used to convey variations in signal intensity. If the stimulus is weak, you find less frequent potentials; if the stimulus is strong, they are more frequent.
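For the computer-minded reader, this threshold-and-spike behaviour can be caricatured in a few lines of code. This is only a toy sketch: the -90 mV rest, -65 mV threshold and +35 mV peak come from the text above, but the time step, the leak factor and the stimulus values are invented purely for illustration and make no claim to physiological accuracy.

```python
# A toy 'threshold and spike' sketch of the mechanism described above.
# The -90 mV rest, -65 mV threshold and +35 mV peak are from the text;
# the time step, leak factor and stimulus strengths are invented.

REST, THRESHOLD, PEAK = -90.0, -65.0, 35.0

def count_spikes(stimulus_strength, steps=1000):
    """Count how many action potentials a constant stimulus evokes."""
    v = REST
    spikes = 0
    for _ in range(steps):
        v += stimulus_strength        # the stimulus lets sodium in
        v += (REST - v) * 0.1         # the pump drags the cell back to rest
        if v >= THRESHOLD:            # threshold crossed: runaway to +35 mV
            spikes += 1               # every spike is all-or-none in size
            v = REST                  # then a rapid return to rest
    return spikes

# A sub-threshold stimulus never fires at all; a stronger stimulus
# fires more often, not more strongly.
assert count_spikes(0.5) == 0
assert count_spikes(5.0) > count_spikes(3.0) > 0
```

The design mirrors the 'yes/no' theme of the chapter: the only thing the cell varies is how often it says 'yes'.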

Now let us recapitulate the whole process. In response to the stimulus, ionic movement suddenly changes dramatically. There is an action potential. The cell's interior becomes positive. But I do not think I have yet explained how this change in the cell's electrical state actually conveys the message to the nerve that will take it to the brain.

Action potentials, i.e., the cell's altered electrical state, can propagate down a nerve's long structure. After all, don't we know that electricity moves? There is a local circuit of current flow between the positive areas of the membrane and the still-negative areas of the membrane of the same cell. This becomes necessary because the nerve cell can be very long, unlike other cell types. The whole nerve cell does not become uniformly positive throughout its length and breadth; the change has to spread from region to region by conduction, and that takes time. If the stimulus is very weak, the local depolarisation may never reach threshold, and no impulse is conducted all the way to the other end of the nerve, which is the 'business end'.

I said that some receptors are themselves the endings of the nerves that conduct the information. So, if an impulse arises in such a receptor, it will propagate all along the nerve towards the brain. Other receptors are independent cells. They have to establish a separate point of contact with the nerve that will later conduct the information, and the electrical impulse has to cross this point of connection between the receptor and the nerve cell.

Before I move on to another topic, I thought I would bring up the question of why there is so much sodium outside the cells. As you may recall, it is the sudden increase in the movement of sodium from outside the cell to the inside that is responsible for generating the electrical impulse. If there were less sodium outside the cell, could this electrical impulse be generated at all?

The presence of lots of sodium outside the cell is nothing new to living organisms. They originated in an environment exactly like it. We all know that life forms originated in the sea. The sea's salt composition is incredibly similar to the extra-cellular fluid that bathes every cell of our body to this day! Life forms were initially unicellular, and every cell was exposed to this saline environment. When organisms became multi-cellular, the salty water had to get in between the cells, mimicking the salty environment all early life forms were exposed to at their origin. That state still continues today, despite the enormous increase in the complexity of organisms. As I have repeatedly been showing, some motifs are so simple and effective that they are simply retained. The saline nature of the sea was responsible for the evolution of excitability in the primordial cells that were to evolve into life forms. The same kind of ionic movement as happens in the nerve cells of today must have happened in the primordial, prototype organisms, allowing the generation of an impulse in response to a stimulus. How incredible!

In a way, the cells are behaving like batteries. Batteries convert chemical energy into electrical energy. They rely on a positively charged and a negatively charged electrode, with a salt solution in between that bridges the two so that current can flow around the circuit.

The electrolyte solutions commonly used in batteries are ammonium chloride, potassium hydroxide, or sulphuric acid. Living cells use the sodium chloride that bathes every one of them, just as seawater has been used ever since their origin. Batteries channel their electrical energy to power gadgets.

The cells channel the current towards the brain as an electrical impulse. The inside of the cell is the equivalent of the negative electrode and the outside of the cell is the positive end. The sodium pump's job is to keep this polarity operational in the resting state so that the receptors can quickly switch from the 'off' to the 'on' state by allowing the electrical flow. All of us know that we need salt in our diet, and we can face severe problems if we are deficient in it. People stranded in deserts without water and salt die soon. Ever wondered why? It is because of salt loss in the sweat. The person is unable to run their 'cellular batteries'. They lose the vital life property called excitability, which in other words means the ability to respond to sensory signals.
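The voltage of this 'cellular battery' is not just a metaphor; electrochemistry gives it a number. The Nernst equation, found in any physiology or chemistry text (it is not quoted in this book), converts an ion concentration gradient into an equilibrium voltage. Here is a small sketch using textbook concentration values for a mammalian cell; the numbers are illustrative, not drawn from this book.

```python
# The Nernst equation: the voltage a single ion gradient would produce,
# treating the membrane like a battery terminal.
import math

R = 8.314       # gas constant, J/(mol*K)
T = 310.0       # body temperature, kelvin
F = 96485.0     # Faraday constant, C/mol

def nernst_mv(conc_out_mM, conc_in_mM, z=1):
    """Equilibrium potential in millivolts for an ion of charge z."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Illustrative textbook concentrations (millimolar) for a mammalian cell.
e_na = nernst_mv(145.0, 12.0)   # sodium: roughly +66 mV, pulling the cell positive
e_k  = nernst_mv(4.0, 140.0)    # potassium: roughly -95 mV, near the resting state
```

Notice how the potassium figure lands close to the -90 mV resting potential quoted earlier, while the sodium gradient stands ready to swing the cell positive the moment the sodium channels open.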

It is amazing that mechanical distortion, as happens during touch, pressure etc., can open up ion channels and trigger electrical changes. Similarly, chemical stimuli like taste, pH etc. can do the same. So can light and sound stimuli. That is why I said the receptors act as transducing devices, converting sensory information of a variety of forms into one unifying electrical form suitable for conduction. The mechanism differs from one type of receptor to another, but the basic theme of converting the stimulus to a different form for conduction is the same.

One of the characteristics of our sense receptors is their habituating response to repetitive, monotonous stimuli. When we enter a room, we may pick up the smell of a perfume somebody is wearing. In a short time, the smell receptors in our nose adapt to it, and we no longer smell the perfume. The smell receptors have stopped transmitting the smell information to the brain because it is of no consequence to our further actions in the room. We have other business to attend to. What is the point in continuously inputting perfume-smell information? Is it worth troubling the already overworked brain?

We rarely hear the ticking of a clock unless we focus on it; the sound receptors stop relaying the tick after a while. We have other useful functions for our information capture machinery. The 'orienting' response to new stimulation disappears after a few moments, and it is because of the habituating response of the sense receptors. It is a kind of filtration mechanism that prevents recurrent, monotonous stimuli and other useless information from clogging the information capture pathways of our nervous system.
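This filtration idea is easy to caricature in code. The sketch below is a hypothetical 'habituating receptor' that relays a stimulus only while it is new or changing, and goes quiet once it becomes monotonous; real receptor adaptation is of course far richer than this, and the 'patience' parameter is purely invented.

```python
# A toy 'habituating receptor': it relays a stimulus to the 'brain' only
# while the stimulus is new or changing, then goes quiet.

def habituate(stimuli, patience=3):
    """Relay each stimulus until it has repeated 'patience' times in a row."""
    relayed = []
    last, repeats = None, 0
    for s in stimuli:
        if s == last:
            repeats += 1
        else:
            last, repeats = s, 0      # a new stimulus resets the receptor
        if repeats < patience:        # novel enough: pass it on
            relayed.append(s)
        # otherwise: monotonous, filtered out before it reaches the brain
    return relayed

# The perfume is noticed at first, then fades; a new smell breaks through.
signals = ["perfume"] * 8 + ["coffee"] * 2
assert habituate(signals) == ["perfume"] * 3 + ["coffee"] * 2
```

The orienting response in the last line is the point: the receptor that has gone silent on 'perfume' wakes up instantly for 'coffee'.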

Look at our newspapers and TV: news items lose their topical value after a while. Newspaper editors know that only too well. The public gets habituated to a news story if articles keep appearing, one after another, about the same issue for a long time. This is all the more likely given the enormous number of newspapers and magazines. We, like the sense receptors, need to be choosy about what we take in. We cannot go on devoting our attention to the same news for longer than a brief period. That is why editors are desperately keen to get fresh stories into their newspapers to keep the public 'oriented'.

I do not believe there is a single individual living in the civilised world who has not seen advertisements on TV or in newspapers. Advertising bombards us at a staggering rate through unimaginable routes. Even egg baskets have advertising leaflets inserted in them. As you may have guessed, the leaflet had nothing to do with eggs; it was an advertisement for a hotel. Sometimes, I wonder about the meaning of some advertisements. An ad about a car may say nothing about the car's features; a commercial for an alcoholic drink may not mention the drink at all. That is really beside the point. What I am getting at is that advertising and marketing people have come to realise that we adapt to an ad very soon. They know we become thoroughly inattentive if the same ad is played again and again without any modification. In fact, it is only during the commercial break that we get the urge to change channels or go to fetch a snack. People behave exactly like sense receptors when they do that. We have no time for repetitive ads because they offer no new information. That is why marketing strategists come up with different ads over time for the same product. They hope that by changing the sales pitch they can get us to listen at least for a while.

Different sense receptors adapt to varying degrees. Visual receptors adapt very rapidly. In the case of vision, it is more a case of the difficulty of focussing our eyes on something for long. We keep shifting our eyes all the time, apparently to capture changing visual information. If you look at some view, you very quickly adapt to the images of unchanging or immobile objects. Instead, the visual receptors turn their attention to where the action is.

This can be compared to our 'big brother', the ever-chasing surveillance camera. The people at the security desk have many incoming images, but they focus the camera more attentively on the part of the store where they think a shoplifter is having a go. Or, if it is a police camera, the focus is the fleeing criminal, as you see in the TV documentary _'Police, Camera, Action!'_. Our eyes behave like the surveillance camera: they can shift attention quickly to anything that moves in the view. This has an evolutionary basis. Predators and prey are the most likely moving objects in our visual field, and organisms quickly turn their attention to any moving image in their view. The rest of the objects lose the focus of our attention. Will you keep looking at a wall or a table or a mountain for ages? You may do so if they are the objects of your attention, but otherwise your eye receptors are put to use for something else.

A very interesting thing is that our eyes keep sending images to our brain even when we are not looking at anything intently. A lot of background images keep getting registered in our visual databases. This is highlighted when criminal investigators seek witnesses to a crime. The witnesses may or may not have seen the crime or the criminal directly. But, if asked to recall persons or vehicles they saw around the time of the crime, they can often come up with information which they did not think would carry much significance at the time of looking. Probably, if it is not retrieved fairly quickly after the event, the witnesses risk erasing the information from their memory database for want of space, as we clear unwanted files from our computer hard drives.

At this juncture, I want to bring up the fascinating experiments the famous neurosurgeon Wilder Penfield conducted on his subjects, which may help illustrate the topic we are now discussing. They were epileptic patients whom he was studying with the aim of localising the brain defect that caused their epilepsy. They were conscious during the surgery, as only local anaesthesia was used for the procedure. The brain has no pain receptors, and therefore the patients feel no pain as the brain is stimulated physically. In the course of brain surgeries on these patients, he stimulated the temporal lobe of the brain, where we know the memory function resides. This resulted in some incredibly accurate recollection of past events in the subjects. They could sing the whole of a song they had listened to many years before, and also precisely remember the background of the room they were in when listening to it. It was as vivid as re-living the experience. The same subjects could not voluntarily recall the details in the conscious state.

These experiments, discussed in physiology textbooks, raise important questions about the amount of sensory information we capture and store in our memory databases. We could be storing far more in our brains than we think. It is a question of how to get it out. If you do not use the information for a long time, it gets into an inaccessible location, I guess! That is why we find it difficult to remember certain things after a long time. In our lives, it is quite common to lose papers, documents and things like that. We know we have kept that paper safely somewhere, but we do not know where. Our memory works that way. It is possible to get more out of it, but you need to be ready to spend a lot of time trying to retrieve it.

Continuing a little more on the topic of adaptation, it is worth mentioning that touch receptors also adapt very rapidly. Perhaps it is because touch accompanies all our activities, and it would be impossible to devote all our attention to the task of feeling every touch. For example, while writing, we cannot keep attending to the feel of the pen we are holding. Can we?

But some other sensory receptors adapt slowly. Pain receptors adapt slowly and incompletely; we retain the ability to feel pain long after an injury. This may be because the painful experience helps us avoid such stimuli in the future. Secondly, if you adapted too rapidly, your chances of escaping the stimulus would be lessened. Similarly, receptors for blood pressure, muscle tension, and temperature adapt very slowly. All of us can vouch for those long, cold nights we experience in winter. Can't we? If only our temperature receptors adapted quickly, we would not need all our jackets and coats!

It is possible that anything repetitive bores us and makes us lose interest. This is no different from sensory adaptation and the receptors consequently switching off. It is well known that an audiocassette sounds nice for a while; if you keep listening to the same cassette all the time, the chances are that you lose interest. But the same cassette may sound refreshing after a break.

I wonder if our insatiable drive to go on holidays is a social mechanism for sharpening our blunted senses. We do the same job day in and day out. We live in the same house, exposed to the same visual and auditory stimuli all the time. Boredom is a manifestation of habituation. After a holiday, our senses are back to their original state, like our renewed interest in the audiocassette. Television companies repeat telecasts of a number of their programmes after a long gap. They know the audience will be in a more receptive mood because of the break.

There are esoteric ways of keeping our brain fit as a fiddle despite the monotonous nature of our lives. We have learnt to get around the problem of boredom and sensory lull. One of them is meditation. During meditation, our awareness of the external world is voluntarily shut down. It is like turning your consciousness off. It has been scientifically noted that experienced practitioners of meditation do not get habituated to monotonous stimuli as the rest of us do! They are in a state of hyper-awareness!

When we face a threatening situation, there is no place for adaptation. We are at the height of our capacity to perceive, obviously because we have to be alert and attentive to quickly evaluate the environment around us. It takes very little sensory stimulation to activate the brain in such instances; every bit of sensory signal is heeded. Conversely, during sleep, we 'shut down' our information channels. That saves our brain from a nervous breakdown. To be fair, one must admit that the brain can respond to real emergencies and wake up. It can also respond to moderate or severe forms of stimulation such as a loud shout or vigorous shaking. Otherwise, nobody who went to sleep would ever wake up.

The fact that our brain responds to vigorous stimuli is proof that there is a filter-gate in the brain that prevents minor stimuli from bothering it during sleep. This gate opens only when necessary. One should not imagine a gate standing there in the literal sense; it is a functional mechanism. In fact, such a filter is operational even during wakeful times, which helps us avoid overloading our brains. If you observe carefully, when reading a book we disregard our surroundings for all practical purposes. We vaguely see things out of the corners of our eyes. We hear vague voices all around us but pay no attention. The interior of the room is ignored though it is there all the time. It is becoming clear that we have the ability to modulate the amount and nature of the sensory data we take in.

Coma is the state in which an individual loses the ability to respond to information from the external world, but not from the internal world. Coma patients can still keep their blood pressure under control, hold their blood at a stable pH, keep breathing to maintain their oxygen levels, and so on. It is true that they may need medical assistance in the form of life-support machines to do so. It is incredible, nevertheless, that their 'cellular society' remains largely intact as far as information exchange within their internal world is concerned.

An amazing testimony to this intact information flow amongst the cells is a news item that appeared in the media a couple of decades ago. It happened at a hospital in the US, where a coma patient was on long-term care. A hospital employee, who had access to the patient as part of her care, had gone to the detestable extent of sexually assaulting her even as she lay in coma. The patient obviously had no way of knowing of the experience, but the internal world of her body went about the task of pregnancy just as it would have done had she voluntarily conceived. You would hardly believe it, but this coma patient's baby developed completely in her womb, assisted by the nutrition provided by her medical care. Nobody knew about the pregnancy until her abdomen started distending. It was too late to do anything about it and, believe it or not, the baby was delivered successfully! It is a little unnerving even for me, a doctor who has seen quite a few dramatic incidents in his working life.

The point I was trying to highlight is the unbelievable extent of the internal information autonomy of our bodies. It is true that the lady could not have found nutrition without medical help, but the point to note is the successful use of her internal information to complete the growth of her baby. I am sure readers will recall the complexity of that process.

Recently, there was the incredible case of a 19-year-old girl who went into coma in 1965 after a car crash. She grew old while in coma and died in November 1999 at 53 years of age. My hair stands on end as I write this. A whole life spent without consciously registering anything from the external world, but with a perfectly normal internal world!

Coma is the state in which the brain cannot arouse itself to the sensory information despatched by the dutiful sense receptors. Keeping the brain in the aroused state involves a unique information structure called the Reticular Activating System (RAS), located near the thalamus at the base of the brain. It consists of sensory information-conducting nerve fibres, which receive input from all the types of sensory information coming in. This input to the RAS is separate from the regular pathways that conduct information from the sense receptors to the brain via the nerves. Those nerve pathways feed a bit of their information to the reticular activating system, rather as a splitter splits the TV and Internet data in your coaxial cable.

All types of sensory impulses feed their information into this arousal pathway, in addition to their primary role of taking the information to the dedicated brain sites concerned with perceiving the respective sensation. The pathway is unique in that there is no longer any distinction between the different sense modalities once the information enters the RAS. The purpose of this reticular system seems to be to fire the thinking part of the brain, keeping it awake and aroused, rather than to worry about what the sensation is and where it is coming from. That is left for other brain regions to do.

All of us know about the anaesthesia administered during surgery. But what you probably do not know is that it is the reticular activating system that is blocked by the anaesthetic agent. Stimuli are prevented from arousing the brain because their entry into the brain's arousal pathway is blocked. You would not believe it if I said that your pain receptors work as well as ever while you have the surgery! The nerves that carry the signal to the brain regions also work as before. But you do not feel the pain, because the reticular activating system is inactivated and the brain is therefore not in the aroused state. For all practical purposes, you are a coma patient, because in a coma patient it is this same reticular activating system that is impaired. I guess this highlights the extremely important role of information processing in making sense of sensory information.

During your sleep, I said, your information channels are shut. It is not because your sense receptors and nerves are shut down; it is because the reticular activating system is functioning at a low key. Sensory input through this pathway is kept to a bare minimum. All of us know how difficult it is to sleep in a noisy room. It is because sound signals, if sufficiently loud, can demand conduction down the reticular activating system, arousing us. Arousal and sleep are opposite states; they cannot co-exist. One will have noticed how difficult it is to sleep in a room with the lights on. It is again because the light signals try to keep the brain awake. Again, going to sleep requires a comfortable bed. We rarely realise that an uncomfortable bed means a degree of irritant stimulation that keeps firing signals down the reticular activating system, preventing us from sleeping. Once again I have to reiterate that, even while we are asleep, important information can still overrule the gating mechanisms, like a hot-line telephone connection.

During states of emergency, in life-threatening situations, information access is at its peak. The brain is aroused to the maximum state possible. This requires activating the reticular activating system to its limits. Information is fired at an accelerated rate to the brain. It is similar to a state of emergency at the national level, when the prime minister and the governmental machinery are geared up to face the task ahead.

Let us move on to the world of information capture in other parts of the biological world. Information capture is a biological necessity as important as food for all organisms on earth. Irrespective of the nature of the organism, they all seek information by mechanisms suited to their form and structure. This, surprisingly, applies to plants too. Thale cress _(Arabidopsis thaliana)_ is a frail weed. Brian Forde and Hanma Zhang at the Institute of Arable Crops Research in Rothamsted, Hertfordshire, have found a gene in it that is vital for its ability to detect nitrates, one of the most important nutrients for plants. This gene enables the roots to grow lateral branches from the main arterial roots towards locations where nitrates and ammonium salts are abundant. It is this gene that confers the ability to 'forage' for nitrates, because mutants lacking its function cannot do it.

Plants can sense things other than nutrient salts too. Stanley Roux, Colin Thomas and their colleagues at the University of Texas at Austin have found an enzyme on root surfaces that can detect adenosine triphosphate (ATP) made by soil microbes such as fungi. ATP is an energy-rich compound ubiquitous in bio-systems; even we humans rely on it for our cellular energy needs. The plant's enzyme picks up this ATP from the soil and breaks it down into phosphate nutrients. The plants effectively scavenge extra-cellular ATP, a useful compound in nature.

Striga plants, better known as 'witchweed', blight 40% of all the arable land in Africa and Asia. This weed taps into the roots of other plants, like sorghum and maize, and sucks water and minerals from them. It uses molecular cues liberated by the germinating hosts to find the roots of the cereal plants. It is an information capture strategy relying on molecular sensing mechanisms. The field ends up with the weed's beautiful flowers but nothing else, because the weed has sucked the cereals dry.

Plants can sense light. That is obvious, because they need sunlight for photosynthesis. They optimise their growth and survival by sensing the intensity, direction and periodicity of light, with the help of molecular pigments called phytochromes. Nam-Hai Chua and colleagues at Rockefeller University in New York have found that these phytochromes play a key role as germinating seeds emerge from the soil, switching the seedling from growth sustained only by its energy reserves to light-driven photosynthetic growth. Photosynthesis and new energy production are kick-started.

Plants can also sense the level of competition for light from the plants around them. They sense it through the levels of red light, with wavelengths between 600 and 750 nanometres, again with the help of the phytochrome pigments. Gary Whitelam of the University of Leicester says the plants then put more resources into stem growth to outgrow the opposition. Chentao Lin and his colleagues at the University of California at Los Angeles showed that a pigment called cryptochrome enables plants to measure day length and switch from vegetative growth to floral development.

Communication is an art. At the same time, it is a pure survival tool, as I said earlier. Life forms employ various strategies to transfer information to their fellow members. Insects use acoustic signals for mate recognition, rivalry and courtship. Some ant types produce stridulations to recruit nest mates to food sources, as we have seen before. An SOS signal can be sent out by buried ants to their nest mates to help dig them out.

Vampire bats emit individually distinct vocalisations. On analysis of the sonograms, it is found that 'contact calls' often accompany grooming sessions. These calls have the acoustic characteristics of variable frequency and low intensity that are necessary to encode individual identity. They help individuals recognise long-term roost mates. Perhaps olfactory sensations are also important.

Killer whales use whistles and calls when communicating under water. They are quite distinct from the high-energy, sonar-like clicks they emit when navigating by echolocation.

About 300 fish species are said to possess electric organs capable of producing weak electric discharges of between 0.2 and 2 volts. The electric catfish can generate up to 400 volts and the eel up to 600 volts; the torpedo ray can manage about 60 volts. Some of the fish that generate intense electric discharges may use them for hunting. However, careful study has found that the electric organs have more to do with information transfer. The electrical equipment of these fishes has evolved not towards greater discharge force but towards higher sensitivity to electricity.

Many of these fishes are nocturnal and live in muddy water. Some, such as the Nile mormyrus, keep their heads buried in the mud. How would such a fish know of the approach of an enemy? The fact is, it can. The electric organs of the Nile mormyrus can not only generate electric discharges but also sense electricity. It emits around 300 discharges per second, creating around itself a weak electrical field of constant pattern. The lines of electrical force converge at the level of its head. When a large fish appears in the vicinity, the uniformity of the electric field is disturbed, because the body of a fish is a better conductor than the surrounding fresh water; the lines of force shift towards the approaching intruder. The Nile mormyrus is thus warned of the enemy.

Sea and freshwater lampreys obtain information about the presence of prey by electrical location, which is very useful in the muddy waters of freshwater basins. The knife fish, which lives in the Atlantic near the coast of America, has an electrical locator on its tail. It thrusts its tail into rock fissures and passages to locate prey by detecting disturbances in the electrical field.

Typical shoal fishes such as the scombroid fishes (horse mackerels, mackerels or toothed planes) exhibit admirably co-ordinated manoeuvres while they move. Thousands of fish can change direction with unbelievable simultaneity. It is believed that feeble electrical signals are used to co-ordinate their movements.

Dance, as a language, is a rather unusual means of communication in the animal kingdom. Bees use it to communicate to other bees the location of plants that contain food for them.

Fireflies use luminescence to find their mates. When there are other glowworms around, the flashes of light can be misleading. To overcome this problem, the males send out rhythmic flashes to appeal to their mates. A female in the vicinity sends out her reply as light flashes at strictly regular intervals. The interval between appeal and reply helps the male to distinguish a female of his own species from those of other species.
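The fireflies' appeal-and-reply timing amounts to a simple protocol, and it can be sketched as one. All the numbers below (the species delay and the tolerance) are invented for illustration; real species each have their own characteristic intervals.

```python
# Toy sketch of the fireflies' interval code: a female answers a male's
# flash after a fixed, species-specific delay, and the male accepts a
# reply only if that delay matches his own species' interval.

SPECIES_DELAY = 2.0   # seconds between appeal and reply for 'our' species
TOLERANCE = 0.2       # how much timing slack the male allows

def female_reply_time(flash_time, species_delay):
    """A female of a given species replies after her species' fixed delay."""
    return flash_time + species_delay

def male_accepts(flash_time, reply_time):
    """The male accepts only replies timed to his species' interval."""
    return abs((reply_time - flash_time) - SPECIES_DELAY) <= TOLERANCE

flash = 10.0
same_species = female_reply_time(flash, 2.0)     # replies on our interval
other_species = female_reply_time(flash, 3.1)    # a different species' interval
assert male_accepts(flash, same_species)
assert not male_accepts(flash, other_species)
```

Notice that the information is carried entirely in the timing, not in the brightness of the flash, much as the nerve impulses earlier in this chapter carried intensity in their frequency rather than their size.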

Human beings use speech as a means of communication. Of all the life systems on earth, we alone use language as a means of information transfer. People have always raved about how man developed this ability through the power of his brain. The ability to use language is one of the major milestones of human evolution. Man would never have witnessed such tremendous advances in science, technology and literature but for his ability to use language to generate and transfer information.

Most surprisingly, organisms are not content with communication and information transfer between earthly beings! We humans are even trying to communicate with extra-terrestrial beings! The Search for Extra-Terrestrial Intelligence (SETI) is a human effort directed at seeking extra-terrestrial civilisations. We have so far concentrated on a very specific technology: radio transmissions at wavelengths with weak natural backgrounds and little absorption. The idea is to send out radio signals, with encoded information, to see if other intelligent life forms, if they are out there, can capture the information and respond. It is a tall order, but it is indicative of the human strategy of communicating with unknown life forms using the same means of telecommunication as we use in our own lives here. Radio waves are suitable for long-distance communication because of their long wavelengths, which is exactly why they are chosen for our radio broadcasts and mobile phone communications.

The point is whether other life forms will use the same technology as ours. If life systems rely on similar mechanisms of information transfer inside and outside their bodies, it could mean that we are looking at some fundamental property of complex systems, and it is quite probable that a system outside our own world could still use the same principles. Scientists have been seriously searching for years now without really meeting with success so far. It could mean there is no life out there, or it could mean we are using a system with low sensitivity. The most distant star probed directly is less than 1 percent of the distance across our galaxy. Imagine doing it for the rest of our galaxy and continuing the same for the other billions of galaxies! The other problem seems to be radio frequency interference from man's other communication activities. This has led to talk of shifting our base to the moon for this task.
International agreements have been reached to shield a zone on the moon for this purpose. Some astronomers have discussed reserving one of the craters on the moon for use as an observation base. By 2050, this may come into place.

# 7. INFORMATION CABLES

Man is not the first to come up with the concept of telecommunication as a means of information transfer. Nature came up with telecommunication tools millions of years ago.

If you haven't guessed it yet, I am talking about the nervous system. Structurally and functionally, a nerve is remarkably similar to the electrical and fibre-optic cables that link up continents, conducting your e-mail, fax and telephone messages. The very organisation of the nerves in the nervous system bears a striking resemblance to the basic design of the communication methods of modern human society. Nerves evolved to fulfil biological telecommunication needs, just as our fibre-optic cables did for human society.

The electric and fibre-optic communication cables are bundles of conduction fibres enclosed in a protective covering. For protection against mechanical injury and climatic vagaries, optical cables are encased in plastic, aluminium, steel or composite outer sheaths. Inside the optic cables, the core is surrounded by what is known as cladding, a transparent material with a lower refractive index that serves to reflect the propagating beam of light back into the core, reducing radiation loss into the environment. There are three or more coatings on the outer side of the cladding, called the primary, secondary and outer coatings, made of some polymeric material such as polyethylene, polytetrafluoroethylene or polyamide.
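
The cladding's lower refractive index does its job through total internal reflection: light meeting the core-cladding boundary at a shallow enough grazing angle is bounced back into the core instead of escaping. A minimal sketch of the arithmetic, using assumed, purely illustrative refractive indices rather than values from any particular fibre:

```python
import math

# Total internal reflection: a ray hitting the core-cladding boundary at an
# angle from the normal GREATER than the critical angle stays in the core.
# The two refractive indices below are illustrative assumptions.
n_core = 1.48      # assumed refractive index of the glass core
n_cladding = 1.46  # assumed (lower) refractive index of the cladding

critical_angle = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle = {critical_angle:.1f} degrees")  # about 80.6 degrees
```

With these numbers, any ray striking the boundary more than about 80.6 degrees from the normal, i.e. travelling nearly parallel to the fibre, is reflected and carries on bouncing along the core.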

Coming to the structure of the nerve 'cables', a nerve is made up of many axons, which are the tails of nerve cells and can be tens of centimetres long. The axons connect the main body of the nerve cell (the speaker) to the target cell (the receiver). The axons conduct the electrical signals from the brain to the target cells. They are wrapped in individual coatings called the endoneurium. Groups of axons are arranged in bundles called fascicles. Each fascicle is wrapped in a perineurium. A superficial covering around the entire nerve is called the epineurium.

The axons themselves are individually sheathed by what is known as myelin, a protein-lipid complex made up of many layers of the cell membranes of a special type of cell called the Schwann cell. The myelin sheath, rich in a special class of molecules called sphingolipids, has an insulating function, preventing electrical signals from straying non-specifically. The insulating sheath is absent at the axon end where it connects to the target, or subscriber, cell!

In a normal nerve, apart from the myelin sheath, the cell membrane itself is rich in cholesterol, which reinforces the insulating function of myelin. It is incredible that insulation, as a more efficient way of transmitting electrical data, has been arrived at by a system other than ours. The parallels between the organisation of a nerve and a modern telecommunication cable are too striking to ignore: a central conducting core made of numerous fibres, an insulating layer and a protective coating may seem a bit too much of a coincidence to the uninformed reader.

Peripheral nerves are bundles of axons. Different types of nerve fibres exist in mammalian nerves, each specialised to carry different types of sensation such as touch, temperature and pain. The axons are individual communication channels that link the brain and the target cells. In the nerves, signals flow towards the subscriber cells from the brain as well as backwards from the target cells to the brain. Certain signals of survival value are transmitted at a faster rate than others! Obviously, a single nerve can hold only a limited number of communication channels for want of space. In other words, the channels become crammed with information traffic at times, making it necessary to 'queue' the traffic. This is the reason why some important signals gain priority over others, as if on a 'hot line'!

Nerves are two-way conduction cables. Data transmission occurs from the brain to the target cells as well as from the target cells to the brain. Some 'axon cables' transmit information towards the target. At the same time, there will be some that carry information away from it.

Information that emanates from the target cell will usually be sensory information like touch, pain, pressure, and positional sense about orientation of the body in space, which is important in maintaining the body in balance.

Nerves carrying touch and pressure sensations have conduction velocities of about 3-6 metres/sec, and those carrying pain sensations conduct at about 12-13 metres/sec. The nerves carrying information about maintaining the balance or equilibrium of the body conduct at a faster speed of about 70-120 metres/sec. It is obvious that our nervous system prioritises sensory information according to its importance. Balancing your body is probably more important than anything else, isn't it? Can we afford to fall down too often? Babies do that because their nervous system is still immature. Pain information is, again, more important than touch, obviously because it lets us avoid harmful stimuli.
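
A quick back-of-envelope sketch makes the difference vivid. It uses the mid-range of the velocities quoted above; the 1.5-metre path length from foot to brain is an assumed, illustrative figure, not an anatomical measurement:

```python
# Travel time for a nerve signal over an assumed 1.5 m path (foot to brain),
# using the mid-range of the conduction velocities quoted in the text.
PATH_LENGTH = 1.5  # metres, an illustrative assumption

def travel_time(velocity_m_per_s: float) -> float:
    """Seconds taken to cover PATH_LENGTH at the given conduction velocity."""
    return PATH_LENGTH / velocity_m_per_s

touch = travel_time(4.5)    # mid-range of the quoted 3-6 m/s for touch
balance = travel_time(95)   # mid-range of the quoted 70-120 m/s for balance
print(f"touch:   {touch * 1000:.0f} ms")    # 333 ms
print(f"balance: {balance * 1000:.0f} ms")  # 16 ms
```

On these assumed figures, a balance signal reaches the brain roughly twenty times sooner than a touch signal, which is exactly the prioritisation the text describes.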

The basic pattern in the communicating systems of the modern world includes a transmitter, a conductor and a receiver. The conversion of a signal into a form suitable for conduction, and its re-conversion back to the original form, is the way our nerves function too. In a typical fibre-optic cable, a transmitter generates an optical signal by converting electrical signals into light. The optical signal flows into an optical fibre, which carries it to the destination, where a receiver converts the optical input back into electrical format by means of a photodetector. In the telephone cables, sound waves are transformed into electrical format before conduction along the wires that link you and all your contacts with the exchange, which performs the role of re-routing all the calls. The mouthpiece converts your voice into an electrical signal and the earpiece converts the electrical signal back into sound. The earpiece has a diaphragm, which vibrates when electrical signals pass through a coil of wire; this vibrating diaphragm recreates the sound. Similarly, the mouthpiece has thin metal discs, which vibrate when sound hits them.

In mobile phones, your voice is converted into electrical format just as in the regular phone but, instead of the signal travelling along a wire to the telephone exchange, it travels through the air as a radio signal. The transmitter converts the electrical signal into a radio signal, which then goes from your telephone's aerial to an aerial connected to the telephone exchange. The aerial does the same job as a radio's. There are many receiving and transmitting stations all over the country, handling calls from different areas.

In the modern digital form of the telephone, signals are digitised. Points along the sound wave are sampled and given a number, which is then represented in the binary code of 0s and 1s. It then becomes possible to transmit the information by switching the electrical current 'off' and 'on' to represent 0 and 1.
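
The idea can be sketched in a few lines of code. This is a toy illustration, not a real telephony codec: it samples a pure 440 Hz tone, maps each sample to an 8-bit number, and shows the 0/1 pattern that would be switched on the line (the 8,000 samples-per-second rate is the one classically used in digital telephony):

```python
import math

# Digitising sound: sample a 440 Hz tone, give each sample an 8-bit number,
# and write that number out as the 0/1 pattern sent down the line.
SAMPLE_RATE = 8000  # samples per second, the classic telephony rate
BITS = 8            # bits per sample

def digitise(t: float) -> str:
    amplitude = math.sin(2 * math.pi * 440 * t)         # value between -1 and 1
    level = round((amplitude + 1) / 2 * (2**BITS - 1))  # map to 0..255
    return format(level, f"0{BITS}b")                   # e.g. '10000000'

samples = [digitise(n / SAMPLE_RATE) for n in range(4)]
print(samples)
```

Each printed string is one sample's worth of on/off pulses; stream enough of them per second and the far end can rebuild the original wave.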

Optical fibre cables are now predominantly used to conduct information. An optical fibre is a thin thread of glass with a clear plastic cladding. At the telephone exchange, electrical signals are converted into pulses of light. Light signals are then sent into one end of the fibre and travel along it, bouncing off the inside walls. At the next telephone exchange, the light signal is converted back into electrical form. The signals are much clearer because there is little electrical interference during conduction. A whole optical cable can carry an enormous number of calls at once.

Television works in a similar way to radio. The television camera captures an image by scanning it along hundreds of lines electronically, working out the brightness and colour. Microphones pick up the voice signal and convert it into electrical form, just as in your phone. Sound and light signals are converted into electrical form before transmission as radio waves. A television receiver, in the form of your TV, receives the signals from the transmitter and converts them back into their original form.

Satellites have transformed the way we communicate over long distances. The inherent advantages of radio waves in the long-distance transmission of information could be exploited fully only after the satellite came about. Radio signals can be beamed from a transmitter up to a satellite, and back down to a receiver many thousands of miles away. These systems now help transfer information for your telephone calls, television pictures and computer networks. In short, they represent the predominant method of information transfer in human civilisation.

It is incredible that the theme of communication in nature resembles our modern means of information transfer in the way signals are transformed and modified before conduction. I brought up this point in the previous chapter when discussing the mechanisms of the sensory receptors. Our nervous system can transform chemical (neurotransmitter molecules like acetylcholine, epinephrine, dopamine etc.), mechanical (touch, pain, pressure sensations), sound (hearing) and light (vision) signals into electrical input, which is subsequently carried by the nerve cables. At the receiving end, it is received by another nerve cell or groups of nerve cells in the brain. After processing the sensory information, the brain sends back the right commands to the target tissues along the nerve cables again. These signals are initially in electrical format but are later re-converted to the form the target cells understand. Most often, it is the chemical form.

Another aspect of the information transfer theme, as practised by man and nature, is the need for connecting stations that link up the two parties to a communication. They could be two cells, two persons, two computers or even two exchanges. This connecting station seems to be necessary to channel the flow of information to the right recipients.

You could have a dedicated phone connection to each person you want to call. Every time you wanted to call a person, all you would have to do is lift the phone designated for that person. But it does not sound feasible to have hundreds of phone lines to call everyone you know. In fact, most of us do not even have a second phone line at home. Imagine having to pay the telecom company for hundreds of phone lines, one for each person you know!

In this age of the Internet and the web, it becomes frighteningly ridiculous to imagine one cable for each website! Internet service providers play the role of the telephone exchange by linking your computer to the web page, or any Internet site, you want. It is true that the telephone exchange still has a role to play in your Internet access, because your computers are linked to each other over the telephone lines, via modems.

The concept of an exchange has incredibly simplified the whole process of multi-unit communication. It is now possible to have one cable that links you to this exchange, which will use a finite number of common conducting channels to link you up to anybody you want, one person at a time. Those common communication channels will be used by the exchange to enable calls by other users too. This means it is possible for a limited number of conducting cables to link a very large number of communicating units, whatever they are. Your call will be competing with others for conducting space.
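
The principle is easy to model. Here is a toy exchange, purely illustrative: many subscribers share a small, fixed pool of trunk lines, and a call goes through only if a trunk is free:

```python
# A toy telephone exchange: many subscribers share a fixed pool of trunk
# lines, so simultaneous calls compete for conducting space.
class Exchange:
    def __init__(self, trunks: int):
        self.free_trunks = trunks
        self.calls = {}  # caller -> callee for calls currently connected

    def connect(self, caller: str, callee: str) -> bool:
        if self.free_trunks == 0:  # every shared channel is busy
            return False           # the line is 'jammed'
        self.free_trunks -= 1
        self.calls[caller] = callee
        return True

    def hang_up(self, caller: str):
        if caller in self.calls:
            del self.calls[caller]
            self.free_trunks += 1

exchange = Exchange(trunks=2)              # only 2 shared channels
print(exchange.connect("Alice", "Bob"))    # True
print(exchange.connect("Carol", "Dave"))   # True
print(exchange.connect("Eve", "Frank"))    # False: no trunk free
```

Six subscribers, two trunks: the third call fails until someone hangs up, which is exactly the competition for conducting space described above.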

This competition is felt only when you are trying to contact the same number as many others. For instance, if you try ringing the number for entering a competition such as 'Who Wants to Be a Millionaire?', you will literally find that the lines are jammed for what seems like an eternity.

But, fortunately, we do not have to face this situation all the time. Even when you try to get in touch with busy organisations, you are put on hold for at least a few minutes, though it feels a lot longer; that is usually a local problem of not having enough people to answer the calls. Today the exchanges are run by fully automated electronic systems. Sometimes we do not even think there could be an intermediary involved in our phone calls. In the early days, there were no electronic exchanges, and no telephone numbers either. If you wanted to make a phone call, you alerted the operator by simply pressing a button. The operator would then ask whom you wanted to talk to, and would plug the two lines together! Today's exchanges are so smart they can handle innumerable calls simultaneously.

We all know that British Telecom, as a single entity, meets the telecommunication needs of all of Britain, just as other phone companies do in other countries. But it is not often realised that BT and the other phone companies have many exchanges situated all over the country to cater to the needs of distinctly defined regions. I guess even the Internet service providers do the same. Even the mobile operators divide the country into what they call 'cells'. I am really surprised by the word 'cell' here. Why did they want to name it a cell? It is intriguing. Anyway, mobile phones are also called 'cell phones' in many countries. Each cell represents a geographical region with a separate transmitting and receiving station. I feel dividing a unit into separate information blocks reflects a desire to manage it more efficiently, just as an organism is compartmentalised into numerous cells. Perhaps it is like having branch offices for a large organisation: branch offices are able to deal with local information better than if you had to do it all from one big office, very far away from the consumers.

Nerves allow a two-way flow of information. They have defined regions to serve, as I said before. They have a finite number of nerve fibres in them, yet they have to carry all forms of information: touch, temperature, pain, pressure and the whole range of internal senses I discussed before. There is competition for conducting space here too. The information conducted in a nerve loses its outward identity in transit but is decipherable at the consumer cell, just as your telecommunication cables split the information back into its various types at the receiving end. The most impressive thing is that the nerve and your fibre-optic cables both manage to retain the identity of the caller and the called. Only rarely do we face the problem of wrong numbers on our phones, don't we?

The Internet is the largest of all networks of computers. Millions of computers are linked, enabling an exchange of data never before possible in human history. The fastest modems a few years ago received and transmitted data at 56,000 bits per second. In the most common form of local area network in an office set-up, called the _Ethernet_, the raw speed of data conduction is 10 million bits per second, about 200 times faster than the modem. But if the workers access the Internet, their speed is limited by the speed of the modem unless they have a high-speed line to their Internet service provider, and most small businesses do not.

The plain truth is that the increasing speed of computers will not result in faster communication applications if users remain stuck behind a dial-up modem. The wires and cables running to most homes were not designed for high-speed operation. There is a lot of effort directed at connecting homes for high-speed data communications, and clever technologies to get the most out of the existing wires to the homes are being mooted.

The cable TV industry developed its coaxial cable network just to offer television signals. When the Internet came about, this cable was upgraded to support that service too. Fibre-optic lines were laid to carry the signals from key signal distribution points (the 'head end') most of the way to each neighbourhood area (the 'node'), which potentially serves about 1,000 homes. Information relayed to the 'head end' is used to serve many fibre nodes. The fibre network allows data to be sent from the home to the head end too, making telephone and interactive video services possible. In America, half of all homes already have this cable Internet access, while the proportion is lower in Europe and Asia.

The 'head end' of the cable is said to feed data into many fibre nodes, each node then serving about 1,000 homes. I am awestruck by the similarity to the way cortical neurones relay data to many other neurones through dendritic feeder pathways. A single neurone can feed data into many divergent neuronal circuits through its synapses! These neurones can then carry the data further down their respective pathways!

The original coaxial cable was then used to distribute the signal among the homes in a neighbourhood or part of a town. This scheme saves the cable companies the cost of having to replace the entire network with optical lines. The partial use of optical fibre made it possible for the network to carry two-way Internet and telephone traffic. A hybrid fibre-coax cable can give users bursts of data from the Internet at speeds of 10 million bits per second. A homeowner must have a cable modem that decodes the data. A home office may have a splitter sending the regular TV signals to a TV, and the data to a cable modem, which can serve a network of PCs. Internet access over the cable runs at up to 100 times the speed of the traditional dial-up world. The connection is always 'on', eliminating the need to log into the network.

Fibre-optic lines can carry data at much higher rates, reaching millions of megabits per second, and a megabit is itself a million bits! A rate of millions of megabits per second is so large that a single fibre can carry all the phone calls made at any instant in the US. The hundreds of fibres nation-wide serve as the backbone of existing telephone, cable TV and Internet services. The only problem is the cost involved in installing a network linking many homes. To reduce the cost, a single fibre can be used to serve a cluster of homes rather than having a separate fibre for each home. From a central office, the fibre carries the data to a small box near the curb, from where traditional copper wires or coaxial cables connect to about 10-15 homes.
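
The arithmetic behind that claim is easy to sketch. The figures here are assumptions chosen for illustration: a classic digitised phone call uses 64,000 bits per second, and we suppose a single fibre carrying one terabit per second:

```python
# Rough capacity arithmetic with illustrative, assumed figures:
# a digitised phone call at 64 kbit/s over a fibre assumed to carry 1 Tbit/s.
call_rate = 64_000              # bits per second per voice call (classic rate)
fibre_rate = 1_000_000_000_000  # 1 terabit per second, an assumed fibre capacity

simultaneous_calls = fibre_rate // call_rate
print(f"{simultaneous_calls:,} calls at once")  # 15,625,000 calls at once
```

Even at this modest assumed capacity, one fibre handles over fifteen million simultaneous calls, which makes the 'all the phone calls in the US' claim sound much less fantastic.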

This reminds me of how a strikingly similar number (10-15) of sense receptors is served by each single nerve fibre. These single fibres, arising from various points in a body region, unite to form the nerve trunk. This nerve trunk is common to all forms of sensation! Just as we are trying to use the same cables for carrying TV, telephone and even Internet data, the sensory nerves carry all forms of sensation: touch, pain, temperature and so on. These nerve trunks carry the data up to the spinal cord, where data streaming in from all over the body merges into a single conduction line, a kind of hybrid conduction cable.

Broadband Internet-oriented satellite networks are already established in our society; the user accesses the Internet data via a small dish antenna. Digital subscriber lines (DSL) are now meeting with growing success, exploiting the trusty copper phone wires whose transmission capacity has never been fully used in the century and more since Alexander Graham Bell. It is projected that DSL will surpass cable modems soon; cable modems have had a head start, but that is not enough. In DSL technology, the Internet data and the voice data are sent over ordinary phone lines to the user. A DSL modem translates the Internet data for the user's PC, while a microfilter passes the voice signal to the phone. The same theme of carrying multiple forms of data over a single conduction cable recurs again!

At any given time, information of a diverse nature keeps flowing to and fro in your overhead and underground cables. Telecommunication companies prioritise the allocation of cable space depending on demand. There is always a drive to pack more information into your cables by increasing the efficiency of conduction and by increasing the number of fibres inside a cable. Optic cables, to a large extent, fulfilled the demand. Still, we try other means, such as condensing (compressing) the information before transmission so that it occupies less space in the premium cable. This is possible because much of the information is repetitive and redundant.

The information highway in our nervous system is no less complicated than the telecommunications of our society. Information of a diverse nature, captured by our versatile receptors, is relayed to the conducting 'cables', our nerves. The nerves play the role of picking up information, as well as delivering it, to selected geographical locations in the body. Each nerve has well-defined regions it has to cover. Anatomically speaking, the nerves are usually named after the body region they serve.

The information from the sense receptors is carried by the nerves via the spinal cord to the brain. The spinal cord is the real information superhighway. It is a tubular column connected to the base of the brain, just about the thickness of a pencil for most of its course downwards. It is housed inside an opening in the vertebral column that runs down your back. You can actually feel your vertebral column as a line of bony protrusions right down to your lower back. It is an extremely important structure because it carries the spinal cord and also acts as the scaffold for attaching your back muscles. Breaking the vertebral column can have devastating consequences because it can sever the information link with the brain.

Christopher Reeve, the 'Superman' film hero, is probably the most famous example of the effects of de-linking the information highway, i.e. the spinal cord, from the brain. It is currently medically impossible to set right a defect such as this, because the nerves cannot be made to grow. The nerve cells are evolutionarily incapable of cell division after an early stage in your development. I wonder if they lose the ability to divide and multiply because of the functional difficulties that would arise if they did. The function of a nerve cell is vastly different from that of your other cells. Other cells in your body, like liver cells, skin cells, hair cells and intestinal cells, divide at a high rate because they have a high turnover: new cells have to be produced to replace the lost ones. For these cells, it is just a question of going through the process of cell division; the daughter cells produced are conferred with the genetic information for their biological function and can start functioning straightaway. The case of a nerve cell is vastly different.

First of all, nerve cells have no function other than the acquisition, transfer and processing of external and internal information. Much of this information is stored as connections between different neurones. They establish numerous connections with other neurones, near and far, to enable the generation of information. How they do that is a hot topic of biological and computer research. Nerve cells are powerless if they are not part of the network. This network of exquisite nerve connections arises all through your life as you learn. It is virtually impossible for a newly divided nerve cell, even if nerve cells were able to divide in the first place, to recreate these connections. These neuronal connections are environmentally acquired. Depending on a person's learning, training, and emotional and mental experiences, he or she acquires this unique set of nerve links over a lifetime. It is not genetically ingrained.

Genetic information and neuronal information are different. Perhaps this is the reason the nerve cells have lost the ability to divide. It is, however, true that nerve cells die: a significant number die as we age, and they are not replaced, for the reasons I discussed. Anyone over 50 will tell you how fast they are losing their memory! When you get even older, remembering even simple things becomes a big task. People suffering from neurodegenerative diseases like Alzheimer's experience very severe forms of memory loss and impairment of cognitive abilities.

Brain surgery is incredibly difficult because it will almost surely cause inadvertent damage to other regions of the brain, and it is not possible for the brain to rectify the damage by any form of healing. The brain cells are densely linked to each other, and any damage to these links is forever.

The spinal cord is an elongated structure that links the peripheral nerves to the central brain. It lacks the intricacy of the brain's cell-to-cell connectivity; it is more of a giant information-conducting structure, and very little information is processed inside it. Technically, repairing the broken backs of unfortunate people like Christopher Reeve is a question of joining the severed ends of the spinal cord. But the reality is that we cannot do even this yet. Researchers have been trying strategies such as applying growth factors, like Nerve Growth Factor, locally to stimulate the cut ends of the nerves to grow longitudinally and reseal in a natural way. Some preliminary studies have given hope that we may be able to do it someday.

I was talking about the diverse forms of information carried by the spinal cord. All peripherally acquired information necessarily has to go to the brain via the spinal cord. I said the receptors transmit the information to the nerve cables first. These nerves are local conduits situated all over the body; there is a nerve for every part of your body. These cables then enter the spinal cord highway at different levels, depending on their location. A nerve cable carrying information from the leg is likely to enter the spinal cord at its lower end, while nerves carrying information from the upper parts of the body enter at the top end.

The spinal cord can be viewed as a giant nerve collecting together all the information brought in by the various nerves situated all over the body. It is a dual carriageway in that, just like the nerves, it carries information towards the brain as well as away from it. There are well-defined conduction pathways in the spinal cord for sensations such as touch, pain and the sense of balance. Sensations of touch or pain or any other form, arising from all over the body, will eventually find that they have to compete here for conduction space: a finite number of fibres is available on a common basis, to be shared by the whole body.

The nerves that bring in the data have to form a link inside the spinal cord with another type of nerve cell to register their call. This registration enables the transfer of data from the nerve to the spinal cord. Touch sense will be routed through channels devoted to touch; pain sense will be channelled through the tract devoted to it; similarly, balance sense will be sent through different channels. The spinal cord has only a limited ability to process information, but some emergency information is dealt with by the spinal cord quickly. The best example would be the reflex withdrawal of your hand when you touch a hot object. You have withdrawn your hand even before you realise it consciously. There is no way you could wait for the information to flow all the way to the brain and be processed there before saving your hand. This is high-priority information, and it is better to handle it on an interim basis without waiting for the brain to respond. The brain eventually learns the nature of the object that caused the pain.

The quick withdrawal of the hand happens because of local information circuitry within the spinal cord, which has a limited information-processing ability. The spinal cord has nerve cells called inter-neurones, and the pain information is routed through two or more such inter-neurones sequentially before a command can flow out to the muscles involved in moving the hand.

If you look at the withdrawal movement of your hand, two groups of muscles have to be controlled. One is the flexing muscle group, which pulls your hand towards your body; the other is the extending muscle group, which straightens your arm. When you are withdrawing your arm, it is the flexing muscle that has to contract, while the extending muscle has to be inhibited from contracting. Normally, all over your body, muscle groups of opposing actions are in a tug-of-war all the time. Both opposing muscle groups are mildly contracting, generating the muscle tone we all carry as some degree of stiffness, even if we do not realise it. But if you see someone who has a nerve palsy, you can see how flaccid his muscle groups get, because his nerves cannot generate this muscle tone. Which muscle group will actually contract to bring about an action depends on the prevailing circumstances: when one is allowed to contract, the other has to be actively inhibited, in a kind of dual-control mechanism.
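
The dual-control logic can be caricatured in a few lines. This is a toy illustration of the idea, not a physiological model; the function name and states are invented for the sketch:

```python
# A toy sketch of the dual-control (reciprocal inhibition) idea: when the
# withdrawal reflex fires, the flexor is told to contract while its opposing
# extensor is actively inhibited; at rest both keep a mild 'tone'.
def withdrawal_reflex(pain_signal: bool) -> dict:
    if pain_signal:
        return {"flexor": "contract", "extensor": "inhibit"}
    # resting state: both opposing groups hold the mild tug-of-war of tone
    return {"flexor": "tone", "extensor": "tone"}

print(withdrawal_reflex(True))   # {'flexor': 'contract', 'extensor': 'inhibit'}
print(withdrawal_reflex(False))  # {'flexor': 'tone', 'extensor': 'tone'}
```

The point of the sketch is simply that a single input drives two opposite outputs at once: one muscle group is never switched on without its opponent being switched off.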

This point probably illustrates the idea that the spinal cord is not a conduction channel alone. It is more of a telephone-exchange type of structure, in that it helps re-route the information without altering its content. Usually, there are three points of information relay as the sensory information travels along the spinal cord.

The first information transfer occurs at the point where the peripheral nerve establishes contact with a spinal cord nerve cell. In simple terms, it is like one messenger passing on the information to the next in line. This second messenger nerve cell can take it as far as the base of the brain. It then has to pass it on to another nerve cell in the region of the brain called the thalamus. This is an extremely important and terribly busy information exchange point. From here, information about the rest of the world is ready to reach the neurones of the cerebral cortex (the thinking part of the brain). The sensory information arriving from all over the body is despatched to the right parts of the brain depending on the nature of the stimulus. If it is visual data, it will be delivered to the optic cortex, the part of the brain at the back of the head, just above the neck. If it is sound-related data, it will reach the part of the brain that deals with sound. Pure sense information like touch and pain will reach the sensory cortex of the brain.

The thalamus is also crucially important in determining whether some data will reach the consciousness of a person. In other words, whether some data will be relayed to the higher thinking centres in the brain depends on the thalamus. It regulates entry of information into the conscious space by means of a filter mechanism. Information that does not reach a certain threshold of importance will not be let through this filter. It depends on whether the brain needs to devote undivided attention to handle the data. There is no place for weak, half-hearted stimuli. The brain has other jobs to do than dealing with information that is a waste of time. When you want to speak to the boss, you had better bring in some information that matters. Or be ready to be thrown out.
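The thalamic filter works like a salience gate, and that idea can be sketched in a few lines of Python. The stimulus labels and salience values below are made up for illustration; only the thresholding behaviour is the point.

```python
# A toy salience gate, standing in for the thalamic filter: only
# stimuli above the current threshold of importance are passed on
# to the conscious brain. Labels and values are illustrative.

def thalamic_filter(stimuli, threshold):
    """Pass only stimuli whose salience meets the current threshold.

    `stimuli` is a list of (label, salience) pairs. The threshold is
    raised when the brain is busy or asleep, so fewer signals get through.
    """
    return [label for label, salience in stimuli if salience >= threshold]


signals = [("rustle of leaves", 0.1), ("loud bang", 0.9), ("light touch", 0.2)]

# Relaxed and alert: a low threshold lets more through.
awake = thalamic_filter(signals, threshold=0.15)
# Asleep or deeply occupied: only intense stimuli reach consciousness.
asleep = thalamic_filter(signals, threshold=0.8)
```

With the low threshold both the bang and the touch get through; with the sleep threshold only the loud bang is strong enough to wake the brain, just as the text describes.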

I mentioned the reticular activating system some time ago. I said it is a diffuse transfer of all forms of sensory data to the brain in order to arouse it. This needs dedicated channels at the base of the brain, and it is from the thalamus that this occurs. Whether the brain will be awake, alert and conscious directly depends on incoming impulses in the reticular activating system, relayed by the thalamus after filtration of the sensory data. Information is relayed to the conscious brain only if it is of sufficient value. A loud sound could qualify for waking a sleeping person. A severe physical blow can carry enough pain impulse to arouse us.

It is not always necessary for something to be of survival value, either. Often we find that we can walk past somebody without the data registering in our brain. He or she would have been in your visual field, but you were so busy in your thoughts that you failed to 'notice' the person. When you are engaged in serious thinking, you can fail to hear someone calling you. It is easy not to notice so many things happening around you when you are occupied in intense mental activity. It is a question of available conscious space when it is fully occupied. The thalamus can filter out data when the brain is too busy. It can do the same when the brain is not aroused, i.e., sleeping. It is this filtration mechanism that protects the brain from being inundated with sensory data. You have no idea of the extent to which our brain is shielded from useless information.

You have to keep in mind that internal environmental data like blood pressure, oxygen concentration, pH etc. are relayed from the thalamus to local control centres in the base of the brain. There is no need for them to reach the conscious brain anyway. Similarly, when the brain has to send out control signals to the internal organs, totally separate pathways handle them. None of us has any voluntary control over our heartbeat, digestion or internal metabolism. An autonomic division of the nervous system regulates them all. It is comprised of chains of neurons arranged in lines near the organs concerned. These ganglia are almost like local information-processing centres situated close to the user organ. Sensory impulses from visceral organs like the stomach, intestine and urinary bladder are received here and appropriate responses sent back. The thinking parts of the brain never receive any of this sensory information.

In the course of all this discussion, readers may be wondering about the complexity of establishing connections between nerve cells all over the body. True to any telecommunication system, our body has an absolute need to precisely wire up the information pathways between body cells. If you requested a new phone line from BT, you would be linked to an exchange via a cable. Your location is clearly defined with a number. Your phone number represents a unique identity that links you to the exchange, through which you can be singled out in the world of information. All forms of identification of individual communicating entities, like phone numbers, e-mail and web addresses, are quite specific. You are well defined in the information space out there. If someone wants to contact you, they know where to get you. It is all because the phone company laid out your cable, connecting you with the network, in such a way that you can communicate with others. It is a precisely guided job.

We all know that nerves transmit information to other nerve cells in a kind of information relay. Most often, the recipient nerve cell is located very far away. For example, nerves carrying information from the leg enter the spinal cord highway at the lower end of your vertebral column. The information then travels nearly two feet up to the base of the brain before the message is delivered. This is an enormous distance considering cellular dimensions. The job of transferring the information to distinct groups of nerve cells in the brain involves laying out nerve connections that precisely link up the cells concerned. This obviously happens early in your development, and often continues to happen on a regular basis as new skills are learned. It is now known that neuronal connections are remodelled and re-created all the time to suit the demands of the individual. A newly learned skill requires the establishment of a connection between the brain's memory centre and the nerve cell group involved in carrying out the task itself.

It is amazing how the neurones manage to do that. Experimental evidence is now throwing light on the mechanisms involved. It is now clear that virtually all neurones in the mammalian brain migrate from their origin to distant territories. In the developing nervous system, most neurones originate at sites different from the location where they will ultimately end up. What controls their migration? We now know that neurones are actively attracted to their final destinations! Molecular cues seem to guide neurones as they migrate. Selection of migratory pathways seems to depend on interactions between migrating neurones and the surfaces of neighbouring cells. Wrong destinations are avoided because molecular signals repel the migrating neurones from territories they are not supposed to enter!

Wu et al. reported the discovery of a protein in the developing mouse brain that does this job. This protein is a member of the _Slit_ family of proteins, which were initially identified in the fruitfly, _Drosophila melanogaster_. It is now found to occur in the brains of amphibians, birds, rodents and primates, including humans. There have been reports of other experimental work by different groups of researchers relating to this intriguing finding. It is now believed that _Slit_ protein on the surface of cells in territories where the neurone is not supposed to stay interacts with a receptor, interestingly named _'roundabout'_, on the surface of the migrating neurone. This results in a repulsive reaction, diverting the neurone away. The _Slit_ proteins can also detach from the cell surface, diffuse out, and do the same job.

Basically, a nerve cell has two options to find and establish a link with that faraway nerve cell. It can either move itself physically towards the destination cell, or grow out tail-like projections called axons to reach out. In the latter case, the nerve cell stays at its original location but can still reach out to the nerve cell it wants. And it is true that there are nerve cells which physically migrate as a whole.

All forms of organisms, from the beginnings of evolution, have relied on this ability to move in their environment to reach food sources or other supports. They have learnt to shy away from repulsive cues and seek out attractive ones. In the case of nerve cell journeys, some of them do not survive as they try to locate their destinations. In the 21 October 1999 issue of _Nature_, Wang and Tessier-Lavigne, of the Department of Anatomy and of Biochemistry and Biophysics, Howard Hughes Medical Institute, University of California, reported that the molecular 'signposts' not only guide the growing axons but also support their survival. Keeping the axons alive only if they follow the right route could ensure that the correct neuronal connections are formed!

The concept of 'Neural Darwinism' apparently operates here. During development, an excess of neurones is produced as a kind of standby. They compete with each other for a limited amount of attracting signal, as their survival depends on finding it. Those that do find it survive and establish a lasting connection. Those that don't are eliminated.
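The competition just described can be sketched as a small selection routine in Python. The candidate names, scores, and the idea of 'units' of trophic signal are illustrative simplifications, not biological measurements; only the survival-of-the-best-positioned logic is the point.

```python
# A toy model of 'Neural Darwinism': surplus neurones compete for a
# limited supply of attracting (trophic) signal. Those that secure a
# unit survive; the rest are eliminated. Names/scores are illustrative.

def neural_selection(neurones, trophic_supply):
    """Return (sorted) names of neurones that win a unit of signal.

    `neurones` maps each candidate's name to how well positioned it is
    to capture the attracting signal (higher = better positioned).
    """
    ranked = sorted(neurones, key=neurones.get, reverse=True)
    survivors = ranked[:trophic_supply]  # only this many units exist
    return sorted(survivors)


candidates = {"n1": 0.9, "n2": 0.4, "n3": 0.7, "n4": 0.2}
surviving = neural_selection(candidates, trophic_supply=2)
```

With four candidates but only two units of signal, half the neurones are eliminated, however healthy they are in themselves; survival is decided purely by the competition.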

I guess it must now be clear how the cells in our nervous system accomplish the daunting task of 'wiring up the nerve cable connections'. This process of nerve cell linking continues in the brain on an ongoing basis for much of our life. Don't new customers join BT? Doesn't it have to lay new cables? As the brain has grown over evolutionary history, its cells have become more densely linked than ever before. Our own society is facing the same problem.

I was talking a while ago about people like Christopher Reeve who have been unfortunate enough to damage their spinal cords, and I indicated how bleak their chances are of hoping for a medical cure. There is, however, some experimental work underway that hopes to exploit this neural linking capability to repair the damaged spinal cord. Martin Schwab, at the University of Zurich, Switzerland, is exploring the chances of restraining the inhibitory proteins that prevent neural fibre growth, in order to let the fibres grow at the cut ends. In the mid-1980s, he was the first to suggest that the nerve regeneration problem was due to molecules secreted by the mature nervous system that hold the neurones back from growing. The forward growth of the cut nerve cell is prevented by this inhibitory factor, which Schwab hopes can be overcome if we could find a way of restraining its action. This could perhaps be done by using specific antibodies that neutralise the inhibitory factor. The remaining concern is how to put the brakes on the damaged fibres so they do not overgrow once released from this inhibition.

The emerging view nowadays is that neurones have no problem in re-growing if you provide them with the right help. Fred Gage, at the Salk Institute for Biological Studies in San Diego, is trying to persuade injured neurones to grow by providing nerve growth factors in their vicinity. He hopes to deliver these growth-supporting factors in the form of genetically engineered fibroblast cells, transplanted near the cut ends of the nerves, where they can make the nerve growth factors.

There is a group of researchers who think the nerve cells are supported in their growth by support cells called glial cells. I said earlier that there are many billions of glial cells in our nervous system and that they outnumber the actual nerve cells. These glial cells provide the right working environment for the neurones. Geoffrey Raisman, a neurobiologist at the National Institute for Medical Research in London, believes that these glial cells are track-layers, enabling new connections between cells in the nervous system. He showed that transplanting such cells into the injured spinal cord of rats helped form a perfect bridge, and the cut fibres grew along the tracks they laid, to the other side. The new connections allowed the rats to regain movements that were lost after the spinal injury. Mary Bunge, a researcher at the Miami Project to Cure Paralysis, a leading centre on the repair of spinal injuries at the University of Miami, and her colleagues have come up with similar results on even bigger spinal injuries.

The basic theme of all this discussion is the establishment of links in communication pathways to enable signal transfer between two or more points. New connections are laid during development of the nervous system, and this continues during adult life as we learn new tasks. Repairing damaged and faulty nerve cables apparently poses a problem for mechanistic reasons. Medical science may one day help us get over this difficulty, like that BT engineer who sorts out faults in your cable!

Now the most important question: how is it possible for the brain to decode the same form of electrical signals back into their original forms of sensory information? If the same form of electrical impulse is generated by touch, temperature, taste, light, pain and all other receptors, then how can the brain decipher the original information? How can it distinguish pain information from touch? How can it separate out taste information from a touch sensation? How can it differentiate sound signals from a visual stimulus?

Electrical impulses from a touch receptor are identical to those from a temperature receptor. In fact, electrical impulses from all other types of sensation are identical too. But the brain can still decode the information. It is amazing that the brain can distinguish the type of sensation even though they all arrive in the same electrical format. How does it do that?

To explain how the brain works out the delineation of sensations, I am reminded of the original telegraphs. In the telegraph system, distinct wires were allotted to the respective letters of the alphabet. If a current came along the wire dedicated to a particular letter, it was easy to know which letter was being transmitted. In a way, sensations are identified the same way.

Within the nervous system dedicated sensory fibres are used for each sensation when information is transmitted along the spinal cord. Even more important is the fact that the sensation evoked by impulses generated in a receptor depends on the specific part of the brain they ultimately activate.

As we know, the receptors send impulses down the nerves towards the brain. This information pathway is hardwired at the time of birth. Incredibly, the brain is divided into functionally distinct, specialised areas. Each region of the brain is dedicated to specific functions. For example, visual signals are dealt with in the portion of the brain at the back of our head. Memory is handled by a portion of the brain at the sides of our head. Similarly, movement functions are controlled by the motor cortex, another functional part of the brain. A distinct region called the sensory cortex handles the feeling of sensations of all types. The anatomical development of a human being makes sure the right sensory tracts are linked to the right information-processing regions in the brain. So the type of sensation is easily made out by the brain from the location at which it receives the signal.

If, by any chance, the brain makes improper connections during development, then the person is likely to feel the wrong sensation. This medical condition is not unknown. It is called synaesthesia. A person with this rare condition may see colours when listening to music, or feel tactile shapes while tasting foods. In one particular form of synaesthesia, the individual experiences colours when viewing letters or numbers! In simple terms, it is like making a telephone call and realising that it is the wrong number. The only difference is that synaesthetic patients make the wrong sensory call every time, because of the wrong nerve wiring. In fact, the wiring is so wrong that, in equivalent terms, it is like the lights turning on when you switch on your TV!

The next question is: how does the brain localise the source of sensory information? When we touch an object, how do we know the feeling of touch originates in the hands and not the legs or some other part of the body? When you suffer an injury to your leg, how do you feel the pain in the leg only and not elsewhere? And of what use would the sensation be if you could not tell where it came from?

Localisation of pain is as important as the ability to sense the pain. The point of pain is to act as a warning for the organism to escape from the source of injury. The trick is, again, accomplished at the hardware level. The brain is not only functionally differentiated to sense the different forms of sensation but also to identify the location on the body from where the stimulus originates. This is made possible because the sensory cortex of the brain is divided into many regions, each concerned with a distinct part of the body.

If you take the part of the brain that deals with reception of sensory information, it is further divided into regions dedicated to distinct parts of the body! The sensory cortex contains areas for the hands, arms, legs, trunk, abdomen, lips, face, toes, fingers, back etc. All parts of your body are incredibly represented in your brain, region for region! These regions represent the sensory interests of each of these parts of the body. All parts of your body are linked to the respective parts of the brain through sensory pathways, just as your telephone is linked to the telephone exchange. If a call is made from your phone, the telephone exchange is able to identify the source of the call as yours because of the hardwiring.

It is now even possible to trace cyber information too. Your e-mail message reaches the right recipient because the Internet service provider links your message to the computer of the person you want to contact. By linking the parts of the body to distinct sites in the brain through dedicated information channels, the brain is conferred a similar ability to trace the source of the information 'call'. If the site in the brain concerned with the right index finger is fired up by incoming electrical impulses, it is a sure way of knowing the information originated in the right index finger. If the site devoted to the left toe fires, then it is easy to know the source is the left leg. It is as simple as that.
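The dedicated-line idea, often called labelled-line coding, can be sketched as a simple lookup table in Python. The line names and the wiring entries below are hypothetical stand-ins for real nerve pathways; the point is that an identical impulse is decoded purely by the channel it arrives on.

```python
# Every impulse is the same electrical event; what identifies it is
# the dedicated line it arrives on. This hypothetical wiring table
# maps a channel to the modality and body location it serves.

WIRING = {
    "line_17": ("touch", "right index finger"),
    "line_42": ("pain", "left toe"),
    "line_88": ("vision", "retina"),
}


def decode(channel: str) -> tuple:
    """The brain 'decodes' an identical impulse purely by its source line."""
    return WIRING[channel]
```

An impulse on `line_42` is read as pain in the left toe, and the very same impulse on `line_88` would be read as light: the content of the signal never changes, only the wire does.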

The funny thing is that if you stimulate the neural pathways of sensory transmission at any point along their travel towards the brain, you still feel the sensation. If you directly stimulate the brain region connected with the sensation, you feel the sensation as if it were real, in the complete absence of any real stimulation of the receptors! Or, if you physically stimulate the nerve carrying the sensations at a point along its route, you still get the same sensation as if it were real. Again, no receptors are stimulated! Müller originally proposed this, quite long ago, as the doctrine of specific nerve energies. It is all down to the wiring of your body's information channels, similar to telephone exchange connections.

This projection of sensory feeling to the exact location of the sensory receptor region has a bizarre effect in people who have had limbs amputated for some medical reason. At the cut end of the limb, the nerves have been cut too. These nerves are now exposed to pressure, touch and movement, and they can fire electrical discharges even though they no longer have sense receptors beyond the cut end. Physical manipulation of the nerve is enough to fire them. The amputated patients feel sensations of touch, pressure and pain that are projected to the absent limb!

I said a while ago that the feeling of sensation can be provoked in the absence of real sensory receptor stimulation if you have a way of directly stimulating the sensory nerves that transmit the information from those receptors. This is the doctrine of specific nerve energies. In amputees, the most bizarre aspect of this is that the sensations are projected to a limb that does not exist! When the nerve that was formerly joined to the limb's sense receptors is stimulated by physical manipulation, there is no way the brain can tell that the sensation is not real. When the brain region that receives the input from the nerve is activated by these discharges, the effect is a real feeling of sensation in a patient who has no limb at all! Physiology textbooks call it the _'phantom limb'_.

Neurophysiological experiments have clearly shown that the feeling of sensation can be produced when there is nothing real to be perceived! Stimulation of the sensory cortex of the brain by mild electrical currents, sent through electrodes placed directly on the brain, produces real sensations depending on the site of stimulation. A subject can be made to feel touch, pain, coldness, warmth and pressure when nothing has happened to him! He can be made to feel sensations at specific locations of the body, as if by hypnosis! These observations raise profound questions about the very nature of our experiences. If somebody can be made to feel something in the absence of a real experience, just what can you conclude about the reality of the world? I know this is not a book on philosophy, but I can't help wondering. I am reminded of the Hindu spiritual teaching that our world is a reflection of _'maya'_, a word that denotes the unreality of our world and our experiences.

Hallucinations can occur in any of the sensory modalities: smell, taste, vision, hearing or touch. The subjects feel these sensations in the absence of real information input. Usually, hallucinations have an organic cause in the brain. An illusion is different from a hallucination in that there is a real object, and the illusion is a misperceived image of it. Obviously, it is a misinterpretation of the object for psychological reasons.

Illusions are prompted by external stimuli and ambiguous circumstances, frequently within a framework of heightened expectations. For example, a person with a deep sense of guilt can find the rustling of leaves to be reproaching voices. The patient may strongly believe that there is an input of auditory information in the form of incriminating voices, but the truth is there isn't any. Illusions, too, can have a pathological basis in the brain.

# 8. PERSONALISED COMMUNICATION

Sensory information is all over the place. You take what you want and leave out the rest. We share this world with millions of types of other organisms and we are exposed to the same environment as them. Organisms have evolved 'personalised' sensory systems to selectively pick out information they want. It is a perfectly sensible scheme because it allows the species to maintain individuality. At the same time, they are protected from an overload of information.

Dogs can hear sounds in the ultrasonic range (above 20,000 hertz), which we can't hear. Most other animals can't hear them either. Any stimuli arriving in this portion of the information spectrum are accessible only to the dogs. Dogs have an extremely sensitive sense of smell too. They can pick out the faintest of odours, and that is why we use them for crime detection and hunting. Insects use scent molecules, called pheromones, to communicate with their potential mates. For obvious reasons, only members of their own species respond to them. It is personal. A beautiful woman walking down the road is an extremely potent stimulus to a man, but not to the dog next to you. Nor is she to the other life forms around you. A currency note lying on the ground is useful information to you, while a cat would not even sniff it. A mouse is a good appetiser for the cat, but not for you.

Bees and migrating birds seem to have the capacity to sense the magnetic field of the earth to help them navigate. Most other animals are not capable of this feat.

Man speaks hundreds of different languages and dialects. I cannot understand more than three languages, namely English, Tamil and Kannada; the latter two are spoken in the southern parts of India, where I come from. If I went to China, I could not understand a single word of what the Chinese people speak. They could discuss the most confidential matters, even military secrets, without worrying about me!

The fact that I, like many other fellow citizens of the world, cannot understand Chinese does not mean that the Chinese developed their language as a means of maintaining secrecy in their communications. Languages are tools of information transfer. Nobody thinks of language as a secret code. As it turns out, people in different countries have restricted information available to them because of language barriers. We do not know what is happening in the scientific, spiritual and literary spheres of other countries. They speak, write and communicate beautiful ideas and thoughts amongst themselves. Humanity never intended to shut itself behind information-proof language barriers. That is just the way systems evolve.

In the presence of other people who can understand the language, we take extra precautions to prevent important information from being overheard. We talk in hushed voices, use personal phones (no guarantee that it is not bugged), use fax (no guarantee of secrecy again), letters (no guarantee), or e-mail (no guarantee). By and large, these mechanisms work well in most situations to help maintain secrecy.

Sensory information capture is like walking into a supermarket and picking only those things you want. All shoppers in the supermarket could use most of the items displayed, but they buy different sets of items to meet their needs. No shopping trolley will have the same set of items as any other. You could say it is also like going to the library and reading only a selective list of books or journals, though, theoretically, you have access to all the books. It is also like surfing the Internet. You will be flooded with information, most of which has no relevance to you, if you are not highly selective in choosing what you want. The information of value has to be fished out of an information-laden maze.

The selective information access arrangement helps members of the world utilise its resources in a non-competitive manner. It is true that sometimes there is 'cross-talk' between the communication channels of different species. For example, a piece of meat could be useful information to lots of carnivorous organisms. Similarly, water is a useful commodity to almost all organisms. This results in competition between them. Only the fittest get more access to water and meat. In most cases, though, competition is seen only amongst members of the same species, because they access the same information as each other. I do not consider a horse, a monkey, a cat, plants, or microbial and animal forms as threats to my life. They mind their own business, living in a world of their own, accessing information I am not interested in. They are not interested in what I want, and so there is mutual harmony.

Apart from the personalised information channels, living systems have also evolved means of filtering information so that they can pay little attention to what they don't want. It is like surfing the Internet again: you have to leave out what is useless to you. When you switch on your TV, it is you who decides which channel you are going to view. You are selective. Obviously, nobody can watch different channels at the same time; our brain is not capable of it. We view programmes by filtering out the unwanted channels. Even on the channel you are viewing, you watch only some programmes and leave out others. You have neither the time nor the interest to watch them all. The TV stations broadcast a variety of programmes to cater to the tastes of all the people. That does not mean they expect you to see all of them.

I said earlier that cells in your body communicate with each other. They do so as much as you do in your society. Cells 'talk' to each other using a number of 'molecular languages'. Each language is a chemical molecule capable of conveying a specific message. Not all cells can understand all languages. Can we?

The cell membrane is like a boundary which separates the cell from its environment. It has information-capture molecules called receptors stationed on its surface. They are the equivalents of the receivers in our communication tools.

The cell membrane is actually a fluid structure, and the receptors can float around in it. There are many types of receptor molecules here, each specific for one type of molecular message. Testosterone receptors will bind testosterone molecules. The adrenaline receptor will bind adrenaline only. Insulin receptors will bind only the insulin molecule. Cells respond only to certain types of molecular information, because the complementary receptor for a given molecular messenger may not be present on other cells. This brings specificity to inter-cellular communications. Would you ever hope that your letter will reach the desired destination without writing the address on it? The cells don't do that either.

A messenger molecule like testosterone, which is a male sex hormone, can 'talk' to gonads because the gonadal cells alone have the specific testosterone receptor on their cell surfaces. It is a kind of one-to-one communication. No other cell can 'read' this message because they cannot decode the message encrypted in the molecule called testosterone.

Information capture through specific receptors is a ubiquitous phenomenon in inter-cellular communication. Specificity in cellular communication is made possible this way. The cells cannot respond to all the messages out there. They do not have the capacity to handle all of them, just like we cannot understand all languages.
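This lock-and-key specificity can be sketched as a membership check in Python. The receptor inventories below are invented for illustration and are not a physiological catalogue; the point is only that a cell 'understands' a message if, and only if, it displays the matching receptor.

```python
# Receptors as locks, messenger molecules as keys: a cell responds
# only to messengers for which it displays the complementary receptor.
# The receptor sets below are illustrative, not a real inventory.

CELL_RECEPTORS = {
    "gonadal cell": {"testosterone"},
    "liver cell": {"insulin", "adrenaline"},
    "heart cell": {"adrenaline"},
}


def responds(cell: str, messenger: str) -> bool:
    """A message is 'understood' only if the matching receptor is present."""
    return messenger in CELL_RECEPTORS.get(cell, set())
```

The same testosterone molecule washes past every cell in the model, but only the gonadal cell, which carries the matching receptor, can 'read' it; to the heart cell the message simply does not exist.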

Each biological task needs a specific molecular command, emanating from the regulatory cells. These molecular messengers have to be transported down the blood stream to reach cells situated in the far corners of the body. Once in the vicinity of the target cells, the messages have to be captured and decoded. There is a curious difference in the way the messages are passed on. Not all the molecular messengers can get into the cell, even if they speak a language the cell understands. Most message reception occurs at the cell surface, which means the receptors stop the information-carrying primary messenger at the cell gate. But there are some other information carriers that are allowed right into the interior of the cell. They go straight to the nucleus and unload the information.

Figuratively speaking, such molecules enter the cell and proceed to the 'manager', the nucleus. The nucleus takes the message and responds. Steroid hormones, like the sex hormones, are examples of hormones which can enter the cell and go straight to the DNA to activate or deactivate gene programs. They do so by binding to their receptors, which are unusually located, bound to the DNA. The steroid hormone-receptor complex has the role of displacing inhibitory protein complexes (which inactivate genes by removing acetyl groups from the histone proteins around which the DNA is packaged). The steroid hormone-receptor complex can also bind to steroid-responsive DNA sequences, accelerating the expression of certain genes which are meant to be controlled by steroid hormones like the sex hormones or cortisol.

Not all hormonal or growth factor messengers are as privileged as this. In most cases, the messenger is stopped at the cell surface. They are not allowed in. Instead, so-called 'second messengers', generated at the specific receptors, take the message inside. These second messengers pass the message on to molecules called _'tertiary messengers'_. The tertiary carriers of the message transmit the commands to the effector molecules, which bring about the desired effect. The information cascades inside life forms involve serial representation of the 'code' at multiple steps. Code transformation across cellular compartments is universal in biological communication. What is amazing is that the information is normally not lost or garbled in the process.
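The relay just described can be sketched as a chain of carriers in Python. The stage names below are illustrative labels, not specific molecules such as cAMP or particular kinases; the sketch only shows that the message is re-represented at each step without its content being lost.

```python
# A toy signalling cascade: the hormone (first messenger) is stopped
# at the membrane, and the message is handed down a chain of carriers
# until an effector acts on it. Stage names are illustrative labels.

def relay(message, stages):
    """Pass the message through each stage unchanged, logging each hop.

    Each stage only re-represents the message in a new carrier; the
    content itself is never altered along the way.
    """
    log = []
    for stage in stages:
        log.append(f"{stage} carries: {message}")
    return log


trace = relay("contract!", ["receptor", "second messenger",
                            "tertiary messenger", "effector"])
```

Reading the trace from top to bottom, the command `contract!` changes hands four times yet arrives at the effector exactly as it left the receptor, which is the remarkable fidelity the text points out.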

This reads almost like the usual process you encounter in an office. You meet the receptionist at the entrance and are either shown the door or let in, depending on who you are and why you are there. Occasionally, you are asked to meet someone other than the one you hoped to meet. Whether you will be allowed to meet the boss depends on who you are. The cells do the same. They respond to molecular messengers in a discriminatory manner. I am sure you didn't expect the cells to be so bureaucratic. Did you?

I have to point out that the relay of information, irrespective of its nature, always needs these intermediaries. Let us say you want to write a letter to your friend who lives in a faraway town. The letter, posted in the post box, is cleared and sorted in your local post office to locate its destination. The address serves to personalise the information transfer. The letter is then carried by road, rail or air to the destination town, where it undergoes another round of sorting to narrow down the receiver. A postman who works in that town then delivers it personally to your friend.

The point to be noted is the compartmentalisation of the signal transfer process. The post office in your hometown and the post office in the destination town, and the people working in these offices, are stationed in different geographical regions. They continue the task of information transfer without any break. It is a state of continuum. Above all, it has to be realised that the letter was only a carrier of the information and cannot bring about any effect on its own. It is the information in the letter that spurs the recipient to act. If it was an official letter to an organisation, your letter will be read and processed by staff in that organisation before it reaches the chief administrator, who decides what to do with the information.

The cellular communication works in much the same way. External hormones, growth factors and neurotransmitters are like letters. They need a number of molecular intermediaries to carry their message into the cell, crossing the cell boundary, i.e. the cell membrane. The information is literally transferred to other carriers, which represent the message thereafter.

There are broadly four types of message-decoding receptors operating in our cells. In the first type, the receptors are actually ion channels that open and close in response to the primary messages. They are mainly involved in synaptic transmission. Neurotransmitters like acetylcholine, GABA and 5-HT use this approach. The permeability of the ion channel to charged ions like sodium or potassium enables changes in voltage across the membrane. This is translated into an electrical action potential.

The second type is the G-protein-coupled receptor. They are by far the most predominant type of receptor in the animal kingdom. The G-protein-coupled receptors display a level of sophistication in information transfer that our technology cannot achieve. They are also of the greatest interest to pharmaceutical researchers for medical applications. The G-proteins are special polypeptides attached to the receptors, and they play extremely important roles in signal transmission in biology.

We will see a bit more about G-protein receptor function a little later, but it is worth pointing out here that they can transmit the messages across the cell in a variety of ways, using either the cyclic AMP second messenger system, or the phospholipase C/inositol phosphate second messenger system, or even ion channels. The G-protein-coupled receptors have the enzyme adenylate cyclase linked to the G-proteins. This enzyme converts ATP to cyclic AMP, a potent, diverse and extremely important second messenger in animal biology. Information carriers down the line mediate the action of cyclic AMP. The action of cyclic AMP continues until an enzyme degrades it. This family of messenger-degrading enzymes has popular medical applications. The drug Viagra works by blocking one of them (strictly, the one that degrades cyclic GMP, a close cousin of cyclic AMP). Even caffeine acts partly through this action of preventing cyclic AMP degradation. Similarly, G-protein-coupled receptors can also activate another enzyme called phospholipase C through the agency of a unique type of G-protein. This enzyme acts on certain types of fat molecules present on the cell surface and liberates inositol phosphate and diacylglycerol. The final effect is the release of calcium, which acts as the third messenger here.
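The relay just described is also an amplifier: each stage hands the message on to many copies of the next carrier. A minimal sketch, with purely hypothetical gain figures (none of these numbers come from the text), shows how a single hormone molecule at the receptor can end up as thousands of messenger molecules downstream:

```python
# Illustrative sketch of a second-messenger cascade. The per-stage gains
# below are invented for illustration, not measured values.

def cascade(hormone_molecules, gains):
    """Multiply the signal through each relay stage, recording each step."""
    signal = hormone_molecules
    trace = [("hormone at receptor", signal)]
    for stage, gain in gains:
        signal *= gain
        trace.append((stage, signal))
    return trace

stages = [
    ("activated G proteins", 10),        # one occupied receptor can activate many G proteins
    ("cyclic AMP molecules", 100),       # each adenylate cyclase churns out many cyclic AMP
    ("activated effector enzymes", 10),  # cyclic AMP switches on downstream enzymes
]

for name, amount in cascade(1, stages):
    print(f"{name}: {amount}")
# one hormone molecule ends as 10,000 effector activations in this toy model
```

The exact gains vary enormously between real systems; the point is only that multiplication at each relay step is what lets cells respond to vanishingly small amounts of hormone.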

The third broad category of cellular receptors is used by growth factors, cytokines and even hormones like insulin. These receptors are linked to enzymes that can attach phosphate groups to tyrosine amino acids on various proteins. Therefore these receptors are also known as receptor tyrosine kinases. This class of receptors is of enormous medical significance, as a number of drugs currently used in cancer treatment act on receptors of this type.

The fourth type of receptor used by cells to decode the primary messages is the nuclear receptor. This is what I said a little while ago about the steroid hormone receptors. They act differently from the other types of receptors. The message decoding actually happens in a location deep inside the cell, inside the nucleus to be precise.

The problem in cellular communication using the second and third messengers is specificity. This is because many primary messengers actually share the same second and third messengers. If you were to have one special second and third messenger for each type of primary messenger, you would have a huge problem in managing the complexity of the situation. It is like having one postman dedicated to each household. If that were the case, I bet the vast majority of the government budget would be spent on paying the millions of postmen! Instead, it makes sense to have shared use of intermediaries. For society it is easy to use common postmen and couriers to deliver letters written by different people, with the specificity brought about by the address. The cells are economists too. They share common second messengers, and that sharing is precisely where the problem of specificity arises.

For example, the adrenaline receptor mediates the formation of cyclic AMP in response to binding the adrenaline signal outside the cell. An enzyme called adenylate cyclase, which is physically coupled to the adrenaline receptor, forms the cyclic AMP. It can sense the occupation of the receptor by adrenaline. This enzyme is in an 'off' state until the receptor binds adrenaline. Once adrenaline binds, the enzyme goes into the 'on' state and churns out cyclic AMP. The problem is that cells use the same cyclic AMP system to transduce many types of hormonal signals.

For example, adrenaline, adrenocorticotrophic hormone, thyroid stimulating hormone, melanocyte stimulating hormone and vasopressin all stimulate cyclic AMP formation. Even neurotransmitters like serotonin and GABA use cyclic AMP as their second messenger. All these hormones and neurotransmitters have their own unique receptors on the cell surface. They all run their own metabolic programs. But it gets a bit worrying when you see all of them coupled to the adenylate cyclase enzyme, which forms the same cyclic AMP. Now the trouble is how the cells bring about responses unique to each hormonal and neurotransmitter signal when they all seem to converge on the same second messenger system. Is the cyclic AMP formed any different, depending on the hormone? This is not possible. Cyclic AMP is cyclic AMP. There can be no two different types of cyclic AMP. Then what is the way out? There is scope to bring about variation in the amount of cyclic AMP formed. How is it accomplished?

Experiments have shown that there is hormone-specificity distal to the receptor level. In other words, each hormone retains a unique pathway of information flow despite using the same second messenger system. If you destroy one hormone's receptor, that does not affect the signal capture of other hormones. This experimental observation is interpreted as the presence of independent signal transduction pathways beyond the receptor, even though they use the same cyclic AMP formation to take the information forward. This interpretation is further supported by studies that looked for any additive increase in the cyclic AMP response when all the hormone receptors are stimulated by maximal concentrations of hormones. If they all went down the same path of information flow, then you would expect to see an additive effect in the amount of cyclic AMP formed. But this is not seen. These experiments clearly show that each hormone's information content is captured by unique receptors and transmitted down specific pathways, despite using the same cyclic AMP.

I cannot help but use the same postman example again. The postman may be the same for you and tens of your neighbours. That does not mean he delivers letters randomly. Does he? There is some specificity here. There has to be. The address is matched with the complementary house number on your door. That is enough for the postman. Adding further specificity is the street name.

Let us see if we can find out how the cells personalise the communication between them. The adenylate cyclase enzyme is the same and, naturally, the cyclic AMP is the same. It is true that the receptor, at least, is different. Is there any difference at all in the coupling of the hormone to its second messenger system? Yes, there is. The specificity lies in the type of a fourth regulatory molecule in the complex formed between the receptor, the hormone and the adenylate cyclase enzyme. This fourth entity is the G protein.

The G protein is now known in biological communication as one of the most influential, versatile and ubiquitous mediators of information transfer. It affects so many types of biological signalling in so many types of cells. An explosive amount of knowledge is accumulating on the role played by the G proteins in cellular communication. G proteins are made of three different protein units named alpha, beta and gamma, joined in a single molecular complex. I told you in the second chapter how gene information changes by combining the products of some genes with the products of others. This is a classic example. None of these three protein units can function alone as a G signalling protein.

The alpha unit can be of two types: inhibitory in nature or stimulatory. In other words, if a G protein coupled to a hormone has the inhibitory type of alpha sub-unit, then the response will be negative. If, on the other hand, the alpha type is stimulatory, then the signal is positive. Any G protein will have only one type of alpha sub-unit, stimulatory or inhibitory. The beta and gamma units do not discriminate between yes and no responses. They exist in a trimeric complex with an alpha sub-unit. This three-member unit is coupled to the receptor, the hormone and the cyclic AMP-forming enzyme.

When a hormone binds the receptor, the alpha sub-unit dissociates from the beta and gamma units. The alpha unit can break down the high-energy molecule guanosine triphosphate (GTP), liberating its energy. Once this is over, the alpha, beta and gamma units re-assemble and are ready for another bout of response.

It has been shown experimentally that there are 20 known variants of the alpha sub-unit within the broad classification of stimulatory and inhibitory types. At least 4 types of beta and 6 types of gamma sub-units are known too. Quite a significant number of different G proteins can be formed if you use these variants in different combinations. Could each hormone have a uniquely assembled G protein, with its own type of alpha, beta and gamma sub-units? Yes, that is what happens. In fact, this custom-made G protein is responsible for the hormone specificity. Adding further variation to the scheme is the experimental finding of 5 types of the enzyme that forms the cyclic AMP! This is as sophisticated as you can get!
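Using the subunit counts quoted above, a quick back-of-envelope calculation shows how much combinatorial variety is on offer (assuming, purely for illustration, that every combination can actually assemble in a cell):

```python
# Counting G-protein variety from the subunit numbers quoted in the text.
alpha_variants = 20   # stimulatory and inhibitory alpha sub-units combined
beta_variants = 4
gamma_variants = 6
cyclase_types = 5     # known variants of the cyclic AMP-forming enzyme

g_protein_combinations = alpha_variants * beta_variants * gamma_variants
print(g_protein_combinations)                  # 480 distinct G proteins
print(g_protein_combinations * cyclase_types)  # 2400 G-protein/cyclase pairings
```

Even before counting receptor variants, that is hundreds of distinct molecular 'addresses', which is how a shared second messenger can still carry hormone-specific mail.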

Messenger sharing is seen at the level of tertiary messengers as well. For example, a majority of the hormones use calcium as the third messenger. If this is the case, how can the cell distinguish the commands from different hormones, neurotransmitters and growth factors? In other words, if hormones a, b and c all activate the calcium signalling system, how does the cell know whether it has to do the task encoded by hormone a, b or c? What do the tiny cells do? This is similar to the problem I mentioned a while ago about neural transmission of electrical impulses for all modes of sensation, and how the brain actually deciphers the original sensory modality. The cells seem to depend on a frequency-mode transmission here, a strategy different from the brain's, to sort out this issue.

Calcium is stored inside cellular compartments like the endoplasmic reticulum and mitochondria. It can be mobilised into the cytoplasm in a highly regulated manner in response to certain second messengers. Calcium represents a final common pathway of information flow in cells because so many processes depend on it. It exerts its effects by binding to a protein called calmodulin, which is a member of many protein complexes. Binding of calcium turns these complexes to the 'on' mode. A number of events, like muscle contraction, cell motility, mitosis and secretion of granules, depend on it.

Free calcium concentration in the cytoplasm is about 100 nanomoles per litre. Calcium concentration in the interstitial fluid is about 12,000 times higher than this. There is a marked inward-directed concentration and electrical gradient with respect to calcium. It is difficult to overemphasise the significance of this calcium flux in cellular metabolism. So much depends on this calcium movement across membranes. In order to derive signalling benefits, the calcium flux across membranes is highly regulated. Despite the high gradient, calcium is prevented from entering the cell unless signalled. Entry depends upon the 'open/close' status of ion channels, which exchange calcium for either sodium or hydrogen ions. These channels are opened or closed depending on the needs of the cell, in response to certain hormones or, sometimes, by voltage. Voltage can open the channel just as transistors are turned 'on' by current flow.

Every time a signal triggers the calcium messenger, the cell recruits more calcium from its intracellular stores so that there will be plenty of it. This is to amplify the signal. The number of times the calcium level rises and falls (frequency) and the range over which the fluctuations occur (amplitude) are unique for each messenger. If you plot a graph showing calcium levels in the cell against time, the graph will be oscillatory rather than a straight line. The spikes show a characteristic frequency for each specific agent. If the frequency of calcium oscillation is x cycles per second, then it has to be hormone y, and so on. This cellular mechanism resembles the frequency modulation (FM) broadcast of our radio.
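The FM analogy can be sketched in code. In this toy model (the hormone names and their frequencies are invented for illustration), the 'cell' identifies which messenger is speaking purely from the frequency of the calcium oscillation, never from the calcium itself:

```python
# Toy model of frequency-encoded calcium signalling: the identity of the
# messenger is read from the oscillation frequency. Hormone names and
# frequencies below are hypothetical.
import math

HORMONE_FREQUENCY = {"hormone A": 1.0, "hormone B": 2.0, "hormone C": 4.0}  # cycles/s

def calcium_trace(freq_hz, duration_s=10.0, samples_per_s=100):
    """Oscillatory free-calcium level (arbitrary units) sampled over time."""
    n = int(duration_s * samples_per_s)
    return [math.sin(2 * math.pi * freq_hz * t / samples_per_s) for t in range(n)]

def decode(trace, duration_s=10.0):
    """Estimate frequency by counting upward zero crossings, then match it."""
    crossings = sum(1 for a, b in zip(trace, trace[1:]) if a < 0 <= b)
    freq = crossings / duration_s
    return min(HORMONE_FREQUENCY, key=lambda h: abs(HORMONE_FREQUENCY[h] - freq))

print(decode(calcium_trace(2.0)))  # identifies "hormone B" from the frequency alone
```

A real cell decodes such oscillations with frequency-sensitive proteins rather than zero-crossing counters, of course; the sketch only shows why a shared messenger plus a distinctive rhythm is enough to keep the signals apart.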

I said a little while ago that receptors on target cell surfaces actually 'decode' the message by binding the primary messenger molecules. Here again the cells can be made to do more than one task with just one primary messenger. For instance, insulin is one such messenger, which can bring about multiple effects in a cell. How does the cell know which effect it is supposed to bring about? It does this by determining how many receptors are occupied by insulin. Receptor occupancy is the mechanism by which the cells bring about more variety in their communication strategies. A smaller number of activated receptors signals one type of cell response compared to the situation when more of them are activated. Cells distinguish the level and type of response needed by sensing how many receptors are firing.
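Occupancy-based decoding can be sketched as a simple threshold rule. The thresholds and response names here are entirely made up; the point is only that one messenger can command different programs depending on what fraction of its receptors are firing:

```python
# Hypothetical sketch of occupancy-based decoding. Thresholds and response
# names are invented for illustration, not taken from insulin biology.

def cell_response(occupied, total_receptors):
    """Pick a response from the fraction of receptors bound by the messenger."""
    fraction = occupied / total_receptors
    if fraction < 0.2:
        return "no response"          # too few receptors firing to act on
    elif fraction < 0.6:
        return "routine metabolic task"
    else:
        return "full growth program"  # high occupancy triggers a different program

print(cell_response(10, 100))   # no response
print(cell_response(40, 100))   # routine metabolic task
print(cell_response(90, 100))   # full growth program
```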

With simple communication tools, systems always try to pack in as much variety of information as possible. For a single cell, this level of sophistication in communication is incredible. If you look at our offices, we have mechanisms to neatly file various transactions, marking them clearly as to what category they come under. Computerisation makes it even easier to track down information, such as when and from where it came in. If you consider the brain as a system, it has memory to aid its information department. The question of how a single cell manages all that is not easy to answer.

Interestingly, the cells have the unique capacity to fine-tune their responses to the molecular signals. They can vary the number of receptors stationed at the cell surface. The number of receptors can be increased or decreased, depending on the situation. This is called up-regulation and down-regulation of receptors, a common strategy adopted by cells to alter their responsiveness to signals from other cells. If signal molecules are abundant, the cells go into a state of unresponsiveness by decreasing the number of receptors, similar to the habituation of sensory receptors I discussed before. Similarly, if the signal molecules are scarce, they go into a state of hyper-responsiveness by increasing the number of receptors in order to 'hear and see' better. Doesn't it make sense? Why should you waste information capture systems unnecessarily?

At this juncture I want to quickly mention that microbes such as the cholera bacterium can disrupt our cellular communication networks by attacking these G proteins! I call it biological hacking. The cholera bacteria produce a toxin which strikes our G-protein communication system! What? Did I make myself clear? Yes, I did. However incredible it sounds, it is true. How could a single-celled organism be so precise as to sabotage the communication system of our cells, like cyber-criminals?

The cholera toxin stimulates the alpha subunit of the G protein and keeps it active permanently instead of letting it return to the basal state. This is the trick of the toxin made by the bacteria. The cell goes into a state of frenzied stimulation, which, needless to say, is a waste of time and effort. The target of these bacteria is the intestinal cell and none else. Again, there is specificity here. The intestinal cells, because they are stimulated into a frenzy, go into a permanent mode of secretion of juices, which as you know have a role to play in digestion. The result is the loss of litres of fluid and salt in such a short time that people die by the scores within hours! It has happened in India not long ago. It has also happened in other countries. Cholera still remains high on the World Health Organisation's agenda. All because the bacterium is hacking into our cellular communication network!

The cholera bacterium is not the only hacker we know. Another is the bacterium that causes whooping cough, against which we even vaccinate children. It strikes at a G protein to keep it permanently in the inactive state. God, these bacteria!

Cancer cells can hack into the normal cellular communication networks too, just like the cholera and whooping cough bacteria. The cancer cells can turn these networks into a chronically 'on' state by creating mutated forms of growth factor receptors and positioning them on their cell surface.

Growth factors are like hormones and neurotransmitters in that they are cell-signalling molecules, in their case for the growth functions of cells. There are many types of growth factors, named after the cell type or tissue that predominantly benefits from the given growth factor. For example, there are growth factors for the growth of epidermal cells. There are other growth factors for the growth and multiplication of blood cells. There is fibroblast growth factor for maintaining the diverse functions of the fibroblast.

The cancer cells are like conmen. They can access an unfair portion of the resources by making altered forms of the growth factor receptors and putting them up on their surface. Because their structure resembles how a receptor would look in the active state, the growth factors stimulate the growth of these cells at the expense of others. Nutrients flow into these cunning cells more rapidly! This happens because it is the role of these growth factor receptors, in the 'on' state, to facilitate the flow of nutrients into the cells. I thought only hackers siphon money this way by logging into the financial accounts of other poor citizens!

Everyone would agree that we are facing more and more threats to our personal information in modern society. In today's connected world, we increasingly do things online. We buy things from cyber supermarkets. We apply for jobs online. We seek information from everywhere. E-commerce is rapidly catching on in our financial industry. Banks operate online. Naturally, in the context of the theme of this chapter, the question arises: is our financial information safe? Is there anybody out there who can access our information? There may well be. In fact, there are. There are thieves who have turned to high-tech means to access your information. Personalisation of information transfer in the modern digital world is a challenge.

An emerging crime in the cyber-world is online 'mugging', according to Gary Grant, a risk management specialist at the Swedish IT firm Defcom WBK International. As online banking catches on more and more, this sort of waylaying of customers will only increase. The criminal can get in between you and your bank during an online transaction and pose to you as the bank and to the bank as you. He will be able to instruct the bank to pay into his own account while the bank may be thinking it was you who wanted it that way. By the time this comes to light it will be too late. I read with interest the term _'e-trust'_ in an opinion interview with Ross Anderson, published in the _New Scientist_ dated 6 November 1999. Anderson's team at Cambridge University was working on creating the 'ultimate instruments of online confidence' in the form of software tools that encrypt information so that it can only be read by people who hold the right decoding keys. Perhaps Anderson is aiming for a hormone-receptor kind of unique decoding mechanism, I guess!

In order to prevent encryption technologies from spreading, the UK government encouraged the European Commission to introduce regulations designed to compel member states to license the export of encryption software for security reasons. But, ridiculously, such software is freely available on the Net. US banks and other organisations use the Data Encryption Standard to maintain the secrecy of data, but they believe their data is no longer secure enough from 'information theft'. Recently, the US government invited the crypto community to develop what would become the Advanced Encryption Standard for their transactions. Criminals today can evade detection by using encryption to keep their e-mails safe and secret. There has been growing concern about this interesting twist to the story, because it was always felt that it was we good people who had to protect our data. Now we face the need to tap into the secret communication networks of the criminals to know what they are up to next. Police now have little interest in intercepting phone conversations because it is so costly and tedious. Instead, they are after the traffic logs of who e-mailed whom and at what time. Criminals therefore communicate as inconspicuously as possible. Prepaid mobile phones, which you can buy without giving your name and address, are the biggest threat now because thieves can use them without revealing their identity.

Information technology is now overwhelming the eavesdropping capacity of the U.S. and may force a restructuring of the nation's spy agencies. The National Security Agency (NSA), which employs more than twice as many people as the Central Intelligence Agency (CIA), considered a significant reorganisation to meet the demands of the Internet era. The NSA used to be such a secretive organisation, for obvious reasons, that people call it the _'No Such Agency'_. But, with the world's fastest supercomputers, and with the help of Britain's GCHQ, the NSA is said to account for 80% of the intelligence operations of the U.S. Now the NSA is stretched to its limits because of the proliferation of digital technology and encryption methods. The agency has recently resorted to recruiting hackers because of the shortage of computer experts to do the job! Hackers have been offered scholarships in return for working for the NSA during their free time and spending at least 5 years with the agency after graduation.

Encryption used to be the domain of only the military, the intelligence services and the diplomatic corps not long ago. Now it is widespread and commonplace. It is now argued that governments should have access to decode people's encrypted messages and duplicate their electronic signatures. Britain's Civil Service is adopting an e-mail security protocol called _'cloud cover'_, in which security officers will get copies of the 'electronic keys' that are used not just to decrypt messages but also to create the digital signatures on them. This could lead to leakage of departmental information and also to the embarrassing possibility of not knowing whether a message has been forged or not. People are even thinking of digital elections, where voters vote electronically via a polling system made secure by encryption, but the scope for fraud is thought to be excessive. I talked about information warfare in the first chapter, and it has to be accepted that criminals are not the only culprits here. Even governments are known to tap phone conversations, hack into computer networks and jam radar. Britain's signals intelligence service, GCHQ, does it. The Times newspaper carried a news report some time ago about how French intelligence was intercepting calls made by British industrialists with a view to gathering valuable economic information that could be passed on to their French counterparts.

Mobile phone signals are made of short, dense bursts of information compressed into binary bits and transmitted. It is possible for a listening station to grab this information if they knew your number or your recipient's number. French intelligence eavesdroppers have been targeting British defence firms, petroleum companies and other commercial targets by targeting individual numbers or sweeping sets of numbers. It is claimed that there are at least eight such monitoring centres in France doing this job.

Motorola recently patented a technology to snoop on satellite phone calls, according to a report in the New Scientist. To allow eavesdropping, the ground station sends a control signal telling the satellites to send it a replica of the speech signals, which can be intercepted by someone who has access to the ground station. The callers will not know that they are being bugged at all, because their conversation is in no way affected. The British human rights group Liberty is very concerned that a commercial company like Motorola is enabling a technology for eavesdropping. They wanted the Regulation of Investigatory Powers Bill to be amended so that it would not allow the police and security services to interfere with any communication without a judicial warrant.

Surprisingly, eavesdropping as a phenomenon is seen in nature too. Jayne Yack of Carleton University, Ottawa, and James Fullard of the University of Toronto reported in Nature (Vol. 403, p265-266, 2000) a novel mechanism used by nocturnal neotropical butterflies called _Macrosoma heliconiaria_, which live on Barro Colorado island, Panama. These butterflies have ears on their wings! These ears are different from ours not just in their location but also in their function. They can 'hear' the ultrasound that bats make to locate their prey by echolocation. By listening in on the bat's frequency, like criminals tapping into police radios, the butterflies are able to evade the bats. When stimulated with an intense ultrasonic stimulus emanating from the bats, the butterflies perform one of several flight manoeuvres such as upward or downward loops, steep dives or climbs, horizontal sweeps, swift changes of speed or spirals. The wing ears control these responses: if you ablate these ears, the butterflies are no longer able to perform these flight manoeuvres. In a close-range encounter with a bat, the eavesdropping ability of the butterfly helps it escape.

# 9. SIGNAL AMPLIFICATION

Biological primary molecular messengers like hormones are necessarily kept at minimal concentrations. The cells have evolved to work with low hormonal signals. One notable thing about hormones is that they exert their information transfer function only for a fleeting moment, a few minutes at the most. They are removed from the cells quickly. Such quick and short-lived responses would not be possible if you had loads of the hormonal molecules around the cell. It would be very difficult to control them.

Take the case of oestrogen, the female sex hormone. It plays indispensable roles in enabling women to maintain normal reproductive function. Let us calculate how much oestrogen is available in the blood of all the women on our planet put together. There are nearly 6 billion human beings on earth. Roughly half of them are women. If you extracted the oestrogen from the blood of all three billion women, you would find that you have a total of only 250 grams of oestrogen! Just 250 grams of oestrogen for maintaining the reproductive functions of all the women on the planet put together! I guess this gives an idea of how sensitive cellular information machinery is. It is really amazing that with such minuscule quantities of information carriers we can achieve the extent of reproductive activity that goes on in our world!
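The arithmetic behind the figures quoted above is worth spelling out: sharing 250 grams among three billion women leaves each with well under a microgram of circulating hormone.

```python
# Dividing the total quoted in the text across the women on the planet.
women = 3_000_000_000          # roughly half of six billion people
total_oestrogen_g = 250.0      # total circulating oestrogen, as quoted

per_woman_g = total_oestrogen_g / women
per_woman_ng = per_woman_g * 1e9   # convert grams to nanograms
print(f"about {per_woman_ng:.0f} nanograms of circulating oestrogen per woman")
```

A few tens of nanograms per person is the whole working budget of the signal, which is why the receptor machinery has to be as sensitive as it is.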

Organisms have to be quite sensitive to sensory stimuli to be able to perceive some of the weak signals. Our human sensory system has some stunning abilities to pick up faint sensory information.

It is sufficient for the human eardrum to vibrate with an amplitude of 0.0000000006 millimetres, which is one-tenth of the diameter of the hydrogen atom, for the ear to sense an acoustic signal. The human ear can perceive a sound producing a pressure of 0.0001 microbar (0.0001 dyne per sq. cm). The membrane of the cochlea, an inner structure of the ear involved in sound transmission, is displaced by one hundred-millionth of a centimetre. This is equal to one-hundredth of the diameter of the hydrogen atom!

An organism is exposed to sensory signals all the time. Strong signals pose no problem to the organism because the sense receptors fire rapidly to the 'on' state. Unfortunately, our world is not always as unambiguous as this.

Life systems have to be alert to signals that are not intense enough to arouse the receptors. The problem with setting the detection threshold too low is that you never know whether it was background noise or a real weak signal. Naturally, the sensory system has to strike a balance between lowering its detection limits and ignoring noise. Noise, in the sensory context, refers to random fluctuations in the system, which stimulate receptors in the absence of real sensory stimulation. These random, non-specific fluctuations can be due to the Brownian movement of molecules and atoms in the medium, which physically stimulate the detectors. These fluctuations in physico-chemical systems cause the appearance of weak activity, which is usually not enough to cause any real activation. The real problem is to distinguish these random fluctuations in the system from real weak stimuli.

Organisms have evolved capacities not to miss out on weak stimuli. Amazingly, it has been found that the sensory systems of animals are not only capable of distinguishing weak stimuli from noise but actually use the noise to amplify the weak signals!

I know it sounds counter-intuitive. How can background noise make it easier for an organism to detect weak signals? Isn't it going to be a nuisance? In our daily life, we detest noise. It is considered detrimental to signal detection and information transmission. We tune our radios to get rid of static. We buy mobile phones that are clear and crisp. Noise filters are designed, at considerable expense, into consumer goods and scientific equipment. Even coffee pots have noise filters. We detest noise, no doubt about that, even though the term has different meanings depending on the context.

Unbelievably, organisms have evolved to take advantage of optimum levels of noise to amplify weak signals! It has been proposed that the phenomenon of stochastic resonance may explain how animal sensory systems do it. Stochastic resonance was originally proposed in the 1980s by physicists to help model global climatic changes. By definition, stochastic resonance is an effect by which certain non-linear systems detect and transmit weak signals better in the presence of optimum levels of noise, the noise actually enhancing the signal. Stochastic resonance is now known to occur in a wide range of physical systems. As I said, it was originally proposed to explain the periodic recurrence of the earth's ice ages, which are believed to recur at intervals of about 100,000 years, each glacial period followed by a stretch of normal climate. The earth's climate is quite dynamic and is characterised by large fluctuations. Small perturbations can be greatly amplified by these large fluctuations to trigger ice ages. A weak periodicity in the earth's orbital parameters might cause regular transitions in the bistable energy potential used to model long-term changes in the global climate. Whether the glacial-interglacial cycles are truly periodic and indeed linked to the earth's orbital motions, and what role stochastic resonance plays in palaeoclimatology, remain matters of ongoing debate.

In 1991, A. Longtin and A. Bulsara reported in Physical Review Letters (Vol. 67) the discovery of stochastic resonance in sensory neurones. Shortly after, John Maddox, editor of _Nature_, wrote a News and Views article discussing the role of stochastic resonance in the nervous system, bringing the topic to the wider scientific community.

Kurt Wiesenfeld of the School of Physics, Georgia Institute of Technology, Atlanta, and Frank Moss of the Department of Physics and Astronomy, University of Missouri at St. Louis, reviewed the phenomenon of stochastic resonance in the 5 January 1995 issue of _Nature_. They used a mechanical analogy to describe what it is. A particle sitting in a double-well potential can switch between the two wells, depending on how strongly it is excited. If the excitation is strong enough, the particle crosses the barrier and jumps into the other well. A weak signal alone cannot do this, because it cannot excite the particle to a sufficiently high potential. Random noise alone, on the other hand, can induce irregular switching between the wells. Stochastic resonance is a non-linear co-operative effect whereby a small signal entrains this noise-induced hopping, so that the transitions become surprisingly regular. This regularity can actually improve with the addition of more noise. A small regular influence, such as a weak sensory input from the environment, can have a larger effect if environmental fluctuations, in the form of noise, are available to be tapped.

In short, stochastic resonance is about detecting weak signals in a noisy environment. It is now known that sensory neurones themselves exhibit stochastic resonance. A neurone is a two-state system: it is 'on' when a signal of sufficient intensity fires it, and since external signals are periodic, it returns to the basal 'off' state after the stimulus is over. Whether the neurone depolarises sufficiently depends on the integration of the steady sensory input; when the accumulated input exceeds a critical threshold, the neurone fires. The integrator is then reset to zero, ready to respond to the next signal.

A simple model of a sensory system can therefore be viewed as consisting of a neurone, a threshold, a sub-threshold signal, and added noise. The noise, added to the weak signal, helps the neurone cross the threshold, triggering a pulse in the output. Stochastic resonance could account for the exquisite sensitivity of some animals to weak signals embedded in a noisy environment. A number of experiments addressing this issue have supported the concept.
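For readers who like to tinker, this threshold-detector picture can be sketched in a few lines of code. This is only an illustrative toy model, not any published experiment: the signal amplitude, the threshold and the noise levels are arbitrary choices of mine.

```python
import math
import random

def detector_hits(noise_sd, threshold=1.0, amp=0.5, n=20000, seed=1):
    """Toy threshold detector: a sub-threshold sine wave (amp < threshold)
    plus Gaussian noise.  Returns (total firings, firings near signal peaks)."""
    rng = random.Random(seed)
    total_hits = peak_hits = 0
    for t in range(n):
        signal = amp * math.sin(2 * math.pi * t / 200)   # too weak on its own
        if signal + rng.gauss(0, noise_sd) > threshold:  # noise lifts it over
            total_hits += 1
            if signal > 0.8 * amp:                       # was the signal near its peak?
                peak_hits += 1
    return total_hits, peak_hits

for sd in (0.0, 0.3, 3.0):
    total, peaks = detector_hits(sd)
    print(f"noise sd={sd}: firings={total}, of which near peaks={peaks}")
```

With no noise the detector never fires at all; with a moderate amount it fires almost exclusively around the signal's peaks, so the firing pattern carries the signal; with far too much noise it fires indiscriminately. That is stochastic resonance in miniature.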

The crayfish _Procambarus clarkii_ has a primitive sensory system. It has mechanoreceptor hair cells specialised to detect the weak, coherent water motions produced by an approaching predator, such as a fish. John Douglass and colleagues at the Departments of Biology and Physics, University of Missouri at St. Louis, studied the crayfish mechanoreceptors electrophysiologically. A weak sub-threshold signal was used to stimulate these cells and their electrical activity was recorded. They showed clear signs of stochastic resonance.

All sensory systems operate as threshold detectors, and noise in the external environment apparently helps them detect weak sensory input. An intriguing hypothesis relating to this concept is that neurones might actively generate internal noise to amplify signals when external noise is not available. To date, however, no experimental support has been found for such neurone-generated internal noise.

Muscle-spindle receptors relay information about the movement of our muscles and their orientation in space as we move. This sense of movement, and the balancing of our body as we move, requires the continuous relay of information by these spindle receptors. Paul Cordo and colleagues of the Robert S. Dow Neurological Sciences Institute, Oregon, showed experimentally that these muscle receptors in humans detect weak signals better when noise is introduced through the tendon. Firing activity in the nerve linked to these receptors showed clear stochastic resonance behaviour.

The ability of humans to detect a sub-threshold touch stimulus has been shown experimentally to be significantly enhanced by a particular level of noise. James Collins and Thomas Imhoff of the Neuromuscular Research Centre and Department of Biomedical Engineering, Boston University, Massachusetts, and Peter Grigg of the Department of Physiology, University of Massachusetts Medical Centre, showed that input noise acts as a 'negative masker' for sub-threshold stimuli. They proposed incorporating this principle into the design of haptic interfaces for telerobotics and virtual environments ('haptic' comes from the Greek word for touch). It could also have clinical applications for people with reduced cutaneous sensation, such as those with sensory neuropathies or stroke.

Russell and colleagues reported in the 18 November 1999 issue of _Nature_ that the electroreceptors of the paddlefish use a noise-enhanced signal detection system. These fish use their electrical receptors to detect the electrical signals given off by their prey, minute zooplankton such as _Daphnia_. The zooplankton generate such signals from the nerve excitations that drive their swimming and feeding appendages. Working on the assumption that certain levels of externally applied electrical noise could enhance the ability of paddlefish to locate and capture plankton, Russell and colleagues tested it by applying varying levels of electrical noise through electrodes. At an optimal level of electrical noise, the paddlefish's ability to catch plankton was greater than when there was no noise at all.

Other biological sensory systems, such as the cricket's cercal system, which detects air disturbances, and the cochlear hair cells of frogs, which detect weak sound signals, have also been shown to employ stochastic resonance principles.

Stochastic resonance has been pursued for other technological applications too. Quantum Magnetics Inc, San Diego, demonstrated stochastic resonance in a bi-stable superconducting quantum interference device (SQUID) loop and is exploring the possibility of using it in magnetic sensing. SQUIDs fabricated from high-temperature superconducting materials are in principle much cheaper to operate, but are inherently noisy. It may be possible to optimise their performance by using stochastic resonance to exploit this internal noise for weak magnetic signal detection. Arrays of coupled SQUIDs could boost the sensitivity of these bi-stable systems even further.

It is possible that stochastic resonance could have applications in electromagnetic communications too. Weak carrier signals, operated in both amplitude- and frequency-modulated modes, have been detected well in the presence of optimum levels of noise. The detector here was a bi-stable system known as Chua's circuit. Both Chua's circuit and the Quantum Magnetics SQUID have been implemented on silicon chips, suggesting that practical applications may not be far off.

Interestingly, human perception itself can be ambiguous when the sensory input is weak and unclear. It may be difficult for the brain to decide between interpretations when the signals are ambiguous. The brain essentially enters a bi-stable state in which perception can spontaneously switch between the two alternatives, and this switching can be a random process, with the observer's attention and perception biased between the two states. W.R. Bennett reported in the Vol. 72, 1994 issue of _Physics Today_ that stochastic resonance can be demonstrated in a neural network simulation of this process.

Living tissues have extremely low frequency electromagnetic fields impinging on them. The interaction energies of such fields, after penetrating the tissue, are estimated to be up to three orders of magnitude smaller than the average thermal fluctuations. How could such fields possibly affect the activity of cells? It has been shown that voltage-sensitive ion channels in cell membranes behave like threshold devices: they switch randomly between open and closed states in response to thermal fluctuations. If stochastic resonance is relevant here, the effect of weak, extremely low frequency electromagnetic fields might be greatly amplified. Whether this has any significance in biological function is only speculative for now, but it is certainly true that a great many cell functions are controlled by voltage-gated ion channels, whether sodium, potassium or calcium channels.

Sensory receptors are designed to respond to signals of a critical intensity. It obviously makes sense for the organism to be sure that a signal is real and sufficiently strong before activating a conduction response in the nerves. The nerves, as you know, have to accommodate this signal in the premium conduction space of the nerve cables. Once nearer the brain, the signals undergo a round of sorting to reach the destination where they will be processed. Clearly, unimportant signals should be weeded out, and the same applies to extremely weak signals. Amplification of a signal does help the organism in some situations, but it has to be recognised that even amplification requires a critical level of intensity to begin with.

Signals would die out at a very early stage of capture if they were not strong enough. Normally, about 10-15 sensory receptors of a particular sensory modality feed their data into a single nerve ending. These nerve endings are collection points for the sensory data picked up by the receptors serving a defined area of your body. For each defined region of your body, there is a nerve ending devoted to collecting the data. They all join up to form the trunk of the nerve cable, and it is this nerve trunk that collectively carries the sensory data of all modalities down the information highway. There are nodal points on this pathway where the information is relayed to the next nerve cell in line.

If the stimulus is strong enough, the activity in the sense receptors will succeed in making the nerve ending fire. To make the nerve ending fire electrical impulses, the signals captured by the receptors have to be of adequate strength. In most cases, a single sensory receptor can rarely activate a nerve ending on its own. But if the sensory stimuli in a given region of your body are strong, such as a painful stimulus or a firm touch, the chances are that many of the sensory receptors will fire simultaneously. Because these receptors converge on a single nerve ending, there is an additive effect: the signal intensity reported by one receptor is added to the signals brought in by the others. This convergent input of data amplifies the intensity of the signal. A strong sensory stimulus is normally one that lasts for a reasonable length of time; fleeting stimuli are rarely of use to us. If the stimulus persists a little longer, the receptors continue to fire impulses into the nerve endings for a longer period. This repetitive stimulation of the sense receptors helps amplify the signals and prevents the neuronal action potentials from dying out.
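The additive effect of converging receptors can be put in code form. The numbers here (15 receptors, a firing threshold of 8) are purely illustrative choices of mine, not physiological measurements.

```python
def nerve_ending_fires(receptor_inputs, threshold=8):
    """The nerve ending fires only when the summed input from its
    converging receptors crosses a critical threshold."""
    return sum(receptor_inputs) >= threshold

weak_touch = [1, 1, 1] + [0] * 12    # a faint stimulus fires 3 of 15 receptors
strong_touch = [1] * 12 + [0] * 3    # a painful stimulus fires 12 of 15

print(nerve_ending_fires(weak_touch))    # False: the signal dies out
print(nerve_ending_fires(strong_touch))  # True: convergence amplifies it
```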

A strong stimulus has the same effect as a prolonged one. The continuous data transmission from the sense receptors is amplified where they converge at the nerve ending. There is further integration of the input at the neuronal synapses acting as nodal transfer points in the spinal cord. I said there are at least three such transfer nodes within the spinal cord before the data is despatched to the brain. They act as amplification sites because they are convergence points for the data streaming in from a geographical region of the body. This convergence of data makes it possible to evaluate the intensity and area of stimulation, and it also acts as a mechanism to amplify the signals.

In a way, it is like what happens in our own society. If you had a problem in your company or neighbourhood, you would make a complaint to the higher authorities. If yours was the only complaint received, the chances are nothing will be done about it. It is like a weak, transient stimulus: there is no time to act on each and every piece of transient information. But if many in the community felt the problem, and if many of them informed the authorities, then there is a real chance the higher-ups will listen. This is amplification.

When a particular number of sensory receptors serves a region of your body, not all of them will fire in response to a stimulus. An equivocal stimulus may activate some receptors in a given group and fail to fire the rest. The stimulus-integrating centres then have to weigh the number of activated receptors against the unstimulated ones. The majority wins: a stimulus passes through only if the majority of receptors are activated, and is ignored if the number of activated receptors fails to reach the critical count. Depending on the exact site and intensity of the stimulus, the neuronal activity expands to larger regions; if the stimulus is sufficiently strong, neighbouring groups of receptors fire up too.

Our nervous system constantly faces situations where it has to decide yes or no to an incoming stimulus. I am sure we all face this difficult task all the time in our social lives. In most circumstances, we arrive at a decision after much deliberation, and only once it crosses a certain critical acceptance threshold do we act on it. Making a decision is a tough task. We have to ignore a number of good alternatives in favour of whatever we feel is the right choice. We weigh the positive and negative factors before we decide, and act only on the option that comes out the winner in this analysis. Most often, we have to shut out alternative courses of action even if they run the winner close. Look at our elections. It is the party that wins the majority of votes that rules. A party that wins 47% of the votes will assume power, while another that won 40% will be shown the opposition benches. The point to note is that the 40% who voted for the losing party are ignored. They are like the sense receptors that were in the minority. A sensory stimulus is allowed to activate a conduction response only if a sufficient majority of receptors say yes.

Nerve cells are not always stimulatory. There are nerve cells designed to inhibit other nerve cells to which they are linked. These are inhibitory neurones, and their job is to shut down the activity of certain nerve cell groups. An action is the result of weighing the stimulatory inputs against the inhibitory ones. Inhibiting the negative inputs has the same effect as amplifying a signal.

The stimulatory and inhibitory inputs on a critical nerve cell group determine whether an action will follow in a particular direction. A negative input will shut down the action in that direction. The 'on' or 'off' status of a neuronal functional group is determined by the many inputs that converge on it. In some situations, nerve cells form circuits with several other nerve cells in polysynaptic connections, feeding data down many pathways and widening the reach of the information. These diverging circuits are different from the converging type I mentioned a while ago.

In our brain, another type of neural amplification is recognised. A nerve cell stimulates a second nerve cell through a synaptic connection, and this in turn stimulates a third neurone. The third neurone then re-stimulates the first nerve cell that set off the stimulation. If you name these neurones 1, 2 and 3, there is a prolonged, continuous, mutual stimulation running in a circle, which prevents the signal from dying out. This type of neuronal signal transfer is called a reverberating circuit, and it is said that this kind of mechanism operates in short-term memory. For example, we can easily remember a name or phone number for a short period after learning it for the first time. We can look up a number in the directory and go on to dial it without re-checking. We seem to be able to hold it for a short while, and this is made possible by these reverberating circuits in our brain. Prolonged neuronal information transfer holds the memory across a group of neurones, amplifying the fading impulse for a brief period.

Signal amplification is possible in the biochemical world of cells too. I said earlier that less than 250 grams of oestrogen is enough to run the reproductive functions of all the women on the planet. Given the extent of human reproductive activity on our earth, it is indeed surprising that such minuscule quantities of the hormone can do the job.

In fact, all hormones and other informational molecules in our cells work at extremely low concentrations. They are so vanishingly low that we need sophisticated equipment to measure them. The real wonder is how cells are able to respond swiftly to changing environmental demands on the strength of these extremely low levels of molecular signals. It is indeed true that these molecular signals are amplified.

All of us have had some injury or other at some stage of our lives. We have all bled from our injuries, small or big. Bleeding from any site of the body means your closed circulatory system has developed a breach. Most often, our injuries are trivial: we bleed a few drops and things become all right. If, unfortunately, you have a bigger injury, you may lose more blood. It is like a leak in the plumbing system of your house. The size of the leak determines the speed of your response.

Our circulatory system has about 5 litres of blood circulating in it. It is not a lot, really. A road accident victim can lose 1-2 litres of blood in no time. The reason doctors infuse saline or donor blood immediately upon arrival at the hospital is to expand the vascular volume. Reduced blood volume means less blood circulating through your body, and therefore less oxygen and fewer nutrients delivered to the tissues. Even a small injury could deplete your blood if you did not have a way of sealing the leak.

Blood clotting is a fully automated modular function that is kick-started the moment a leak is sensed. It is like the plumbing work that plugs leaking pipes. In haemophiliac patients, this blood clotting is defective: one of the molecular factors is made defectively, and the patient cannot successfully clot the blood. Such patients, naturally, have to be very careful to avoid injuries at any cost.

Blood clotting is an emergency biochemical function that, like our fire service, has to be ready to respond and can be called into action at any time. In a sense, blood clotting is a real double-edged sword. Do it excessively, or unnecessarily, and you are in trouble. The blood will thicken, defeating its transport functions: not just nutrients and oxygen but even informational molecules like hormones cannot get anywhere, and the whole system breaks down. This is exactly what happens when someone has a heart attack or stroke. There is a localised formation of an inappropriate blood clot somewhere in the heart or brain, and the result is devastating. Luckily, the clot does not usually progress beyond a localised point, even though great damage is caused by this small clot itself. People who survive heart attacks or strokes are advised to take aspirin every day because it inhibits the prostaglandin synthesis necessary for signalling the aggregation of platelets. Platelets are blood cells, just like the red and white cells. They aggregate at the site of a blood leak, sealing it like a loose plug. If they aggregate inappropriately, the result is a blood clot that blocks a vital blood supply.

There are patients who go into widespread, often fatal, clotting all over the body. It often happens in severe septic conditions. Medically, it is called disseminated intravascular coagulation.

This is why the clotting mechanism has to be tightly controlled. It has to be swift in action when you need it, and shut down when you do not. As in most other biological situations, the pathway is kept in check by lining up its mediators in a state of inactivity; they are turned 'on' only when stimulated by the right signal molecules. In the case of blood clotting, there are multiple steps involved before the blood can fully clot. No single molecule can do it: about 15 different types of protein molecules are involved, and they have to function in a linear order, one after another, each activating the next step. This cascade of stimulatory activity culminates in the final function. Effectively, you have many levels of logic gates introduced here; unless you get past one step, you cannot go on to the next. It is not without reason that blood clotting is so tightly controlled, with so many control steps.

Interestingly, all the blood-clotting factors are enzymes. They are inactive in the resting state and can be turned 'on' by cleaving off a tiny bit of the enzyme. I am sure readers will recall this mechanism of 'gating' from the case of hormone activation. Such recurrence of motifs is nothing to be surprised at in the world of communication, whether in your body or in your society.

Being enzymes, each blood-clotting factor can cleave a bit off the agent immediately next in line, and this happens only when the right signal is around. The first step in the process is the conversion of inactive Factor XII to its active form. After multiple further steps activating a number of other factors, prothrombin is converted into its active form, thrombin. Thrombin then converts fibrinogen into its active form, fibrin. Fibrin is again a protein, capable of forming the mesh of a loose blood clot. Platelets are caught up in the mesh through precise molecular interactions, forming a tight plug. The leak is sealed, and it all happens in less than a minute.

All these blood-clotting factors are manufactured in the liver and circulate in the blood in their inactive forms. When someone has liver disease, they may not make enough of these clotting factors. A common method doctors use to test for impaired liver function is to see how quickly the blood clots: a prick is made on the forefinger and blood is drawn into a tiny, narrow-bore capillary tube. The time taken for a clot to form gives an indirect idea of the integrity of liver function.

The whole point of discussing the clotting mechanism here is to highlight the amplification of the clotting response. I said it has to be quick, or you stand the chance of bleeding to death. If you depended on clotting factors being made available only after freshly decoding their genes and manufacturing them in your protein synthesis factories, you could easily be slow enough to die from bleeding; that would take many minutes, possibly hours. Our cells do not work that way. Imagine a system in which your fire fighters do not stay at their base at all. How much delay would there be if all of them had to be hauled from their houses each time there was an alarm? Worse still would be recruiting new fire fighters into service only after a call-out. It is crazy. Instead, our body keeps a ready store of formed, but functionally inactive, blood clotting factors, so that they can be deployed into service after a quick round of activation.

If you measure the concentration of the first clotting factor in the sequence, Factor XII, it is about 30 micrograms per ml. A microgram per ml is roughly one part in a million, so there are about 30 parts per million of Factor XII in your blood. It sounds small, but then all cells evolved to respond to such quantities. If you measure the concentration of fibrinogen, the final product of the pathway, it is found to be about 3 milligrams per ml, roughly 3 parts in a thousand. Suddenly, you find the final product present in much greater quantities. There is at least a hundred-fold amplification of the signal in the blood clotting response to an emergency!
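The arithmetic of that amplification is worth doing explicitly. A further point worth sketching is how modest the gain at each step needs to be; the even split of the gain across roughly 15 steps is, of course, an idealisation of mine.

```python
factor_xii = 30          # starting signal, micrograms per ml
fibrinogen = 3 * 1000    # final product, 3 mg/ml expressed in micrograms per ml

overall_gain = fibrinogen / factor_xii
print(overall_gain)             # 100.0: the hundred-fold amplification

steps = 15                      # roughly 15 clotting factors in the cascade
per_step_gain = overall_gain ** (1 / steps)
print(round(per_step_gain, 2))  # about 1.36-fold per step is enough
```

A cascade does not need any single spectacular step: a long chain of small gains multiplies into a large one.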

Signalling systems of our body thus strike two mangoes with one shot. First, they achieve multi-level regulation. Second, there is amplification of the response. It is like the spread of word in a community: one man tells two people, two people tell four, four tell eight, and so it goes on.

This principle is exploited in some types of scientific equipment too. For example, in the Liquid Scintillation Counters used for measuring radioactivity, this kind of mechanism is designed in to amplify the weak signals obtained from clinical and research specimens. Basically, it is a radiation-measuring device, used for estimating the quantities of hormones and other biological molecules in the blood of patients. I told you how small their concentrations can be. You measure them by tagging them with a tracer, such as a radioactive atom like iodine, and detecting the signal emitted by the tag. The signals emitted by these radioactive atoms are radiation, such as beta and gamma rays, and they are going to be very weak because of the exceedingly small concentrations of the molecules. You need a way of amplifying the signals.

In a Liquid Scintillation Counter, the rays emitted by the tags pass through a primary solvent, usually an aromatic hydrocarbon like toluene or xylene, which absorbs the radiation. The energy is then passed to a primary scintillator compound, which is excited by the radiation energy and emits a flash of light. This is scintillation. These flashes of light are fed into photomultiplier tubes, where they are converted into electrical pulses. A photomultiplier has a light-sensitive layer made of a compound that ejects electrons from its outer shells on bombardment with photons. This is the photoelectric effect. For every 3-4 photons, one electron is ejected.

The electrons accelerate towards the first of several intermediate electrodes called dynodes. These are maintained at successively higher positive potentials, exerting a pull on the negatively charged electrons. Each dynode is made of a compound ready to lose electrons on bombardment with the electrons liberated from the preceding dynode. Effectively, there is a multiplication of the number of electrons travelling down the tube. If each dynode yields 4 electrons per electron from the preceding dynode, the multiplication achieved with 10 dynodes is 1,048,576! Now we are talking measurable currents, because the movement of electrons is an electrical current.
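The dynode arithmetic is just repeated multiplication, and the figure quoted above is easy to verify:

```python
def photomultiplier_gain(yield_per_dynode=4, n_dynodes=10):
    """One photoelectron enters the tube; each dynode multiplies
    the electron count by its secondary-emission yield."""
    electrons = 1
    for _ in range(n_dynodes):
        electrons *= yield_per_dynode
    return electrons

print(photomultiplier_gain())  # 1048576 electrons from a single photoelectron
```

Real photomultipliers vary in yield per dynode and number of stages; the point is only that the gain grows as a power of the number of stages.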

The Polymerase Chain Reaction (PCR) is a technique that has become the 'staple diet' of molecular biologists. It has transformed the way we understand genes. One of the main problems faced by cell and molecular biologists was the extremely low concentration of biomolecules inside cells. They are produced in such small quantities, and they stay around for such a short time, that their identification and measurement was practically impossible. With the availability of PCR, it is now possible to amplify the molecular signals. The DNA sequences of interest are made to copy themselves over and over, mimicking the way DNA makes copies of itself during replication. What were only a few copies of a particular gene sequence can be multiplied into millions of copies in an hour or so, on your benchtop, while you relax sipping your coffee! No wonder we now have such a powerful way of detecting those 'invisible' molecules!
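The power of PCR comes from simple doubling. Assuming an idealised reaction in which every cycle copies every template perfectly (real reactions are somewhat less than 100% efficient), the copy number grows as 2 to the power of the number of cycles:

```python
def pcr_copies(initial_copies, cycles):
    """Idealised PCR: each thermal cycle doubles every template molecule."""
    return initial_copies * 2 ** cycles

# a handful of starting molecules after a typical 30-cycle run
print(pcr_copies(5, 30))  # 5368709120: billions of copies from just five
```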

I am sure this kind of amplification of a response will be found wherever there is an intermediary in signal transfer. In fact, all forms of cell signalling involve amplification, because every cellular process involves a primary messenger (e.g. a hormone, growth factor or neurotransmitter), a secondary messenger (e.g. cyclic AMP or cyclic GMP) and then a tertiary messenger (e.g. calcium, Protein Kinase C, etc.). Imagine the level of amplification possible here. One primary messenger activates, let us say, ten second messenger molecules. Ten second messenger molecules could activate perhaps 100 third messenger molecules. In many cases, the second or third messenger activates another mediator, such as a phosphate-attaching enzyme, which in turn activates yet another phosphate-attaching enzyme, before the final effector molecule is turned on. You can count such a pathway as involving 7-8 steps of amplification, with more molecules activated at every stage.
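Multiplying the gains of the stages shows how quickly a cascade mounts up. The per-stage figures below are purely illustrative numbers of my own choosing, picked so that their product lands on the roughly 250,000-fold amplification estimated for glycogen breakdown:

```python
# invented per-stage gains for a hypothetical six-stage signalling cascade:
# hormone -> enzyme -> second messenger -> kinase -> kinase -> final enzyme
stage_gains = [10, 10, 10, 5, 5, 10]

active_molecules = 1
for gain in stage_gains:
    active_molecules *= gain  # each stage activates many molecules of the next

print(active_molecules)  # 250000: a quarter-million-fold amplification
```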

For example, in the metabolic program that breaks down glycogen to liberate the fuel molecule glucose, there is an enormous amplification of the signal, for obvious reasons: you need energy in a flash. It is estimated that there is about a 250,000-fold amplification of the response here!

This process of glycogen breakdown is also amplified partly by another mechanism. Glycogen is a very big molecule with a branched structure; each branch carries many molecules of glucose, like leaves on the branch of a tree. When the glycogen-degrading enzymes arrive on the scene, there are plenty of sites for all of them to act on simultaneously. If there were only one point of degradation, the amplified numbers of active glycogen-degrading enzymes would be left waiting stupidly for their turn; it would be absurd to amplify the response so much if you could not exploit it fully. The branched structure of glycogen allows many enzymes to act simultaneously. It is like many animals browsing the leaves of a tree at the same time, or like the multiple outlets of banks, cash points or McDonald's, which make it possible for many people to be served at once.

Amazingly, even gene function can be amplified. There is programmed amplification of specific DNA segments during the embryonic development of organisms ranging from protozoa to invertebrates and vertebrates. At specific time points and in specific cell types, a large number of copies of a gene are generated, thereby assuring an abundant supply of a critical gene product. Different strategies are employed to effect amplification in different situations.

During the development of insect eggs, for a brief period, the ovarian follicle cells that surround a maturing oocyte secrete large amounts of the various proteins that form the eggshell. This is a supply problem: when the time is right, the eggshell protein genes have to respond at an enhanced rate to the signals that activate them, in order to meet the demands of the forming eggshell. Some types of insects have multiple copies of the eggshell protein genes, actually repetitive families of them. In effect, these organisms have multiple copies of the gene messages, which are decoded to generate abundant copies of the messenger RNA for forming the proteins.

In some insects, such as the fruit fly, there is only one copy of the eggshell protein genes. About 18 hours before eggshell protein synthesis starts, these genes are amplified. In a critical 5-hour period, there is prolific decoding of the amplified genes. The DNA is amplified about 15-fold in a localised manner! Only the genes involved are selectively made to increase their copy numbers! Can you believe that? Short DNA segments within the eggshell protein genes have been found to confer this ability to amplify. If these segments of information conferring 'amplification capability' are inserted into other genes, they too can be amplified. It is all down to information.

An egg demands a huge increase in the potential to make various proteins, and not just eggshell proteins. There is so much going on within a tight time window. There has to be explosive decoding of the information contained in the DNA, and most often it involves a concomitant increase in the protein-synthesising apparatus. The ribosomes, as I said, are the protein factories of the cell. They are made of a special type of RNA, called ribosomal RNA, complexed with tens of different proteins. In order to meet the high demand for making ribosomal RNA, organisms have multiple copies of its genes, and these are amplified still further to ensure an adequate supply of ribosomes for the growing embryo. In Xenopus eggs, there are already 600 copies of the ribosomal RNA genes, and these are amplified over a thousand-fold during development. Interestingly, the additional copies are held in separate, extra-chromosomal nucleoli! No further amplification occurs after fertilisation! It is amazing how cells can so effectively amplify their information content, both spatially and temporally!

One of the most incredible findings in cell biology is the cell's ability to compensate for mutations and the consequent reduction in the function of some proteins. An un-programmed amplification of the mutated gene can occur, generating multiple copies of it, to the extent of 50-fold, and these repeat copies of the gene are arranged in tandem! The result is a 50-fold excess production of the mutated protein product. Because of the mutation, the protein functions with reduced capacity, but the reduction in quality is compensated by an increase in quantity! This has been shown to occur in mice and Chinese hamsters.

Organisms can, surprisingly, amplify failing signals in the DNA in other situations too. Bacteria, eukaryotic cells and even tumour cells have been shown to selectively amplify failing genes. It has been demonstrated that cancer cells can overcome a loss of function in some gene products by making multiple copies of the genes concerned. Patients who have cancer are often given a drug called methotrexate, which interferes with the cell's ability to make DNA bases by inhibiting an enzyme called dihydrofolate reductase. The idea is to 'information-starve' the cancer cells by interfering with their ability to make copies of their DNA. The cancer cells overcome this problem by creating multiple copies of the dihydrofolate reductase genes, thereby making it impossible for the methotrexate to inhibit all of them. In effect, the cancer cells save themselves from destruction by becoming resistant to methotrexate's killing action! Resistance to cancer drugs is now recognised as a big problem, and this is one of the mechanisms by which cancer cells evade the killing action of the medicines.

# 10. INFORMATION PROCESSING

Let us move on to the world of the brain and information processing. Because the fundamental role of the brain is information processing, it is not surprising that our brains use quite a few strategies that beat our technology. The brain is the ultimate information processor.

The computing ability of an individual nerve cell is hardly worth mentioning. However, a collection of nerve cells has the power to process much more information. The leech is a tiny organism with a very primitive nervous system, and the circuit it uses to bend away from touch involves only about 40 neurones. With this simple nervous system, leeches can bend, crawl, suck and respond to touch. Bill Kristan and John Lewis of the University of California at San Diego have studied the way the leech nervous system functions. The idea was to understand how neurones interact to produce behaviour. Being small, the leech nervous system makes it easier to study the basic strategies that networks of neurones use to store information and compute, and the researchers hope this will throw light on how our brain, with billions of neurones, functions. Kristan and Lewis discovered that, incredibly, leeches can perform the equivalent of mathematical calculations using only those 40 neurones. Prod a leech with your finger, and it will bend away. Behind this bending lies the ability to locate the position of touch with mathematical precision. When you touch the leech at an angle of θ, the corresponding neurones fire at a rate proportional to the cosine of θ. The leech neurones do simple trigonometry as if they were silicon chips in a calculator. To bend away from touch, a leech has to trigger strongly those inter-neurones that pull away from the point of contact, and only weakly those that pull in other directions. The factor cos θ achieves this task.
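The cosine rule described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the leech's actual circuitry: the preferred directions, peak firing rate and the clipping of negative rates are all assumptions, but it shows how a handful of cosine-tuned neurones can pinpoint the direction of a touch:

```python
import math

def firing_rate(touch_angle, preferred_angle, peak_rate=100.0):
    # Rate proportional to cos(theta), clipped at zero because a
    # neurone cannot fire at a negative rate.
    return max(0.0, peak_rate * math.cos(touch_angle - preferred_angle))

def decode_touch_angle(preferred_angles, touch_angle):
    # "Population vector": sum each neurone's preferred direction,
    # weighted by its firing rate; the resultant points at the touch.
    x = sum(firing_rate(touch_angle, p) * math.cos(p) for p in preferred_angles)
    y = sum(firing_rate(touch_angle, p) * math.sin(p) for p in preferred_angles)
    return math.atan2(y, x)

# 8 neurones with preferred directions spread evenly around the body:
prefs = [i * math.pi / 4 for i in range(8)]
touch = math.radians(30)
print(math.degrees(decode_touch_angle(prefs, touch)))  # ~30.0
```

Even though no single neurone "knows" the touch angle, the weighted sum over the small population recovers it exactly.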

Neurones that calculate sines and cosines had been seen before in other organisms, even in monkeys, but this was the first time anyone had managed to show a neural system actually making use of them. Grigori Orlowsky and his colleagues at the Karolinska Institute in Stockholm have revealed a similar three-layer network of sensory neurones that allows the mollusc _Clione limacina_ to do rudimentary mathematics. John Miller and his colleagues at the University of California at Berkeley showed in 1991 that the cricket does similar calculations too: their experiments probed the way a cricket's nervous system detects tiny currents of air and uses them to work out the location of a nearby predator or mate.

The free-living worm called _Caenorhabditis elegans,_ one of the most commonly used organisms in biological research, has only 302 neurones. Yet it can move, react to stimuli and locate food by following chemical cues! A silicon chip with 302 transistors would be as good as useless.

It is no secret that it is the dream of every computer researcher to crack the way our brain processes and stores information; 'neural network' is standard computer terminology nowadays. Our central nervous system contains neurones divided into neuronal pools, each with its own sensory, integrative or motor functions. A neuronal pool may contain thousands or even millions of neurones arranged in circuits. Each neurone, on average, forms links with about 1000 other neurones. Since there are 100 billion neurones in the human brain, the total number of nerve-to-nerve linkages is about 100,000 billion! This is a staggering number, no doubt!
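The figures quoted above multiply out as follows; both inputs are the rounded averages given in the text, not precise counts:

```python
# Back-of-envelope check of the connection count quoted above.
neurones = 100e9          # ~100 billion neurones in the human brain
links_per_neurone = 1000  # ~1000 connections each, on average
total_links = neurones * links_per_neurone
print(f"{total_links:.0e}")  # 1e+14, i.e. 100,000 billion links
```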

A neurone can connect to another in two ways. It can link up through its tail end, called the axon; the axon of a nerve cell establishes a connection with another nerve cell at what is called a synapse, an interface between two neurones. Most often, though, neurones link up through tiny projections from their head end, called dendrites. Many dendrites can arise from a single nerve cell, whereas a neurone has only one axon. Dendrites increase the number of connections a nerve cell can make with others; as I said just now, anything up to 1000 dendritic connections is possible. In other words, a single nerve cell can speak to 1000 other nerve cells. Some neurones do not have tails at all; they have to exchange information through dendritic connections in the form of transferred electrical potentials.

A neurone can be looked at as having a 'business end', the cell body. The tail end is the axon. The dendrites are extensions from the cell body. A nerve-to-nerve link can be on the cell body or the dendrite. In cerebral cortex, 98% of the synapses are on the dendrites and only 2% on the cell bodies. In spinal cord nerve cells, about 10,000 synapses are found on each nerve cell, of which 2000 are on the cell body and 8000 on the dendrites. As you can see, in the spinal cord, relatively more (20%) of the nerve-to-nerve connections are on the cell body. Perhaps, it is a reflection of differences in the role played by the spinal cord and the cerebral cortex in handling the information. The cerebral cortex obviously processes the data while the spinal cord is more of a conduction cable. Information processing probably requires more extensive networking of the neurones, which is only possible through dendritic links. In the human forebrain, the most complex information processing structure in the known universe, a single neurone can be linked to 40,000 other neurones!

In order to give you an idea of the extent of neuronal connections, let me illustrate it with some interesting calculations found in W. F. Ganong's Review of Medical Physiology, a standard medical physiology textbook I read as a medical student. He says that if you imagine the cell body of a spinal nerve cell to be as big as a tennis ball, the dendrites of the cell would fill an average-sized room! The axon would run up to 1.6 kilometres in length, though only 13 millimetres in diameter!

The figure of 100,000 billion nerve-to-nerve links is frighteningly large. No wonder the human brain is the most exquisite and most powerful information processor in the manifested universe! It still weighs just about a couple of pounds! If a leech nervous system can do so much with just 40 neurones, I can only imagine the limitless possibilities for the human brain.

Just for the sake of curiosity, I tried to look at the complexity of the computer networks of the modern world. There are an estimated 200-500 million computers around the world in an inter-connected network. I may be wrong about the true number because of the rapid rate at which people are taking up this technology. Still, the number I quote is less than a tenth of the world population. However, there are signs that the Internet is poised to link progressively more of the world population. Not many years ago, less than 1% of U.S. households were online; today it is more than a third to a half of the American population, representing tens of millions of people. To achieve similar levels of penetration, the radio, the telephone, the television and cable TV had to wait significantly longer.

But, to be fair, it has to be said that the computer as a concept was conceived by Charles Babbage in the 1820s and did not see the light of day until 1943, when Tommy Flowers built the Colossus electronic code-breaker at Bletchley Park. Initially, the general impression was that it would be of no use other than for processing insurance claims! The same year, IBM chairman Thomas Watson reportedly declared that there was a world market for perhaps five computers! Computers were so big in those days, occupying a whole room, that nobody thought they would ever become as small as we see today. Even Bill Gates got it wrong when he reportedly said in 1981 that a computer memory of 640 kilobytes would be enough for anybody to do anything. He was wrong by a factor of 100, maybe even 1000!

In the biological world, information processing abilities appeared with the nerve cell. The evolution of a separate cell type in the form of neurones enabled organisms to segregate the handling of information to a dedicated group of cells. The most primitive type of nervous system is seen in organisms belonging to the phylum Coelenterata. The coelenterates were the first to develop special nerve cells with a high degree of irritability and conductivity; these cells were sensitive to external influences and capable of transmitting them to other cells. Separate clusters of nerve cells emerged when the co-ordinated action of many contractile elements was required. Such clusters form the nerve rings encircling the umbrella of a jellyfish and cause the whole umbrella to tighten or loosen, enabling the creature to move swiftly through water. In flatworms, which are descendants of the coelenterates, all the nerve cells are concentrated in strands running around the body in intricate patterns. A network of nerve strands was undoubtedly an improvement over randomly scattered nerve cells. In annelids, which must have descended from the flatworms, the connecting nerve strands hold only the long processes of the cells. Almost every segment of the worm has a pair of ganglia connected to each other, and each ganglion is joined through nerve strands to the corresponding ganglia of the preceding and following segments. In the higher worms, the ganglia came close together, making up a compact structure. Here the features of contemporary vertebrates start to emerge.

One of the most primitive representatives of the chordates, the lancelet, has a nerve cord but no cerebrum yet. The cerebrum first appears in cyclostomes and in fishes. The brain of these primitive animals is divided into the same sections as the brain in humans; the difference is only in complexity. Moreover, the well-developed forebrain of man gives him the capacity for sophisticated mental functions.

In mammals, brain development was rapid. Individual zones developed, each responsible for a certain kind of information processing; there were separate regions for controlling vision, hearing, olfaction and skin sensations. Association areas developed in the higher mammals for communication between the processing zones mentioned earlier. In man and the apes, the association areas predominate. They process the information that reaches the brain and make sense of it.

If you look back at this sequence of events, it reads very much like the Local and Wide Area Networks that our own society has devised to solve the problem of inter-linking information-handling points. The brain's individual information-processing zones, and the association areas that inter-link them, are probably the equivalents of our computer networks and databases. They make it possible to handle complex information, as well as to generate meaningful knowledge out of information junk.

Neurophysiologists map out the functional specialisation of brain regions by positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). Both PET and fMRI measure changes in blood flow to specific regions of the brain while human subjects perform various mental tasks; the blood flow changes are said to reflect the metabolic demand resulting from altered levels of neural activity. These tools have made it possible to study brain activity in humans at a spatial scale of a few millimetres and on a time scale of a few seconds. Unfortunately, they offer little insight into the nature of the signals encoded, the computations being performed and the interactions between the different regions of the brain. At least we now know the regions of the brain that control a variety of cognitive functions. Functions like memory, pain perception, smell, speech, mathematical ability, vision, movement, sensory feeling, analytical reasoning, anticipation, perception of speed and perception of colour are carried out at well-localised sites in the brain, as studied by PET and fMRI.

Insights into how the brain processes and represents information have been obtained using microelectrodes. Researchers can measure directly the electrical activity of individual neurones, or of functionally related clusters of neurones. Individual neurones have been found to be selective for detection of upward motion, downward motion, specific colours, density of visual texture, orientation of line segments and other visual features. Certain neurones in the temporal and frontal lobes of the brain are active during short-term memories of specific objects or places. Other neurones in the brain have been shown to fire when the animal deploys attention to one or other regions of the visual scene.

Unfortunately, even this kind of analysis begs fundamental questions about how signals are created, encoded and transmitted by single neurones and assemblies of neurones. Individual neurones receive input from thousands of other neurones, yet microelectrode studies can only capture the responses of the few neurones at the tip of the electrode.

Studying the micro-circuitry and interactions between neurones is the most difficult task of all because of the physical inaccessibility of neurones in intact organisms. Intracellular recordings in thin slices of tissue removed from the brain, or in simple invertebrate nervous systems, have identified distinctive compartments within a given neurone, each receiving synaptic signals from a different input neurone. Most excitingly, in the brain of the housefly, as it watches a bar moving downwards, it is possible to monitor activity sweeping through the individual compartments of a neurone! This is quite unexpected, because it had always been thought that a neurone responds to synaptic input as a single unit. If it is true that a single neurone has distinct functional compartments within it, then it should be possible for it to respond to many dendritic inputs, integrating them! Each neurone is a microprocessor! Each part of it is capable of handling information input differently in response to different dendritic sources!

Decision-making is one of the most important functions performed by the brain. We all know how difficult it is to reach a decision. Neuronal signals connected with decision-making have been localised to the lower parietal cortex of the brain and this region also deals with reward expectation and reward potency. Interestingly, decisions seem to depend on the reward expectation and personal history as much as the sensory details available at the moment.

What would be the necessary processing rate of a computer if it is to perform on a par with the human brain? Hans Moravec, of the Robotics Institute at Carnegie Mellon University, offered some interesting estimates in an article on robots in the December 1999 issue of _Scientific American_. He took the vertebrate retina as his model for estimating the neural computation rate.

The retina is half a millimetre thick and approximately two centimetres across. It consists of cells capable of sensing light, and it is also populated by cells that carry out image processing on the incoming visual data. The retina can be viewed as a million tiny image regions, like a television camera, each capable of detecting edges between dark and light, and motion. Each of these regions is connected to its own dedicated conduction fibre in the nerve that takes the signals to the brain. Each image region performs about 10 detections per second, so the million or so regions together perform a total of 10 million detections per second. Moravec believed that efficient computer software would require at least 100 computer instructions to do the same edge or motion detection. Therefore, in order to match the retina's 10 million detections per second, a computer would require 1000 million instructions per second!

The processing portion of the retina is said to weigh about 0.02 grams. The whole brain, at 1.5 kilograms, is about 75,000 times heavier. It would therefore take 100 trillion instructions per second for a computer to do the same amount of information processing as the brain! A typical PC would have to be at least a million times more powerful to perform like a human brain! I may again be wrong here, because computers are becoming more and more powerful by the day.
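Moravec's estimate can be reproduced as simple arithmetic. All the figures below are his published approximations as quoted above, not measurements:

```python
# Moravec's back-of-envelope estimate of brain computation.
regions = 1_000_000               # image regions in the retina
detections_per_region = 10        # edge/motion detections per second
instructions_per_detection = 100  # software cost of one detection

retina_ips = regions * detections_per_region * instructions_per_detection
print(retina_ips)          # 1,000,000,000: 1000 million instructions/s

brain_vs_retina = 75_000   # brain ~75,000x the retina's processing mass
brain_ips = retina_ips * brain_vs_retina
print(brain_ips)           # 75,000,000,000,000 — roughly the quoted
                           # "100 trillion" in round figures
```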

Moravec estimated that it would take 30-40 years to close this gap between computer power and brainpower. In the years before the 1980s, computer power doubled every two years; then the doubling time came down to 18 months. In the 1990s, each year saw a doubling of computer power for a given price. More importantly, the size of computers has also steadily fallen. At present, I believe we are at least in the region of 1000 million instructions per second in desktop PCs, requiring another 20 years or so to reach the 100-trillion mark.
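As a hedged sketch, that figure can be checked by extrapolation. Assuming computing power doubles every 18 months, an assumption that may well not hold, closing the gap from 1000 million to 100 trillion instructions per second takes:

```python
import math

def years_to_reach(current_ips, target_ips, doubling_years=1.5):
    # Number of doublings needed, times the time per doubling.
    doublings = math.log2(target_ips / current_ips)
    return doublings * doubling_years

print(round(years_to_reach(1e9, 1e14)))  # ~25 years at 18-month doubling
```

At the one-year doubling rate seen in the 1990s, the same gap closes in about 17 years, which brackets the 20-year estimate above.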

The brain is an evolved machine capable of processing sensory data. Its ability for perception stems from the survival needs of humans as organisms: they have to navigate through their territory and locate food sources, enemies and mates. But a number of organisms with smaller brains, like insects and birds, can do that too. Why did man need a complex data-processing organ of this power?

Language and mathematics placed an extra processing-power requirement on the evolving brain, I guess. But the brain did not evolve primarily for doing mathematics, and that is why it is so clumsy and slow when it comes to mathematical calculations. We take minutes to do mathematical tasks, in spite of having billions of neurones, while a calculator or a computer can do them incredibly faster even though it uses far fewer processing units. The brain evolved as a multi-purpose data-processing organ, more likely to meet tasks requiring complex sensory data evaluation and perception than to do mathematics. Mathematics is only a few thousand years old, but the other survival tasks date back to the earliest times in the origin of vertebrates. It is no wonder the brain is not up to speed in churning out numbers when compared with a computer.

With the backing of his vast experience in robotics research, Moravec ventured to predict the likely course of evolution of future robots. A robot of any sort will be expected to navigate through a territory, doing simple chores; it will need to perceive visual data to recognise objects and routes in order to execute these tasks. Writing in 1999, Moravec hoped that by 2010 we would see robots as big as a man but with the cognitive ability of a lizard, able to do the vacuuming, deliver packages, take out the garbage and so on. It would be 2040 before we saw robots like the ones in science fiction movies, capable of moving around freely with the intellectual capability of a human being. More powerful computing abilities and falling costs give rise to this optimism.

Computers will eventually be able to carry out perception, cognition and thought, assuming we solve how the brain does it in the first place. The bottleneck is not just the increase in processing power but also solving the riddle of brain function. Whether biological information processing will even be amenable to computer simulation remains an open question.

At the first stage, 'universal robots' with lizard-like minds of 5000 million instructions per second could be programmed for almost any simple chore. Still, they could only handle contingencies explicitly covered in their application programs; they would not be able to adapt to changing circumstances in their environment on their own.

A 100,000-million-instructions-per-second robot could not only adapt but could also be trained. At the level of mouse-like intelligence, signals in predefined circumstances could act as positive or negative reinforcement: doing things fast and keeping the batteries charged would be positive, while hitting or breaking something would be negative. Just as in our own social world, negative outcomes would be shunned while anything positive would be favoured.

An animal with this level of data-processing ability also has only a limited capacity to improvise; it can only do things in a pre-programmed manner. When I watch wildlife documentaries, one of the thoughts that always occurs to me is why poor prey animals like deer, buffaloes and zebras don't fight back. Just one or two lions or tigers will chase a herd of hundreds of deer, zebras or even wild buffaloes, which run for their lives. Collectively, the combined weight of the prey is many times that of the tiger or lion chasing them. Things would be totally different if only they stood their ground and retaliated. They could smother the odd lion or tiger if only they knew they had the power of their combined physical might. Instead, these hapless animals run around and watch members of their own species being gored to death before their very eyes. Standing a few metres away, they do nothing but watch the predator tearing apart its food. I wonder if these poor prey animals have such limited data-processing abilities in their brains that improvisation is not allowed for; they can only act in biologically programmed ways. Fighting a predator that is individually more powerful has not been allowed for in the 'hardware and software' of their brains, I guess. I do not know how many would agree with this line of thinking.

A 5-million-million-instructions-per-second (5 trillion instructions per second) robot would reach the stage of a monkey-like brain, capable of learning quickly from rehearsals in simulations modelling physical, cultural and psychological factors. Purpose, value, the physical properties of things, goals, beliefs, feelings and preferences would all begin to matter. At 100 million million instructions per second (100 trillion instructions per second), robots would be able to abstract and generalise. They could make medical diagnoses, take financial decisions, analyse data and configure computer systems! Moravec predicts that by 2050 robotic intelligence will surpass our own!

Thinking back on the course predicted for robotics research, it is very clear that what is envisaged has actually occurred in the real world in the evolution of the nervous systems of animals. As animals became more complex, the networking of neurones evolved in step, resulting in a steady increase in processing power. While many organisms, like insects, birds and other less complex creatures, settled down with the limited abilities they still have, man went straight ahead to a data-processing power of 100 trillion instructions per second. Incredibly, these 'human robots' are cheap to produce, and fun as well. A man and a woman with no scientific education can sexually reproduce a human being that can learn, adapt, generalise and improvise. Can't they?

A human child's brain is naïve and incapable of doing what an adult brain can do. Our brains acquire their complex functions as they grow, in response to stimuli from the environment. Learning and experience shape the networks of brain neurones. Human beings differ in their cognitive abilities simply because their networks of neurones are different. Biologically, they are all alike, but cognitively they are all different. Depending on their life experiences, education and learning, human beings grow in a unique manner over the years. They require many years of learning to master socially demanding technical skills, but the abilities expected of the species are learnt quite easily within a few years.

The point to be noted here is that it is not the awesome increase in computing power alone that stands between us and the thinking robots proposed by Moravec; it also depends on understanding how the brain thinks. We are hoping to learn the tricks from the already perfected brain, to try and replace it in our work. It sounds a bit paradoxical. Why not let the brain do it instead? Won't that save the need for thousands of programmers? We are trying to find workers for our factories to save the cost of paying them every month. But aren't we going to pay the programmers who will build these robots? Or are the robots going to self-replicate, producing copies of themselves the way humans do? By designing these super-robots we are aiming to make man redundant. It sounds a bit scary: one day these thinking machines may roam the world and possibly rule it.

_The Matrix_ was a recent Hollywood blockbuster starring Keanu Reeves, whose character lives at an unspecified point in the future. Unfortunately, he lives in a period when such 'thinking' machines are around. Worse still, these machines know how to 'farm' humans! Even worse, these artificial creatures can make us live in an artificial world, making us believe it is all real!

A complex information-processing structure would depend on its ability to exchange information between its functional compartments. It would depend heavily on moving data across locations within its processors for further action. Exchange of data at phenomenal speeds is integral to its function. Data acquired through different means will be stored at points that are all inter-connected in a manner that allows the free flow of information. Data integration and evaluation will require functionally distinct compartments.

Your own Internet will take you to databases of your choice where you hope to access the data you are looking for. If you are looking for medical information, you would need to access _Medline,_ for instance; you log into other databases if you need something else. Often, when your search is of a vague or broader nature, you really need to explore the 'information jungle'. Where is the useful information in the Web's billions of pages? Information may be free, but you need to know where to look it up. Google has made it so easy for us these days: we 'Google' for information every single day, and 'Googling' has become a new word in English.

In the brain, as I said, cognitive functions are compartmentalised. A simple task such as seeing something may involve interactions between different compartments of the brain. The optical cortical neurones, where the visual data arrives, send the data to the memory databases in the temporal cortex for comparison. An evaluation of the visual input is done here: is the image new, or something already known to us? If known, what is its nature? Is it safe or dangerous? Has it got anything in it that requires a response? If it is an old friend of yours, are you going to say a loud hello? Or, if your past experience with him was not pleasant, are you going to just walk past? This reasoning requires the services of the frontal cortical nerve cells, where the structures called the association areas pool all the data together.

After a decision is reached, say a decision to shout a hello, a quick communication has to be established with the region of the brain concerned with spoken language. Remember, spoken language is controlled by its own centre in the brain, which means there is a separate centre for controlling written language. As the signals arrive at the spoken-language centre, the words are assembled in the correct manner within the set grammatical rules. The name of the person has to be sent to the speech centre from the memory centre. The nerve cells in the speech-controlling centre then send commands to the motor-control regions governing the movements of the mouth and neck muscles involved in generating the sound.

The simple function of seeing a visual image and responding to it thus involves the exchange of data between many different neuronal cell groups. Sensory data is made available to all accessing neurones, much as we access data on the Internet or a Local Area Network. When you produce your credit card at the cash point machine or to the shop assistant, it is scanned to identify you. Your number is beamed to the central data storage site, which responds by sending all the data it has about you. You may actually be thousands of miles away from your hometown, but distance is no longer a problem. Details of how much you have spent and how much more you can be allowed are instantly beamed back. Your name and other details are sent back too, in a manner similar to how your brain deals with visual data by comparing it with stored images and other data.

When you do the same with your bank card, even when continents physically separate you, all the information about you and your account is instantly available. This kind of free flow of information is central to any information processing system. Banks use a similar mechanism, scanning your cheque leaves for account details and checking your signature. The cheque, when presented to the scanner, is read for these details much as a scanner attached to your computer reads a document or picture.

You may have wondered how identity cards work when they are used to let you in at an unmanned entry point. They work very much like the way your credit cards are recognised by scanners. At the car park at your workplace, when you swipe your ID card, it is instantly scanned and the number checked against a database held remotely. Once the identity is confirmed, the entry or exit gate is opened. All this happens as if by magic.

Department stores use a similar local network of data exchange points. When a product is scanned with the bar code reader, its price and name are retrieved from the central data storage point, which functions much like your memory centres do. The information flows back to the shop assistant's desk. As you buy the item, the information is sent to the location where the inventory is kept. Once you buy one item, the store has one less of that product than before your purchase. This keeps the store well informed of what it has to stock and when. In fact, orders are placed automatically with the suppliers once the stock reaches a critical limit, which means the suppliers are also in the network.

The networks become huge when it comes to large organisations. A stock market could generate an enormous number of transactions every day, and the data generated could be phenomenally large. Every organisation today has an information network of its own to suit its function. In our hospital, within the Pathology department, we have a dedicated network called _Telepath_ that links up all the pathology disciplines: Clinical Chemistry, Immunology, Microbiology and so on. All tests done on clinical specimens are entered into this network for access by users within the hospital. It may be someone in the lab who is feeding in patients' test results, or someone at the consuming end, like a doctor accessing the results of a blood test carried out on his patient. Computer terminals can be placed anywhere within the hospital to widen the reach of the network.

A network of computers can do a lot more than a single one. More interestingly, the processing power of the networked computers can be combined.

Tom Sterling, who works at Caltech and NASA's Jet Propulsion Laboratory, built the _Beowulf_ cluster supercomputer by taking an open source operating system, _Linux_, and adding network drivers to it. His idea was to build a cheaper supercomputer simply by stringing together lots of PCs. It cost him a tenth of what it would have taken to build a supercomputer with the same processing power. It was also flexible in that you could add more power any time you wanted, simply by adding more PCs to the network. A similarly built cluster at Los Alamos National Laboratory ties together 140 PCs and carries out 50 billion floating-point calculations per second. It cost a mere $300,000! The European Southern Observatory at La Silla, Chile, has chosen Beowulf for its supercomputing needs.

The concept applied in Beowulf 'supercomputers' received a further boost when Intel's latest chip of the time, the IA-64, or _Merced_, was launched in 2000. This chip has a 64-bit architecture, making many scientific applications and high-resolution graphics work more precisely. Linking up 64-bit PCs for scientific purposes will open up cluster supercomputing in many labs around the world, and the arrival of gigabit-per-second Ethernet local area networks will bring such supercomputing abilities within reach of many more locations.

Larry Smarr, director of the National Computational Science Alliance, thinks this idea can be extended over the Internet too, linking up tens of millions of PCs. In contrast, the world's most powerful supercomputer uses only 10,000 Pentium Pros.

A 32-bit PC chip can address 2^32 other items. A 64-bit PC chip can address 2^64 items. In distributed computing terms, it means that one processor can access the memory of every other computer on the planet! When the 128-bit chips arrive, they can allow the user to address 2^128 items, a number of computers said to be equivalent to the total number of atoms in the entire universe!
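For the curious reader, the arithmetic behind these figures is easy to check in a few lines of Python. This is a simple illustration of the powers of two involved, not part of any chip's design:

```python
# How many items an n-bit address can distinguish: 2 to the power n.
for bits in (32, 64, 128):
    count = 2 ** bits
    print(f"{bits}-bit chip: 2^{bits} = {count:.3e} addressable items")
```

Running this shows the leap from roughly four billion items at 32 bits to about 1.8 x 10^19 at 64 bits, which is why each doubling of the word width so dramatically enlarges the reachable 'world' of a processor.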

Distributed computing has already been tried in a more exciting project, the Search for Extraterrestrial Intelligence (SETI), hosted by the Space Sciences Laboratory at the University of California, Berkeley. It is an ambitious attempt to harness the spare power of home computers spread across the globe. The observations from the Arecibo radio telescope in Puerto Rico, aimed at receiving radio signals from aliens, are broken into 12-second chunks and sent over the Internet to volunteers. Anyone can take part simply by downloading the SETI screensaver. Volunteers' machines then sift the data for any signals that might have come from an alien civilisation. The Arecibo telescope generates 35 gigabytes of data daily, a data avalanche that no single supercomputer could handle. With about 1.8 million computers across 210 countries as of 2008, SETI could compute at over 528 TeraFlops, comparable to the world's fastest supercomputer, _BlueGene_! While the project has not yet detected any ET signals, it is said that several candidate target positions have been identified in the sky where noise alone does not explain the spike intensity.

David Anderson, a computer scientist at the University of California, Berkeley, who leads this project, says he has plans to use similar approaches to solve other supercomputing problems, such as rational drug design and protein folding. _Distributed.net_ is a web-based group that uses distributed computing to crack cryptographic keys. One problem with distributed computing, however, is its slowness and unreliability for solutions requiring tight coupling.

_Cactus_ is a novel software tool released by its developers, Joan Masso and Paul Walker, at the Max Planck Institute for Gravitational Physics in Potsdam. It allows supercomputers, clusters of distributed PCs, and even portable PCs to be linked over a high-speed network, parallelising a program to run on virtually any computer system. The National Science Foundation spent $2.2 million on the development of _Cactus_.

Grid computing is the application of several computers to a single problem at the same time. The problems are of a diverse nature, ranging from drug discovery and seismic analysis to economic forecasting. Grid computing is distinguished from conventional cluster computing in that it tends to be loosely coupled, heterogeneous and geographically dispersed. It is a special case of parallel computing that relies on computers connected to a network by a conventional network interface such as Ethernet. In a supercomputer, by contrast, many parallel processors are connected to a local high-speed bus. The advantage of a distributed computing system is that it costs far less. The Berkeley Open Infrastructure for Network Computing (BOINC) was the most popular volunteer-based distributed computing platform as of April 2009. It supports a number of active projects, such as modelling of clinical epidemiology, protein folding and climate forecasting.

We are now living in a world where information of all sorts is constantly required by us in various forms and sizes, and we are able to link up with different kinds of networks to access the data we need. My argument is that the brain, too, is a network similar to our computer networks. The brain is a network of neurones. I have given an indication of the intricacy of the inter-neuronal connections. As I said before, neurones are organised into groups, each concerned with a specific function, and these neuronal pools are linked in a wider network, allowing free flow of sensory data.

All cognitive functions require the exchange of data between functional compartments within the brain. The brain is unique in that it can synthesise knowledge out of the data. I cannot say the same for any other form of data network man has at his disposal. This unique ability of the brain is said to reside in the part called the frontal lobe, just above your forehead. This is a recent evolutionary development. Man is unique in the animal world in that his frontal lobe is well developed. The frontal lobe incorporates a more extensive networking of neurones whose job is not sensory data capture but data processing. It is used exclusively to make sense of the sensory information sent in by the various kinds of sensory receptors. The brain regions that receive the data input feed into this unique region for further processing.

The basic architecture of an information-processing structure seems to be a network of information storage sites and functional units. A function site is a location where a specific information-dependent function is performed. In a computer, it could be addition, multiplication or some other form of information processing. More complicated tasks become possible as the number of such units increases. Most personal computers have an interface between their hardware and software called the _Instruction Set Architecture_. This moves data from storage sites on the microprocessor to function sites where the information is processed. The information storage sites and processing sites reside on the chip, connected by hardware structures called _buses_. 'Bus' sounds like the right name in a figurative sense, because it carries information between sites as if the data were 'commuters of the information world'.
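The idea of a bus shuttling data between storage sites and function sites can be sketched as a toy machine in Python. This is my own simplified illustration of the principle; real instruction sets are far richer:

```python
# A toy machine: the 'bus' moves values between storage sites (memory)
# and a function site (a tiny ALU), then carries the result back.
def run(program, memory):
    for op, dst, a, b in program:
        x, y = memory[a], memory[b]   # bus: storage sites -> function site
        if op == "add":
            result = x + y            # the function site does its work
        elif op == "mul":
            result = x * y
        memory[dst] = result          # bus: function site -> storage site
    return memory

mem = {"a": 2, "b": 3, "c": 0, "d": 0}
run([("add", "c", "a", "b"), ("mul", "d", "c", "a")], mem)
print(mem["c"], mem["d"])  # 5 10
```

Every instruction is nothing but a round trip on the bus: fetch two stored values, process them at a function site, and store the result.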

I should remind readers that our brain relies on a similar movement of stored data to the neuronal groups concerned with a specific cognitive function. As the number of neurones in a specific functional pool increases, proportionately more complex tasks become possible. This is like the increase in processing power obtained when you increase the number of transistors in a chip.

An increase in transistor numbers increases the number of instructions a chip can handle per second. In 1987, a microprocessor contained about 100,000 transistors, occupying one square centimetre of silicon, and could perform 20 million instructions per second. In 1997, the same number of transistors could be held in one square millimetre! By 2007, that had shrunk to one-tenth of a square millimetre. This shrinking of silicon is expected to enable the packing of more and more transistors; chips could soon carry billions of them! The reduction in transistor size will also allow more information storage locations and more functional processing sites to be squeezed onto each chip. The architecture of moving data between sites will still require the chip's wires to link them, and delays in moving data will still limit the chip's performance. Wires have to be laid out linking the sites to create the _buses_ that move the data.

In the current architecture of our computer hardware, taking full advantage of the microprocessor inside every computer requires several add-on cards: a modem card, a graphics card, a sound card, a video card and so on. These cards are plugged into your computer case, enabling multimedia applications. Each card carries a customised chip with a unique wiring pattern for a specific application, which means moving the right data to the right place.

Anant Agarwal, co-director of the _Raw_ project at the M.I.T. Laboratory for Computer Science, worked with his colleagues to develop an entirely new kind of microprocessor called the _Raw_ chip. This chip is expected to do away with separate permanent wiring patterns on the chip for each application. Instead, it will rearrange its 'wires' automatically, depending on the application you want to run. This involves re-routing information flow through what are called 'logic gates'. A logic gate is an arrangement of transistors that controls the direction of electric current on a microchip, and hence the flow of information. The logic gates can be arranged to customise the 'wiring' for each application when you need it.

The difference between today's computer architecture and what the _Raw_ project proposes lies in what controls the operation. Is it the ingrained wiring? Or is it the software itself, by virtue of its ability to command the movement of information down changing pathways, depending on the application you want to run? The _Raw_ chip makes the software do the job. What is the advantage of it all? The _Raw_ chip could be incorporated into a single device that performs a variety of applications: encryption, speech recognition, games or communications. A multiplexer routes several signals along a single wire segment, selecting one of the incoming signals depending on the needs of the application running on the chip.
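A multiplexer itself is nothing more than a handful of logic gates. Here is a minimal sketch in Python of a 2-to-1 multiplexer built from NOT, AND and OR; the gate functions are illustrative stand-ins for the transistor circuits described above:

```python
# Basic logic gates on single bits (0 or 1).
def NOT(x): return 1 - x
def AND(x, y): return x & y
def OR(x, y): return x | y

def mux(a, b, sel):
    """2-to-1 multiplexer: route signal a when sel=0, signal b when sel=1.
    Gate form: out = (a AND NOT sel) OR (b AND sel)."""
    return OR(AND(a, NOT(sel)), AND(b, sel))

for sel in (0, 1):
    print(f"sel={sel} -> output {mux(0, 1, sel)}")
```

Changing the select line is, in miniature, what 're-wiring by software' means: the same physical gates carry a different signal down the same wire.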

Agarwal's group has termed the device the _Handy 21_, to denote the 21st century. A single chip inside it will have several perceptual interfaces: cameras, video displays, and speech. It will use an antenna for communication and an analogue-to-digital converter. You can virtually make the device perform all these functions as if it were a 'digital chameleon', ready to change its functional role at your command. On your order to change into a cell phone, the device will dutifully locate the appropriate software, download it and configure the wires of the _Raw_ chip to give it the characteristics of a mobile phone!

Our brain is a versatile device, working very much like a _Raw_ chip. It can switch between different functions with ease. It can enable speech like a speech synthesiser, see like a camera, do mathematics like a calculator, produce mental imagery like a video game, store data like a disc, dictate movement like a robot-controlling computer, feel like a sensor, capture information like an antenna, and process data to produce thoughts and ideas. I wonder how the brain can switch between these functional modes with such ease. Often, it performs more than one of these functions at the same time.

Integration of sensory information, sensory-motor co-ordination and many other functions critical for learning, memory, information processing, perception, and the behaviour of organisms depends on the transfer of data between different regions of the brain. In 1949, D. O. Hebb suggested that this is accomplished by the formation of assemblies of cells whose synaptic linkages are strengthened whenever the cells are activated, or ignited, synchronously. Hebbian assemblies remained an intriguing concept for a long time because, until the past decade, the technology needed to demonstrate them experimentally in the brain was lacking.
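Hebb's rule is simple enough to sketch in a few lines of Python. The learning rate and the firing sequence below are invented for illustration; this is a caricature of the principle ('cells that fire together, wire together'), not a biophysical model:

```python
# Hebb's rule in its simplest form: a synapse strengthens only when the
# two cells it connects fire in the same time step.
def hebbian_update(w, pre, post, rate=0.1):
    # pre, post: 1 if that neurone fired in this time step, else 0
    return w + rate * pre * post

w = 0.5  # starting synaptic strength (arbitrary units)
for pre, post in [(1, 1), (1, 0), (0, 1), (1, 1)]:
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # strengthened only on the two synchronous firings
```

Only the two synchronous events increase the weight; asynchronous firing leaves it untouched, which is exactly the selectivity that lets co-active cells bind into an assembly.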

Studies over the last decade have shown beyond doubt that very fast electrical activity in the frequency range of 20-70 Hz increases during the formation of percepts, memory, linguistic processing, and other behavioural and perceptual functions. Wolfgang Miltner of the Department of Biological and Clinical Psychology, Friedrich-Schiller-University, Germany, and his colleagues reported in the 4 February 1999 issue of _Nature_ that increased gamma-band activity (20-70 Hz) is also involved in associative learning. They found that gamma-band coherence increases between the regions of the brain that receive the two classes of stimuli involved in associative learning in humans. I said a while ago that sensory stimuli reach different regions of the brain and are freely exchanged during cognitive tasks. In associative learning, two or more groups of neurones are made to fire coherently. These researchers concluded that an increase in coherence could fulfil the criteria required for the formation of Hebbian cell assemblies, binding together parts of the brain that must communicate with one another.

The formation of stable connections between brain areas involved in processing different sensory stimuli is a good model for the formation of cell assemblies. It provides an unambiguous and easily specified means of requiring that widely separated areas of the brain interact. Coherence, or in-phase synchronicity, in electrical activity could underpin the formation of cell assemblies acting as functional units.

The linkage, or communication, between brain regions can be directly mediated by a third, intermediary region or regions; alternatively, oscillations in a third region could cause the gamma coherence in both areas. At the neuronal level, cells in the superficial cortical layers of the brain have been found to behave as 'chattering cells', acting as excitatory pacemakers. Their excitation spreads to other regions, firing them in sequence and generating coherence in electrical activity.

Advances in neurobiology have clearly shown that nowhere in the brain is there a single centre evaluating or co-ordinating neural computations. How, then, are the results of many parallel computations bound together to permit coherent perception and action? It is achieved by jointly activating an assembly of neuronal cells across potentially diverse functional groups, rather than by the response of an individual, highly specialised neurone or just one functional group of neurones. The advantage is that an unlimited number of different cell assemblies can be generated, each representing a different function. We are now talking about something similar to the _Raw_ chip. Cells drawn from a large pool of functionally specialised neurones in the cerebral cortex can be dynamically regrouped. This is perhaps what Agarwal's group is proposing: to use the same chip to perform different functions by automatically re-wiring the information path to suit the application, using software.

Electrode recordings on the cortex have shown that neuronal responses are bound together by the synchronised firing of individual neurones with millisecond precision. Synchronisation may allow the selective association of distributed neurones, much like the distributed computing I was talking about before.

The classic theory of neurology viewed neural coding as dependent on the rate at which neurones discharge action potentials. Only recently have we begun to realise that information may be transferred in the brain by precisely timed spikes of electrical discharge.

An attractor, yielding a stable spiking precision in the sub-millisecond range, governs the dynamics of synchronisation. Markus Diesmann and colleagues at the Department of Neurobiology and Biophysics, Albert-Ludwigs-University, Germany, reported findings suggesting that a cortical network of neurones can transmit synchronous spiking in a stable manner, without requiring dedicated synapses, provided enough neurones are recruited in successive groups. Each neurone in a group contributes a single spike to the passing volley. A neurone may then recover from its electrical firing and be ready to engage with another group. Thus, each neurone may participate in multiple volleys with different neurone compositions. Several such volleys may propagate through the network simultaneously, allowing multiple synchronous processes to coexist while maintaining their identities; the difference lies in their temporal coupling. This model shows how a combinatorial neural code is possible, based on the continual reconfiguration of the cortical network into short-lived functional groups depending on the immediate computational demands.

The same neurone participating in reconfigured networks in the course of data processing is perhaps similar to the same employee serving on different committees dealing with different things. Even in an office setup, tasks are assigned to groups of staff used in a combinatorial fashion; the same individual can form a group with different people to do different tasks. Do you get the idea?

Mario Galarreta and Shaul Hestrin of the Department of Anatomy and Neurobiology, University of Tennessee, Memphis, have shown that inhibitory neurones, too, are important in generating co-ordination within a network. These inhibitory nerve cells are believed to shut down out-of-rhythm impulses in neurones that are expected to fire synchronously with others.

Guo-qiang Bi and Mu-ming Poo of the Department of Biology, University of California at San Diego, La Jolla, California, have found evidence to support the view that repetitive, correlated firing of neurones can induce a persistent increase or decrease in synaptic strength, depending on the timing of the pre- and post-synaptic excitation. It has long been known that synaptic modification can be brought about by stimuli converging on a neurone. This group found that localised stimulation can modify synapses between remotely located neurones! Depending on the inter-pulse interval, neuronal connections can be weakened or strengthened! This brings into focus a novel 'delay-line' mechanism of encoding temporal information, converting it into, and storing it as, spatially distributed patterns of persistent synaptic modification in a neural network.
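This timing dependence is often sketched with an exponential window: the synapse strengthens when the presynaptic spike precedes the postsynaptic one, and weakens when it follows. The constants below are illustrative textbook-style values of my own choosing, not figures from the experiments described above:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Change in synaptic weight for a spike pair.
    dt = t_post - t_pre in milliseconds; tau sets the timing window."""
    if dt > 0:
        # pre fires before post: the synapse is strengthened
        return a_plus * math.exp(-dt / tau)
    else:
        # post fires before pre: the synapse is weakened
        return -a_minus * math.exp(dt / tau)

print(stdp_dw(10))   # positive: pre led post by 10 ms
print(stdp_dw(-10))  # negative: post led pre by 10 ms
```

The closer the two spikes are in time, the larger the change, which is how an inter-pulse interval can be converted into a lasting, spatially distributed pattern of synaptic strengths.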

Researchers have tried a number of approaches to growing a network of neurones in the lab in order to understand how they communicate. Jerry Pine, a biophysicist at the California Institute of Technology, Pasadena, uses a novel approach. He was among the first researchers to measure the electrical activity of cells by growing them on top of an array of electrodes. The electrodes let him 'listen' to the electrical chatter between the neurones, the way people chatter in a party room. We know they are all talking, but who is talking to whom, and about what? Pine is trying to solve this problem by assigning places to the neurones in special holes on a silicon chip, each hole with an electrode at the bottom.

Most importantly, it may be possible to build computing devices out of combinations of neurones. Grown in the correct patterns, a combination of stimulatory and inhibitory neurones could constitute a logic gate. Depending on our technical ability to grow bigger and more complex circuits, it should be possible to produce neuronal functional units like those in a real nervous system. One real incentive for growing such neuronal circuits as logic gates is that combinations of neurones can do much more than a logic gate on a chip.

In the world of science fiction, one can freely imagine anything one likes. How about a thought-controlled computer, which needs no keyboard, monitor or other interface? Would it be possible to communicate over digital networks by thought alone? In the 9 March 1996 issue of _New Scientist_, Peter Thomas, a professor of Information Management at the University of the West of England, Bristol, wrote a fascinating article on thought-controlled machines. He reported that some researchers are trying to work out how to connect the brain directly to digital devices such as computers, databases, and video cameras!

However, progress is limited by our lack of understanding of the precise ways our brain works. Designing a brain-machine interface is not going to be easy, because we do not know whether the brain relies on digitising sensory data. The general approach to developing such an interface is to study the bursts of electrical wave activity of the brain, using electrodes placed on the skull. Medically, this is called electroencephalography, or EEG. Neurologists commonly use it to investigate people with fits and other neurological disorders.

The wave patterns observed on the EEG represent the electrical activity generated by interactions between the billions of neurones inside the brain. It has long been known that these patterns change depending on whether the brain is relaxing or active. A characteristic wave pattern with a frequency between 8 and 13 Hz, known as alpha waves, is seen when we are relaxing. This changes to a faster rhythm of 15-30 Hz when we are aroused by stimuli.

Researchers have begun to look more closely at the EEG for patterns that appear as the brain engages in cognitive functions. For example, how does the wave pattern change when a specific word is spoken? How does it change when you look at some image or object? Given an EEG wave pattern, would it ever be possible to tell what the subject was doing? A team of computer researchers led by Michio Inoue at the University of Tottori near Osaka in Japan developed a database of EEG patterns taken from a subject concentrating on particular words. The computer tries to match the subject's EEG signals with the patterns in the database to guess the word the subject is thinking of. As of 1996, the computer had a vocabulary of only five words, and it took 25 seconds to guess right, at a success rate of 80%. I do not know how much progress has been made since. In comparison, children learn words very rapidly; perhaps our brains are inherently well equipped to recognise words and patterns.
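In outline, such a system is a template-matching exercise: compare the recorded signal against a small database of stored patterns and pick the closest. Here is a minimal sketch in Python; the 'EEG patterns' are invented numbers, not real recordings, and real systems use far richer signal features:

```python
# Guess the word by finding the stored pattern closest to the signal.
def closest_word(signal, database):
    def distance(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda word: distance(signal, database[word]))

database = {
    "yes": [0.9, 0.1, 0.4],  # made-up feature vector for 'yes'
    "no":  [0.2, 0.8, 0.5],  # made-up feature vector for 'no'
}
print(closest_word([0.85, 0.15, 0.45], database))
```

A noisy recording near the 'yes' template is classified as "yes". The difficulty in practice is not this matching step but extracting feature vectors stable enough to make the templates meaningful.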

Shiao-Lin Lin and colleagues in the Department of Neurology at the National Taiwan University Hospital in Taipei have been studying the spikes of brain activity that occur when a subject is doing a mental task, or before an action is planned. They could see the brain spike occurring even before the subject actually performed the action. It should be possible to assist people with disabilities if we can decode the brain waves denoting different human actions. Stephen Roberts, a lecturer in Neural Computing at Imperial College, London, is researching just that. He is studying the pattern of brain activity as a subject intends to move a limb, before the action has actually occurred. This brain activity presumably underlies the neuronal processing of the situation and the generation of the brain's commands to the muscles.

Whatever the objective of the researchers, there is no escaping the fact that analysing complex EEG signals is not an easy task. We cannot hope to go beyond a few tasks at most. I cannot really imagine someone coming up with a reliable database of EEG changes denoting all the complex physical actions and the thousands of words in our languages.

Instead, it might be easier to find a way to feed digital data directly to the brain. This is the line of thinking of some researchers, who call it the 'neuro-compatible interface'! Richard Normann's team at the Department of Bioengineering at the University of Utah has been developing ways of supplying video images directly to the brains of people who have lost their sight. Most cases of blindness are due to defects or damage in the external visual apparatus: the eyes and the nerves. The brain region that processes visual information, the visual cortex, is usually normal in these people. If that is the case, why not feed visual data directly to this region, bypassing the damaged conduction pathways? Normann's group hopes to do this by implanting devices, consisting of arrays of electrodes, directly into the brain. The plan is to take images from a video camera, transform them into electrical signals and excite the neurones directly through the electrodes. Results suggest that the approach is capable of creating artificial vision, even though it cannot yet produce an image with the clarity of normal vision.

With respect to hearing, the cochlear implant is a bionic device, now available for people who have lost their hearing, that picks up sound signals much the way a normal ear would. The problem with this implant is its inability to ignore the sound data you do not want. A normal ear-brain system knows how to ignore useless sound signals and concentrate only on what matters, but a cochlear implant can make the world sound like a noisy market place all the time.

The drive for improved and more powerful means of information processing makes man seek newer avenues all the time. Going beyond neurones and circuits, he is also looking at another entity: DNA.

DNA is pure biological information, and it drives evolutionary change. An organism evaluates its immediate environment using its signal detectors, and uses this data to come up with an adaptive response, which is itself a form of information processing. The genetic solutions arrived at by organisms represent optimal ways of surviving in a given environment. In short, evolution is all about information processing.

Obviously, evolutionary processing of data is a terribly slow process. It could take hundreds or thousands of years for complex organisms like us to manifest changes in our genetic machinery. Unicellular life forms evolve faster because their rate of mutation is greater: they divide much faster, producing the next generation sooner and making new mutations during DNA replication more likely.

We say the brain is slow to process data compared to computers. The response time of the brain can be many seconds to minutes, or even many hours if the problem is complex. Some scientific problems remain intractable for years in spite of the combined power of many human brains acting as a kind of super brain, analogous to a supercomputer. But the slowness of the brain is more than compensated by its productivity.

DNA is slower still. It takes ages to select the best genetic solutions. A number of alternatives are tried in the form of a number of species that evolve in a Darwinian manner. We have no clue to the mechanisms by which the fit genes are arrived at, but we know that they are. It remains the subject of intense controversy and debate; even reputed scholars get into heated arguments about the underlying principles.

Somehow, we humans have started to think that DNA could solve some of our own vexing information processing problems! Researchers have turned to DNA to see if its inherent information-handling abilities can be exploited.

DNA computers are still a pipe dream, but the payoffs would be enormous if they ever succeed. It could become possible to perform trillions of calculations simultaneously in a single test tube of DNA! In a single jar of DNA, we could store millions of times more information than today's largest computers hold!

At the Second Annual Meeting on DNA-based computing, held at Princeton University, researchers were upbeat about seeing DNA computers become a reality. Scientists have long known that DNA is a 'natural computer', which stores incredible amounts of information in an accessible manner. How to harness DNA's natural computing ability is the real problem now.

A number of ways of using DNA to compute have been proposed and tried. Researchers have even encoded information in a digital format by treating DNA strands as strings of 0s and 1s. If you wanted to denote 110100, you would link six strands of DNA end to end. The 1 in the first position can be denoted, for example, by one 5-base sequence, and the 1 in the second position by a different 5-base sequence. Similarly, 0s occurring in different locations are represented by different DNA sequences, because the DNA computer needs to be able to keep track of the order of digits.
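To make the scheme concrete, here is a toy sketch in Python. The 5-base sequences in the codebook are invented purely for illustration; a real encoding would choose sequences that cannot accidentally hybridise with one another.

```python
# A toy version of the position-dependent encoding described above.
# The 5-base sequences are invented for illustration; real schemes
# pick sequences that will not bind one another by accident.
CODEBOOK = {
    (0, "1"): "ACGTA", (0, "0"): "TGCAT",
    (1, "1"): "GGATC", (1, "0"): "CCTAG",
    (2, "1"): "TTACG", (2, "0"): "AATGC",
    (3, "1"): "CAGTT", (3, "0"): "GTCAA",
    (4, "1"): "AGGCT", (4, "0"): "TCCGA",
    (5, "1"): "GTTAC", (5, "0"): "CAATG",
}

def encode(bits):
    # Link one 5-base strand per digit, end to end.
    return "".join(CODEBOOK[(pos, b)] for pos, b in enumerate(bits))

print(encode("110100"))   # a 30-base strand standing for 110100
```

Because each digit position gets its own pair of sequences, the order of the digits is preserved in the chemistry itself.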

A silicon computer tackles such a problem by checking every possible solution in turn. If there are only a few variables, that is fine. But as the number of variables increases, the problem becomes intractable: it may take weeks, months or years to comb through all the possibilities.

A DNA computer can attack the problem in a different way. It works by the method of extraction: out of a vast number of candidate solutions, it eliminates some at each step, in a massively parallel manner. This is how evolution itself works. A number of species, or a number of gene variants, evolve to meet the environmental demands. As every biologist knows, many of the genes and species die out, leaving fewer to choose from. This happens over millions of years. In the end, a select few genes and species remain as solutions to the environmental challenges. We call it the survival of the fittest.

A DNA computer should be able to perform billions of times as many calculations as a silicon computer. A test tube of DNA can hold quadrillions of DNA strands, each encoding a string of hundreds or thousands of 0s and 1s. One can imagine the memory a DNA computer can have.

Richard Lipton, a computer scientist at Princeton University, and his two students Dan Boneh and Christopher Dunworth showed that it was possible, in theory at least, to crack the Data Encryption Standard, which the U.S. government and IBM developed and which was widely used in business. The key is a string of 56 binary digits, which can be discovered only by searching through 2^56 different sequences! It takes a massively parallel computer costing millions of dollars to do that. But a DNA computer could do it at a fraction of the cost.
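The arithmetic of that key space is easy to check. A short Python calculation shows why exhaustive search is so expensive; the one-billion-keys-per-second testing rate is an illustrative assumption, not a figure from the source.

```python
# The key is 56 binary digits, so the search space is 2**56 keys.
keyspace = 2 ** 56
print(f"{keyspace:,} keys")        # 72,057,594,037,927,936

# Assuming (illustratively) a machine testing one billion keys
# per second, exhaustive search would still take years:
years = keyspace / 1e9 / (3600 * 24 * 365)
print(f"about {years:.1f} years")  # about 2.3 years
```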

Molecular computers are still in their developmental stages. They have enormous potential to offer significant improvements over silicon computing, but we will have to wait some more years before they become a reality. Meanwhile, researchers have had a different idea: if we can't make a molecular computer soon, why not use the principles of DNA computing rather than DNA itself?

Using DNA itself to compute is one thing; we are yet to demonstrate convincingly that it can be done in a practical manner. The advantages are obvious, but we have to wait for it to happen. People, however, have already started using the operating principles of DNA to solve intractable problems!

It all started in the 1960s, when John Holland, then working at the University of Michigan, carried out the research that was to become the genetic algorithm. He believed, and showed, that techniques similar to those observed in Darwinian evolution could be used to solve problems very different from surviving on planet Earth.

Genetic evolution offers a life system the best chance of survival by shuffling, mutating or exchanging gene characteristics. If a life system had a string of characteristics A to Z, it could change them by exchanging some of those characteristics with its mate during reproduction.

It can also undergo mutation, which changes the gene characteristics at one or more points. If some characteristics are more attractive, then organisms carrying them are selected; that is, they survive better. Ultimately, a successful life system may have a set of characteristics arrived at by exchange and mutation.

What Holland believed was that these principles should be applicable to other systems as well: telephone companies, airline operators, manufacturing companies, financial institutions. He considered the genetic algorithm an adaptive search towards optimisation.

All the ideas and language of genetics, such as chromosomes, genes and mutations, were carried over. Holland's idea was to maintain a pool of solutions to a given problem. Each solution was given a fitness rating depending on how good it was. Holland stored possible solutions as 'chromosomes', that is, strings of numbers, each number having a particular meaning in the problem. These 'chromosomes' reproduced according to their fitness. Only a fixed number of chromosomes was stored, and the less fit ones would tend to die off.
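Holland's scheme can be sketched in a few lines of Python. The fitness function, pool size and mutation rate below are illustrative choices of mine, not Holland's own; what matters is the cycle of fitness-rated reproduction, part exchange, mutation and death of the less fit.

```python
import random

random.seed(1)
N_GENES = 20          # length of each bit-string 'chromosome'
POOL_SIZE = 30        # only a fixed number of chromosomes is stored

def fitness(chrom):
    # Toy fitness rating: how many 1s the chromosome carries.
    return sum(chrom)

def crossover(mum, dad):
    # Part exchange: splice the two parents at a random point.
    cut = random.randrange(1, N_GENES)
    return mum[:cut] + dad[cut:]

def mutate(chrom, rate=0.02):
    # Occasionally flip a gene's value.
    return [g ^ 1 if random.random() < rate else g for g in chrom]

pool = [[random.randint(0, 1) for _ in range(N_GENES)]
        for _ in range(POOL_SIZE)]

for generation in range(60):
    pool.sort(key=fitness, reverse=True)
    parents = pool[:POOL_SIZE // 2]        # the less fit die off
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POOL_SIZE - len(parents))]
    pool = parents + children

best = max(pool, key=fitness)
print(fitness(best))   # typically climbs close to the optimum of 20
```

Here the fittest half of the pool reproduces each generation, exactly as Holland described: chromosomes reproduce according to their fitness, and the rest die off.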

How is this model going to help us optimise? Our modern world is full of problems that need to be solved in an optimal way without having to try every possible solution. Telephone companies, airline operators, manufacturing companies and financial institutions have all looked for the quickest way to reach a fit solution that will work. A decision has to be taken immediately on whether we are heading in the right direction or not. Optimisation can be achieved if we have a sort of gradient in terms of the cost-effectiveness of solutions.

The goodness of a solution can be rated and given a value called the cost function. If we think of the cost function as a landscape, and altitude as a measure of the cost, then the optimal solution is where the deepest valley has its bottom. The landscape of possible solutions is called the search space and any single possible solution is a single point in this space.

A simple example illustrates this. If you get lost on a hill at night, what would you do? Visibility is poor and you feel thirsty. The immediate need is to find some water, more than finding your way back. Where are you going to search?

You cannot run blindly east or west, right or left. You may or may not find water that way, and it is going to take time. The simplest strategy is to pick the direction that seems to lead downhill most quickly and walk that way. It is common knowledge that water flows downwards, and the one other piece of information available to you is the slope of the terrain immediately around you. These two bits of information help you make a beginning. This method, known as gradient descent, is an example of adaptive search, in which information gained from the previous step guides the next step.
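The downhill walk can be written out as a tiny program. The cost landscape below is an invented one-dimensional example; the only information the walker ever uses is the local slope, exactly as in the hillside story.

```python
# Gradient descent on a made-up one-dimensional cost landscape:
# cost(x) = (x - 3)**2, whose deepest valley has its bottom at x = 3.
def cost(x):
    return (x - 3) ** 2

def slope(x, h=1e-6):
    # The only local information used: the slope of the terrain.
    return (cost(x + h) - cost(x - h)) / (2 * h)

x = 10.0                  # lost on the hillside, far from the valley
for _ in range(100):
    x -= 0.1 * slope(x)   # each step, walk a little way downhill

print(round(x, 3))        # prints 3.0: the bottom of the valley
```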

Researchers have tried a number of alternatives to the gradient descent approach to optimisation. What inspired John Holland was the evolutionary process by which plants and animals arrive at the fittest life systems for a given ecosystem. The genetic algorithm is an adaptive search towards optimisation.

A 'chromosome' contains all the information necessary to specify a position in the 'search space' as a set of numbers called 'genes'. New chromosomes can be formed from old ones in two ways: by part exchange or by mutation.

Part exchange of chromosomes occurs during the crossing over that happens in meiotic cell division. Small regions of the mother's and father's chromosomes exchange places, a process known as genetic recombination. Though crossover helps pool the resources of mother and father, it does not offer dramatic improvements, because within a species there is not much variation in the gene content of a father or a mother. You are not going to find a sudden, altogether new function emerging as a result of crossover. In other words, crossover narrows our 'search space' in an ever-increasing way; new possibilities of survival cannot be explored this way.

Mutations, the other way of creating new chromosomes, are capable of opening up new search capabilities. A mutation changes the value of a gene in a beneficial or a harmful way. Harmful changes destroy a system by making it less fit. Beneficial changes offer new capabilities that help the system search out potentially important new ways of adapting to the environment and surviving. The genetic algorithm draws on this gene pool, kept well stocked by mutations; the crossover of portions of DNA during meiosis is a driving force too.

All of us have attended meetings of some sort where groups of individuals discuss aspects of a problem that needs solving. I feel a process akin to the genetic principles discussed so far is at work here too, though we don't realise it. People offer various solutions and we mix the good ones, in part or in whole, to come up with a final strategy. The final solution is a patchwork of ideas from different individuals, much as a chromosome is built by nature to arrive at evolutionary strategies.

Software designed around the principles of genetic algorithms comprises a large number of fragments of computer code and a number of logical operators that encourage these code strings to combine, change or be destroyed. The process may start with many thousands of strings, randomly generated rather than written to perform specific tasks. These strings are basically 0s and 1s, the equivalent of our genes. The strings are subjected to fitness tests to determine how well they solve the desired problem; the more successful a code string is at solving the problem, the fitter it is. The fit solutions are kept, while the rest are eliminated from memory, just as species become extinct through their inefficiency in adapting to their ecosystems.

The extent to which the genetic algorithm applies to non-genetic systems is evident in the way a British brewer, Bass Taverns, turned to algorithmic approaches to decide where to put their new pubs! There may be many candidate catchment areas and several possible locations for a brand with broad appeal. The brewer may want to site more than one pub where a number of catchment areas are close to one another. Taking into account the presence of rival pubs, how can you choose the best sites out of a possible hundred? This is a combinatorially explosive problem. There are 4,950 possible combinations if you look for two sites out of a hundred. The number increases to more than 75 million for five sites. For 10 sites, there are more than 1,000 billion possible combinations!
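These counts are just binomial coefficients, and the explosion is easy to verify:

```python
from math import comb

# Choosing k pub sites out of 100 candidate locations:
print(comb(100, 2))    # 4950 combinations for two sites
print(comb(100, 5))    # 75287520 -- more than 75 million
print(comb(100, 10))   # 17310309456440 -- more than 1000 billion
```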

Bass Taverns enlisted the help of artificial intelligence offered by a London-based company, _Search Space_, to try different algorithmic approaches. The genetic algorithm turned out to be the more intelligent approach, as it 'evolves' the best solution! _Search Space_, meanwhile, has developed a generalised version of the system called _X-Locate_, which the company says can be used for many kinds of retail location analysis and planning.

_Unilever_, one of the largest consumer goods companies in the world, once placed an advertisement in a science magazine calling for applications from adaptive computation researchers with experience in one or more of the following: neuro-fuzzy systems, neural networks, evolutionary systems, adaptive agents, genetic algorithms, landscapes or machine learning. Why would a manufacturing and marketing company need people with knowledge of evolutionary systems, adaptive agents and genetic algorithms? What is the use of all these biological concepts in an industrial setup?

The truth is that even financial institutions have learnt to use the biological rules of natural selection, sexual reproduction and mutation upon which genetic algorithm computer models are based. They use them to manage billions of pounds of investments. Wall Street firms have used genetic algorithms to invest money in stock markets. Richard Bauer, a business professor at St. Mary's University of San Antonio in Texas, has even written a book, _Genetic Algorithms and Investment Strategies_!

Insurance companies and investment bankers are constantly looking for a way to replace hunches and personal experience with a systematic approach to making decisions. The power of computers is now making it possible to simulate the complexities of big businesses, though this line of research is still in its early days.

In 1987, W. Brian Arthur, an economist at Stanford University, California, teamed up with John Holland to create a 'virtual stock market' inside their computers. Their model included software agents representing the traders. Each agent comes up with its own assessment of the prevailing stock market conditions, depicted as A, B, C and so on. The virtual traders then decide whether to buy or sell depending on which conditions prevail in the market: if one set of conditions is fulfilled, the decision is to buy; if other conditions prevail, the decision is to sell.

Traders re-evaluate their rules and bring another rule into play if it has proved profitable in the past. It is also possible for the traders to recombine successful rules to form new ones with which to test the market, mirroring how genetic recombination occurs during sexual reproduction. The genetic algorithm generates new rules by combining features from two 'parent' rules! Variants of this model are being used in investment companies.

This agent-based, genetic algorithmic approach can also be used to simulate complex businesses like supermarkets and telecommunication companies. _Simstore_ is a model of a real Sainsbury's supermarket that employs a genetic algorithmic approach to the problem of finding the shortest distance a shopper has to forage to complete his shopping. Obviously, the store has an interest in encouraging impulse buying by stocking things randomly and changing the positions where commodities are stocked. John Casti, a professor at the Technical University of Vienna and at the Santa Fe Institute, New Mexico, developed this model in collaboration with Ugur Bilge of _Simworld_, a software company based in London. They have come up with ways of helping the supermarket stack things in ways that reduce the distance travelled by the shopper while still encouraging impulse buying.

I am a regular Sainsbury's shopper myself, and I have noticed that my regular store had the habit of changing the locations of a number of items without any notice. From personal experience, I found it annoying, because you do not know where to find anything. It frustrates me all the more at weekends, when the shop is crowded and you can't move your trolley around freely. I realise now that the store's practice was the result of approaches designed to make me and others buy more things impulsively!

In October 1997, the Boston-based State Street Global Advisors, the third largest investment management company in the US, took over Advanced Investment Technology of Florida, a pioneering designer of electronic analysis of financial data.

Pareto Partners of London manage about £15 billion of corporate and government pension funds. They were looking for a safe way of making their investment decisions. Their director, Ron Liesching, looked around to see if he could get help from artificial systems that could analyse the data for him. He found the Modular Knowledge Acquisition Tool Kit, developed by Charles Dolan at Hughes Electronics, to be interesting. Hughes was one of the contractors working for the US Department of Defence.

Dolan uses a neural network model to build sophisticated expert systems. A neural network is one that can be trained to solve a problem; it learns from previous actions like a human expert. Dolan modelled a complex network of different thought processes that interact to reach a decision in much the same way as neurones do. It aims at homing in on key information hidden in a mass of economic data, a capacity called 'feature extraction'. A human expert can do it with ease, but getting a machine to do it is difficult.

A problem is treated as an 'environment'. Solutions are like the 'life forms', struggling to meet the demands of the environment. Over many 'virtual generations' the fittest solution is found to be the optimum one.

Financial organisations all over the world, such as Banque Nationale de Paris and the World Bank, are now toying with genetic approaches to financial information processing.

Conceptualisation of data and feature extraction out of a mass of information is a big challenge to organisations and organisms. Our human species has evolved bigger processing centres in the brain to meet this challenge.

In October 1998, a program called _ThemeScape_ was launched, capable of turning unstructured information from a database or web search, or even memos, letters and newspaper articles, into a topographical landscape of information structure. The program combines statistical analysis with the simple grammatical rules humans use in natural language to locate the key concepts in the data.

It separates a sentence into its constituent parts and looks for common themes. The concepts are extracted and placed in multiple virtual locations based on their content. Documents that are related conceptually are placed nearer to each other in the matrix. Don Mason of Cartia, Redmond, Washington, calls it 'conceptual navigation'.

It is possible to analyse around 250,000 documents simultaneously, allowing users to home in on information more quickly than they could by reading a long list of keywords. It is a tool that reads all the document files and extracts the major concepts within them. Unlike regular operating systems, it switches the focus away from where to find a file to what the file is about! Another company, _Autonomy_, based in San Francisco, has produced a similar data categorisation and navigation tool called _Agentware_.

If I were to predict the direction in which the databases of our world and their networks will evolve, I would without doubt point towards a state where people get not only raw data but also processed information! I have repeatedly shown how the evolution of our computing machines and their networks has resembled the evolution of our nervous system down to the last detail. If that is the case, what is left is the formation of knowledge-generating databases, like the association areas in the frontal lobe.

People have been talking about artificial intelligence and thinking machines. We do not have them yet, but I feel they represent a natural stage in the process we have set in motion. Do not doubt it. You will see it in your own lifetime.

Before I close this chapter, I thought I would make you think for a while about something I find fascinating. Collective human knowledge is responsible for the enormous power of the human species. The brain size of the human species may have been evolving over tens of thousands of years, yet for most of the few thousand years of history man lived the life of a simple, unsophisticated organism. We were not technologically or scientifically advanced at all. We were adept at using tools and materials, but the real escalation of human knowledge started between the 15th and 16th centuries AD. Can you think of a reason why?

In my opinion, man found a means of exchanging information around this time: a mass communication tool. I am talking about paper. Suddenly, a free flow of ideas and thoughts over large distances became possible. People could interact with each other at the intellectual level. This free information transfer was, in my opinion, a form of networking of human minds, a form of distributed computing. It amplified the power of the human brain tremendously. In this network, each human brain is the equivalent of a neurone or an individual computer. Within three hundred years, man advanced by leaps and bounds and transformed his society. As more and more human brains were 'networked', the distributed computing abilities of the brain networks produced outputs infinitely more complex than was possible in the era before paper.

In the last 30 years or so, man has had another boost to his capacities in the form of his computers. The Internet has thrown open endless possibilities for man to link his brain with others, through his computer! I have no doubt we are going to see man transform his society into a 'distributed network of minds'!

Scientific enterprise can be viewed as one such distributed network of human minds. What is not possible for a single scientist is achieved by collective effort, where information is exchanged by way of journals, conferences and other means. We are now past the age of narrow divisions in science and have moved on to the era of interdisciplinary science. Biologists sit next to physicists, mathematicians and computer scientists and talk about the same things!

# 11. INFORMATION-BASED CONTROL SYSTEMS IN THE HUMAN BODY

A cell can be viewed as a factory where a number of things are going on at different geographical locations within the cell. There are many programs running in a cell at any given time. When one part of the cell is engaged in glycogen degradation, another part of the cell could be proceeding with preparations for cell division, cholesterol synthesis, amino acid synthesis, amino acid catabolism, energy generation, control of ionic fluxes, protein transport, gene decoding etc. In many cases, the product of one pathway enters another forming a biochemical loop.

Unlike a silicon computer, a cell is capable of running many applications at the same time, as if it were a parallel processor. Indeed, in my opinion, it is a microprocessor. It integrates inputs like any silicon computer or neural network. A silicon computer has transistors to regulate information flow in specified, programmed directions by controlling the flow of current. A cell has its equivalents of transistors in the form of enzymes, ion channels and receptors, which can be switched between active and inactive states quickly by a number of mechanisms.

Metabolic programs in a cell are costly exercises. Every metabolic pathway requires the concerted action of a number of enzymes. It is labour-intensive, and it is not cheap in energy terms. A metabolic program places an enormous workload on a cell, just as in any other complex system. No wonder a cell's entry into a metabolic pathway is tightly controlled. It is common sense that if you want to stop something, it is better to stop it early: of what use is it to stop a process after it is nearly half finished? Cells regulate entry into biochemical pathways at the level of the first or second enzyme in the pathway. As I said, a pathway can have anything from 5 to 10, or even more, enzymes catalysing its reactions in sequential order. The first or second enzyme of this long chain is the critical point where control can be exerted; once this control point is crossed, there is no turning back, and the program runs to completion. The reaction catalysed by this critical 'gatekeeper' enzyme is called the committed step. Once past this step, the cell is committed to the program until it is finished.

How is the committed step controlled? This requires integration of inputs of varied nature, converging on the regulatory enzyme. It could be one or more positive stimulatory influences and one or more negative feedbacks. More than one source of input can converge on the same enzyme, inducing it or inhibiting it. The signals are weighted to produce a functional output. In my opinion they are no different from the 'AND', 'OR' and 'NOT' Boolean logic gates. There are many logic gates in the cellular metabolism.
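As a sketch, a committed-step enzyme viewed as a logic gate might look like this in Python. The three input signals are hypothetical names of mine, and real regulation is graded rather than strictly on/off, but the Boolean view captures how converging inputs are integrated:

```python
# A sketch of a committed-step enzyme as a Boolean logic gate.
# The input signal names are hypothetical; real regulation is
# graded rather than strictly binary.
def committed_step_active(substrate_present, demand_signal,
                          product_abundant):
    # AND the positive influences; NOT the negative feedback.
    return substrate_present and demand_signal and not product_abundant

print(committed_step_active(True, True, False))  # True: pathway proceeds
print(committed_step_active(True, True, True))   # False: feedback vetoes it
```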

For example, cholesterol is a vital molecule needed by practically all types of our body cells. The nerve cells require it as an insulator. The adrenal glands, testes and ovaries require it for making some very important hormones such as cortisol, testosterone, oestrogen and aldosterone. Failure to make these hormones can result in an inability to control the salt balance, to fight stress or to mediate the reproductive functions. We always hear about the bad effects of high cholesterol, especially its role in blocking coronary arteries. What is less well known is that we all need about 300 mg of cholesterol a day to meet the body's requirements. Most people think that by avoiding cholesterol in the diet we can keep cholesterol completely out of our system. The truth is that if you do not eat cholesterol, the body will make it in-house, using raw materials available in plenty.

The in-house production of cholesterol is regulated to the required level, shutting down its own production when the day's needs are met. There is a feedback loop operating here. The main control point in the cholesterol biosynthesis pathway is the enzyme HMG CoA reductase, which sits at the very beginning of a production line consisting of more than a dozen enzymatic reactions occurring one after another, like an assembly line. This control point acts as a sensor of cholesterol demand and supply and can be shut down, or activated, depending on the need. Pharmaceutical companies have exploited this very successfully by using inhibitors of this enzyme to lower cholesterol. Drugs such as Lipitor, Zocor and Crestor are among the biggest-selling drugs on the planet. Lipitor, made by Pfizer, was the highest-selling drug in the world and accounted for nearly a fifth of all of Pfizer's annual sales. Lipitor was the first drug ever to achieve $10 billion in sales per year!

A typical feedback influence is usually the very product the metabolic program generates. When the product accumulates in the cell, there is no point in continuing its production. Inhibiting the committed enzyme of the pathway shuts down this 'manufacture': the product physically binds a site on the enzyme, changing its conformational state and arresting its catalytic activity. It is precisely like the market analysis of our industries. Don't they sense a glut in the market and quickly slow down or shut down production?
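A toy simulation shows how such end-product inhibition settles production at a steady level. Every rate constant here is invented for illustration, not taken from any real pathway:

```python
# A toy simulation of end-product feedback inhibition. All the
# rate constants are invented for illustration only.
def simulate(steps=200, v_max=10.0, k_inhibit=5.0, usage=0.5):
    product = 0.0
    for _ in range(steps):
        # Synthesis slows as product binds the committed-step enzyme.
        rate = v_max * k_inhibit / (k_inhibit + product)
        product += rate - usage * product   # made minus consumed
    return product

print(round(simulate(), 2))  # settles at about 7.81, a steady state
```

The more product there is, the slower it is made, so the level neither grows without bound nor collapses: exactly the glut-sensing behaviour described above.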

A positive influence, or positive feedback, is one where a stimulus converges on the regulated step to stimulate it maximally. It could come from a molecule that needs the product of the metabolic program to produce something else. A simple way to explain it in layman's terms is to consider a product in demand that needs two or more value-added raw materials. One raw material is made by company _x_ and the other by company _y_. Both raw materials are purchased and used by a third company, _z_, to make the specialised product. What would happen if company _x_ slowed down its production of the raw material? The positive feedback, in the form of purchase orders placed by company _z_, would stimulate company _x_'s production rate. This is what happens in cellular metabolism too: precious resources are tightly controlled to avoid wasteful expenditure on products not needed at that point in time.

It is not an uncommon principle in cell biology to have a mechanism for the simultaneous control of a number of genes and proteins. Bacteria are known to have a single 'switch' to activate each of their metabolic programs, each consisting of many individual enzymes. This saves the hassle of having to regulate each and every enzyme individually. Once the decoding of the first enzyme gene of the pathway is activated, the other genes are activated together in one shot. In computer terms it is like having a single code for a series of linear instructions. The advantage is that, instead of commanding each step, one can command a whole process from a single control point. In unicellular life forms, 'operons' are an example of such linearly controlled sets of enzymes.

For a particular metabolic process, say lactose metabolism, all the necessary enzymes are encoded contiguously on the same chromosome. So, with just one 'switch', you can activate the expression of the genes for the whole series of enzymes. For the bacterium, the utilisation of lactose is a program that can be operated by a single genetic switch. In higher organisms like us, such operons are not seen. Because we have a great many more genes, we can afford more complex regulation of each metabolic pathway. Assuming one control system for each component enzyme in a pathway, we are faced with potentially many more control points. In man, enzymes performing a concerted metabolic function are often located on completely different chromosomes, which adds more complexity. But we minimise the complexity by selecting one or two important steps in the metabolic pathway for more rigorous control. These are the 'committed steps', beyond which there is no stopping.
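The single-switch idea can be sketched in a couple of lines. The gene names follow the well-known E. coli lac operon for lactose metabolism; the function itself is just an illustration of one control point commanding a whole series:

```python
# One genetic switch for a whole series of enzymes. The gene names
# are those of the E. coli lac operon.
LAC_OPERON = ["lacZ", "lacY", "lacA"]   # contiguous on the chromosome

def express(operon, switch_on):
    # A single control point commands the whole program.
    return list(operon) if switch_on else []

print(express(LAC_OPERON, True))    # all three enzymes made in one shot
print(express(LAC_OPERON, False))   # none made
```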

The development of DNA microarray technology has now opened up a flood of information about the simultaneous expression of genes while a cell is engaged in a metabolic program. Even in higher organisms there seems to be tight co-ordination of groups of genes functioning in a common process, even though they are not strictly operons. The term '_synexpression groups_' has been proposed to designate sets of genes, functioning in the same process, that share a complex spatial expression pattern across multiple tissues. Christof Niehrs and Nicolas Pollet of the Division of Molecular Embryology, Deutsches Krebsforschungszentrum, Heidelberg, Germany, who coined this term, think that the hardwired correspondence between gene function and regulation shown by synexpression groups reveals a degree of order that was previously unsuspected and exposes networks in gene function. The basis of animal diversity is now thought to lie not in differences between gene products but in differences between gene networks. Drosophila geneticists proposed the concept of such genetic modules, or 'gene cassettes', some time ago.

It is now known that even the human species shows simultaneous control of many of its genes during a number of its cellular processes. This enables access to information contained in multiple genes at the same time. When a task needs a wide variety of information, this is a much better scheme than trying to access individual bits of information separately. A number of examples have been cited in support of this view; the one I am going to take is the developmental control of the embryo.

During embryonic and foetal development, cells multiply quite rapidly. As cellular differentiation sets in, groups of cells are assembled into organs of different types, and it is generally known at what stage of foetal growth the various organs appear. The difficulty in converting a non-specific, immature, undifferentiated cell into part of a sophisticated, unique organ type is the need to confer all the structural and functional characteristics on the growing organ precursor cell within a narrow time window. If a muscle cell were being formed, you would need to make it possess a number of molecular structures unique to a muscle cell. These molecular structures are products of separate genes. If each of these genes had to be activated by a separate regulatory factor, controlling those accessory factors would become a task in itself. It does not make sense. It would be far simpler to have a single molecular regulator controlling all the necessary genes in one shot. This single regulatory molecule can bind to the activating regions of a number of muscle-specific genes to turn them 'on' simultaneously. Such a scheme makes muscle-specific structural and functional genes available together, within a far shorter time frame than would be possible with separate regulators.

Our recent understanding of embryology has thrown new light on the presence of such region-specific, organ-specific genes, which are activated at critical times in the development of body morphology. When the mass of cells that is the embryo starts to differentiate into body regions, we find that certain genes called the _homeotic_ genes are activated in defined regions. The products of these _homeotic_ genes are the universal regulators of a number of genes in different types of cells. All coding regions of _homeotic_ genes contain homologous DNA sequences of about 180 base pairs, known as _homeoboxes_. These segments give rise to related protein domains in the various _homeotic_ gene products.

The _homeobox_ domain is highly conserved in evolution. Sequences similar to those of fruit flies occur in yeast, invertebrates and vertebrates, including humans. In each species studied, _homeobox_ proteins are DNA-binding proteins that interact with a variety of sequence motifs, after combining with other proteins, to contribute to complex regulatory responses. This means distantly related organisms utilise similar developmental strategies!

If the embryo is developing the neck region, then it becomes necessary to form the muscles of the neck, the bones of the neck, and the internal organs here, like the food pipe, windpipe, thyroid gland and parathyroid glands. All these tissues and organs have to form simultaneously to complete the task of neck growth. _Homeobox_ genes specific for the neck region have the ability to turn 'on' the genes of all these neck tissues in a global manner. The scenario is similar to building a house. When you are building the kitchen, for example, you need to think of various components: plumbing, cupboards, taps, electrical wiring, kitchen appliances, work surfaces etc. They are all dissimilar items, sourced from different suppliers and needing uniquely trained fitters. The fact of the matter is that timely coordination between all of them is needed so that all the required parts are installed within a narrow time window. Usually, the construction supervisor oversees this process and makes sure all the sub-contractors do their jobs in parallel.

If you map out the gene activation patterns in the developing neck region, a single type of _homeobox_ gene is found to be active in all these diverse neck tissues. Obviously, thyroid gland cells, food pipe cells, neck muscle cells and parathyroid gland cells have little in common in structure or function. The only similarity is their anatomical proximity. But in the plan of body development, regulatory information is communicated universally to all the cells concerned, irrespective of cell type. As I said before, it is like building a house: the supervisor coordinates the suppliers of dissimilar items such as bathroom fittings, windows, plumbing and electrical fittings to work in parallel, so that the construction is completed smoothly without having to dig up floors or demolish walls to fix the missing parts.

The tissue specificity of the _homeobox_ gene is conferred by combining the _homeobox_ gene product with the products of other genes. You could have a broad backbone of the _homeobox_ gene product to which the addition of other products makes it slightly different for each task and for each type of cell susceptible to its action. The core signal may be common, but with modification of the basic message it can be made truly specific for the different cell types.

These _homeobox_ genes play the role of the task forces and steering committees created for the initiation and completion of big projects. A steering committee oversees the development of the project from a single control point. Because its purpose is to co-ordinate the functions of diverse groups of individuals involved in the task, there is a mechanism in place to drive the process in a coherent manner. If you were to divide the project into too many divisions and allow each to manage its own completion independently, the chances are it would end in chaos. The _homeobox_ genes are the steering committees for the 'body plan morphology project'!

I am as puzzled as you are as to why the information management of biological systems matches our modern strategies to the last detail. If you agree with me that the similarities are real, then it becomes a matter of curiosity why it should be so. In fact, this kind of concerted, simultaneous control of multiple processes and molecules is not unique to embryo growth. Similar schemes can be seen elsewhere, in the metabolic programs that go on in our body all the time.

We have a similar scenario in the case of the enzymes catalysing individual biochemical reactions in major programs. I said a little while ago that enzymes can be reversibly 'activated' or 'deactivated' by simple biochemical modifications, such as the addition or removal of a phosphate group on one of the constituent amino acids, like tyrosine or serine. One can view the activated status as 'on' and the deactivated status as 'off'. I also said that this is a novel mechanism of information transfer unique to biology. The reversible attachment of chemical groups like phosphate, methyl and acetyl groups to alter the functional status (on/off) of a wide variety of proteins is a universal phenomenon in cell biology. By such a simple trick the cells are able to effectively influence the flow of metabolic information in the cell.

Let us look at an example. Carbohydrate metabolism is of huge importance to our body. Glucose is a vital fuel for our body cells, and for the brain in particular. The brain preferentially uses glucose for energy. Other organs can use glucose as well as other fuels, like fat and ketone bodies. The medical condition in which blood glucose levels fall to critically low limits is called hypoglycaemia, and it is life-threatening.

As a doctor I am aware of the medical consequences of low blood glucose levels. I had a rather amazing experience during my hospital practice that demonstrated how dramatic the effect of low blood glucose can be. I was investigating a patient who had come to us with suspected pituitary disease. The pituitary gland can easily qualify as the 'centre of metabolic control' of our body. It is the ultimate gland, controlling the production of a number of vitally important hormones: growth hormone, cortisol (the stress hormone), male and female sex hormones, salt-controlling hormones, thyroid hormone, the milk production hormone (prolactin) etc. Cortisol is a hormone that enables us to deal with daily stress. The stress could be mental, physical or biological. Mental and physical stress are obvious, but biological stress is not: it could be, for example, low energy availability, and a low glucose level is just such a biological stress. In such situations more cortisol is produced through signals from the pituitary to the adrenal glands, small but extremely important glands sitting on top of the kidneys on either side. Coming back to my patient with suspected pituitary disease: it was likely that he had a reduced ability to make enough cortisol, possibly due to a lack of pituitary signals.

In hospital practice specialist doctors carry out a test called the insulin stress test to investigate patients with pituitary disease. The test involves administering insulin to the patient with a view to deliberately producing low blood glucose levels. This is a dangerous procedure because the glucose can fall very steeply, and we are therefore expected to have concentrated glucose solutions readily available by the bedside to administer when needed. The test goes like this: administer a set amount of insulin and periodically measure the blood glucose. The aim is to drive the glucose down to quite low levels. This is an artificially produced biological stress and is expected to stimulate the patient's pituitary and adrenal glands, with the resultant output of cortisol and other hormones. If the patient shows the expected rise in these hormones, it is concluded that the pituitary and adrenal glands responded to the biological stress appropriately and that the pituitary gland is working properly.

As part of assessing the effect of declining glucose levels on brain function, we give simple arithmetic tasks to the patient. It is also conventional to ask basic questions like his name, date of birth and home address, just to make sure his brain function is intact during the test. This particular patient had his insulin infusion while we kept an eye on his blood glucose and assessed his mental function. He was doing the mental arithmetic well and was able to answer questions about time, place and his identity. As his blood glucose fell really low, his brain could no longer carry out the mental tasks he had performed so well only moments before. He could not do simple multiplications and additions. He could not answer questions about himself that he had answered only minutes ago! When I gave him the same simple arithmetic questions, he stared at me blankly, looking confused as to who he was and who I was!

It was quite dramatic, and I could only liken it to our computers, which shut down when the power supply is withdrawn. It was almost as if his brain was being shut down for lack of the energy that was supposed to come from glucose! We intervened rapidly and gave him a shot of glucose through his veins. Within moments he was able to answer questions correctly and do his mental calculations! The brain power was back! It was a remarkable experience for me. Interestingly, he had no recollection of the events that had happened only minutes before. He could not believe that he had failed the mental tasks given to him! Obviously, his brain could not 'save' the event in his memory files due to lack of 'power'. He had lost the unsaved memories, just as you lose unsaved computer files in a power breakdown. The similarities between the computer and the brain are remarkable, to say the least. Do not underestimate the sugar that lies on your dining table! It is the source of power for human intelligence!

Insulin is used by thousands of diabetic patients all over the world, and they all face the risk of an excessive fall in glucose of varying degrees. This is one of the side effects of insulin, which is why diabetic patients are encouraged to carry some sweets to deal with such emergencies. At the other extreme, from a forensic medicine angle, it is not unknown for murderers to use insulin as a lethal weapon!

Coming back to the discussion, I need to remind you that the point I was driving home was that multiple programs can be controlled simultaneously by one or just a few switches. Glucose availability is a vital biological need and demands exquisite controls to maintain adequate levels at all times. We achieve this by feeding (an immediate source), by mobilising glucose from body stores (glycogen, the storage form of glucose, is kept in our liver and muscles), or by making glucose within the body from raw materials. Glucose mobilisation from body stores, for example, can be signalled by the release of several hormones, such as adrenaline, cortisol and glucagon. All these hormones are released simultaneously, in one go, which is what I meant by coordinated control. That is what happens in your body. These hormones not only mobilise glucose from body stores but also enable us to make glucose from other raw materials.

In times of low energy availability, or of increased energy requirement, as in a danger situation where you are expected to increase physical activity, our body cannot rely purely on glucose. Fat is an even bigger store of energy, and it makes sense to target our fat stores as well. So, ideally, you would expect the hormone signals that trigger glucose mobilisation to also trigger fat mobilisation. You would be right. That is the beauty of our body's ability to synchronise metabolic programs using simultaneous commands.

Glycogen, I said, is the storage form of carbohydrate in your body. In times of need you break it down to release glucose as a source of energy. In times of plenty you do the opposite: make glycogen from the abundant glucose in your blood. The times of plenty, I suppose, are the times you have just eaten. Such opposing pathways (glycogen breakdown versus glycogen synthesis) are quite common in metabolism, and it is vitally important for the cells to initiate the direction appropriate to the need.

For a start, this change in metabolic direction is easily accomplished by having dedicated molecular controls in the form of hormones. Insulin is the primary hormone that directs the flow to glycogen synthesis whereas adrenaline is its counterpart favouring glycogen breakdown. As you can imagine, there is no place for insulin when you are starving. Equally so, there is no place for adrenaline when you have just eaten.

To illustrate this better, let us look at an example from your social life. When do you save? When do you dip into your savings? Do you ever do both at the same time? You save when you have more than you can spend. You mobilise your bank savings when your income is not enough to meet your expenses. The answers to these questions more or less explain what happens inside your body as well. Your body needs to 'save' food energy for a rainy day. Equally important is the ability to access this stored energy when there is a demand, just as you tap into your bank savings.

In the above example, the directional flow of signals down the glycogen breakdown pathway, or the re-routing towards glycogen synthesis from glucose, is 'etched' in the 'cellular microprocessors'. Exquisite control mechanisms activate one pathway (for instance, glycogen synthesis) while shutting down the other (glycogen breakdown). Regulatory agents determining the direction of metabolic flow often have reciprocal control effects: a regulatory molecule can have a positive effect on one arm while at the same time having a negative effect on the other.

For example, adrenaline activates glycogen breakdown, consistent with its profile of releasing energy to meet the demands of an emergency. Insulin, on the other hand, favours glycogen synthesis.

Interestingly, the above pathway is reciprocally controlled as well. Adrenaline activates glycogen breakdown, as shown above, and at the same time inactivates the opposite process, glycogen synthesis. Of what use is glycogen breakdown if glycogen synthesis is not shut down at the other end? Similarly, insulin activates glycogen synthesis and at the same time inactivates glycogen breakdown. Otherwise you get what we call a 'futile cycle', like someone trying to fill a bathtub without closing the plug. The person who intends to have a bath needs not only to open the tap but also to close the drain. Do you see what I mean by 'reciprocal control'?

The motive behind reciprocal control is simple, and in most instances it is achieved by the reversible attachment of a phosphate group to the enzymes taking part in the metabolic pathway concerned. The trick is that, though the addition of phosphate is the same chemical event, it activates the glycogen breakdown enzyme while having exactly the opposite (inactivating) effect on the glycogen synthesising enzyme! A single molecule adds the phosphate to the two different enzymes at the same time, and the outcome is positive in one arm and negative in the other arm of the program! Amazingly, such dual and reciprocal control of the flow of metabolic information is not uncommon in your cells!
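The reciprocal switch can be sketched in a few lines of Python. The enzyme names (glycogen phosphorylase for breakdown, glycogen synthase for synthesis) are the real ones, but the model is deliberately simplified: it ignores the kinase cascades and the phosphatases that reverse the modification.

```python
# Reciprocal control by phosphorylation (simplified sketch):
# the SAME modification - adding a phosphate - activates the breakdown
# enzyme and inactivates the synthesising enzyme.

class Enzyme:
    def __init__(self, name, active_when_phosphorylated):
        self.name = name
        self.active_when_phosphorylated = active_when_phosphorylated
        self.phosphorylated = False

    @property
    def active(self):
        # Active only when its phosphorylation state matches its wiring.
        return self.phosphorylated == self.active_when_phosphorylated

phosphorylase = Enzyme("glycogen phosphorylase", active_when_phosphorylated=True)
synthase = Enzyme("glycogen synthase", active_when_phosphorylated=False)

def kinase_signal(enzymes):
    """One kinase phosphorylates both enzymes in a single stroke."""
    for e in enzymes:
        e.phosphorylated = True

kinase_signal([phosphorylase, synthase])
print(phosphorylase.active)  # True  - breakdown switched on
print(synthase.active)       # False - synthesis switched off
```

Notice that the opposite outcomes come not from the signal but from how each enzyme is wired to read it, which is exactly the author's point about one event with two meanings.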

Even more amazingly, there is more incredible stuff happening in the cellular communication landscape. Continuing with the adrenaline example a little further, I can show a bewildering complexity in cellular information transfer that is yet to be matched by the modern information revolution.

The role of adrenaline is to enable the famous 'fight or flight' response in times of danger. You have to determine, often pretty quickly, whether you want to take to your heels or stand and fight. I guess animals have evolved innumerable ways of detecting and responding to danger over hundreds of millions of years. Whether you decide to run or fight, either way you need energy to support the physical activity. One such energy source is glucose, which we have accounted for in the glycogen example above. I told you a little earlier that the other source of energy is fat. It is accessed from your body stores by means of a metabolic program called lipolysis, meaning the breakdown of fat. A number of hormones can activate this program, including adrenaline.

Therefore, the message held by the adrenaline molecule is transmitted to more than one group of cells to bring about differential effects. Muscle cells respond to adrenaline message by breaking down glycogen. Fat cells respond by mobilising fat from the body stores. The sum total of the effects is increased energy availability for your body to run or fight.

Interestingly, there is a shared information transfer path between these two programs. The sequence of events, from adrenaline binding to its receptor, to the generation of cyclic AMP, to the subsequent activation of cyclic AMP-dependent protein kinase A, is common to adrenaline-mediated events. Then you see protein kinase A doing two things: activating the glycogen breakdown enzyme, and also activating the fat breakdown enzyme called lipase. In this example of biological information transfer there is one message but two meanings. Putting it another way: one message, but different decoding by two different users (in this case, muscle and fat cells). How smart can you get? Can you tell me anything that even comes close in your technological world?
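The shared path with divergent endpoints can be sketched as follows. The cascade steps and enzyme names follow the text above; the lookup table itself is a toy representation, not a complete signalling map.

```python
# One shared message path: adrenaline -> receptor -> cyclic AMP -> PKA.
# The decoding then diverges with the substrates each cell type carries.
# Toy representation; real signalling maps are far richer.

PKA_TARGETS = {
    "muscle cell": "phosphorylase kinase (glycogen breakdown -> glucose)",
    "fat cell":    "lipase (fat breakdown -> fatty acids)",
}

def adrenaline_message(cell_type):
    # Steps common to every responsive cell:
    #   bind receptor -> make cyclic AMP -> activate protein kinase A
    # ...then the meaning depends on the reader:
    return "PKA phosphorylates " + PKA_TARGETS[cell_type]

print(adrenaline_message("muscle cell"))
print(adrenaline_message("fat cell"))
```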

If you thought that was the end of the story, you would be wrong. Let us go back to the scenario of a danger situation. Don't you also require faster heartbeats and more blood flow to help you with the demanding task of intense physical activity associated with running or fighting? People often speak of the 'adrenaline rush' when they are in an intensely stressful state. You feel your heart pounding against your chest, your skin gets blanched and you look pale, your eyes widen, your hairs stand on end, you breathe more heavily to take in more air. That is the typical set of reactions everyone goes through when they are afraid.

Faster heart pumping means more blood delivered, carrying more oxygen and nutrients, like the mobilised glucose and fat, to the cells. Pupil dilation allows you to see more of the environment in which you are facing the danger, so that additional visual stimuli may give you information to exploit. You look pale because the blood supply to your skin is reduced by constriction of its blood vessels, so that you do not bleed as much if you are injured. And rapid, deep breathing allows more oxygen uptake.

As you can see, all these responses, including the energy mobilisation we have discussed at length, have a common goal: to allow you to deal with danger. Would it surprise you if I said that all these physiological reactions are accomplished by a single message carried by adrenaline? How does a single type of encoded message bring about diverse effects on different cell types?

This is achieved by means of two different types of adrenaline receptors: _alpha_ and _beta_. Some cell types, like those of the skin, eyes, intestine and urinary bladder, have the _alpha_ receptor, whereas other cell types, like those of the heart, blood vessels, lungs, muscle and fat, have the _beta_ type. Interestingly, the _beta_ type of adrenaline receptor comes in two subtypes, _beta1_ and _beta2_, adding more possibilities for differential decoding of the message. The adrenaline receptor types present in the heart, lungs and blood vessels mediate multiple effects, such as an increased heart rate, dilated bronchi and constricted blood vessels, using the same primary message: adrenaline.

The trick here is to keep the same message but vary the decoding machinery. Because the decoding machinery associated with these receptor types is differently hard-wired, the same message can activate different pathways of the cellular information transduction system. This is typical of cell-cell communication within your body. A message between cells is often part of a bigger scheme of things in which a number of actions need to be accomplished simultaneously. This is carried out by the differential decoding of the same message by the various cells concerned. The sum total of all the effects is a meaningful, concerted life process.
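The 'same message, different decoder' idea maps naturally onto a dispatch table. The receptor assignments below follow the text above; real adrenergic pharmacology is considerably more nuanced, so treat this purely as a sketch of the decoding principle.

```python
# Same ligand (adrenaline), different receptors, different responses.
# Receptor assignments follow the text; a simplified sketch only.

RECEPTOR = {"skin": "alpha", "eye": "alpha", "heart": "beta1", "lung": "beta2"}

RESPONSE = {
    ("skin", "alpha"):  "constrict blood vessels (pallor)",
    ("eye", "alpha"):   "dilate pupil",
    ("heart", "beta1"): "increase heart rate",
    ("lung", "beta2"):  "dilate bronchi",
}

def decode_adrenaline(tissue):
    receptor = RECEPTOR[tissue]            # which decoder is installed
    return RESPONSE[(tissue, receptor)]    # tissue-specific hard-wiring

for tissue in ("skin", "eye", "heart", "lung"):
    print(tissue, "->", decode_adrenaline(tissue))
```

The message never changes; only the installed decoder does, which is why one hormone can orchestrate so many different responses at once.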

In our body there are many hormones and neurotransmitters that are decoded by multiple receptors; adrenaline is not a one-off wonder. These informational molecules are 'interpreted' differently by multiple user cells. Dopamine, a neurotransmitter, has at least five distinct receptor types, present on distinct cell types, all responsive to the signal coming in the form of dopamine. As you can imagine, each dopamine receptor type mediates a unique action.

Another example of a multiple-receptor neurotransmitter is serotonin. Serotonin is chemically 5-hydroxytryptamine. It is formed from the simple amino acid tryptophan by the addition of a hydroxyl group and the removal of the carboxyl group. With these simple modifications the molecule becomes intensely information-rich. It has something like seven receptor families across the brain, which mediate diverse psychological functions.

The journal _Nature_ carried a supplement with its 2 December 1999 issue devoted to discussions of the current understanding of the world we live in and the direction in which science is taking us for a better worldview. Leland H. Hartwell of the Fred Hutchinson Cancer Research Center, Seattle, Washington, and his colleagues wrote a fascinating article on our modern understanding of the ways cells work in an organism. They argue that the best language to describe cellular functional modules and their interactions will have to be found in computer science and engineering! The capacity to transform information from one form to another on the basis of a set of rules is the essence of computational science. There is a bidirectional flow of information between different levels of biological organisation. Signals that an organism receives from its environment can influence which genes it expresses and thus which proteins it will have at any given point in time.

They view evolution as a form of computation in which the inputs are environmental measurements, the outputs are signals that modulate behaviour, and the rules generate the outputs from the environmental inputs. A biological system uses information about the current environment to predict possible future environments and generate responses that maximise the chance of reproduction and survival. They cite circadian rhythms to illustrate an organism's ability to predict future environmental changes. Organisms show circadian, 24-hour rhythms in most of their cellular and behavioural functions, like eating, reproduction and sleep. They synchronise their outputs to coincide with predicted environmental changes in light intensity over the day and the season.

Once the entrainment of cellular and behavioural functions is set, the outputs of the biosystem occur in a predictive manner. The best example I can think of is the way our digestive system synchronises with food intake times. We eat at regular times of the day; the timings do not vary too widely under normal circumstances. Surprisingly, our digestive juices start to be secreted half an hour to an hour before the anticipated arrival of the food! Secondly, the peristaltic contractions of the intestine, which propel the food down the gut, start well before the expected arrival of food. We feel them as hunger pangs. In fact, even the digestive enzymes in the intestinal brush border cells show increased activity before the arrival of the food. These are experimentally observed findings that clearly show that a biological system does compute future environmental inputs, in this case food availability.

Electrical engineers design circuits to perform specific functions. Hartwell and his colleagues think the cells of organisms, too, have evolved modules to perform biological functions. They think the properties of a module's components, and the connections between them, are analogous to the circuit diagram of an electrical device. The design principles of biological systems are familiar to electrical engineers, they claim!

A system capable of rapid transitions between two stable states can be driven by a positive feedback loop. Despite widely fluctuating inputs, a system can be made to maintain an output parameter within a narrow range. Come to think of it, most of cellular biochemistry revolves around positive and negative feedbacks regulating enzyme activities and the levels of vital molecules and ions. No doubt they can be viewed as closed-loop control systems.
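The second property, holding an output within a narrow range despite fluctuating inputs, is the classic behaviour of a negative feedback loop. Here is a minimal sketch of that idea (a generic proportional controller, not a model of any particular body system):

```python
import random

# Minimal negative-feedback loop: at each step the controller corrects a
# fraction of the deviation from the set point, while a random
# disturbance pushes the value around. Generic sketch, not physiology.

random.seed(0)
set_point = 5.0
value = 5.0
gain = 0.5                                   # correction strength

for _ in range(200):
    disturbance = random.uniform(-1.0, 1.0)  # widely fluctuating input
    value += gain * (set_point - value) + disturbance

# With feedback the deviation stays bounded (here, always below 2.0).
# With gain = 0 the value would wander off as an unchecked random walk.
print(abs(value - set_point) < 2.0)  # prints True
```

The feedback term constantly pulls the value back towards the set point, so the disturbances can never accumulate, which is exactly how the body holds quantities like temperature or glucose within tight limits.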

I would like to mention here some neural control systems in our body. A number of vital body functions are controlled by specialised groups of neurons at the base of the brain. These are separate from, and independent of, the conscious information processing centres in the frontal cortex and the sensory and motor processing centres in the rest of the brain. These neural control systems work like any typical automated control system we see in modern life.

For example, the hypothalamus is a small structure at the base of the brain consisting of specialised neurons. These control a number of visceral functions, including the regulation of heart rate, the movement of food down the gut and the control of the urinary bladder, through the agency of the autonomic nervous system. Drinking and eating are also regulated by neurons located here. The thirst centre, a collection of neurons, can perceive the osmolality of the blood and initiate drinking behaviour if there is too little water in the body! The temperature of our body is regulated with the help of temperature sensors located here; even common medications we take for fever control (like paracetamol and aspirin) act on the temperature control centre in the hypothalamus. Together with the limbic system, a collection of diverse neuronal structures in the upper part of the brain stem and nearby regions, the hypothalamus also mediates emotions.

A collection of brain neurons called the circumventricular organs lie in the walls of the ventricles of the brain (fluid-filled compartments inside the brain) and include structures in part of the hypothalamus, the pineal gland, the pituitary gland etc. They can sense the chemical composition of the blood and regulate extreme changes in fluid levels, blood pressure, nutrient concentrations etc. The pineal gland also controls the day-night rhythms in our body and helps synchronise our body to sunrise and sunset.

The medulla oblongata is a specialised region of the brain stem. It contains several groups of nerve cells that control vital body functions, like rhythmic breathing (through what is called the respiratory centre) and the speeding up or slowing down of heartbeats. That is why severe injuries to the brain stem are usually fatal. The neuronal groups called the basal ganglia, lying deep within the cerebral hemispheres, regulate the initiation and termination of voluntary movement, working closely with neurons in the cerebral cortex. When the basal ganglia are damaged, the person is unable to produce accurate body movements.

The brain is actually divided into distinct neuronal groups for the control of distinct functions, many of which are carried out without conscious input. This compartmentalisation of the brain into structural and functional regions enables us to run our body functions independently of each other. When someone has damage to the brain, the functions affected are typically restricted to one modality: loss of the sense of smell, loss of speech, loss of memory, disturbance of body movement with no effect on sensation, impairment of coordination and so on. The presentation of symptoms is so characteristic that it is easy to pinpoint the likely location of the brain damage. Global loss of all brain functions is usually seen only in major trauma or in conditions of extreme oxygen lack, and damage of that sort is usually inconsistent with life.

These control systems in the brain (controlling breathing, heartbeat, temperature, blood pressure, thirst, movement etc) depend on a two-way flow of information. Sensors detect the current situation within the body with regard to the parameter in question and transmit the information to the control centres. These control centres, as I said earlier, are groups of neurons that compute an appropriate response and send a command to the relevant cells or organs. They operate independently, as self-contained loops, a design feature very common in engineering. In my view, engineering control theory can explain this sort of design principle. Positive and negative feedback are central to these brain control centres.

Our body is full of other endocrine control centres too. 'Endocrine' is the medical term for glands that secrete hormones directly into the bloodstream. There are many endocrine glands in our body, like the pituitary, thyroid, adrenal and parathyroid glands. They produce vital hormones, which act as informational molecules capable of directing the function of many target tissues. Endocrine glands are the 'bureaucrats' of our body. They are closed-loop systems: they receive inputs from their respective sensors, which determine whether hormone production will be stimulated or inhibited. The inputs work by feedback mechanisms typical of engineering systems, and the output is maintained within tight limits.

Higher organisms like us need more sophistication in the control systems operating in the body. That is not to say a prokaryotic life form is to be dismissed; the controls operating in a prokaryotic system are still an engineering wonder. Relatively speaking, though, a human life form is certainly more complex. It may take a few centuries to make engineered systems as complex as humans.

The complexity of higher organisms may be partly due to the fact that DNA does not just 'code' for genes. It is really an 'operating system' in computer jargon. It controls a whole load of diverse cell processes.

A cell can be considered to run a variety of applications like metabolism (protein, fat and carbohydrate), division, ageing, communication, secretion, energy generation, detoxification, repair, excretion, programmed cell death etc. It also needs to keep up with changes in internal and external environment over time.

The cell also has to interact with 'users', which could be other cells as well as internal molecular components, in very sophisticated ways. We have now listed the conditions that make it necessary for the cellular device to have an operating system like your computer's! The body as a whole runs a number of devices like the eyes, ears, nose, mouth, heart, lungs, brain, kidneys, liver, muscles, bones and reproductive organs. They depend on DNA for their formation during embryogenesis, and they depend on DNA for the selective expression of the genes necessary for the execution of their specialised functions. If you look at the body as a whole, you find a number of applications the body is capable of, ranging from eating, digestion, breathing, heartbeats and circulation to excretion, locomotion, reproduction, ageing and immunity. DNA, in my opinion, serves as the operating system for the cells in particular and the body as a whole.

If a device does only a very straightforward, simple and repetitive task, then you do not need an operating system. A typical example is a microwave oven: it runs only a single, hard-wired program all the time, so all you need is a numbered keypad and a few pre-set buttons. But a device like a computer runs a number of applications like word processing, games, music, video and printing, and these need an operating system to organise and control the hardware and software. These days operating systems can be found in a number of electronic devices like cell phones and wireless access points. Windows is perhaps the most common operating system, running most desktops these days; Macintosh and UNIX are other types of operating systems available. Special-purpose operating systems are designed for use in applications like robotics and real-time control systems.

Depending on the system, the software and hardware resources may vary. In a desktop computer they include the processor, memory, disk space etc. In a cell phone they could be the network connection, the keypad, the phone dialler etc. The operating system arbitrates between the various programs and inputs competing for the attention of the central processing unit (CPU). Demand for memory, storage and input/output bandwidth is met by taking into account the limited capacity of the system. The operating system's task is to manage the processor (CPU), memory, devices, storage, the application interface and the user interface.
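The way many processes share one CPU can be sketched with a toy round-robin scheduler, a classic textbook technique. The process names and their "ticks" of required CPU time below are invented purely for illustration.

```python
from collections import deque

def round_robin(processes, quantum=2):
    """Give each process at most `quantum` ticks of CPU at a time until all finish."""
    queue = deque(processes.items())   # (name, remaining ticks) pairs, first come first served
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)             # this process holds the CPU for one time slice
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # not finished: go to the back of the queue
    return order

slices = round_robin({"keyboard": 2, "display": 4, "disk": 3})
```

No process monopolises the CPU; each gets a fair share of a limited resource, which is the scheduling idea described above.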

Obviously, the operating system in a big computer is going to be more complex than one present in a smaller device like a cell phone. This is no different from comparing a cell of a multi-cellular organism with a single-celled organism. Going back to the analogy of the operating system, it is worth pointing out that a single computer application (e.g. a word processor, spreadsheet or game) may cause several other processes to begin, including communication with other devices, or even with other computers. In fact, dozens of background processes could be running without you being aware of it. That is why your computer appears slow to start if you keep too many applications set to run at start-up. This brings out the distinction between an application and a process in computer jargon: it is the processes, not the applications, that the operating system controls, schedules and hands to the CPU for execution.

To run any of the applications the body is capable of (listed a few paragraphs ago: eating, digestion, breathing, heartbeats and circulation, excretion, locomotion, reproduction, ageing, immunity etc.), it has to run many processes. In terms of the bigger picture, a whole-body application like locomotion would require processes like muscular contraction, nerve stimulation from the brain, balance and coordination from the cerebellum, vision, energy generation from muscle and liver stores, catabolism of fat, catabolism of carbohydrates, increased heartbeat and circulation by the heart, increased breathing by the lungs and so on. But it is not readily apparent to us that all these processes happen to make the locomotion program possible. Just as your computer applications come alive at your command, with a number of background processes, your body can accomplish physiological applications with the support of a number of biochemical and neurological processes.
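The fan-out from one whole-body application to its many background processes can be sketched as a simple lookup. The process lists are abbreviated from the text; the second application's list is my own shortened illustration.

```python
# Each whole-body "application" depends on many background "processes".
APPLICATIONS = {
    "locomotion": [
        "muscular contraction",
        "nerve stimulation from brain",
        "balance and coordination from cerebellum",
        "energy generation from muscle and liver stores",
        "increased heartbeat and circulation",
        "increased breathing",
    ],
    "digestion": [
        "secretion of enzymes",
        "gut muscle contraction",
        "absorption of nutrients",
    ],
}

def run(application):
    """Launching an application really means scheduling all of its background processes."""
    return [f"starting process: {p}" for p in APPLICATIONS[application]]

messages = run("locomotion")
```

One command ("walk") silently launches half a dozen supporting processes, just as one double-click on an icon does on a computer.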

An operating system also provides a consistent Application Program Interface, making it possible for software that works on one computer to also run on another. It is flexible enough to run hardware like printers and disk drives from thousands of different manufacturers! What kind of application program interface and user interface does the DNA provide in biosystems? How are the body's devices and applications run and managed by the DNA operating system?

If I took growth hormone from a pig and injected it into a human being, the molecule would work just fine inside the human body, just as it did inside the pig's body. This means there is a consistent Application Program Interface in biology as well. The application here is growth.

Take another example: if you want to run the application of reproduction, one could use animal-derived oestrogen or testosterone, and it would work just fine in humans too.

To be honest, biology is full of examples of compatibility between animals and plants when it comes to driving applications and processes.
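In software terms, the receptor behaves like an interface that accepts any hormone presenting the same 'signature', regardless of which species produced it. A hedged Python sketch of this consistent-API idea follows; the class and method names are my own invention, not biological or library nomenclature.

```python
class GrowthHormone:
    """The shared 'API': every species' version exposes the same bind() signature."""

    def __init__(self, species):
        self.species = species

    def bind(self, receptor):
        # The receptor only checks the interface, not the manufacturer.
        return f"{receptor} activated by {self.species} growth hormone"

# The human receptor does not care which species supplied the hormone,
# as long as the molecule presents the expected interface.
human_receptor = "human growth-hormone receptor"
results = [hormone.bind(human_receptor)
           for hormone in (GrowthHormone("pig"), GrowthHormone("human"))]
```

This mirrors how a printer driver written against a stable API runs hardware from many manufacturers: the caller depends on the interface, not on the implementation behind it.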

For those particularly interested in understanding the similarities between the DNA operating system and computer operating systems like Windows, UNIX or Macintosh, we first need to define devices, applications and processes. Once we are clear what each term means, it becomes easy to start looking for similarities and dissimilarities.

In biological systems, devices are the eyes, ears, nose, mouth, heart, lungs, brain, kidneys, liver, muscles, bones, reproductive organs etc. Applications are eating, digestion, breathing, heartbeats and circulation, excretion, locomotion, reproduction, ageing, immunity etc. Processes are muscular contraction, nerve stimulation from the brain, thinking in the brain, balance and coordination from the cerebellum, vision, memory, hearing, internal sensory information capture, energy generation and so on. To run an application, many processes need to run in the background.

I guess the DNA ultimately holds not only the blueprint for constructing all the body's devices but also the blueprint for running the physiological applications and processes, together with the biochemical and neurological processes behind them. It is truly the operating system par excellence. The fact that DNA is present in all forms of life (except RNA viruses and prions) means that it is a universally successful operating system. And just as a successful computer operating system can be upgraded rather than the computer being dismantled every time you want to improve its operations, living forms can receive upgrades too: they come in the form of evolutionary changes.

In a way it is intriguing that DNA (basically a chemical molecule) enables the formation and maintenance of anatomical organs, including the brain. It is not the kind of readily apparent relationship that people can easily appreciate. Even after the organs are formed during embryogenesis, the continued actions of the organs need the expression of selective tissue-specific genes in the organs concerned. If you look at the brain in particular, we have the curious situation whereby chemically held information (in DNA) paves the way for neurally held information such as memory and knowledge too.
