

THE DEMOCRATIC

QUALITY VECTOR AND THE NEW SOCIAL AGREEMENT

Jode Himann
Published internationally by TAGDit

1018 72 Ave NE, Calgary, AB T2E 8V9, Canada info.tagdit.com

© TAGDit 2019

All rights reserved. This book or parts thereof may not be reproduced in any form, stored in any retrieval system, or transmitted in any form by any means electronic, mechanical, photocopy, recording, or otherwise – without prior written permission of the publisher.

First Edition

### Acknowledgments

Curiosity lies at the very heart of being human, and wonder is its driving force, accompanying every child born into this world. We honor children by supporting their curiosity. If parents nurture that wonder, it matures into the passion found in the healthy adult. For that reason, I am grateful for the gift of my father and mother, for their patience, love and support. It is a tradition I continue to practice with my own children.

Writing a book such as this to express a novel and complex scientific idea to a lay audience is no easy task. It is a team effort that has taken many years to complete. While a seed can grow into a healthy plant, it requires much support and the right causes and conditions in order to fulfill its potential. This book is the realization of a lifelong passion, to express an intuitive view of the universe, a soliloquy to the infinite beauty of patterns found at every scale and dimension of nature. In the investigative part of the journey, I bestow deep gratitude on my close collaborator, Dr. Brett Teeple, whose unique gift and intimate mathematical dialogue with nature can tease out the most subtle patterns hidden from the human mind. The craftsmanship of writing lies in shaping and giving voice to unformed ideas wishing to be born. For this part, I am grateful for the support of James (Gien) Wong, not only for invaluable ghostwriting assistance for important elements of this book, but also for his insights, genius, and sense of humor. Without him, this book could not have been published, nor would my philosophical musings have been articulated so well. Lastly, I wish to thank Brad Fincaryk, who has been an essential but quiet supporter of this project, ever present in the background to help move things along. Without his significant patience, belief and trust in the face of adversity, this book would not have seen the light of day.

# CONTENTS

Introduction

A New Way to Decide

A Brief History of Fake News

Scientific Knowledge and the Cold Hard Facts

The Value of Knowledge

Can We Trust Our Own Thoughts with Innate Cognitive Biases and Human Limitations?

Drugs, Knowledge and Culture

Adding Warmth to Cold Hard Facts

The Search for a Universal Metric

The Preferential Math of the Universe

The Universal Metric Applied to Biology

The Democratic Quality Vector

The Value of Social Capital

The Need for a Democratic Quality Vector

An Emergent Psychometric from a Six-Dimensional World View for Economic Delay Discounting

DQV Application to the Legal System

Radical Transparency in Crown Corporations / State-Owned Organizations, the Legal System and the Law Society

Information Overload

Enhanced Political Freedom

DQV Experiment: The Corporation

Conclusion

References

# INTRODUCTION

Our democratic voting system is heralded as one of the greatest achievements of modern civilization. Wars have been fought, blood has been shed and many have died to ensure our right to vote. Yet, we need not look further than the daily news to see the serious threats democratic voting systems face. From foreign powers and nefarious agents to social media exploits, opaque meddling in the affairs of other countries is more prevalent than ever before. This has pushed our hard-earned democracy onto a slippery slope to authoritarianism. Technology is the common denominator behind all the recent forces putting democracy at risk. The creators of a technology can never foresee the unintended consequences their inventions may have years or decades later. Voting has been forever transformed by both technology and its abuse. Can these complex challenges be mended by changes in the political process alone, or must we also seek a technological component to the solution, to counterbalance the technological genie that has been let out of its bottle? In this book we investigate a novel concept called a Democratic Quality Vector (DQV), a technological tool that can help mitigate some of the serious challenges that plague modern democratic systems. At the same time, the DQV also has implications for information systems and for improving the quality of decision-making in general.

If we closely examine the nature of the challenges our democratic voting systems face from a data science perspective, we can characterize them as issues of two things: data integrity and trust. When bots create fake news, giving the false impression that a large number of people hold a particular belief, that is a data integrity and trust issue. When social media accounts are siphoned off and psychological profiles constructed to identify voters vulnerable to targeted manipulation, that is once again a data integrity and trust issue. When propaganda is mistaken for truth, that is a data integrity and trust issue. A more colloquial word for data integrity is truth.

The background story that leads to the discovery of the DQV begins with a deep analysis of our most fundamental assumptions about truth. The question of how to make truth and trust resilient in the information age led us to examine fundamental philosophical questions such as "What is knowledge?", "How do we know when something we know is true?" and "What establishes whether something is true or not?". That in turn led us to open a can of historical worms. Today, the scientific method is the definitive technique for seeking knowledge and knowing what is real in the world. Historically, the scientific method is closely related to rationalism, the fundamental concept that brought about the Enlightenment and the Industrial Revolution, and that led to the overthrow of a number of heads of state. Rationalism is the foundation of science and of our modern society, but we puzzled over what has become of intuitive knowledge. Certainly we all still employ it, and intuitive knowledge played an important role in much of human culture until recently.

Rationalist theories of scientific research have traditionally discounted intuitive knowledge as unreliable. But this view is beginning to change, paradoxically, because of rationalism itself. It is only recently that serious neuro-psychological research into intuition has begun to reveal its underlying operating mechanism. Intuition is a refined form of species instinct, and biological systems are demonstrably at the root of much of it. The field of interoception is one of those building a bridge between intuitive knowledge and the underlying biological systems that mediate intuitive signaling. Rationalism is finally beginning to accept the legitimacy of intuitive knowledge, but only after its methodology asked the right questions to reveal intuition's secrets. The DQV is built on the logic of this new knowledge, and is designed to quantify intangible data, transforming it into a tangible proxy that can be used for rational decision-making.

We also take an excursion into psychoactive compounds because they have played an interesting role in shaping human culture throughout history, and their mind-bending effects are bound to have implications for our perceptions of reality, our ability to make new discoveries, and even our definition of knowledge. Because they allow us to experience the world in an entirely different way, they result in experiences that can have a profound impact on knowledge creation and our understanding of the universe. These alternative experiences can shift our framework of what is true or not.

In politics, data integrity is tightly entwined with that other important concept, trust. For voting is an expression of trust in a representative to perform as promised. But powerful and ubiquitous information technology has exposed the Achilles' heel of representational democracy: the easy manipulation of data. It is this vulnerability that can easily concentrate political power into the wrong hands.

A close examination of our most basic epistemological assumptions about truth and trust, the two fundamental concepts underpinning democracy, leads us directly to a reliable, new way to value "one unit of social capital" in particular, and a new way of measuring the value of one piece of information in general. We call this the Democratic Quality Vector (DQV).

After the foundations of our measurement have been established as a baseline, we then propose our solution: a new psychometric derived from the Laws of Biology. In the process, we apply this emergent metric to new, more suitable forms of voting systems. We introduce a new voting system, called the transferable voting system, that builds on top of liquid democracy. As we explain in the foundational section, the fundamental mathematics behind the transferable voting system emerged out of applications of a type of mathematics that appears ubiquitously throughout nature and has been used to organize and explain a full spectrum of natural systems, including physical, chemical, biological and social systems. This gives the system a solid foundation upon which to rest.

However, this does not mean there aren't many questions concerning how to approach the design of a transferable voting system. How would it overcome the limitations of our current system? What is the best way to interpret the accuracy, magnitude, and direction of a transferable vote vector? How do we assign a meaningful value to the magnitude of the transferable vote's social capital? Could we assign any value we want to it? If the unit of social capital decreases with each successive transfer, does the sequence converge to zero? How do we assign weight to a voter in the chain? Who has more weight: an academic, a senior or a plumber? Can we trust the information we use to make our decisions? Who is best placed to assess the validity or truth of a statement, report, or publication? Who decides what the "cold hard facts" are?
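The convergence question can be made concrete with a minimal sketch. Assume, purely for illustration, that each transfer multiplies the vote's social capital by a fixed decay factor between 0 and 1; the book does not prescribe any particular factor or even that the decay is constant. Under that assumption, the per-hop weights form a geometric sequence that does converge to zero, while the total weight along the whole chain stays below a fixed bound:

```python
# Hypothetical illustration of social capital decaying along a delegation
# chain. The constant decay factor is an assumption made for this sketch,
# not a value prescribed by the DQV.

def chain_weights(initial_capital: float, decay: float, hops: int) -> list:
    """Weight of one unit of social capital after each successive transfer."""
    assert 0.0 < decay < 1.0, "decay factor must lie strictly between 0 and 1"
    return [initial_capital * decay ** k for k in range(hops)]

weights = chain_weights(initial_capital=1.0, decay=0.5, hops=10)
print(weights[0])    # 1.0   (the original voter)
print(weights[3])    # 0.125 (after three transfers)
print(sum(weights))  # approaches, but never exceeds, 1 / (1 - 0.5) = 2.0
```

A different decay schedule would change both the limit and the total, which is exactly why the question of how to value the magnitude of a transferred vote matters for the design.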

As we answer all these questions, we are led to construct a final transferable voting system called a Vector-Parametrized Information System (VPIS), and a special instance of VPIS for voting called a Vector-Parametrized Voting System (VPVS) which incorporates all the properties necessary to compete with our current representative democracy system.

# A NEW WAY TO DECIDE

As humanity begins another chapter of its journey into the unknown, we are faced with tremendous challenges. However, with the right tools, we can transform those challenges into opportunities, into a better life for everyone. The Democratic Quality Vector holds significant promise for improving the quality of our decision-making process in business, politics and civil society. This book will explore this profound data shift and give us the tools we need to equip ourselves, ensuring that we not only keep up with this change but ride its wave and come out ahead.

However, some of us might be happy with our democracy the way it is, and wonder whether a DQV is necessary. As we already know, social contracts within democracies protect the rights of the individuals living within them. Therefore, it is somewhat ironic that any person who is born into a democratic society initially has no democratic ability whatsoever to decide their fate in the world. The completely helpless condition of human newborns leaves us at the complete mercy of our caretakers and the social contracts that they choose to impose upon us. As we grow from newborns into childhood, the normative social contract is deeply conditioned into us from the earliest age, in the form of the socially acceptable behavior of our guardians. All through our young lives, we obediently adopt the normative values of the social agreement imposed and enforced by our caretakers and the state institutions that govern their behavior. We are encultured through explicit and dominant narratives repeated over and over in our families, schools, communities, and media, and reinforced through peer behavior. When such a fundamental right is withheld from children at an early age and powerful conditioning is applied in its place, one can argue that this behavioral conditioning becomes the most difficult thing to change.

For each one of us, the normative social contract is often one that was forged through a long history of dialogue, disagreement, and sometimes even violent insurrection. This social contract arrives at the doorsteps of our lives with the enormous inertia that history carries. It is the accumulation of generations of refinements of political rules, each generation better than the previous, each one fixing some problem or injustice not previously spotted. In addition we strengthen the social contract each time we comply with the rules of our society. For example, each time we drive under the speed limit, we are reinforcing accepted norms. Each time we vote for a politician, abide by the law and court system, or pay our taxes, we are strengthening the rules. Often, we do not think we have a choice in this process.

Moreover, in spite of the incredible struggle to get to this point, the modern form of our social contract is still far from perfect. In reality, it can never be static, but rather is a continuing work in progress, as each new generation discovers its own set of social injustices that require policy changes.

There are dangers in this process. As the French thinker Paul Virilio has argued, the Industrial Revolution's technological inventiveness has unleashed a string of new kinds of catastrophes. The invention of the automobile gave birth to the car accident; that of the boat to the shipwreck and invasive species. Furthermore, the emergence of the airplane gave rise to the plane crash and the threat of rapid global disease spread; the emergence of industrial food production systems has given rise to biodiversity loss, species extinction, eutrophication, and cardiovascular and diabetes epidemics; and of course, fossil fuels contribute to climate change, to name but a few. Something similar can be said to take place in the political sphere. The French political philosopher Pierre Manent speaks of the phenomenon of the "organ-obstacle" or "instrument-obstacle," whereby once beneficial policies become significant obstacles in themselves. We can cite two examples that Manent provides. First, the law, which has the aim of protecting the weak from the strong, often results in privileging the strong over the weak. Second, the sovereign state, which was founded to guarantee peace among individuals, has itself become a significant vehicle for declaring war.

With all of this in mind we might ask about conventional democracy itself and wonder whether it too has brought forth new kinds of political catastrophes. Does our democracy as we know it contain certain inherent harm that is not otherwise intended?

Since it does not require a great deal of imagination to come up with a list of grievances and concerns about contemporary democratic practices, the answer to that question could well be yes. For example, democracy is government of the people, by the people, and for the people, as Abraham Lincoln famously put it. One would naturally expect the very best among any given people to serve in its structure. Democracy should be an opportunity for the most talented to apply their skills on behalf of their fellows. Often, however, the opposite is the case. Thus, democracy can suffer from becoming a series of choices among mediocre representatives – or worse.

Another problem is that social media has proven easy to hijack for nefarious purposes. Bad actors use phony accounts and bots to spread fake news that has created extreme political polarization and has even tipped elections. The short-termism of four-year voting cycles does not allow important long-term issues to gain any traction, resulting in the sidelining of essential issues.

The inertia of the democratic political process also creates long delays in passing legislation. Democratic governments are also infected with dark money that buys political favors, making a mockery of the democratic process.

Last but not least, the concerns of philosophers through the ages, such as Voltaire, Socrates, Aristotle, and Plato, seem to be coming true before our very eyes. In a climate of fear, looming ecological disruption, and identity politics, authoritarian leaders rule the roost. Without updating and adapting democracy to the modern world, with its myriad complexities and rapid rate of change, a democratic catastrophe awaits.

The events of the post-2016 US election cycle have demonstrated the potential of democratic catastrophes in the digital age. Information technology has become indispensable in the fabric of modern life, allowing for a truly informed public. Despite that, bad actors have exploited the power of digital technology to undermine democracy in ways that its founders could not even imagine. Now, everyone acknowledges that the system is broken, but no one is sure how it can be fixed. How can one initiate change and convince all sides of the need to steer democracy in the right direction?

Technology itself offers some potential remedies, and any sustainable solution must include changes to the technology itself. Because of the abuse, social media giants are being forced to authenticate user accounts and tackle fake news on their platforms. However, that isn't enough; it merely treats the symptoms. What we need are new tools that accompany a bold new idea, one that can captivate the imagination and tackle the root problem.

What we are witnessing today is a global phenomenon of the system of democracy being outplayed and won by hegemonic power. This is the root problem. Such power has abused its privilege to accrue an unfair advantage. Their capital allows bad political actors to buy access and engage in deception that circumvents democratic rules on two levels. First, information technology systems are used opaquely to get around voter privacy and voters' right to accurate information. Second, once hegemonic actors are installed in a political leadership position, the existing laws governing leadership are often too weak to restrain an unethical leader. Subsequently, those weak rules are co-opted to increase the opaqueness that benefits and protects the hegemonic power. Within the current form of representational democracy, any candidate possessing the right combination of strength, cunning, and lack of ethics has a good chance of concentrating extreme power.

The system of representation itself is the problem. Because the ultimate outcome of elections within a representational democratic system is to install a few people in control of a city, province/state, or entire nation-state, it comes with the danger of extreme power concentration. Unfortunately, the checks and balances meant to ensure the integrity of a candidate are insufficient to rule out electing an authoritarian leader. This is because the weakness lies in the voting public itself. When a large proportion of the voting public is insufficiently educated, the wool can easily be pulled over their eyes.

Over two thousand years ago, the greatest philosophers of ancient Greece had already warned us about this very Achilles' heel of democracy. And yet, solutions have popped up throughout history as well. As early as 1884, Lewis Carroll, the famous author of Alice in Wonderland, gave us a hint of a better voting system based on transferring a vote to a trusted person.

Today, we live in very complex societies. There are thousands of issues that need attention in a modern government. Unfortunately, democracy does not produce nearly enough experts to govern all these issues effectively. Indeed, most elected representatives are not experts in the domain they are delegated to govern. They may have a political, legal, or business background and then end up overseeing a field for which they are not prepared.

Given that, a party of hundreds or even thousands of elected representatives does not have enough capacity to effectively govern over millions, especially when the elected representatives are not domain experts. If our governance problem comes down to finding enough genuine domain experts to make collective decisions for effective governance, then a swarm approach that produces just the right number and quality of representatives could be the solution. The common name for this kind of democracy is delegative or liquid democracy.
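The core mechanic of delegative (liquid) democracy can be sketched in a few lines. The representation below is a deliberately simplified illustration, not a specification of any real system: each voter either votes directly or names one person they trust, and weight flows transitively along the chain of trust to whoever finally casts a vote. All names and the dictionary-based representation are hypothetical.

```python
# Illustrative sketch of delegative (liquid) voting. Each voter either
# delegates to one trusted person or votes directly; delegated weight
# flows transitively to the final direct voter in the chain.

def resolve_votes(delegations: dict, direct_votes: dict) -> dict:
    """Return the total weight each direct voter receives via delegation."""
    tally = {}
    for voter in set(delegations) | set(direct_votes):
        current, seen = voter, set()
        # Follow the delegation chain; `seen` guards against cycles,
        # whose votes are simply dropped in this simplified model.
        while current in delegations and current not in seen:
            seen.add(current)
            current = delegations[current]
        if current in direct_votes:
            tally[current] = tally.get(current, 0) + 1
    return tally

delegations = {"ana": "bob", "bob": "cara"}  # ana trusts bob, bob trusts cara
direct_votes = {"cara": "yes", "dan": "no"}  # cara and dan vote directly
print(resolve_votes(delegations, direct_votes))  # cara carries weight 3, dan 1
```

Notice how expertise can concentrate organically: cara ends up speaking with the weight of three voters without any party apparatus, which is the "swarm" property described above.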

Instead of artificially constraining the elected representatives to a small number and requiring each representative to win a popularity contest, liquid democracy is more likely to produce governance based on both merit and the expertise required to make an effective collective decision. This may sound attractive; however, the DQV is not conventional: it takes all the types of democracy and improves them in a new way. The change we are talking about is nothing short of a cultural shift and a complete overhaul of the current democratic structures.

This book explores the tools that will make such a system possible. The challenges of progress necessitate that human beings, as toolmakers, continue our tradition as innovators in all areas of life, and continue to refine and improve our tools, not least of all by the adoption and creation of new ones.

The challenges of both direct and representational democracy have been known to humanity since the days of ancient Greece, and the formative principles of liquid democracy emerged in the late 19th century. It has been seriously explored in academia since 1969, when James C. Miller published "A program for direct and proxy voting in the legislative process," and many researchers have subsequently added to the body of knowledge, such as computer scientist Bryan Ford, who proposed delegative voting in 2002. With the emergence of blockchain technology, the possibility now exists to finally create a modified liquid democracy system that can be secured and can therefore enable a transferable vote system.

We will begin our journey into the tools that can expedite democratic reform by examining our most basic assumptions about knowledge itself. For a voter to make the right decision, whether selecting a potential elected representative or weighing in on an important political issue, we need to understand what the facts are. As we shall see in the course of this book, however, disagreement on what a fact even is constitutes half the problem.

#  A BRIEF HISTORY OF FAKE NEWS

#### "When the knowledge that is handed down is combined with errors... As soon as anybody belongs to a certain narrow creed in science, every unprejudiced and true perception is gone." (Goethe, 1883)

Humanity has passed the industrial age and is well into the information age. Data has become the lifeblood of our society and our economy. Like it or not, we are all interconnected by a vast network of information arteries that allows instantaneous communication. In recent years, self-organized mass social events such as the Occupy Movement, the Arab Spring, the Yellow Vests, and the Zimbabwe uprising have seen citizens using social media as a powerful democratic organizing tool. It is so powerful that governments sometimes respond by shuttering the internet or social media. Such is the power of the internet and real-time mass information flow.

Today the internet is about to enter a new era. With the dawn of IoT (the Internet of Things), blockchain and AI, machines are going to join this network in a way that will result in a massive step change in the internet and data landscape. IoT is expected to reach 75.44 billion units worldwide by 2025 (Statista.com), eclipsing computer sales. The micro-scale implications of this seismic macro-scale shift will be profound. As explained before, technology that moves this rapidly brings huge socio-economic-ecological changes, both beneficial and harmful. It is a power that is transforming lives, but it comes at a price.

If information is the currency of modernity, then decision-making is how we spend it. So, the question is: are we spending wisely? The explosion of technology is generating reams of data, but all the information in the world is of no use to us if we cannot sort through it, make sense of it or trust it. These massive mounds of unusable information are the digital equivalent of a hoarder's stockpile. How many of us have emails that have never been pruned, with useless messages, spam or outdated information sitting somewhere in the cloud? How much unused data are companies collecting through social media data analytics and machine-to-machine systems? Here are some sobering facts about our growing data mountains, published by Forbes magazine in 2015:

  * 90% of the world's data has been generated in the past two years (Sintef, 2013)

  * By 2020, we will have 6.1 billion smartphone users globally and 50 billion smart connected devices

  * Google uses 1,000 computers to answer a single search query, taking no longer than 0.2 seconds to complete (with 3.5 billion searches a day in 2019, this is a major reason why IT is becoming a serious power hog)

  * A typical Fortune 1000 company will generate $65 million of additional revenue by increasing data accessibility by 10%

  * Retailers who leverage big data can increase operating margins by up to 60%

  * 73% of organizations have already invested or plan to invest in big data in 2016

  * Only 0.5% of all data has been analyzed and used

The first problem we will have to contend with is how to deal with the sheer volume of data. The unintended consequences of this mountain of data reach into many dimensions. For instance, the findings of Swedish researcher Anders Andrae show that all of this data traffic could have a profound impact on total electricity usage and, subsequently, on carbon emissions.

If there are no interventions, this amount of data usage could consume a fifth of humanity's electricity supply by as early as 2025. For organizations, inexpensive digital technology, high-bandwidth internet, and the coming of IoT, A.I., and blockchain will create more data than we can deal with. How will we manage and make sense of all this data? This is important because it won't be of much value to us if we can't. As the above graph shows, we could be wasting vast physical resources if that data is not used effectively. We have to develop super-efficient hardware, data-miserly software, and better data habits, and apply whole new fields such as big data science, data analytics, and machine learning efficiently. This will produce valuable policy, business, and personal insights to support effective decision-making.

The second problem is information quality. With so much data coming from so many sources, data quality is rapidly becoming a significant issue. One of the most apparent data quality issues is the rapid emergence of the phenomenon of fake news, a term which has become part of the lexicon of modernity. It reflects the ease with which anyone can use commonly available digital tools to create false information.

The power of digital media tools now allows anyone with a bit of skill to manufacture any news, image, or video, and distribute it through a fake social media account. While anyone can fabricate a deception on the internet, it becomes especially problematic when the lie is state-sponsored. What is even more disturbing is that, after being exposed, state actors hide behind plausible deniability. As a result, bad actors hiding in the shadowy areas of the dark web are actively shaping the information that vulnerable consumers digest, advancing a narrative that aligns with their ulterior political motives.

Fake news has been thrust into the limelight by the US investigation of Russian interference in the 2016 US elections. An indication of the seriousness of the problem is the growing number of fact-checking tools that have become available. The Duke Reporters' Lab shows that from 2014 to 2018, the number of fact-checking programs nearly quadrupled, from 40 to 156. These tools may, unfortunately, become necessary parts of the future web. Still, they do not treat the problem, only the symptoms.

All these recent failures of our political decision-making process may be indicators that the very form of democracy we have been practicing may be fast becoming outdated. It is a wakeup call to adapt to the rapidly changing digital information landscape with new systems, or else risk a broken democracy.

These cyber-attacks are not limited to governments, so it is not just in politics that information quality issues have profound effects. In business, the spoils will go to those who learn how to effectively employ big data, and the analytics and AI engines that decipher what it all means. There are useful patterns hidden in all that data which can help businesses increase their bottom line. Acting on the right data can allow a company to engage with customers more effectively, or to tweak machine and system operations for performance increases that boost profits substantially at little or no extra cost.

Conversely, acting on bad data can have negative consequences. An Experian Data study found that bad data had a direct impact on the revenue of almost 90% of American businesses. IBM research showed that US organizations believe 32% of their data is of poor quality, accounting for an average revenue decline of 12%. IBM's Big Data & Analytics Hub estimates that poor information quality costs US companies $3.1 trillion annually (2016). A Gartner study painted a similar picture: 27% of the data in Fortune 1000 companies was reported to be of poor quality. The tool developed in this book aims to help organizations avoid these issues.

The term "fake news" may be new, but the idea certainly isn't. It can be argued that it has been around ever since humans began making general claims about the nature of reality. Two millennia ago, Aristotle observed that maggots seemed to generate spontaneously on dead animal carcasses and that barnacles would form on the hulls of boats, giving rise to the theory of the spontaneous generation of life. Even as late as the 1700s, Aristotle's philosophy was upheld as truth. It took the invention of the microscope and scientists like Louis Pasteur to disprove the long-held theory.

Other theories, taken as credible at the time, have long since vanished. The phlogiston theory of Johann Becher in 1667 held that any substance that combusted contained a material without any detectable properties called, you guessed it, phlogiston. The luminiferous aether was another mysterious substance, thought to pervade the entire universe (even a vacuum) and to serve as the medium that allowed light and electromagnetism to travel. It was impossible for scientists of the time to conceive of vibrations happening without a material medium.

If one gets the impression that scientific theories seem to be wrong quite often, it's actually an accurate one. This should come as no surprise to anyone versed in science, for the veracity of scientific models is constantly being tested by new observations. The noted quantum physicist Richard Feynman said, "We are trying to prove ourselves wrong as quickly as possible, because only in that way can we find progress." Theories are best guesses. They are predictive models constructed from a set of general assumptions, which can be wrong.

Empirical scientists are working around the clock to unearth new observations in every nook and cranny of science. It comes as no surprise that some of those observations will contradict the predictions of the current model. This highlights the inherently risky business of science. With each new prediction, the chance of the model being wrong increases. If we are to trust the history of science, many of today's accepted theories will be consigned to the garbage heap in a century's time. Given this built-in transient nature of scientific knowledge, we can make the reasonable but counterintuitive claim that all science is ultimately wrong. We can guess that all scientific knowledge has a shelf life; we just don't know what the expiry date is.

Or maybe we do. In his book The Half-Life of Facts: Why Everything We Know Has an Expiration Date, Harvard mathematician Samuel Arbesman argues that all so-called "facts", including scientific ones, behave like radioactive substances and have a measurable half-life. Arbesman provides some intriguing evidence to support his claim. "Facts", he argues, are changing all the time. Arbesman's unique contribution is that he has uncovered a predictable pattern to the way facts change, grow and decay. Arbesman is part of a new field of quantitative meta-study of scientific ideas called scientometrics, which grew out of the quantitative library science of bibliometrics. There, the unit of measurement is the research paper. Back in the 1970s, when digital memory was not yet widely available, librarians noticed the rapid growth of scientific knowledge and were concerned about the limited space on their bookshelves, so they began to measure which scientific research papers and fields were growing most rapidly.

Arbesman investigated the field of medical research and found that for hepatitis and cirrhosis, scientometric research in the 1960s had already determined the half-life of the field to be approximately 45 years. In other fields such as the social sciences, the half-life is even shorter, owing to the uncertainties of studying human behavior. In some physical science fields, meanwhile, the half-life can be much longer because the knowledge is very quantitative and well defined. Arbesman also looked at the growth of knowledge and cites doubling times for various fields: medicine, 87 years; mathematics, 63 years; chemistry, 35 years; genetics, 32 years. Because there is so much to know, we cope by specializing in niche areas.
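The half-life and doubling-time figures cited above can be made concrete with a short calculation. The sketch below is purely illustrative (the function names and the arithmetic are our own, not taken from any scientometric study): it applies the standard exponential decay and growth formulas to the numbers quoted in the text.

```python
# Exponential decay of "facts": with half-life H (in years), the fraction
# of a field's findings still standing after t years is 0.5 ** (t / H).
def fraction_surviving(t_years: float, half_life: float) -> float:
    return 0.5 ** (t_years / half_life)

# Exponential growth of literature: with doubling time D (in years), the
# body of knowledge after t years is 2 ** (t / D) times its current size.
def growth_factor(t_years: float, doubling_time: float) -> float:
    return 2 ** (t_years / doubling_time)

# Hepatitis/cirrhosis research, half-life ~45 years:
print(fraction_surviving(45, 45))   # 0.5  -> half the findings overturned
print(fraction_surviving(90, 45))   # 0.25 -> three-quarters overturned

# Medicine's literature, doubling time ~87 years:
print(growth_factor(87, 87))        # 2.0  -> twice as much to know
```

The same two one-line formulas reproduce every figure in this chapter's scientometric discussion, which is precisely Arbesman's point: the churn of knowledge is regular enough to model.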

Arbesman cites the 1960 research paper "The Dollars and Sense of Continuing Education," in which author Thomas Jones calculated the effort it took an engineer to stay up to date, assuming a 10-year half-life of knowledge. He arrived at five hours a week, 48 weeks a year, to stay current. A typical degree requires 4,800 hours of work; within 10 years, 2,400 of those hours would have become obsolete. A 40-year career therefore requires 9,600 hours of additional study to keep current. Modern estimates put the half-life at half that or less, implying even more hours of study. This is impractical, and the leading technology firms know it.
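Jones's back-of-the-envelope arithmetic is easy to verify. The sketch below reproduces his figures under his stated assumptions (a 4,800-hour degree, a 10-year half-life, five hours a week for 48 weeks a year, a 40-year career); the constant names and the final doubling step are our own illustration, not notation from the original paper.

```python
DEGREE_HOURS = 4800        # hours of study in a typical engineering degree
HOURS_PER_YEAR = 5 * 48    # Jones's pace: five hours a week, 48 weeks a year
CAREER_YEARS = 40

# After one 10-year half-life, half the degree's content is obsolete.
obsolete_after_10y = DEGREE_HOURS // 2
print(obsolete_after_10y)              # 2400

# Staying current over a whole career at Jones's pace:
career_study_hours = HOURS_PER_YEAR * CAREER_YEARS
print(career_study_hours)              # 9600

# If, as modern estimates suggest, the half-life is half Jones's figure,
# knowledge turns over twice as fast and the study burden roughly doubles.
modern_hours = career_study_hours * 2
print(modern_hours)                    # 19200
```

Nearly 20,000 hours of study on top of a full-time job is, as the text says, impractical.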

Hence, leading tech firms like Google, Facebook and Amazon are biased toward hiring recent graduates instead of retraining older tech workers.

What scientometric studies like these show is that scientific truth is ultimately always provisional, but that doesn't make current scientific knowledge useless. On the contrary, there is always a pragmatic utility in the present. The unavoidable cautionary tale is that this utility always comes with a price attached, and nature may recall her debt of the unknown at any moment through unintended consequences, known as progress traps.
# SCIENTIFIC KNOWLEDGE & THE COLD HARD FACTS

As we have seen, scientific theories are proven wrong quite often, and with each new prediction the chances of a model being falsified increase. Scientific truth is finally always only provisional, but that doesn't make current scientific knowledge useless. On the contrary, its utility always comes with a price attached, and nature may recall her debt at any moment.

Our modern civilization is both the achievement and curse of our pursuit of knowledge, as it is built upon that debt. We have achieved vast improvements in our quality of life at the same time that we have created an enormous range of new problems, both known and unknown. Technology employs known scientific knowledge to advance human progress, while the unknown is what creates the unintended consequences.

Today, science plays a critical role in our lives. From the theory of internal combustion to the science of cooking, most readers of this book are familiar with some science. While those born into modernity may now think of scientific knowledge as absolute, it wasn't long ago that the leading voices of science were put on trial for heresy. Just a few centuries ago, Galileo was tried and found guilty for his steadfast adherence to scientific truth over dogma. While the church won the battle, it lost the war. Today, scientific knowledge is a taken-for-granted part of our lives.

Science tries to be right, but it advances by being wrong. To do this, it must embrace the pursuit of truth with open arms and collectively admit to errors when they appear. This ability to question itself is a crucial distinction that sets science apart from most religions, which hold their basic tenets to be eternal truths not subject to change. As science advances, observations that contradict the predictions of models challenge the prevalent notion of truth.

When contradictory views reach a crisis level and a new model is born, it can bring about a paradigm shift. Thus, the collective self-destruction of old models is intrinsic to scientific and cultural advancement. The truth of any current scientific statement is always judged by existing models, biasing what we consider truth. In a genuine sense, then, we can say that as science advances, truth becomes a casualty of scientific progress. A more startling conclusion is that all science could be considered false, because we never reach the end of discovery.

# THE VALUE OF KNOWLEDGE

So what about our notions of historical truth? Here too, we find that historical fact, like scientific fact, is imperfect and dependent on the reliability of historical evidence. The processes by which ideas become common knowledge are often intentionally opaque. Voltaire said that history is a pack of tricks we play on the dead. One example is Columbus Day. Many Americans still believe Christopher Columbus discovered America and celebrate his success on Columbus Day. Scholars today know that he never set foot in North America, landing instead on the Caribbean island of Hispaniola. The Viking explorer Leif Ericson had landed in Newfoundland nearly five centuries earlier, and in May 1497 the Venetian explorer Giovanni Caboto (John Cabot) left Bristol, England in search of Asia and, weeks later, made landfall in Newfoundland.

How did Columbus come to be celebrated as the discoverer of America, but not Ericson or Caboto? The answer is that the United States Columbus Day holiday did not arise from accurate historical remembrance but from lobbying by various vested interests. The newly liberated colonists needed a heroic symbol but could not choose John Cabot, as he had sailed under the British flag. In 1792, the colonists celebrated the three-hundredth anniversary of Columbus' landing in the "Americas." Hence Columbus and Cabot, two minor characters in history, suddenly entered the limelight.

A first-generation Italian-American, Angelo Noce of Denver, Colorado, lobbied to make Columbus Day a national holiday, and finally, in April 1934, the Knights of Columbus successfully urged Franklin D. Roosevelt to declare October 12 a national holiday.

Another instance of constructed history comes from the history of science itself. In the late 19th century, two battles raged whose victors would shape the course of history: Thomas Edison versus Nikola Tesla, champions of the DC and AC power distribution systems respectively, and William Rowan Hamilton's quaternions versus Josiah Willard Gibbs's vectors in the field of algebra. Both contests would unknowingly have a profound effect on our lives today. We all know who won the Current Wars: Tesla's AC system of electrical power distribution. Any student of applied mathematics also knows that Gibbs won the vector wars. Today, with the benefit of hindsight, many mathematicians believe that another kind of algebra, Clifford algebra, offers the best description. What would have happened had Hamilton's approach won out over that of Gibbs and Heaviside? Hamilton's intuitive belief in a triplet to represent space, and in the ratio of two vectors as a fundamental concept, might have led scientists and philosophers to an alternative understanding of objective reality.

Another bias in our knowledge comes from our preferences, which can lead to unintended consequences, or progress traps. We prefer a high-protein diet because our body needs energy. Similarly, when building knowledge, human beings always seem to prefer anthropocentric solutions, and this may be part of the problem. We wanted the earth to be the center of the universe, and therefore, for the longest time, we believed it was. We adopted a base-ten numeral system because we have ten fingers; our biology made it the easiest system to grasp. However, the laws of the universe are not human-centric.

Because the universe in which we live is not human-centric, we create progress traps through our imperfect understandings. Cars are fantastic for mobility but create environmental damage. Windmills provide energy yet can kill many birds. Progress traps show us the relative nature of what we call progress; what is experienced as definitive progress in one generation can be experienced as regression in the next. The examples we have discussed show that these consequences are regular and frequent. They are minefields left for later generations to deal with. As we have mentioned before, the relativism of scientific knowledge often creates problems for future generations. In this way, progress traps are intimately connected to that relativism. Scientific knowledge thus remains flawed, a consequence of our permanently imperfect understanding of the universe.
# CAN WE TRUST OUR OWN THOUGHTS WITH INNATE COGNITIVE BIASES & HUMAN LIMITATIONS?

Sometimes, we are our own worst enemies. This is especially true when it comes to an innate feature of human reasoning: cognitive biases. These are biases that we ourselves construct. We create our unique subjective realities based on these systematic deviations from norms of rationality or judgment. Scientists have catalogued a great many of them, and more are being discovered all the time.

In psychology and cognitive science, a memory bias is a cognitive bias that either enhances or impairs the recall of a memory (either the chances that the memory will be recalled at all, or the amount of time it takes for it to be recalled, or both), or that alters the content of a reported memory. There are many types of memory bias.

With so many biases working against us, it benefits us to be mindful of our own interpretations of reality, our families' interpretations, and our leaders' versions of it. Some might think that by using the scientific method we can calibrate our thoughts to become better aware of these distortions. However, this too is a concern, because cognitive biases are found everywhere, even within peer-reviewed science itself. In this context, we will define bias as experimental outcomes artificially moved in a favored direction. Because scientists are driven by pressures such as competition to publish, maintaining a lab and staff, career advancement, and financial gain, they can easily let their guard down and let errors creep into their papers. The consequences can be serious.

Psychologist Brian Nosek of the University of Virginia is a crusader bent on ridding science of confirmation bias, and he believes that this phenomenon, which he calls "motivated reasoning," is the most common bias in science. That means we interpret observations to fit a particular idea (Ball, 2015), instead of fitting ideas to particular observations. A behavioral economist, Susann Fiedler of the Max Planck Institute for Research on Collective Goods, says, "Seeing the reproducibility rates in psychology and other empirical science; we can safely say that something is not working out the way it should...cognitive biases might be one reason for that" (ibid.).

Science reporters Adam Marcus and Ivan Oransky, founders of the Retraction Watch website, a spinoff of the Center for Scientific Integrity, say, "Simply put, much, if not most, of what gets published today in a scientific journal is only somewhat likely to hold up if another lab tries the experiment again, and, chances are, maybe not even that" (Marcus & Oransky, 2015). Therefore cognitive biases are alive and well in the world's scientific community.

Peer review should theoretically prevent confirmation bias from happening, but peer review can take a long time, and false or misleading information can spread into the scientific community before it is deemed misleading and retracted. This is a progress trap, and it probably wastes mountains of time and labour. At the same time, peer review is itself a large part of the problem because, as is well known in science, tenure and grants rely on frequent publication in reputable journals. This need to publish drives scientists to unconsciously select data and analyze results to satisfy journal referees (Ball, 2015).

All of these results call into question whether truth is still a concern in the industry of science. If "truth" is still a useful concept in science, then it is our responsibility to guard our knowledge sources so that they do not become distorted. The open-source nature of science fundamentally relies on sharing knowledge, and an implicit condition is that this knowledge is accurate. Biases, both known and unknown, are detrimental because they compromise that accuracy, and therefore the universal usability of the knowledge.

A 2016 survey by Nature raises some legitimate concerns (Baker, 2016). A total of 1,576 researchers were asked a range of questions. Of those surveyed, 52% said there was a significant reproducibility crisis, 38% said there was a slight crisis, 3% said there was no crisis, and 7% said "don't know." If an experiment cannot be reproduced, its conclusions cannot be trusted. More than 70% of researchers had tried and failed to reproduce another scientist's experiment, and more than 50% had been unable to reproduce their own. In spite of this, most said that they still trust the literature, which seems to ignore the point that a majority of experiments appear to be flawed by bias.

Over 60% of the surveyed scientists reported two leading causes of irreproducibility: pressure to publish and selective reporting. There is also the problem of being stretched thin. If senior researchers are too busy with their own work to mentor graduate students, those students may go on to start their own labs without that mentoring, a pathway that is more error-prone and can lead to experimental mistakes that increase the risk of irreproducibility (ibid.).

As you have read, cognitive biases can be seriously problematic. They can create false outcomes in data sets around the world, and have crept into every corner of our discourse, from politics to parenting.

We are going to propose a way of improving our information quality and strengthening our decision-making process, but we are not done examining the questionable foundation of knowledge quite yet.

# DRUGS, KNOWLEDGE AND CULTURE

#### "Ten years ago, while killing time between flights in a duty free shop, I found myself wondering why drugs surrounded me. Marlboro cartons loomed to my left, Drambuie bottles to my right, Belgian chocolates behind me, Kenyan coffee straight ahead – everywhere I looked, I saw imported psychoactive products. How did these things get here?" – DAVID T. COURTWRIGHT

It can be argued that our ability to store and pass human knowledge down from one generation to the next is the single most distinguishing feature of human societies. However, what is ultimately passed down does not always travel a straightforward, linear path. Science historian James Burke showed, in his famous 1978 book and BBC series Connections, the surprising nonlinear path that scientific knowledge and invention have taken from the past to the present. Burke's central thesis was that it is not possible to trace the development of any one piece of the modern world in isolation, because everything is interconnected in a web of interrelationships, like a gigantic human gestalt: collective behaviour emerges from individual actors, none of whom is aware of the final destination. The development of scientific knowledge can therefore come from many unusual or overlooked sources.

One of those largely ignored areas that has had a definitive but unquantified impact on human experience is drug culture. As we shall see, entire civilizations have at one time or another deemed it socially acceptable to ingest large and regular doses of psychoactive substances. A psychoactive drug is a chemical substance that changes brain function and results in changes in perception, mood, consciousness, cognition, or behaviour. This includes not only illegal narcotics but also common everyday items for sale in a shop near you, such as alcohol, sugar, chocolate, coffee, and cigarettes. Some of these have since been found to be harmful and are classified as illicit today; many others, unfortunately, are still legally sold.

In ancient cultures, drug use began as a means for practitioners to form a deeper spiritual connection, as an aphrodisiac, or as a folk remedy to heal mental, emotional or physical ailments. More recently, from the 17th century onward, drugs were taken up as treatments for pain or promoted as panaceas. As drug use spread, however, the dark side of addiction and psychological impairment became apparent.

The mass consumption of psychoactive substances raises an intriguing question: what effect does a mind-altered state have on the way a culture experiences the world? That experience ultimately informs the culture's worldview, and subsequently the knowledge it produces. If entire cultures indulged in mind-altering substances, it is reasonable to expect detectable changes in worldviews, narratives, mental models and decision-making behaviour. Human beings under the influence of psychoactive substances experience reality in a different way. Drugs can offer a way to strip away our deep conditioning to reveal the raw, naked world that exists before our mental framing of it. This space is the sphere of the intuitive mind, not the rationalist one. It is perhaps this view that offers a potentially tantalizing naked experience of reality, free of relative, learned frames of reference. If it is absolute truth we are after, rather than relative truth, safe psychoactive compounds may offer a way to experience it.

Given that mind-altering drugs have permeated our culture over the past three centuries, they must have played a not-insignificant role in the convoluted course history has taken to arrive at our current world. As democracy began to gain traction, what part did the outsized impact of a collective drug-induced haze play in it? Did nonlinear spurts of out-of-the-box, creative thinking suggest ideas that might not otherwise have emerged? Did a drug-induced state affect the policies that developed? Did it change who voted and who didn't, and therefore the evolution of policy?

Our modern world can be argued to be the outcome of the Enlightenment; the application of reason to every facet of society is the hallmark of our modern age. Yet, as we shall see in our survey of this period of history, opiates entered benignly into the lives of many citizens while being promoted as a universal panacea. This was at a time when medical knowledge was limited and diseases took many lives, causing much misery. Dropsy, consumption, rheumatism, and ague were all familiar parts of everyday life. Even as late as the 19th century, there were still no known cures for cholera and dysentery, two terminal diseases that often caused death by excruciating diarrhea. In a short period, opiates became entangled in the lives of a large percentage of Europeans across the continent.

By the time their addictive qualities became known, it was too late. Historians have a good idea of the scale of drug abuse during this time. However, it is difficult to know the full impact of psychoactive compounds on the culture, epistemology, and politics of the 17th to the 20th century, precisely because they were so prevalent.

Although few large-scale studies have been conducted on the collective cultural impact of opiate addiction, the many historical accounts of widespread, high-dose opiate use can be combined with currently known neurological impacts to develop a plausible theory of cultural and epistemic impact. Some of the known risks of opiate addiction include varying levels of cognitive impairment, dissociation, psychological dysfunction, and, at sufficiently high doses, severe brain damage.

If we trace the history of drug culture, we find that it has been part of humanity since our earliest days. Many ancient civilizations used concentrated plant compounds both for healing and as a way to achieve altered states of consciousness. Today, however, psychoactive compounds have morphed for some people into substances of harm and abuse. It is also a twist of irony that many of today's destructive illicit narcotics have their roots in modern pharmacological research labs whose aim was to find drugs that benefit humanity. That is a progress trap.

Tracing drug culture further, anthropologists have found that psychoactive substances were familiar to many different indigenous cultures throughout human history. 2,500-year-old hallucinogenic huffing bowls have been discovered on islands in the Lesser Antilles.

It is known that the people of the Andes mountains in South America chewed coca leaves for several millennia. Ayahuasca, a mixture of Amazonian plant ingredients centred around the Banisteriopsis caapi vine, has been used by South American tribes in sacred ceremonies for longer than we can know, and the same goes for peyote and psilocybin mushrooms in Mexico and the iboga plant in Africa. So psychedelics have profoundly shaped the worldviews not only of our early ancestors but of our recent ancestors as well. This shift in time is also paralleled by a shift in use, for drugs have become much more harmful in recent times.

One drug that has had, and continues to have, a profound effect on humanity is the seed of the poppy. It has a long history of widespread use wherever it travelled. There is even early anthropological evidence of it from a Neolithic burial site near Barcelona. The ancient Greeks considered opium sacred, and ancient medical texts such as the Ebers Papyrus, written in 1550 BC, describe poppy preparations as a sedative.

In the Minoan civilization (2700 to 1450 BC), it was described as a sedative to calm crying babies. Still more evidence points to Arab scholars using opium to treat disease and as a general anaesthetic. Opium was introduced as a means to protect and treat wealthy patients after the plague. It was popular with citizens of the Persian Empire during the late medieval period; rulers of the Mughal Empire ate opium, and the combination of wine and opium intoxicated the emperor Jahangir. Opium, among other drugs, became widespread, and remains so today.

The modern trade in opium began when European countries started cultivating opium and selling it to China. In the 16th century, Portuguese traders became aware of the lucrative medicinal and recreational demand for Arab-supplied opium in China. Portuguese colonialists had learned of the North American aboriginal practice of smoking tobacco and were inspired to create a new product for the Chinese market: opium mixed with tobacco, which could be smoked. The new product became a hit, and recreational smoking of opium quickly ballooned.

Addiction spread rapidly, creating such a crisis in China that in 1729 the emperor Yung-cheng criminalized the recreational smoking of opium. Yet the extraordinary steps he took were of no avail, because demand was so high. In 1764, in the province of Bengal, India, the British won the Battle of Buxar, and the British East India Company (EIC) took over the former Mughal emperor's opium production monopoly. The British had a powerful motivation to grow opium more efficiently and sell it to China. Because of the growing trend of consumerism in Britain, there was an increasing hunger for all products Chinese, including teas, spices, silk, and porcelain pottery. The Chinese at this time, however, had no desire for European products.

Consequently, there was a huge trade imbalance. Britain needed something valuable to sell to China, by legal or illegal means, and opium fit the bill. So the British perfected the efficient growing of opium in Bengal and began a long relationship with Chinese smugglers to get the poppy into China. The British would receive gold and silver and use it to purchase teas, spices, silk and porcelain pottery to bring back to Europe. The black market trade became so lucrative that annual shipments increased from 200 chests in 1729 to 1,000 chests in 1767, 10,000 chests in 1820 and 40,000 chests in 1838. By that time, the trade imbalance was in Britain's favor.

China tried to break the illegal supply chains set up by the British in the First Opium War of 1839-42 and again in the Second Opium War of 1856-60, in which Britain and France allied against the Chinese army, but to no avail. In 1849, the American gold rush brought thousands of Chinese migrants to work in the California gold fields, and they brought their opium-smoking habits with them. Migrant Chinese workers set up opium dens in "Chinatowns" throughout the western United States, and by 1870 opium smoking had become a favorite pastime for many Americans.

The Chinese government was forced to legalize opium and levied a small import tax. By that time, imports were between 50,000 and 60,000 chests annually and continued increasing for the next three decades. By 1906, opium had run its course and was on the decline in China. In 1907, China signed the Ten Years' Agreement with India to limit importation, cultivation, and consumption of opium so that it ceased entirely over a period of ten years.

While the opium trade was getting under way in China, back in Europe the medicinal properties of opium caught the attention of Thomas Sydenham, a physician who in 1676 published a recipe for an opium tincture called laudanum. Thereafter it was used to treat all manner of ailments, right up until after the Second World War. Indeed, it would seem that the whole of Europe had become dependent on it.

In The Pursuit of Oblivion: A Social History of Drugs, a comprehensive modern history of narcotics, historian Richard Davenport-Hines describes the growing fascination of opium displayed by classically educated men. Exposed to a new literary genre of traveller's tales, they were the first to embrace experimentation with the new drug.

French naturalist Pierre Belon (1517-64) travelled in Asia Minor and Egypt, and in 1546 reported: "There is no Turk who would not buy opium with his last penny; he carries it on him in war and peace. They eat opium because they think that they thus become more daring and have less fear of the dangers of war. In war-time, such quantities are purchased that it is difficult to find any left." Samuel Purchas, the vicar of a Thames-side parish who met many sailors, reported similar tales of the Turks. Davenport-Hines also shares the account of Cristobal Acosta, a Spanish physician who in 1582 published a treatise on the drugs and medicines of the East Indies. In it, Acosta recounts how the sexual effects of opium were known among medical students and doctors of Arab, Parsee, Turkish, Corazon, Sundasi, Malayan, Chinese and Malabar descent alike. Imaginative men who took opium to enhance their sexual performance often instead suffered premature ejaculation, Acosta wrote; it overheated them. For unimaginative men, however, it helped them to last longer and climax at the same time as their female partners.

In 17th and 18th century Europe, opium became not only acceptable and legal but indispensable for the medical treatment of all manner of disease. There was, of course, no such thing as Tylenol or other modern painkillers. Kramer wrote, "Opium would become indispensable to the practice of medicine. It would be used freely to allay suffering, not only from pain, and cough, but also from insomnia, and neurological and psychiatric disorders."

Kramer described the experience of many physicians who prescribed opiates as a universal pain panacea, unaware that their overprescription would result in a drug addiction epidemic.

While some regarded psychoactive compounds with deep suspicion and fear, science seemed to revel in experimenting with them. Early experimenters like Robert Hooke were fascinated with the exotic medicines global trade had brought back to England. Hooke himself purchased samples of cannabis, conducted experiments on anonymous subjects, and reported their effects back to members of the Royal Society.

Pharmacists such as John Awsiter expressed concerns about addiction in the 17th and 18th centuries. Awsiter wrote that were the pleasure-inducing effects of opium to become universally known, widespread habituation would ensue. In the 19th century, while some authors reported the sedative effects of opium, others contended that it would act only as an exciter, increasing physical vigour and clearing the mind. Although the mechanisms behind opium were still mostly unknown, it had nevertheless become a major form of therapeutic support during the Victorian era. In those days, it was routine to walk into a chemist's and buy laudanum, cocaine, and even arsenic without a prescription. Opium preparations and tinctures were sold the way sugar is today. To illustrate this, the French-American writer J. Hector St. John de Crevecoeur noted the particularly addictive habits some women displayed, describing them as "taking a dose of opium every morning, and so deep-rooted is it that they would be at a loss how to live without this indulgence." In trying to do good, physicians were unknowingly becoming perpetrators of harm.

In spite of the apparent dangers of addiction, depression, and neurological disorders, it was the positive effects associated with opium, such as pain relief, psychedelic experiences, and euphoria, that drew entire nations to it. Opium helped dull the terrible pain that many experienced. After all, with primitive medical treatment, even a simple scratch could outright kill a person. Life indeed was nasty, brutish, and short. By the time of the Enlightenment, therefore, opium had become almost ubiquitous: a universal drug used for its many benefits, needed by the legions of the ill. In the United States, the Civil War launched the opioid crisis, with 10 million opium pills and 2.8 million ounces of opium powder and tincture issued to soldiers.

In Victorian England, one tincture stood above all: laudanum, a concoction invented by the physician Thomas Sydenham that included two ounces of opium, one ounce of saffron, and a drachm of cinnamon and cloves, all dissolved in Canary wine. Three famous poets, Shelley, Baudelaire, and Edgar Allan Poe, were said to be opium addicts. Elizabeth Barrett Browning is reputed to have been taking 40 drops of laudanum a day when she began her correspondence with her future husband, Robert Browning. Other notable addicts include George Crabbe and Francis Thompson, as well as the essayist Thomas De Quincey and the novelist Wilkie Collins.

De Quincey revealed that he took opium for the first time in 1804, while studying at Worcester College, Oxford, to relieve a toothache. He described the initial impact it made on him: "... within one hour, oh, Lord! What an extraordinary change! What a resurrection from the most unreachable depths of spirit! What a revelation of my inner world. The disappearance of my pains seemed insignificant. This negative effect was consumed in the abyss of a divine and suddenly revealed pleasure. Here was the panacea for each and any human suffering; here was the secret to happiness."

In the Romantic era, poets were known to have relied on laudanum to help them access places within their psyche that could not be reached any other way. In The Milk of Paradise, M.H. Abrams analyzes the impact of opium addiction on the literary work of four leading writers of the day: Crabbe, Coleridge, De Quincey, and Francis Thompson. The waking dreams revealed in their writings indicate wild swings from the heights of bliss to the abyss of terror.

Given that medical science today knows so much about the profound and often detrimental impact such drugs have on the brain, it is surprising that no large-scale historical study has ever been conducted to examine the social effects that widespread, centuries-long consumption of such powerful psychoactive substances would have had on European culture, knowledge, behavior, and norms. Is it conceivable, for instance, that the mind-bending nature of the drugs induced states of remarkable creativity and insight, leading to brand new discoveries?

Indeed, the common association of drugs and artists harks back to the poets and artists of the Romantic era, who, as we've seen, regularly took laudanum. It is not inconceivable, then, that the significant legacy left behind by the Romantic era can be attributed, at least in part, to psychoactive substances. The pre-eminent poets of the era, such as Wordsworth, Keats, Shelley, Blake, Coleridge, and Byron, tapped into their own imagination, sometimes through psychoactive journeys, to illuminate and develop a coherent vision of the world, one that would restore our spirituality. Poets claimed to be interpreters of reality; Shelley himself said, "Poets are the unacknowledged legislators of the world." In music, composers moved away from the formal structures of classical composition and towards deeper emotional expression. Chopin, Liszt, Schumann, Berlioz, and Mendelssohn set the stage and were followed by Brahms, Wagner, Verdi, Tchaikovsky, Schoenberg, Debussy, Bartok, Mahler, Stravinsky, Puccini, Rachmaninoff, and, in modern times, George Rochberg and David Del Tredici. This list is far from exhaustive.

In art, Romantic painters also strove for deeper emotional expression. In sculpture, Auguste Rodin tried to capture the inner lives of his subjects. In portraiture, painters explored feelings, inner psychological states, nature through animals, and the innocence of children. In political ideology, empathy for the oppressed, liberation, emancipation, and the individual's contribution to social progress were common themes.

RIGHT Fig 3. Laudanum tincture bottle from late 19th century Source: Science Museum London, 2019

These professed intuitives rejected the rationalism of the Enlightenment and encouraged their audience to look to the healing power found in nature and in their own imagination to transcend the difficulties of everyday life. Romantic reverence for nature encouraged travel into new physical and imaginative spaces. Romantics saw the journey of life as one of liberation and the world as one filled with unlimited potential. The Romantic perspective continues to have a profound impact on culture today: the 1960s counterculture revolution, with its focus on rebellion, a "back to nature" sensibility, and exotic Eastern mysticism, reflected Romantic sensibilities.

In 1803, the German chemist Friedrich Sertürner isolated morphine from opium, producing a painkiller ten times the strength of those that came before. During the American Civil War, morphine was prescribed to soldiers, and when the war ended, 400,000 soldiers came home morphine addicts.

In 1855, the German chemist Friedrich Gaedcke isolated an alkaloid from coca leaves, which he named erythroxyline; the chemist Albert Niemann later purified it. An aspiring neurologist named Sigmund Freud caught wind of it in the journal Therapeutic Gazette. Parke-Davis, the Gazette's owner, sponsored the 28-year-old Freud to endorse the drug. Freud began experimenting with it himself and found that it successfully combated his depression and indigestion. He wrote: "If one works intensively while under the influence of coca, after from three to five hours there is a decline in the feeling of well-being, and a further dose of coca is necessary in order to ward off fatigue..."

Subsequently, he became a strong advocate and believed it could be used to treat morphine and alcohol addiction, asthma, and eating disorders, and could serve as an aphrodisiac and antidepressant. He also thought that it could act as an effective local anesthetic. Freud formed bedrock concepts of psychoanalysis such as the id, ego, superego, libido, and the Oedipus complex during a period in which he is known to have experimented with large amounts of these psychoactive substances. Fortunately, as time went on, Freud discovered the dangerous side effects as well. When he prescribed cocaine to his friend Ernst von Fleischl-Marxow, who had become a morphine addict, it merely turned him into a cocaine addict, and he eventually died of an overdose. The miracle substance was given the generic name "cocaine" by 1880 and turned up as an active ingredient in coca wines, cigarettes and, of course, Coca-Cola.

Meanwhile, the morphine addiction crisis motivated an English chemist named Alder Wright to search for a less addictive painkiller than morphine. In 1874, Wright synthesized what he thought was a safer replacement, the compound later known as heroin. In the 1890s, the German pharmaceutical company Bayer marketed heroin as a morphine substitute and cough suppressant, promoting it for use by children for coughs and colds.

Germany's defeats at the 1936 Olympic Games, where black American athletes excelled, threatened the theory of the superhuman German, and the authorities had to find a way around that. Looking for an explanation, they suspected that the African American athletes might have been taking a doping agent. The story goes that this suspicion was all that was needed to encourage German chemists to produce a better drug.

Methamphetamine was first synthesized in crystallized form by the Japanese chemist Akira Ogata in 1919. In 1937, Fritz Hauschild, the head chemist of the German pharmaceutical company Temmler, discovered a new method of synthesizing methamphetamine. Temmler patented it, and the drug Pervitin was born. In 1938, it was introduced for sale as an over-the-counter drug in the German market and quickly became the drug of choice amongst many Germans looking to enhance their performance.

Unlike today, it was not considered an illicit drug at the time. According to Norman Ohler, author of the book Blitzed: Drugs in the Third Reich, Pervitin was considered "the people's drug." Hard to believe today, but an entire nation was legally hooked on crystal meth, and anyone could take it without supervision. The popular chocolate brand Hildebrand was laced with crystal meth and advertised with slogans such as "Making housework more fun" and "Hildebrand chocolates are always a delight." No wonder!

Each chocolate was laced with approximately 14 milligrams of methamphetamine, about the equivalent of one line of meth today, and it was recommended to eat between three and nine chocolates, according to Ohler. Housework could be accomplished in a third of the time, and since the drug curbed the appetite, it also helped housewives slim. It is no wonder, then, that it was so popular amongst housewives.

Fig 4. Popular 1930s German chocolate brand Hildebrand laced with crystal meth

Source: Ohler, 2015

According to Ohler, people took it for a wide variety of reasons: anyone facing a difficult task, housewives for menopause, young mothers for postpartum depression before breastfeeding, students to help them cram. And that was not all. Secretaries used it to type faster, actors to refresh themselves before going on stage, writers for all-night work sessions, night guards and night-shift delivery drivers to stay awake, and production-line workers at auto plants to increase productivity.

The main focus of Ohler's book is the drug's use in the army, where it played an important role in Hitler's victories over Poland, France, and other European countries. Pervitin allowed officers to go for 40-hour stretches without sleep and came to be called "the wakefulness pill." In the same year it was introduced to the public, it caught the attention of army physiologist Otto Ranke, who began to envision it as the ideal war drug: it could keep tired pilots alert and create an army of fearless soldiers able to endure extreme fatigue and pain. Ranke ordered tests on university students, who performed exceptionally well in spite of being short of sleep.

These impressive results convinced Hitler, and in 1940 he ordered millions of Pervitin tablets dispensed to soldiers in preparation for the Blitzkrieg of Europe. The Allies were unaware of the super drug Hitler had enlisted into service. At the time, the French troops were concentrated in Belgium and had left their northern border next to the Ardennes unguarded, on the assumption that Hitler's army could not possibly cross the mountains in three days; if the Germans were seen moving into the Ardennes, the French military would have time to get back and mount a defense. All this would have been true except for Pervitin. The drug turned Hitler's army into super soldiers who needed little rest. The Nazi army reached the border town of Sedan in a remarkable three days and went on to take France and the rest of Europe at lightning speed.

As the war dragged on, the drug helped soldiers cope with the horrors of war. However, they simply traded one set of problems for another. Many became addicted, bringing on symptoms of sweating, dizziness, depression, and hallucinations. Some soldiers reportedly died of heart attacks, while others committed suicide during psychotic episodes.

Alcohol was another drug that soldiers became addicted to, in spite of warnings of harsh penalties. After the war, returning soldiers on both the Allied and Axis sides became the new wave of drug addicts. Pervitin was easy to obtain after the war as well, either on the black market or as a prescription drug, and was prescribed for a range of ailments from depression to appetite suppression. Medical students used it to stay awake and cram. All of this simply led to another progress trap. But Germany wasn't the only country that employed psychoactive compounds during wartime. The US military fed amphetamines and steroids to its soldiers in Vietnam, causing post-war addiction issues, and today it dispenses an anti-fatigue drug called modafinil to jet pilots. ISIS combatants have reportedly taken Captagon (fenethylline), a methamphetamine-like psychostimulant. All of these drugs help soldiers stay awake, become more aggressive, or numb pain. Like Pervitin before it, Captagon can turn an ordinary person into a killing machine who can take a bullet and not feel pain. The military is cognizant that the power of psychoactive compounds to distort reality can be beneficial to its purposes, yet it is harmful when viewed from a moral and philosophical perspective: inducing an unnatural state of aggression can have long-term consequences for social stability. It reminds us of another example of importance in this book, the distortion of knowledge used in propaganda to sway a vote.

Could such drugs also have been a factor in the relentless drive of German industrialization? Such intense and narrowly focused drive could have had a massive impact on the manufacturing of the war machine.

Source: Bateman, 2017

The use of psychoactive substances can result in significant physiological changes and shifts in perspective. This can be extremely useful, as in the case of drugs like Zoloft and insulin. They change how the body works and, by extension, the way the mind works; indeed, powerful drugs can sometimes change brain chemistry permanently. As a result, people's beliefs, biases, and understanding can also shift. As those people interact with others, this directly influences how we, as a larger society, collectively understand the environment around us and how we interact with it.

As we saw earlier, due to their ubiquity, psychoactive substances played some role in the creative output of 17th and 18th century Europe, as witnessed by the work of some of the best writers of the era. Studies from the lab of South African anthropologist Professor Francis Thackeray suggest an intriguing potential connection between William Shakespeare and cannabis or cocaine. Using gas chromatography and mass spectrometry, his lab analyzed fragments of 24 clay tobacco pipes dating from the early 17th century, found on the grounds of Shakespeare's former home. Traces of cannabis were found in eight pipes, nicotine in one, and cocaine in two. Hemp was commonly used for paper and clothing at the time, but it also served as a muse among creatives.

Thackeray's study also suggests that Shakespeare's contemporaries Sir Francis Drake and Sir Walter Raleigh may have brought back coca leaves and tobacco from the New World. Finally, Thackeray draws a connection from Shakespeare's own writing. In Sonnet 76, Shakespeare writes of "invention in a noted weed." Thackeray points out that Shakespeare could have meant that he was willing to use weed for creative writing, since the word "invention" can be associated with creative writing. In the same sonnet, Shakespeare writes that he would prefer not to be associated with "compounds strange," which Thackeray suggests could be a reference to cocaine. Regardless, historical studies show that many creatives took psychoactive substances regularly for the express purpose of stimulating their creative output. How did a century of widespread opioid use affect the quality, quantity, and type of knowledge produced by 18th century Europe? How did it change the way Europeans saw the world, each other, life, or their place in it? Moreover, and importantly, how does the knowledge stemming from altered states of consciousness affect the views that emerged during this era and found their way into our modern world?

As history shows, definite benefits have emerged from the use of psychoactive substances. They heightened the creative output of entire generations of writers, thinkers, philosophers, poets, artists, scientists, and many others. It is more than conceivable that the wide availability of psychoactive substances has played an important, if unintended, role in the advancement of cultural knowledge. With millions of people, including many children, taking doses of psychotropic drugs that we would consider large and even harmful today, we cannot discount their potential cultural impact. Opiates are far more potent than cannabis and have significant mind-altering potential. More research into the cultural implications of mass-consumed opioids could be an essential part of future humanities studies, shedding light on their possible role in shaping our cultural knowledge.

But how could we possibly measure this? Armed with CRISPR, the new gene-editing tool, could a new generation of savvy biohackers and scientists re-engineer illicit drugs to eliminate their harmful and addictive qualities and retain only the beneficial ones? Given modern pharmacology's disastrous record in creating drugs to treat addiction and pain, such an endeavor would have to be undertaken with the most stringent application of the precautionary principle. Could this improve our society? Could such a solution go beyond methadone to solve the drug crisis?

At a time of profound global crisis such as the one we now find ourselves in, new solutions will emerge from unexpected places, and we need to recognize those places and learn how to place value in them. Recently, Christian Müller and Gunter Schumann proposed a new framework for non-addictive psychoactive drug use that could answer these questions. In their research, they cite epidemiological data showing that the majority of people who consume psychoactive drugs with addiction potential never become addicted. They propose that this majority take such drugs because the drugs are useful for their personal goals. The perspective shifts and new insights that arise from the safe consumption of psychoactive substances might help us ameliorate the harmful social biases, progress traps, and echo chambers that currently damage society. These chemicals, introduced into the chemical computer that is our brain, could radically rewrite what we are capable of. We should not be terrified by this, but rather explore the possibilities.

Perhaps nothing symbolizes mass psychoactive experimentation in the modern era more than the counterculture revolution of the '60s, with San Francisco as ground zero. As we know, there were many positive impacts associated with this culture. The Beatles' Sgt. Pepper's Lonely Hearts Club Band, with songs such as "Lucy in the Sky with Diamonds," was a clarion call to the culture of the time, and millions responded. Furthermore, Allen Ginsberg and other beat poets helped set the stage with their anti-capitalist work beginning in the '50s, extolling the virtues of Eastern religion, gender equality, and economic justice, stoking the imagination for decades to come. Everyone was tripping on acid and encouraged to do so, not only by the Beatles but by Jefferson Airplane, Jimi Hendrix, Janis Joplin, The Mamas and the Papas, Creedence Clearwater Revival, and especially the Grateful Dead. Singer Scott McKenzie invited everyone to San Francisco, and come they did, in the famous Summer of Love. These were but some of the benefits attributed to psychedelics at the time.

Ironically, in 1967 in Haight-Ashbury, at the spatio-temporal peak of the movement, it fell into rapid decline. Psychoactive drugs played a significant role in both its rise and its fall. The San Francisco Oracle, the Haight-Ashbury hippie newspaper, announced the Golden Gate Park Human Be-In event:

"A new concept of celebrations beneath the human underground must emerge, become conscious, and be shared, so a revolution can be formed with a renaissance of compassion, awareness, and love, and the revelation of unity for all mankind."

The media coverage of the vibrant Haight-Ashbury community attracted the attention of youth across America like a magnet, and 100,000 hippies came in response to the announcement of the festival. At the 30,000-strong Human Be-In counterculture gathering at San Francisco's Golden Gate Park, Timothy Leary turned Marshall McLuhan's slogan "Turn on, tune in, drop out" into the mantra of the counterculture.

However, no sooner had the Summer of Love begun than the winter of discontent set in. The higher level of consciousness promoted by the event met the sobering reality of the human psyche. The very people drawn by the message, and enabled by drugs, brought with them massive drug overdoses, drug-induced rapes, and violent crimes. The organizers simply had not prepared for the scale of the youth invasion, and the resulting overcrowding and poor hygiene led to the rapid spread of contagious disease. To make matters worse, the movement suffered further publicity blows from the Manson murders and the Hells Angels-linked killing of a teenager at a Rolling Stones concert. Those who came to San Francisco with the promise of a new tomorrow went home disenchanted, penniless, and sick.

Without a clear guiding philosophy, the use of these drugs by large, unsupervised communities is often problematic. There is no doubt, however, that the movement had a significant impact on the wider world at the time.

Stewart Brand, founder of the Whole Earth Catalog, and his friend Lloyd Kahn, the Catalog's Shelter editor, typified the mind-altering nature of the LSD that Leary promoted as a way of finding oneself. Of the first time he took acid, Kahn said, "I saw a flower breathing, and it wasn't a hallucination. Flowers do breathe, but you don't see it." This is an example of a truth that drugs could help us find. By the time the '60s rolled over into the '70s, however, LSD had lost its allure for the average drug user and gave way to the harder drug, cocaine. Many hippies weren't into the harder drug and followed Leary's advice to drop out.

People like Brand went on to set up world-impacting ventures instead. Brand's Whole Earth Catalog set out to supply all those hippies with the things they needed for off-the-grid living. The spirit of independence its readers demanded resulted in a continuous stream of requests and innovative thinking around products that would help people gain independence from the system. It wasn't long before Brand began looking to technology to help solve these problems. And that in turn inspired others.

The publication motivated a small group of local computer scientists who envisioned a way to spread these ideas around the globe.

Steve Jobs was one of them. He said, "The Whole Earth Catalog... was one of the bibles of my generation... It was like Google in paperback form, 35 years before Google came along. It was idealistic and overflowing with neat tools and great notions." Jobs' early life could be described as hippieish, embracing trips to the East to learn meditation as well as experimentation with psychoactive substances. When Jobs and Wozniak built the first Apple I, it was a circuit board screwed down onto a piece of plywood. They sold their personal computer as a source of decentralized liberation and encouraged other users to get involved in the DIY maker culture that came out of the hippie communes. Jobs, of course, went on to inspire the likes of Jeff Bezos, Elon Musk, and a host of other valley entrepreneurs.

While Silicon Valley's façade is all business, the spirit of valley innovators can be traced to the ethos of the counterculture. The spirit is one of techno-utopia, a belief that technology can create a world that will free humanity from its chains of bondage while making a lot of money along the way. The paradox is that Silicon Valley has become the heart of the mainstream culture, the very culture its founders' generation rebelled against. Silicon Valley innovations may be the source of many beneficial impacts around the globe.

But these are matched by an equal number of costly problems, or progress traps, that have also shaped our modern lives.

At the core of Silicon Valley lies the creativity, openness, and passion that animated the psychoactive counterculture. The hippies brought those values mainstream.

Today, a distant relative of the early hippie music festivals of the '60s and early '70s is the Burning Man festival held each year in the Nevada desert. It is a throwback to the idealism of the late '60s and is attended, not surprisingly, by many Silicon Valley workers, carrying on the tradition of psychoactive drugs, idealism, and the stimulation of new ideas. This culture has proved so valuable that it survived the drug wars and many other serious barriers to success.

The ideas that came out of the psychoactive counterculture, and continue to come out of it, have gone on to affect the whole planet. In many cases these ideas weren't new, but the counterculture amplified and popularized them to a new level. Many of the pillars of mainstream political liberalism that millions of people embrace and enjoy today can be traced to that amplification, from the recognition of LGBTQ rights to anti-capitalism and the environmental movement. They remain extremely valuable today. But how have the drug-fueled sources of these ideas affected their quality?

One attempt to collect psychometric data on psychoactive impacts was conducted on spiders. Researchers have performed experiments measuring the effects of psychoactive substances on the ability of spiders to spin their webs. NASA scientists conducted an unpublished study on the garden cross spider (Araneus diadematus), observing the web patterns it spun under different neurotoxins.

For the spider species tested, caffeine seemed to have the most toxic effect, reflected in the loss of symmetry of the web. The experiment was later repeated in a peer-reviewed study using a control substance alongside amphetamine and caffeine.

These experiments were intriguing, but very little can be concluded from them about the actual effects on the spider's central nervous system. A suggestive test along similar lines could be conducted on entire insect colonies, such as ants, honeybees, or termites, whose large-group behavior could reveal insights about drug use in human collectives. Behavioral biologist Stephen Pratt studies ant colonies as superorganisms. What would the effect of hallucinogenic drugs be on such an ant superorganism? Would the colony experience a kind of "enlightenment," or would the drug impair its faculties?

The possibilities of such an experiment may be too intriguing to ignore. It draws a primitive yet direct link between the spider's experience of the world and its behavior. The spider's experience of the world stems from a genetically programmed set of behavioral codes, along with whatever it has learned during its development. The psychoactive compounds scrambled those codes. If the neurological structure of the spider is tuned to receive the world in one way, then truth is relative to that tuning. Human beings are tuned to experience the world in a different way, with a different set of sensors. It is hard to discern an absolute truth when our experience of reality is so dependent on our biological hardware.

Human beings are not the only species that has found the intoxicating effects of psychoactive compounds irresistible.

The BBC wildlife series Spy in the Pod was the first to broadcast footage of adolescent dolphins in the wild passing around an inflated pufferfish. The dolphins did not kill the pufferfish; instead, they appeared to be ingesting small amounts of tetrodotoxin, the highly toxic poison the pufferfish releases when threatened, which in small doses is suspected to act as a stimulant. The footage shows the dolphins passing the pufferfish around, each carefully holding it in its mouth, and afterwards entering a trance-like state. There is still little direct proof that the dolphins were intoxicated, but much circumstantial evidence.

In another BBC documentary, Animals on Drugs, lemurs that rub millipedes on their fur to protect against malaria or insect bites are shown exhibiting outward signs of drug intoxication, such as dilated pupils, drowsiness, and heavy eyelids. Furthermore, horses seek out and eat a plant called locoweed, and their owners report that it has an intoxicating effect on the animals. So humans are not the only animals that seek out the benefits of intoxication.

Gordon Wasson writes in his book Soma: Divine Mushroom of Immortality that reindeer native to the boreal regions of the northern hemisphere seek out red-and-white psychedelic mushrooms and, after eating them, are observed prancing and reveling in an intoxicated state. Moreover, a 2009 episode of the BBC series Weird Nature shows that in autumn the reindeer seek out the fly agaric mushroom, even under cover of snow. Sami shamans take the mushroom ritually and, in a trance, contact the great reindeer spirit, experiencing heightened senses and visions of flying.

In 2009, Tasmania, the world's leading producer of legally grown opium poppies for the pharmaceutical painkiller morphine, was bedeviled by wallabies raiding its poppy fields and getting stoned. The matter became so serious that Lara Giddings, Tasmania's attorney general at the time, stated that wallabies were "entering poppy fields, getting high as a kite, and going around in circles."

Given the widespread use of these drugs across the spectrum of human societies, the modern history of drug use deserves more serious scholarly attention. Drugs have had widespread cultural impact over millennia, working their way into the foundations of our societies. Combining historical research on centuries of opioid use with the latest neuroscience could reveal new insights about the evolution of human culture. Even without extensive research, however, the broad strokes of history already teach us valuable lessons, especially about the past and current state of human knowledge.

This pursuit of knowledge, tainted by drug use and other imperfections, suggests a need for a way to rank the quality of knowledge. The pursuit of pain relief and performance-enhancing drugs throughout history has resulted in an endless merry-go-round of progress traps creating flawed knowledge. From the laudanum of the 17th century to Hildebrand's methamphetamine-laced chocolate, cocaine-laced Coca-Cola, Pervitin, morphine, heroin, and methadone, this repeating cycle of progress-becoming-progress-trap seems resilient to insight.

The misery that psychoactive compounds can create follows the same laws of unintended consequences found throughout society:

  * Affluence leads to diseases such as heart disease, stroke, diabetes, and early death;

  * Automobiles are responsible for an average of 1.25 million traffic fatalities a year (WHO 2010);

  * Fossil fuels and internal combustion technology have led to climate change;

  * Large-scale monoculture has led to biodiversity loss, species extinction, climate change, and deadly algae blooms.

In On Deep History and the Brain (2007) and "An Essay on Neurohistory" (2010), Harvard historian Daniel Lord Smail argues that psychoactive substances have evolved into a new kind of purposeful social control in modernity, giving states an abusive means of distracting the general population from issues that are deeply relevant to them. His argument is based on the idea of teletropy, social control based on psychological manipulation. Smail contends that while organized state violence is a primitive form of teletropy, other kinds of distraction such as movies, gossip media (including social media), novels, music, shopping, sports, coffee, alcohol, pornography, and psychoactive drugs enable a new form of control called autotropy, in which citizens willingly manipulate their own emotions.

The political activist Noam Chomsky argues for essentially the same conclusion with his idea of manufacturing consent. Such control and self-distraction render meaningful democratic engagement ineffective. In Forces of Habit: Drugs and the Making of the Modern World (2001), one hypothesis that author David T. Courtwright advances is that drugs were a colonizer's tool, giving colonizers the power to pacify and control labor in colonies and plantations. In their day, opium-smoking Chinese laborers were regarded as the most reliable workers in the world. Workers around the globe, including in South Africa, the United States, China, Russia, Jamaica, and Egypt, were commonly paid in alcohol or drugs, or, if paid in cash, spent their wages on psychoactive substances. Courtwright sees not so much a conspiracy of the elites as a natural tendency of laborers to use psychoactive substances to alleviate the combined tedium, pain, and stress of toiling for the colonizers' companies. Yet such use had many side effects.

Today, drug culture is as ubiquitous as it ever was, and abuse is rife. Part of today's drug culture is a consequence of the health care industry's attempt to use opioids to treat pain. The Global Drug Survey reports annually on the latest global drug statistics. The 2018 edition surveyed 130,000 people in 44 countries. Among the findings: Synthetic Cannabinoid Receptor Agonists (SCRAs) are rated as addictive as crystal meth, cocaine delivery is faster than pizza delivery in many countries, and the darknet is a favorite place to purchase drugs such as MDMA (ecstasy), cannabis, LSD, and novel compounds. It is even reported that the majority of drug users surveyed first tried drugs that were offered to them for free.

The WHO reports that 275 million people (or 5.6% of the world population) used an illicit drug such as cannabis, amphetamines, opioids, or cocaine in 2016, with 192 million cannabis users worldwide.

The WHO further states that 31 million people suffer from a drug use disorder (2018). How does such a significant population of people, with some distortion of their experience, worldview, and judgment, affect the knowledge that society produces?

How do socially accepted drugs, such as caffeine, alcohol, and soon marijuana, affect the collection of that knowledge?

How can we account for the cultural influence of such widespread psychoactive substance use and abuse, so as to properly qualify the present state of human knowledge and mitigate the possibility of poor collective decision-making in the future?

Recent research sheds light on the cognitive impairment that results from drug misuse. A research study conducted by Sara L. Simon et al. (2002) warned of the global cognitive impairments that result from the use of methamphetamine: "The national campaign against drugs should incorporate information about the cognitive deficits associated with methamphetamine...Law enforcement officers and treatment providers should be aware that impairments in memory and in the ability to manipulate information and change points of view (set) underlie comprehension...methamphetamine abusers will not only have difficulty with inferences... but that they also may have comprehension deficits ... the cognitive impairment associated with [methamphetamine abuse] should be publicized..."

Such neurological dysfunction has implications for democratic societies. In 2018, the National Institute on Drug Abuse (NIDA) estimated that 24.6 million Americans aged 12 or older, or 9.4 percent of the US population, had used an illicit drug in the past month. In the UK, the Home Office Crime Survey for England and Wales 2017/2018 showed similar figures: 9 percent of adults between 16 and 59 had taken an illegal drug in the last year. With such a large percentage of the voting population using substances, and potentially altering their worldviews, it is therefore of more than passing interest to know how this might affect voting outcomes. If drugs alter our perception of reality, they also change the quality of the decisions we make.

Today, our world is ravaged by a global opioid crisis, and illicit drugs are accessible everywhere. Illegal narcotics are a multi-billion dollar industry, rivaling the arms industry and oil industry in scale.

Many advocates promote the legalization of drugs to eliminate a black market that is responsible for a vast spectrum of crimes. As we shall see later, there are reasons to support such a policy. In addition to ridding the world of this criminality, the benefits of such psychoactive compounds could be made accessible for non-addictive use, while addicts could still be treated humanely.

A growing number of studies record the specific cognitive deficits that result from methamphetamine (crystal meth) use disorder (MUD), such as deficits in memory, attention, and concentration. A meta-analysis of 44 studies assessing cognitive dysfunction in 1,592 subjects with MUD and 1,820 healthy controls was published in 2018 by Potvin et al. It found:

  * Moderate impairment across most cognitive domains: attention, executive functions, language/verbal fluency, verbal learning and memory, visual memory, and working memory;

  * Deficits in impulsivity/reward processing and social cognition were prominent;

  * Visual learning and visual-spatial abilities were relatively unaffected.

This current knowledge of cognitive impairment arising from MUD can be projected back in time onto the opioid use disorder prevalent from the 17th century onwards. A cultural opioid use disorder affecting a large percentage of historical European and American populations calls into question the veracity of much of the knowledge produced since then. Could our modern world be built upon a foundation of mass cognitive impairment, resulting in distorted knowledge and poor political decision-making?

While taking stimulants such as Pervitin allowed users to perform incredible feats of physical endurance, it also affected cognitive functions such as working memory, executive function, and verbal learning. Modern democratic institutions had their birth in the 19th century, but if a large percentage of the voting public was cognitively impaired, this could have had a detrimental impact on final policy choices. Some opiates completely disabled the user.

Finally, the history of drug culture has left us with a worldwide drug crisis today. The illegality and easy production of drugs have resulted in toxic mixes with unknown ingredients and unpredictable effects. Given the large percentage of opioid misuse today, we should be concerned about its impact on a whole host of issues. Beyond physical health, there are concerns about social discourse, the compromised creative and innovative potential of society, the quality of knowledge our society can produce, and the quality of the decisions our nations make in political matters.

Philosophers of every age have never been afraid to try new things in their quest to take hold of the philosopher's stone. Modern-day philosophers such as Dr. Peter Sjostedt-H have traced the connection between psychedelics and philosophy back to Plato. From that starting point, Sjostedt-H argues that psychedelics have had a profound influence on the modern evolution of culture. Like a detective sifting through history, he has linked together an unbroken chain of influential philosophers, focusing on their use of psychedelics.

His book, Noumenautics, is a series of short essays on this subject, named after the noumenon, the reality that Kant claimed existed beyond human experience. He begins in Ancient Greece, where Plato's philosophy influenced successive scholars for millennia. Plato is recorded to have participated in the Eleusinian Mysteries, a regular event held at the Temple of Demeter in Ancient Greece, where participants drank kykeon, a potion containing barley, mint, water, and a psychedelic ingredient.

Dr. Albert Hofmann, who first synthesized LSD, argued that the psychedelic ingredient was the barley parasite fungus ergot, from which LSD is also derived: "(We can) assume that the barley grown (in the Rarian plain) was host to an ergot containing the soluble hallucinogenic alkaloids. The famous Rarian plain was adjacent to Eleusis. Indeed this may well have led to the choice of Eleusis for Demeter's temple." Plato wrote in the Phaedrus: "[W]ith a blessed company – we following in the train of Zeus, and others in that of some other god –... saw the blessed sight and vision and were initiated into that which is rightly called the most blessed of mysteries, which we celebrated in a state of perfection... being permitted as initiates to the sight of perfect and simple and calm and happy apparitions, which we saw in the pure light, being ourselves pure and not entombed in this which we carry about with us and call the body, in which we are imprisoned like an oyster in its shell." Building on this, Dr. Sjostedt-H makes a plausible argument that psychedelics inspired Plato's mind-body dualism, which prevailed in western philosophy and religion for millennia.

Furthermore, Nietzsche, another character in the psychedelic story, commented, 'Christianity was Platonism for the "people."' And Alfred North Whitehead said: "The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato." If so, the very basis of European philosophy and science was touched by psychedelics.

Building upon that history, Kant broke the long spell of the dark ages with his provocative 1784 essay, What is Enlightenment?

Furthermore, while Kant himself is not known to have taken opioids, the long list of psychonaut-philosophers who came after him were heavily influenced by him. Thomas De Quincey, who wrote the famous Confessions of an English Opium-Eater, was one of the first English commentators on Kant. De Quincey claimed that while on opium he could recall the minutest details of childhood. His reports influenced the French philosopher Henri Bergson, who in turn influenced the psychonaut Aldous Huxley. Each in turn influenced others to try these substances, thereby influencing the culture around them.

Humphry Davy, a friend of another famous opium eater, the poet Samuel Taylor Coleridge, took high doses of nitrous oxide and wrote: "I lost all connection with external things; trains of vivid visible images passed through my mind and were connected with words in such a manner as to produce perceptions perfectly novel. I existed in a world of newly connected and newly modified ideas... I exclaimed to Dr. Kinglake, 'Nothing exists but thoughts!'" Davy's experience with drugs gave him profound insights and a worldview that he carried to his grave. In his last book, Consolations in Travel, or The Last Days of a Philosopher, he illustrated those views: "Without the eye there can be no sensations of vision, and without the brain there could be no recollected visible ideas; but neither the optic nerve nor the brain can be considered as the percipient principle – they are but the instruments of a power which has nothing in common with them.... The desire of glory, of honour, of immortal fame, and of constant knowledge, so usual in young persons of well-constituted minds, cannot, I think, be other than symptoms of the infinite and progressive nature of intellect." These ideas left their mark on the progression of Western thought.

Arthur Schopenhauer was another philosopher strongly influenced by Kant, and while he is not known to have consumed any psychoactive compounds, he supported their use. He felt that, taken properly, they could greatly enhance creativity, and he saw mystical consciousness and intuition as complements to the rational nature promoted by the Enlightenment. As for Friedrich Nietzsche's wild and influential work, Sjostedt-H believes that Nietzsche's writing and inspiration were due to what is today called auditory hallucination: Nietzsche is thought to have heard the voice of the Greek god Dionysus and to have channeled his hallucinations into his influential writing.

"Absolute sobriety is not a natural or primary human state," asserts Richard Davenport-Hines.

Nietzsche's American contemporary William James, in his famous The Varieties of Religious Experience, was acutely aware of the potential of psychedelics to expand our awareness, writing: "Nitrous oxide and ether, especially nitrous oxide... stimulate the mystical consciousness in an extraordinary degree.... [In] the nitrous oxide trance we have a genuine metaphysical revelation.... [Our] normal waking consciousness, rational consciousness as we call it, is but one special type of consciousness, whilst all about it, parted from it by the filmiest of screens, there lie potential forms of consciousness entirely different." James also claimed that nitrous oxide helped him to make sense of Hegel's cosmology: "Nitrous oxide gas-intoxication... made me understand better than ever before both the strength and the weakness of Hegel's philosophy. I strongly urge others to repeat the experiment... its first result was to make peal through me with unutterable power the conviction that Hegelism was true after all." For James, then, such substances could make intelligible what sober reasoning could not.
On March 31, 1910, the French philosopher Henri Bergson wrote to William James, sharing what would come to be called the "reducing-valve" theory: "...I believed myself to be present before a superb spectacle – generally the sight of a landscape of intense colours, through which I was travelling at high speed and which gave me such a profound impression of reality that I could not believe, during the first moments of waking up, that it was a simple dream.... How I would like you to pursue this study of 'the noetic value of abnormal mental states'! Your article [A Suggestion about Mysticism], combined with what you have said in The Varieties of Religious Experience, opens up great perspectives for us in this direction." Nor was this opinion limited to Bergson; other intellectuals shared it. Aldous Huxley wrote: "Reflecting on my experience, I find myself agreeing with the eminent Cambridge philosopher, Dr. C. D. Broad, 'that we should do well to consider... the type of theory which Bergson put forward... The suggestion is that the function of the brain and nervous system and sense organs is in the main eliminative and not productive.'" This is the view that the brain does not produce consciousness so much as filter it.

In April 2016, researchers presented findings at the Royal Society from a brain imaging study sponsored by the Beckley Foundation that supported this theory: the brain does not produce consciousness so much as provide a filter upon it. As one of the researchers put it: "Our studies have begun to lay bare the workings underlying the changing states of consciousness. With a better understanding of the mechanisms underlying these states, we can learn to use them better, to manipulate our consciousness, to our own and societies' advantage. William James explains it as seeing through the veils of perception. Huxley describes the ego as a reducing valve of the brain. How right they were. Now, for the first time, we have seen the empirical basis of these realisations." Ernst Jünger, the German author who continued the tradition of Nietzsche and coined the word psychonaut, wrote: "What interested me above all was the relationship of these [psychedelic] substances to productivity. It has been my experience, however, that creative achievement requires an alert consciousness, and that it diminishes under the spell of drugs. On the other hand, conceptualization is important, and one gains insights under the influence of drugs that indeed are not possible otherwise." He is saying that with a range of psychoactive substances we can change the filter that our brain places on our consciousness.

We know that psychoactive substances have already altered the state of consciousness of a large percentage of the people on the planet today. These altered states can have profound long-term impacts, affecting how we see the world, how we shape our narratives, the knowledge we hold, and the decisions we make. The individual who actively consumes psychoactive substances can experience altered states of consciousness depending on a wide variety of factors, including the specific substance ingested, the dosage, the frequency of use, genetic predisposition to mental disorder, and many others. Such altered states have the potential to change the individual's view of reality, personal motivations, knowledge, and decision-making processes. If large segments of a population have shifted their collective views of existence, the knowledge produced by that generation is bound to reflect the altered state of consciousness. Psychoactive substances were embraced by the beatnik generation, which gave way to the hippies and then the psychedelic generation. It is this generation, in particular, that explored the potential positive and creative benefits of altering our views of reality.

To illustrate the effect of drugs on society, a whole category of music emerged to celebrate psychoactive substances, aptly named psychedelic music or acid rock. The Beatles' "Sgt. Pepper's Lonely Hearts Club Band," The Yardbirds' "Shapes of Things," and The Byrds' "Eight Miles High" marked the beginning of the psychedelic era, followed by Janis Joplin, Jimi Hendrix, Jefferson Airplane, and The Doors. Even later bands like Pink Floyd were inspired by drug experiences, morphing their songs into progressive rock. The psychedelia subculture alluded to earlier helped usher in alternative worldviews, especially from the Far East, and played no small role in the acceptance that eastern culture now enjoys in Western society.

There is a school of practitioners today who, in spite of the tragic trail drug use has left behind, still believe in the power of psychoactive substances to raise our collective level of consciousness. Indeed, they feel it is more urgent now than ever before that we expand our minds to meet the pressing challenges of this century. These are the modern psychonauts, following in the footsteps of the ancestral explorers before them, who experientially unlock the secrets of the mind. Through the influence of this lineage of psychonauts, opioids and psychoactive compounds have undoubtedly exerted a profound impact on human civilization and have helped to shape modernity.

Knowing what we now know about the neuroscience of drug related cognitive impairment, what conclusions can we draw about the impact of three centuries of widespread drug use on the evolution of western culture?

One conclusion concerns the diversity of opioid effects: the cognitive impairment from opioids ranges from numbness to agitation, from superhuman energy to fearlessness, and from dullness of mind to wild mental agitation. This complex plethora of effects, simultaneously acting upon millions or even tens of millions of ordinary citizens, will have a statistical impact on every aspect of society. In an upcoming chapter, we take a close look at the concept of the superorganism, a higher-level meta-organism composed of individual biological organisms acting in unison. In biology, such organisms are called eusocial and include insects such as ants, termites, and bees, whose collective behavior forms the behavior of the colony. Homo sapiens can also be considered eusocial, and our society the superorganism formed thereof. The effect of centuries of opium consumption throughout Europe and America must have been profound for the human superorganism.

While one effect of opiates is cognitive impairment for some, for others it can be enhanced mental function. As we noted earlier, creatives in ancient cultures took psychedelic substances to expand their consciousness. We traced this through the Age of Enlightenment, when a diverse sharing of information in coffee houses across Europe and the Americas fomented, among other things, the American and French Revolutions. While millions of laudanum users would have unknowingly suffered all manner of cognitive effects, from heightened susceptibility to political narratives to surges of passion and courage to shots of energy for working into the night, entire lineages of philosophers intentionally made use of psychoactive compounds to open their mind's eye to other vistas of reality. Having visited them and gained new perspectives on life, they wrote, published, and shared these ideas, stoking the imagination of the reader.

Psychoactive substances distort experience, memory, and subsequent reasoning. An altered model of reality will affect judgment and decisions in proportion to the degree of intoxication. Disappointingly, few long-term studies of the relationship between large-scale drug consumption and human epistemology have been conducted, so there is no extensive body of knowledge on the overall impact on cultural knowledge. However, isolated research by scholars such as Dr. Peter Sjostedt-H demonstrates that, taken altogether over millennia, there is a continuous lineage of philosophers who have been deeply influenced by psychedelics. Collectively, this lineage has had a profound impact on modern thought. That most of them intentionally took psychoactive compounds to see reality from a different perspective speaks volumes about the ability of psychedelics to remove filters conditioned by society.

Today, scientific organizations such as the Multidisciplinary Association for Psychedelic Studies (MAPS) extol the virtues of psychedelic compounds for their healing and consciousness-expanding properties. They distinguish between the beneficial properties of such psychedelics and the more harmful psychoactive drugs that have left a deep scar on humanity.

Nevertheless, they would be wise to pay heed to the cautionary tale of our recent past. Terrible unintended consequences await if we are not careful. Our well-meaning pharmaceutical forebears attempted to develop painkilling compounds to benefit humanity, and instead created a generation of addicts.

MAPS trains therapists to use these drugs in treatment centers and supports scientific research into psychedelics. Its aims include enhancing spirituality and creativity, and educating the public honestly about the risks and benefits of psychedelics and cannabis.

MAPS has sponsored MDMA trials to treat severe, treatment-resistant PTSD. In one small South Carolina study with 24 patients, 67% were PTSD-free one year after treatment. In 2018, researchers working with the FDA and Health Canada launched a phase 3 clinical trial of MDMA-assisted psychotherapy. If the findings are consistent with earlier studies, MDMA could become a legal prescription drug by 2021.

Stanislav Grof pioneered LSD-assisted psychotherapy in the 1960s. It can help patients reframe past events that led to psychological trauma: with controlled doses of LSD, patients can begin to remember their past trauma and rewire their brains to reduce the emotional charge associated with the event.

Psychoactives such as ayahuasca and ibogaine have been used by indigenous peoples in South America and Africa for thousands of years, but recent overdoses leading to death have cast them in a bad light with regulatory agencies. Conscientious users report profound experiences, such as insights that help with longstanding emotional traumas. However, the ethical debate about banned psychoactive drugs is fraught, because deaths from accidental overdose overshadow whatever definite benefits these drugs may bring.

Out of Silicon Valley, techno-entrepreneurs have given birth to the latest trends of transhumanism and consciousness-hacking or neuro-hacking, and part of their arsenal is psychoactive drugs. An increasingly popular practice that seeks to enhance human performance is micro-dosing: taking small doses of drugs such as 6–25 micrograms of LSD or 0.2–0.5 grams of psilocybin, the active compound in magic mushrooms. Users have reported that micro-dosing alleviates depression, cluster headaches, addiction to smoking, and ADHD. Dr. James Fadiman is a leading micro-dosing researcher who has published research involving 1,500 participants to date.

In his 2011 book, The Psychedelic Explorer's Guide, Fadiman contends:

  * That psychedelics have value for healing and self-discovery;

  * That LSD has improved scientific and technical problem-solving;

  * That ultra-low doses enhance cognitive functioning, emotional balance, and physical stamina;

  * That 23 million people have experimented with LSD;

  * That 600,000 people in the U.S. alone would try LSD for the first time in 2011;

  * That micro-dosing can potentially replace addictive antidepressants, anti-anxiety medications, and mood stabilizers.

Modern-day neurohackers like Jason Silva promote the use of psychoactive compounds as an aid in the quest to answer the big questions of life. In the past, shamans used peyote, ayahuasca, ibogaine, mushrooms, and more to open their consciousness to a greater reality. Drawing on lessons learned over millennia, modern neurohackers like Silva employ both ancient and modern psychoactive compounds towards the same spiritual ends.

The history of psychoactive compounds teaches us something important about ourselves. Human beings have a built-in avoidance of pain. If that pain becomes too unbearable, we will take whatever means are available to escape its clutches, even if doing so involves terrible unintended consequences such as addiction. At the same time, we also have an inherent desire to find meaning in life, especially in light of our mortality. Psychedelics have been an integral part of both of these journeys. For the knowledge that psychedelics unlock belongs not to the rational and logical realm, but to the realm of intuition. And that realm is not yet amenable to having its wisdom captured. It is that realm which this work intends to open.

Humanity's long, meandering journey with psychoactive compounds has played a role in shaping modernity. Both their cognitive impairment effects and their ability to unlock new creative insights have affected the ideas and values built into the foundation of modern culture. The historical and scientific journey of this chapter has shown us that while the normative experienced reality of a human being may seem absolute, the distortions, both harmful and beneficial, induced by psychoactive compounds can be interpreted either as distorting this normative experience or as removing filters from it to reveal something deeper. If they indeed reveal something deeper, then they expose the relative nature of our normative experience of reality. In this regard, we have to question philosophically what truth really means. If truth is tied to this human body, with its specific evolved sensors, then how can we apply it to other species' experience of reality? In what sense is something true for a human being also true for a bacterium, an ant, or a bird? It seems our exploration opens the door to many more profound questions.

One of the secondary aims emerging from our work is to develop a psychometric method to assess and measure the change of perspective caused by any psychoactive drug. Such a metric could have much practical value. For example, it could tell us whether a drug user has exceeded a safe level for driving a car or operating machinery, or whether a drug has beneficially enhanced awareness.

The primary aim of our work, however, is to combine modern technology and ancient wisdom to make better decisions, without the need to take drugs. The Enlightenment taught us to see the world only through the lens of reason, and Western culture, through capitalism, industrialization, and extraction based on science and technology, has excelled at this. The result is not what we expected, however: we have created a world of gleaming human structures at the expense of a decimated natural and social world. It is time to rebalance how humans relate to the world, to rebalance our lopsided reasoning process by reintroducing intuition as the master and subduing reason to its rightful place as its servant.

# ADDING WARMTH TO COLD HARD FACTS

#### "The truth is more important than the facts."

FRANK LLOYD WRIGHT

In light of our earlier analysis of the widespread usage of opiates in pre-Enlightenment Europe, an intriguing possibility arises: is it possible that the knowledge discovered during the Age of Enlightenment was itself influenced by opium? Could the Age of Enlightenment be analogous to the psychedelia movement of the 1960s, or to Hitler's drug-fueled Blitzkrieg across Europe in WWII? The Age of Enlightenment was undoubtedly a much-needed response to an age when the authoritarianism of Church and Kings had caused untold harm. However, did the regular consumption of opium-laced laudanum fuel creatives to new, unimagined heights of expression, and ordinary people to heights of pain avoidance or cognitive impairment? Were opium eaters emboldened to the cause of justice? Mark Twain wrote: "The very ink with which history is written is merely fluid prejudice." Now, modern science is revealing the role that intuition plays in complementing analytical reasoning. Both are heavily affected by drug use. At the same time, neither intuition nor intelligence can exist in a silo if valid reasoning is to occur.

Centuries of rational thinking have created a world fundamentally shaped by Enlightenment philosophy. Nowhere today does reason excel more than in the rationalistic fields of business, science, and technology. As a result, our modern world is built on countless inventions created out of mathematical calculations and the mechanical application of business principles designed to maximize profits for shareholders. Renowned psychologist Daniel Kahneman spelled out the relationship between intuition and logical reasoning in his groundbreaking book, Thinking, Fast and Slow, which summarized his decades of research with his partner Amos Tversky on intuitive and logical thinking. Incidentally, this work also launched the field of behavioral economics.

In a nutshell, Kahneman and Tversky investigated and classified common human errors that arise from heuristics and biases and summarized them in a blandly titled framework called "System 1 and System 2". Kahneman defined System 1 as the brain's fast, automatic, intuitive approach, and System 2 as the brain's slower, analytical, rational approach. Kahneman saw System 1 as the more influential of the two, steering System 2. Kahneman's System 1 and System 2 cut across prior categories. One cannot simply say that System 1 is irrational, because it is often logical. Conversely, slow System 2 thinking can occasionally produce poor and even irrational results. It is only recently that we are beginning to realize that the prioritization of System 2 thinking, which reduces everything to an equation, may not be the panacea that Kant and other leading figures made it out to be.

Modern capitalism is based on the principle of homo economicus, the rational agent. Production plants apply human motion studies to production lines that turn out rationally designed products for homo economicus. Perfect, right? Wrong. Two centuries of reductionist industrial capitalism have resulted in significant unintended consequences: populist uprisings, authoritarian regimes, biodiversity loss, freshwater shortages, peak resources, the highest inequality rates in history, and climate impacts, to name a few. It is clear that rational thought without intuition leads to unacceptable imbalances.

At the same time, Kahneman's fast and intuitive System 1 is peppered with all manner of cognitive biases. Psychologists keep discovering them, and they now number in the hundreds.

Kahneman does not dismiss intuition outright, though. Instead, he argues that it can effectively complement analytic reasoning, but only if it is used correctly. The intuition of a domain expert is entirely different from that of a novice. The domain expert has the advantage of years of practice and experience that allow the expert to formulate a high-quality response quickly.

Studies by Gerard Hodgkinson at the Centre for Organizational Strategy, Learning, and Change at Leeds University unpack intuition to give us insights into how it works. He cites the case of a Formula One driver who braked sharply when nearing a hairpin turn without knowing why. As a result, he avoided running into a pile-up caused by an accident up ahead. Psychologists interested in how he was able to do this tested him with a video of the event and discovered that he had subconsciously registered that the crowd that usually cheered him on was instead looking in a different direction with a static, frozen gaze. This was the cue that something was wrong, and the driver responded to it immediately. Hodgkinson concludes that neither mode of thinking is better than the other, and that both are needed for effective decision-making.

Neuroscientist Valerie van Mulukom of Coventry University agrees with Hodgkinson that intuition has an important role to play. The current model of the brain is as a predictive processing system that constantly compares incoming sensory information and experiences with stored memories and knowledge to predict what will happen next. This comparison occurs automatically and subconsciously in real time, and when a significant mismatch is detected but has not yet reached a conscious level, it produces the feeling we call intuition. A recent meta-analysis investigating the relationship between intuitive and analytical reasoning shows that the two are not correlated and do not exist on opposite ends of a bipolar spectrum. This means that even when you think you are engaged in purely analytical System 2 thinking, System 1 intuitive thinking can still be happening subconsciously. Furthermore, Albert Einstein was a firm believer in intuition and credited many of his significant discoveries to intuitive thinking.

While it is easy to say that intuitive thinking is sloppy and imprecise, Mulukom cites a study showing that too much analytical thinking can lead to poor decisions as well (Wilson et al., 1993).

Recent studies on identity politics conclude that our group identity is stronger than reasoning and causes us to cherry-pick data to support our group's position. Thus, in moral dilemmas, analytical thinking is sometimes referred to as the "press secretary," which comes up with post-hoc justifications for firmly entrenched moral positions. As usual, it is not a black-and-white case of one being better than the other. Both intuition and analytical thinking are required in appropriate amounts for most decisions, though they need to be carefully harnessed to ensure clear decisions are made.

One of the most famous stories illustrating the power of such domain-expert intuition is that of Stanislav Petrov, a lieutenant colonel of the Soviet Air Defence Forces known as the man who single-handedly saved the world from nuclear war. Petrov is the central figure in a false nuclear alarm incident that took place on September 26, 1983. At the time, Petrov was the duty officer at the Oko nuclear early warning command center. While he was on duty, the early warning system's radar screen lit up with five incoming intercontinental ballistic missiles launched from the United States. In the few tense moments that followed, Petrov had a decision to make. He could alert authorities higher up the chain of command of the radar screen's report of incoming nuclear missiles, or he could disobey his orders and protocol. His gut feeling told him something was wrong. He didn't know what, but his years of experience told him that this didn't make sense. In the end, Petrov did nothing. Going against Soviet protocol explicitly written for such situations, he disobeyed orders and told his superiors nothing. As he watched and prayed in the moments following his decision, the blips on the radar screen reached the moment of projected impact. Then nothing. Follow-up calls indicated to the command center that nothing had happened. Petrov had made the correct guess, and he is now credited with preventing an erroneous retaliatory nuclear strike on the US and its NATO allies, which could have resulted in large-scale atomic war.

Post-incident investigations confirmed that the early warning system had indeed malfunctioned. The investigation revealed that the false alarm was caused by a rare alignment of sunlight on high-altitude clouds above North Dakota and the Molniya orbits of the Soviet satellites. In the post-incident briefing, Petrov explained that his gut feeling was based on the knowledge that a US strike would be all-out, so five warheads seemed inconsistent. The warning system was also newly installed, so he had not gained enough experience with it to fully trust it; the ground radar picked up no corroborating evidence; and the warning messages passed through 30 layers of verification too quickly. In Petrov's action, we see the combination of both domain-expert intuition and analytic reasoning producing the right response.

When it comes to voting, as voters analyze their options, personal beliefs combine with perceptions of a candidate's fitness (historical performance, competency, lineage, party, and background) to determine a final decision. For some voters, even a candidate's physical appearance, dress, and bone structure may affect the decision. For instance, various aspects of facial features and expressions have been found to influence perceptions of trustworthiness. These intuitive senses may hark back to our evolutionary history, when specific features alerted us to danger and increased our fitness to survive. We have to remind ourselves that such instinctive responses, though they may still arise naturally, may be out of context in a modern system.

However, we are trying to apply our intuition to a distant decision. Typically, we are represented by a person who is three to four degrees of social separation away. Our intuitive prowess diminishes with such indirect measurement, and as a result our intuitive awareness is no longer as applicable a tool. The difficulty of applying a metric at a distance to intangible, qualitative values (such as "trustworthiness," "drive," or "charisma") reduces the value of intangibles in decision making and shifts the priority to variables that can be measured with a greater degree of confidence. In this way, intuitive ideas, which cannot be measured, lose out to ones that can. In a modernity that grew out of the Enlightenment, numbers alone bestow the legitimacy that makes ideas worthy of consideration.

Reflecting the powerful influence of Enlightenment thinking at the end of the 19th century, Francis Galton stated that "until the phenomena of any branch of knowledge have been submitted to measurement and number, it cannot assume the status and dignity of science" (Galton, 1879). American psychologist James McKeen Cattell stated that "psychology cannot attain the certainty and exactness of the physical sciences unless it rests on a foundation of experiment and measurement" (Cattell, 1890). Galton's firm pronouncement was built upon the work of his predecessors, especially the medieval scholar John Duns Scotus and the German philosopher Immanuel Kant. Indeed, science naturally expands from what is known to what is unknown. Hence, it grows into the unknown and absorbs the intangible into the tangible. New instrumentation based on novel physical concepts of measurement brings a once-intangible variable into the purview of science.

With measurement, once purely intuitive ideas gain the quantifiable validation that makes them legitimate scientific concepts. In our post-Enlightenment era, it is experimental validation that moves such stories from mythology into the realm of fact.

As scientific research reveals the specific mechanisms behind our intuitive senses of the world, these insights can help us remove the bias that the lack of research has created against intuition. For example, our sense of smell is an evolutionary adaptation that confers survival advantages. We've all heard stories of how animals and even some people can smell fear, or how mothers have a special bond with their children. Scientists from Germany, Canada, and Sweden have found that the sense of smell is a strong signal that bonds mothers with their newborns. fMRI scans showed that new mothers' thalami lit up more than those of women without children when smelling the cotton undershirts of newborns, suggesting the mothers' heightened attention (Lundstrom et al., 2013). Olfaction researcher Katrin T. Lubke suspects that mothers imprint their chemosensory signature onto their unborn children through the amniotic fluid, enabling newborns to detect the unique scent of their mother.

In a 2015 experiment, newborn babies turned their heads towards scent pads from their mothers for twice as long as towards those from a lactating stranger.

In 2015, European researchers using electrodes measured the facial expressions of subjects who sniffed scent samples from people who had watched a variety of videos. The scent of volunteers who had watched scary videos evoked an apprehensive facial expression in the subjects, happy videos evoked a smile, and disgusting ones elicited an expression of revulsion. In another fMRI study, smelling the sweat of first-time parachute jumpers produced significant activity in subjects' left amygdala, suggesting a fear response. These olfactory signals, often interpreted as intuition, may be indicators that warn us of impending danger.

Olfactory research is also putting old myths to rest, replacing them with more nuanced findings. For instance, how often have we heard that dogs have a keener sense of smell than humans? This may be true, but only for a limited range of stimuli. Biologist Matthias Laska's research involves a cross-species comparative analysis of odorant detection spanning decades; he has found that dogs have a better sense of smell than humans, but only for specific types of aromas, such as the fatty acids emitted by meaty prey – the scents pertinent to a dog's survival. Humans, it turns out, outperform dogs when it comes to smelling plant aromas, something beneficial for our ancestors seeking out fruit.

A University of Chicago research study tested a group of women who were asked to smell t-shirts worn for two consecutive nights by male subjects. The study found that the women were able to accurately choose their closest genetic match based on scent alone. In another related study, researchers at McGill University demonstrated that smelling body odor activates the dorsomedial prefrontal cortex, a part of the brain associated with recognizing family. Other research shows that women prefer potential partners who are genetically related, but not too related. Taken together, these results tell us that our refined sense of smell is a tool women use to avoid choosing poorly matched partners – another form of measurable intuition.

The olfactory circuits are a classic example of how (bodily) intuition trumps reason. A 2014 study revealed that humans are capable of distinguishing a spectrum of a trillion different odor permutations; yet we are armed with only a small supply of words to describe aromas. This means that our intellect can discuss only a tiny fraction of this enormous olfactory space. Since the olfactory nerves do not connect directly to the thalamus, but instead to the cortical areas responsible for emotions and memories, smells can trigger these feelings without our conscious awareness. Neuroscientist Johan Lundstrom's research has led him to conclude that evolution has given us a highly refined sense of smell that science is only beginning to reveal. This indicates that our intuition is highly valuable and should work hand in hand with our intelligence. While we can always choose to ignore the intuition emerging from our sense of smell, it may come at a price.

Professor Sarah Garfinkel is a neuroscientist at the University of Sussex who researches interoception, the awareness of internal bodily sensations such as an increased heartbeat, headaches, knots in the stomach, dizziness, or hunger. Interoceptive signals travel along neural or blood-borne (humoral) pathways. When we don't feel well, interoceptive cues play a large part in those feelings. Such signals have evolved to provide feedback to the brain that increases our chances of survival. However, like scent, this is rarely a conscious awareness; more often it is tied to intuition.

This connection between the signals of our internal physiology and our emotional state was recognized as early as 1884, when William James put forth the argument that our "feeling states" are a product of our physiology. In other words, James argued that fear doesn't cause our heart to beat faster; rather, our accelerated heartbeat is the source of our feeling of dread. The visceral structures of the body are its internal organs: the heart, lungs, stomach, intestines, kidneys, liver, and others. These organs contain sensory nerve endings that relay signals to the central nervous system but rarely enter conscious awareness. Occasionally we do experience our internal world through sensations such as heart palpitations, throbbing headaches, racing pulses, abdominal cramps, colic, or butterflies in the stomach. Today, doctors use patient reports of visceral pain as an important intuitive diagnostic tool for potential disorders of the internal organs. With our intelligence, we have become better at identifying the source of this knowledge: scientists studying interoception have identified a region called the anterior insula, located at the center of the brain, as a critical processor of both emotions and internal visceral signals.

Garfinkel and other neuroscientists researching interoception have performed experiments demonstrating that the neural and mental representations of internal bodily sensations are integral to the experience of emotions. In the lab, researchers assessed enhanced interoception by testing subjects' ability to sense their internal physical sensations. The research shows that there is indeed a positive correlation between sensitivity to interoceptive signals, greater activation of the insula during interoceptive processing, enhanced grey-matter density in the anterior insula, and the experience of greater emotional intensity. This means that many of our sensations are processed through our intuition, not our conscious minds.

Furthermore, researchers have discovered that our internal bodily signals can be quite nuanced. For instance, we all know that when we experience the emotion of fear, our heartbeat increases, but did you know that our heartbeat slows down when we are in a state of anticipation? Different patterns of heartbeat are characteristic of different emotional states. This is just one finding that hints that our minds and bodies are intrinsically coupled.

Experiments in the field of interoception have also demonstrated empathy effects through external synchronization of interoceptive states. For example, in an experiment in which the heart rates of firewalkers and their observing spouses or partners were measured, the observer's heart rate matched that of their partner. This is one more example of measurable intuition. In experiments involving manipulation of pupil sizes in images of people, reducing the pupil size in the image caused a physiological reduction in the pupil size of the observer as well. Reduced pupil size is an indication of sadness, so sadness was transmitted subconsciously through the image. Such experiments show that interoception is essential both for awareness of our own internal state of wellbeing and for awareness of that of others. Interoception research concludes that a healthy community requires individuals who are sensitive to their interoceptive states.

These interoceptive signals also play an essential role in intuition and decision-making. Intuitive decisions are often made by "gut feeling," whose basis we have been discussing. For example, stock market traders often trade by gut feeling. As reported in their 2016 paper, Garfinkel and colleagues constructed an experiment with high-frequency stock traders on the floor of the London Stock Exchange to test the connection between risk-taking decisions based on this intuitive sense, the gut feeling, and interoceptive signaling.

They found that stock traders who use their gut feelings are indeed more sensitive to their heartbeats than matched controls from non-trading populations. There was also a good correlation between their interoceptive skill and their financial profitability, suggesting that intuition is essential to high-quality decision making.
These examples illustrate how scientific research is slowly revealing that, rather than being vague, unquantifiable feelings, intuitions are based on sensory signals from the outer and inner worlds, the result of millennia of evolutionary adaptation. So even when we are unable to articulate them except in vague, unmeasurable emotional terms such as "I sense danger" or "something doesn't feel right," they are still grounded in a physiological system honed for survival. Such research lends credibility to the hard science behind the signals that underlie our intuitive senses.

However, the challenge of intuition has always been the inability to measure it. Unlike scientific realists, who claim that a scientific concept may exist in some metaphysical realm, scientific operationalists elevate measurement to a philosophically distinct category of scientific activity. Operationalists do not admit a scientific concept unless it can be measured. Hence, from their perspective, intuition could not scientifically exist unless it could first be measured. In the operationalist view, many scientific concepts are defined by measurement, and measurement takes logical priority over concepts that cannot be measured. In other words, when something can be communicated and understood accurately between people, it eclipses the importance of something that cannot be described well, something without an agreed-upon metric.

The representational view, that measurement involves a distinctive relation of numerical representation, sees measurement as a hybrid of the empirical and the conventional. This raises the spectre of the apparently "unreasonable effectiveness of mathematics in the natural sciences" (Wigner, 1960). When scientists set about devising practical, standardized procedures for measuring, it is precisely these real numbers (ratios between unknown magnitudes and the unit adopted) that they attempt to identify. In this process, the most important factor distinguishing measurement from other methods of scientific inquiry is the context of application: quantitative attributes and the ratios they sustain.

After the Second World War, new regulations for researchers emerged in the U.S., as documented by Schorske (1997) and Solovey (2004). These new requirements led the human sciences (psychology, economics, and sociology) to imitate the quantitative rigor of the physical sciences (e.g., biology, chemistry and physics). Heightened public perception of psychology's laboratory and methodological rigor (the ability to measure quantitative value) maximized funding opportunities under the new, post-war dispensation. However, new funding policies forced the qualitative methods previously employed by psychological researchers into decline. This shift occurred within a new cultural paradigm that evolved from the Enlightenment period of the 17th century, which privileged quantification and associated hard numbers with objective clarity.

If psychology and other qualitative social science disciplines desire scientific legitimacy, it seems that practitioners must limit their inquiries to quantifiable, objective phenomena and facts. Today, the desire for such reductionism has filtered into many areas of society, including democratic institutions and the government's legal backbone. Attempts to quantify a voter's decision-making process have overshadowed investigations into some of the equally valuable qualitative variables. After decades of this, our post-Enlightenment Western thinking now habitually assumes the authority of quantitative over qualitative values. This bias is particularly evident in instances where the choice of quantitative values over qualitative ones does not make sense. If the only way for a reductionist-dominated world to operate is on measurable facts, then perhaps quantifying the qualitative is the solution?

Science also seeks to discover universal governing principles in nature: structural operations, attributes, and their interrelations (causal or otherwise). Where only qualitative information exists, science is limited in its predictive power; it cannot currently encompass all possible variables. For example, what we call intuition is typically considered an intangible, but as we have seen, it may itself be a highly evolved biological sense that confers evolutionary advantages. Only when it undergoes scientific scrutiny is it transformed from folklore into a consistent, observable pattern that we believe we can rely upon. When intangible values become measurable, significant advancements in knowledge follow, because accurate measurements reveal previously unseen patterns of behavior. This provides the measurer with a significant advantage.

Quantifying what was previously qualitative is in fact how modern science progresses. Often, it takes the definition of a new idea to move fuzzy intuitive notions into the well-defined world of the measurable. Once that mapping has occurred, mathematical patterns can be revealed. Thus, quantifying the currently qualitative variables of political science may indeed provide solutions to the problems we face in our current iteration of democracy. One economist's rough attempt to measure the qualitative (intangible) value in the US economy resulted in a three-trillion-dollar asset valuation. What, then, is the value of intangible – but genuine – "social capital," and what role does it play in civic engagement and democratic participation? Our inability to quantify this intangible value and integrate it into decision making means that we cannot assign a value to it, and so it simply does not appear on our radar. Our decisions are subsequently made without it, resulting in much poorer decisions.

While science struggles to find a way to quantify intuition, its very definition – the ability to understand something instinctively, without the need for conscious reasoning – is the modus operandi of all other species. Intuition is built into the DNA of all living species; it is how even we humans conducted our lives before advanced symbolic reasoning.

As we have seen, psychoactive drugs can influence culture in significant and unpredictable ways, skewing our perceptions and giving us a different lens on reality. Depending on the individual, that lens can be destructive and lead to an endless cycle of depression, addiction, violence, and alienation, or it can be exhilarating and stimulate new insights about life that lead to beneficial paradigm shifts. These altered states of reality affect not only the individual drug taker but also the surrounding social environment. Some users claim that drugs expand their awareness, allowing them to bring insights back into ordinary consciousness and make positive changes, which can lead to a wide array of social changes.

However, it's not just psychoactive substances that can alter our conscious experience of reality. Often, scientific research sheds a whole new light on our normative experience of reality itself. Research from the labs of cognitive and computational neuroscientists such as Anil Seth sheds light on how our brains construct our reality. Seth's research strongly supports the notion that we don't just passively observe the world through our senses but actively construct it. On this view, we can frame waking consciousness as a kind of normative hallucination, a shared dream that we mostly agree upon.

Following this line of research, brains evolved not so much to report the whole truth as to report indicators that increase our survival fitness. Our sensory organs sample only a very narrow portion of the entire spectrum of any sensory mode. For instance, our visual sense detects only electromagnetic vibrations between the infrared and ultraviolet parts of the electromagnetic spectrum.

It cannot detect the vast frequency spectrum that lies beyond this narrow window. The world our brains construct from this limited data is not a total representation of reality but only a very limited indicator of objective reality that helps us survive better. We recognize our natural limitations when dogs hear things we cannot hear, or when lions can smell us from a distance at which we cannot detect them.

Moreover, it is not a difficult step to imagine the perspectives from which our fellow species perceive the world. This idea of an individual organism's perception of the world was first put to paper in Germany by the scientist Jakob von Uexküll, who proposed a new framework and a new word for it: he called an individual's sensory experience its Umwelt.

Putting ourselves in the shoes not of other people but of other species may sound challenging, but it is an essential step in understanding a picture of shared objective reality. Homo sapiens is a species that evolved out of nature, but our skill at modeling the world, combined with toolmaking, has overwhelmed nature with its firepower. In a very short span of geological time, we have ascended to become the apex predator of planet Earth. Unfortunately, our actions are throwing nature entirely out of balance, affecting not only other species but, ultimately, our own as well. Progress traps abound as a result of our boundless ingenuity. Our terrible stewardship of our only planetary ecosystem requires a complete rethink of the reductionist way in which we have related to nature for the past centuries.

More and more ecologists recognize the need to rebalance our species' activities with those of the others we share the planet with. There is now even a (human) movement afoot to give other species rights. In that vein, perhaps voting rights for other species are not so far off. With the growing scientific recognition of the conscious and emotional lives of other living species, it may not be far-fetched to think that animals may soon have more rights. This is one response to our reckless treatment of ecosystems, which has resulted in civilization-threatening species loss. We have harmed an enormous range of species, everything from apex predators to insects, and the effect of our industrial food production is to enslave and control other species. In light of our aggression towards all other species on the planet, our efforts to conserve biodiversity can be interpreted as a way of voting for other species.

The insight that the Umwelt concept gives us is that our human experience of reality is but one of many possible ways to experience reality. Due to our richer analytic reasoning abilities, we may intellectually know a lot about how other species experience the world in ways different from ours, but that is entirely different from actually experiencing the qualia (the subjective, felt quality of experience of the world) from a non-human perspective.

In human society, a large part of social interaction is empathetic, achieved by imputing the felt experience of the people we are with. When another human being expresses emotion through body language, we can infer their subjective interior state from it. For instance, when another person laughs, we infer happiness; when we hear loud, harsh words accompanied by harmful actions, we infer anger; and when we see tears, we infer sadness. The same is true of a range of other mental states.

Peer review, not just of scientific articles but of any idea, would be impossible if we could not mentally tune into the same mental state. If we could not impute internal states that we can never directly experience but only posit, we could not carry on any meaningful dialogue with another person. Language would be rendered useless. I can only write these words if I can predict, to some degree of accuracy, your mental experience when you come to read them. With another member of our species, we can relate to their internal qualia because of a shared Umwelt. Extending that beyond humanity is very challenging, because comprehending the imputed states of other beings is almost impossible when it comes to a significantly different sensorium.

We have no reliable way of knowing what our pet cat is trying to tell us when it meows or scratches at an object. An animal may be receiving uncomfortable interoceptive signals as symptoms of an internal organ disorder but have no way to relay that information to us. Furthermore, the more anatomically distant a species, the less we are likely to be able to empathize with it. How does a mosquito experience life? What about a bacterium in the gut of a fly? We take our anthropomorphism as an evident and inherent quality of science, to the extent that few of us are even consciously aware of it. This built-in anthropomorphism has far-reaching consequences, for there are disconcerting questions of inter-species bioethics that would fundamentally change how human civilization behaves were they taken seriously.

We may formulate this dilemma as a philosophical question: is it even possible to know about the world except from our human perspective? It is problematic for us to imagine how we might otherwise experience nature as a non-human being. How do we see nature through the lens of non-human eyes? If knowledge (of nature) is inherently biased towards the human perspective, how can we ever empathize with other life forms? In developing a taxonomy of living beings, for instance, one of the ways we describe each species is by the unique sensory modalities it has evolved. Each species senses the world through its own unique set of built-in biosensors. Will the future scientist be able to categorize different "flavors" of the Umwelt? Even our descriptions are anthropomorphic in the strictest sense, since they are referenced to the framework of human consciousness. Perhaps there are even slight variances within the human Umwelt that should be categorized.

The descriptions are part of our uniquely human knowledge system, a cultural creation dependent on highly advanced symbolic reasoning. Few of us would argue that any of the ideas presented in this book would be meaningful to a cat or a nematode.

We must never forget the obvious but highly significant fact that it is human organisms that are describing non-human organisms. The inherently anthropomorphic lens through which we see nature is a very profound assumption. We project properties that are constructs of human consciousness onto the rest of the known universe.

These kinds of considerations bring up an interesting question. Is there one objective reality which living organisms sense in their unique ways, based on the unique configurations of their sensory apparatus, or is there just a personal universe that is unique to each living organism? This vital question draws strong comparisons to the philosophical debate between realism and pragmatism (Sharov, 2001). Members of the scientific community are primarily realists, and logical positivism is the dominant philosophy. This view is characterized by belief in an objective reality: facts correspond to things and relationships that exist in the world, and theories are collections of statements about such things and relationships.

In contrast, pragmatism does not hold to an objective reality; it posits that a subjective reality exists for an organism because it is useful for its survival. The 20th-century Baltic-German animal physiologist Jakob von Uexküll developed the idea of the Umwelt (German for "environment," but usually translated as "subjective universe") to describe this subjective space of a living organism. While most ecologists are realists and assume that all organisms in an ecosystem share the same environment, Uexküll's research led him to postulate that each organism creates its own subjective universe. The Umwelt is not the same thing as an ecological niche, because niches are objective units of an ecosystem which can be measured and quantified by an objective measuring device. The Umwelt, on the other hand, is subjective and no more accessible to direct measurement than another person's mind is, at least not yet. In this system, the practical meaning of each aspect of an organism's existence within its environment is unique and relative to that specific organism.

Humans seem to have the capacity to see beyond our own species' Umwelt and identify those of other species. As a result, we can study, survey and catalog the sensory modalities unique to each species, and understand their limitations in a way that they cannot.

As an example of contrasting Umwelts, consider the remora, also called a suckerfish, which has a commensal relationship with sharks. It therefore sees the host shark as a source of food and not as a predator. Many other fish, however, may see the same shark as a predator. Living organisms actively create their Umwelt through repeated interaction with their environment. So the Umwelt of coral polyps consists of filtering the water for their tiny prey, while at the same time countless other creatures use the vast colonies of coral as a protective ecological habitat.

Uexküll's theory of meaning parallels the semiotics (theory of signs) of Charles Sanders Peirce. From the semiotic perspective, the Umwelt is not a set of objects in the environment but rather a system of signs interpreted by an organism. Humans are unique in that we share a large portion of our Umwelts using a highly sophisticated communication system. Does our advanced reasoning ability somehow help us rise above subjectivity and become objective? For instance, while some fish may see the shark as an enemy and others see it as a friend, we can analyze how each of them regards the shark.

However, in spite of our sophisticated use of signs, we too are limited by our own Umwelt. Our five senses constrain each of us to experience the world in our unique way. The fact that so many different human cultures have created number systems with base 10 is no accident.

It's an intrinsic part of our Umwelt to experience our 5 digits on each hand and to use these to count the objects in the everyday world that we experience in our particular scale of the universe.

The Umwelt theory of Uexküll contradicts the traditional positivist school of science, which claims that only that which can be sensed is real. In the recent past, the goal of science was viewed as the discovery of various aspects of an objective real world that exists unconditionally and independently of any observer. In contrast, the Umwelt theory has sparked a healthy debate about precisely what the words "objective reality" mean. We may observe the external world of a dog from our human perspective, but we cannot access its subjective world of experience, its Umwelt. In the same way, we cannot access another person's inner experience. Taken to its logical extent, whether there is a "real" world becomes irrelevant: like the inhabitants of Plato's cave, we are only ever able to experience the shadows cast on the wall of our perceptions. Thus, the only relevant experience is the subjective experience of the organism.

To map the Umwelt, the subjective experience, of Homo sapiens, we map the experience of the world as it comes through our five senses. However, each sense is allotted a different amount of neural real estate, and each sense participates in constructing our Umwelt accordingly. The image below shows a cortical homunculus. This homunculus is a distorted representation of the human body based on a neurological "map" of the areas of our brain that are dedicated to processing motor or sensory functions for different parts of the body.

The brain devotes a great deal of this real estate to processing the nerves from the hands, the lips, and the tongue, which biases our knowledge processes toward those parts of the body. Scientists have created body models with parts scaled in proportion to the amount of cortical real estate allocated to the nerves from each area.

In a sense, the Umwelt of modern humans is unique compared to those of other species. We seem able to see the world at a meta-level, while other species lack such a holistic view. Yet being sympathetic to another species' experience of the world can teach us about our own, for each species has its own subjective experience of reality. What is relevant for one living being may not be for another. Moreover, the paucity of our knowledge of how other living beings experience reality applies not just to other species, but even to our own. Do you truly understand the people around you? In the use of language, there is so much that cannot be expressed, and therefore so much understanding that cannot be shared. The tangible, observable, and measurable universe pales in comparison to what we don't know, either individually or collectively. If we don't know how other living beings experience reality, whether they are of our species or another, our decision-making concerning them will be insufficient. Not knowing how other species experience reality, we may make poor decisions when we seek to shape a healthy system comprised of multiple individuals or species.

From an evolutionary perspective, each animal and plant species is uniquely adapted to its native environment. This includes the sensory modes each species uses to perceive the world it inhabits. The sensory organs and behaviors of each species make it a master of its particular domain. Each has its own wisdom, and therefore lessons we can learn from it.

Whether it is herring that communicate through farting or birds that use magnetic sensing to guide their flight paths, science keeps uncovering patterns in how living organisms uniquely experience their reality. These biological qualities imply that different animals experience our shared existence in uniquely different ways. For example, the surface of human skin is experienced differently by a human, a flea, or a bacterium. For humans, our skin is a smooth continuous sheath that clothes our entire body, and when a flea bites us, we experience itchiness and irritation. For the flea, our skin is a shifting, spongy landscape that its sharp claws and backward-facing body armor are designed to grip. It is also a landscape that the flea well knows contains its food source buried just beneath the surface, and it uses its piercing mouthparts to penetrate the epidermal layer and draw out that food.

Certain bacteria, in contrast, spend their entire life immobile on the skin surface. The Belly Button Biodiversity project (Hulcr, 2011), conducted to demonstrate the existence of beneficial strains of bacteria on the human epidermis, discovered up to 1400 species of bacteria living within and around the human navel.

Bacteria produce and excrete a variety of chemical compounds that help the colony attach and expand. This extracellular matrix is composed of DNA, proteins, lipids, mineral scaffolds, and polysaccharides. The resulting biofilm creates livable conditions in an otherwise hostile space. If we truly understood the processes by which bacteria accomplish this, we might learn to improve our own environments.

Unfortunately, what has not been measurable by scientific instruments has often been ignored, trivialized, or has gone unnoticed. Thus arose the essential work of Biosemiotics.

As a result, work in biosemiotics is challenging this view and gives us two new ways to interpret intuitive data.

First, the growing body of scientific evidence may help us place more trust in our intuition, viewing it in a new light as an essential result of millions of years of evolution. We have been wired for fitness, and this includes the signals we receive and respond to instinctively and subconsciously. Following this change in attitude, a growing number of neuroscientists are engaged in intuition research. We now know that instead of being a vague, undefined process, intuition is the outcome of in-depth information processing in our brain.

The fact that we do not clearly understand how it works should not lead us to dismiss it with a blanket judgment.

Kahneman's fast System 1 is a predictive machine. It compares incoming sensory information with stored knowledge and memories to rapidly predict what will happen next.

Intuitions are mental processes that occur when the brain has made a significant match or mismatch that hasn't yet bubbled up to conscious awareness. Instead, it manifests as a "gut feeling," the informal name for interoceptive signals within our bodies, and emotions are activated to alert consciousness that an important decision must be made. Even as Kahneman warns against the many cognitive biases that plague intuition, he concludes that intuitions cannot be thrown out.

The value of your intuition comes down to your experience in an area and your ability to recognize when a cognitive bias is manifesting. In other words, the more experience a person has in a particular area, the more predictive the intuition that emerges. A meta-analysis finds that intuition and analytic reasoning are not opposite ends of a bipolar spectrum, as has been widely assumed, but are independent constructs (Wang et al., 2015).

In fact, we could almost say it is an intuitive finding that the two thinking styles are complementary in the most effective decision-making. In scientific research, for instance, a project may kick off with intuitive knowledge that is then validated through rigorous analytic reasoning. Throughout any project, we usually have periods dominated by intuition and others dominated by analytic reasoning; often the two complement each other and alternate. When we are stuck on a problem, it is intuitive insight that comes to the rescue. Once the intuitive insight is out, discussed, and accepted as an excellent lead to follow, the validation process takes place to make sure it is a valid solution.

While intuition has gotten a bum rap as sloppy or inaccurate, analytic reasoning can also lead to poor decision-making (Wilson et al., 1993).

Many financial and other crises have resulted from poor analytic rationales that led to erroneous decisions. It is simply not the case that one form of reasoning is superior to the other. Many variables determine the quality of a decision, but effective decision-making is based on a dynamic interplay of high-quality intuitive and analytical reasoning at the right time. Second, finding ways to measure qualitative data that was previously not quantifiable not only validates our intuitions but also brings them into mainstream scientific analysis.

All this theoretical consideration leads us to ask: how can we quantify and operationalize our valuable intuitive knowledge to improve our decision-making, democratic, or voting processes? The unfortunate reality is that though we may possess good instincts and a good imagination, such intuitive talents, critical to effective decision-making, currently never show up in statistics or the final balance sheet of a corporation. This is because, though they are acknowledged to be real, there is no way to quantify them and hence no way for the current economic system to evaluate them appropriately. In the chapters ahead, we explore the possibility of creating a simple new metric that can effectively bridge the intuitive and rational sides of reasoning. This new way of quantifying the vast stores of intuitive knowledge all around us offers the possibility of converting this idling resource, whose potential has hitherto been untapped, into a form that is amenable to the normal operations of analytic reasoning. In effect, we offer a new way to convert previously intuitive ideas into cold hard facts.

So how do we measure various information types from multiple sources, quantify the intangibles, and compare and combine them? A new kind of metric is required. The next chapters explore this metric.

Fig 8. Homunculus sculpture with body distorted to match size of cortical real estate. Source: Price-James, 2019

# THE SEARCH FOR A UNIVERSAL METRIC

In our quest to define a metric to quantify intuition, and to reduce our reliance on flawed information, we might ask what such a unit of intuition would look like, and where we might start looking for clues. First we must define intuition. Intuition is our ability to understand something instinctively, without the need for conscious reasoning. It is, therefore, a psychological process, and consequently a biological one as well. It is instinctive, yet a higher form of instinct, one that draws on vast stores of symbolic memory. As neuroscience research reveals, the neural activity of intuition involves large columnar clusters of neurons in the neocortex. Billions of neurons take part in this process, processing the sensory signals of the external world and extracting their regular patterns.

The inner world of consciousness is built upon the symmetries of the brain, an organ found in the outer world alongside all the other symmetrical structures surrounding it. Indeed, not only is the world full of symmetry, but human bodies, being part of that world, embody that symmetry as well.

As our brain is built on that foundation, our conscious mental operations can be seen as arising from it as well.

Symmetries seem all-pervasive and are found across all species, making them, in a sense, independent of the many possible Umwelts in the world. Symmetry is a rule independent of the human race. We see radial symmetry manifesting across the universe, at all scales from the tiniest to the largest, and throughout many species of flora and fauna.

Indeed, symmetry can be considered a glue that binds us all together. Since symmetry is so universal, its expression invariant across all forms, it would seem natural to seek the fundamental laws of physics through symmetry. Indeed, our culturally based knowledge is a repository of observed patterns, characteristic patterns reflecting the symmetry encoded throughout nature at every level.

One reason for pursuing these ideas is that our psychology is fundamentally tied to symmetry. Psychologists have demonstrated that visual compositions with symmetries, such as bilateral or vertical symmetry, are more readily detected. In 1897, the scientist Ernst Mach conducted an experiment using irregular shapes that demonstrated people could easily perceive bilateral and radial symmetries. He proved that we could detect symmetry before recognizing the pattern. This led Mach to conclude that symmetry is computed at a shallow level of image representation. Magnetic resonance imaging research has shown that when we look at an object, our brain detects visual symmetries in less than 50 milliseconds (Tyler). This is one of the quickest reaction times the brain has.

Pythagoras established the famous Pythagorean school in Croton, Southern Italy around 530 B.C.E. One of the unique results of this school was establishing the beauty of mathematical symmetries.

The Greek geometer Euclid was equally influential in the exploration of symmetry in space. Like Pythagoras before him, Euclid took logic as an organizing principle to another level, systematizing 465 known but disparate theorems and tying them together in a work revered for its beauty as much as its power. His Elements found the commonalities, the symmetries, among all these ideas and combined them into a single beautiful system. Euclid's work established Euclidean geometry as well as the axiomatic method and logical deduction, all fundamental elements of modern mathematics.

Across the ages, scientists have explored similarities and commonalities, and in the process have shown that symmetry is not only socially and psychologically necessary to humans but is also present to a great degree in the structure of the universe itself. Thus it makes sense that we are fascinated by it as a species, consciously and unconsciously, which in turn has led to further investigation.

One of the recent attempts to quantify the universal nature of symmetry came from the field of aesthetics. The modern study of aesthetics found one of its greatest proponents in the late Harold Osborne, founder of the British Journal of Aesthetics. Osborne devoted his life to studying aesthetics and placed a great deal of emphasis on its perceptual mechanisms. To Osborne, the aesthetic quality of beautiful works of art was a direct result of the heightened state of awareness and arousal they evoke, inducing within us alertness, vitality, and wakefulness. The layperson often cannot appreciate art for art's sake, but Osborne thought that just as reason is cultivated for its own purposes in fields such as logic, pure mathematics, philosophy, and pure science, perception must also be developed for its own sake. He believed that somewhere in this cultivation lay a seed that could lead to a new understanding of symmetry.

As Osborne recognized, if order is the key to understanding beauty, then perhaps there are fundamental laws of attraction that apply to all beautiful objects. Long before him, Alexander Baumgarten (1714-1762) had been the first to propose a new science based on laws of beauty: aesthetics. In his book Aesthetica (1750), Baumgarten argued that the appreciation of beauty is the ultimate goal of the aesthetic experience. Unfortunately, this branch of inquiry produced little that could be of direct use to us.

However, that wasn't the end of the exploration of beauty.

Other attempts to explore symmetry as a phenomenon have occurred in art. Art historians have long observed and commented upon the symmetries found in great works of art, but it wasn't until 1963 that Charles Bouleau's classic, The Painter's Secret Geometry, revealed the hidden mechanics of art appreciation. Bouleau takes us behind the magician's tricks: secret symmetries such as patterns, ratios, and vanishing points that great painters throughout history employed in their artwork. Art appreciators who have read Bouleau's book are astonished at the geometric overlays within the patterns that subconsciously attracted them to a piece. So insightful was his book that geometric practice still pervades today's art world.

The book leads us to ask: is the division between art and science a false one? Einstein, for one, said, "The greatest scientists are always artists as well." Foremost among those who embodied this dexterity was the interdisciplinary genius Leonardo da Vinci. The famed art historian E.H. Gombrich argues that Da Vinci, in spite of his prodigious scientific output, took up his diverse scientific and engineering interests in anatomy, biology, civil engineering, and astronomy in order to elevate his artistry as a painter. This is further evidence that intuition and intelligence have to work together.

Pythagoras, Euclid, Epicharmus, Da Vinci: these luminaries of Western thought explored the mysterious intersection of symmetry, mathematics, and aesthetics. Through these explorations, they have shaped thinking across millennia. Their ideas still reverberate throughout the world today, in science, math, art, literature, medicine, and other fields.

The symmetry found in the lessons of Pythagoras was only the beginning of the influence of symmetry on modern thinking. Our modern world rests upon the intersection of these ideas.

Newton and Leibniz formulated the laws of nature as differential equations, but this changed utterly with the shift to symmetry.

Symmetry, as we have shown, is a complex and universal relationship between many things in the universe. Molecules seek balance, as do plants, as do the stars. Humans find the most symmetrical things the most beautiful. Planets tend toward symmetrical roundness, and snowflakes toward their six-fold forms. Since this tendency permeates the universe, we can readily find other phenomena that mirror these relationships.

What do all newborn mammals have in common? Offspring are genetically coded to find security with their parents. As a baby begins its new life in the world, its relationship with its mother is critical for protection and sustenance. Offspring thus become intimately familiar with the shape of their parents, and since those parents are symmetrical, symmetry becomes entrenched as one of the main markers of safety. This is an essential element of survival, as the child begins to recognize this symmetry in other living organisms in its environment as well. These easily recognized symmetrical patterns create a code for safety, food, and danger.

We can find movement in symmetry, and food through unique symmetrical shapes. We can discover ideas by comparing symmetrical shapes and discuss abstract ideas more clearly through the same concepts. Symmetry, because it is so universal, is a powerful all-purpose discovery tool.

Then, when we go out and look for symmetry, we can find it embedded in reality at every scale, from spiral galaxies to the spiralling paths of subatomic particles. The same applies to living systems. We can observe bilateral symmetry of most animal species, insects, leaves; rotational symmetry in eyes, jellyfish, worms, and flowers; and helical symmetry in scale patterns in pinecones or the double helix of DNA, to name a few. Fractal symmetry, a self-similarity between a part of the whole and the whole itself, appears in numerous plant and animal species at every scale.

As living organisms grow, certain structures repeat. Our symmetrical bodies are built from symmetrical genetic transcription laws that, in turn, reflect symmetrical molecular structures such as DNA. As we have said before, there is much evidence for the significant impact symmetry has on the human race every day.

Related to this is the idea of invariants. In recent years, conventional science has discovered the power of invariants, properties that remain unchanged under applied transformations. Invariants are always ratios and are always symmetrical. In other words, symmetry is the essence of invariants. Symmetrical objects can be transformed by applying specific operations to one part of an object to create another part of it. Hence in mirror symmetry, reflecting an object across a line can recreate the same shape on the other side. In radial symmetry, patterns are duplicated at fixed angles. Thus invariance principles provide a structure and coherence to the laws of nature just as the laws of nature provide a structure and coherence to the set of events. Indeed, it is hard to imagine that much progress could have been made in deducing the laws of nature without the existence of certain symmetries.
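The idea that a symmetry is an operation leaving an object unchanged can be made concrete in a few lines of code. The following is a minimal sketch, not from the text: the point sets and function names are hypothetical illustrations. A figure is mirror-symmetric exactly when reflecting every one of its points across an axis reproduces the same figure.

```python
def reflect_x(points):
    """Reflect a set of (x, y) points across the vertical axis x = 0."""
    return {(-x, y) for (x, y) in points}

def is_mirror_symmetric(points):
    """True if the figure is invariant under the reflection, i.e. symmetric."""
    return reflect_x(points) == points

# A crude "butterfly" whose left half mirrors its right half, and a
# lopsided figure missing one wing point.
butterfly = {(-2, 0), (2, 0), (-1, 1), (1, 1), (0, -1)}
lopsided = {(-2, 0), (2, 0), (1, 1)}

print(is_mirror_symmetric(butterfly))  # -> True
print(is_mirror_symmetric(lopsided))   # -> False
```

The same pattern generalizes: swap `reflect_x` for a rotation by a fixed angle and the test detects radial symmetry instead, which is exactly the sense in which invariance under a transformation defines the symmetry.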

However, the unchanging nature of symmetry was not clearly understood until recently. Historically, symmetry and invariance were perceived as different concepts: until the 20th century, principles of symmetry played little conscious role in theoretical physics. The Greeks and others were fascinated by the symmetries of objects and believed that these would be mirrored in the structure of nature. Even Kepler attempted to impose his notions of symmetry on the motion of the planets. Newton's laws of mechanics embodied symmetry principles, notably the principle of equivalence of inertial frames, or Galilean invariance. These symmetries implied conservation laws. Although these conservation laws, especially those of momentum and energy, were regarded as fundamentally important, they were considered consequences of the dynamical laws of nature rather than consequences of the symmetries that underlay those laws. Maxwell's equations, formulated in 1865, embodied both Lorentz invariance and gauge invariance. However, these symmetries of electrodynamics were not fully appreciated for over 40 years.

The connection between symmetry and invariance was only made in the 20th century when Albert Einstein proposed his theories of relativity. Einstein's great advance in 1905 was to regard the symmetry principle as the primary feature of nature that constrains the allowable dynamical laws. Thus the transformational properties of the electromagnetic field were not to be derived from Maxwell's equations, as Lorentz did, but instead were consequences of relativistic invariance, and indeed largely dictated the form of Maxwell's equations. This is a profound change of attitude. Lorentz must have felt that Einstein cheated. Einstein recognized the symmetry implicit in Maxwell's equations and elevated it to a symmetry of space-time itself. This was the first instance of the geometrization of symmetry. Ten years later this point of view scored a spectacular success with Einstein's construction of general relativity. The principle of equivalence, symmetry, and the invariance of the laws of nature under local changes of the space-time coordinates dictated the dynamics of gravity, and of space-time itself.

With the development of quantum mechanics in the 1920s, symmetry principles came to play an even more fundamental role. In the latter half of the 20th century, symmetry has been the most dominant concept in the exploration and formulation of the fundamental laws of physics. Today it serves as a guiding principle in the search for further unification and progress.

This invariance/symmetry relationship is notable in modern gauge theory. Gauge theories have assumed a central position in the fundamental doctrines of nature. They provide the basis for the enormously successful Standard Model, a theory of the fundamental non-gravitational forces of nature: the electromagnetic, weak, and strong interactions. To be sure, gauge invariance is a symmetry of our description of nature, yet it underlies dynamics.

Symmetries such as Einstein's relativistic invariance or gauge invariance have predictive power and provide us with an essential tool for exploring the fundamental laws of nature.

Moreover, there is as yet no answer to the question of why nature should be symmetrical. These gaps in scientific knowledge force us to once again examine our most basic assumptions. Out of that examination, we find a deeper epistemology of the universe, one that contains even more fundamental symmetrical concepts describing it.

Invariance and symmetry were discovered separately, but over time they were found to share significant properties that supported each other. These shared properties lead us to believe that the universe is founded upon these two concepts. Invariant symmetry is truly a fundamental part of the universe and is found everywhere, from the quantum foam to the orbits of galaxies. This insight helped us discover a new type of predictive mathematics, one with powerful applications.

## THE PREFERENTIAL MATH OF THE UNIVERSE

Our wide-ranging exploration of symmetry as found throughout all scales and dimensions of the universe, and specifically throughout biological nature, has led us to notice the ubiquity of power laws such as Kleiber's law. Called the Newton's Laws of biology by some, these invariant laws of nature show relationships between heartbeat, metabolism, blood flow, and more, across all warm-blooded organisms. They have been used powerfully for many decades, and based on that evidence, we surmise that such power laws can be extrapolated to serve as a basis for a theory of human psychometrics. After we show you how to do that, we will describe how to represent human intuition and social capital within the same mathematical framework. This is a powerful tool that will allow us to measure the cohesion that bonds living beings together in the social networks that make up the social superorganism. Hold on as we offer a simple and intuitive description of the new symmetrical mathematics of 6-dimensional spacetime.

The fundamental premise of 6D mathematics is that the physical world can be represented by a 6-dimensional spacetime consisting of 3 space dimensions and 3 time dimensions. To the layman, this is length, width, and height paired with an equal and opposite time 1, time 2, and time 3, instead of just one directional arrow of time. This symmetry of physical variables was inspired by the many symmetries observed in nature, as we have previously discussed. With it, we hope to derive the exact power laws (such as Kleiber's Law) that can serve as the basis for a new treatment of human psychometrics, intuition, and social capital.

Currently, we have no physical interpretation of the extra two time dimensions, but that does not stop us from postulating them. This lack of physical intuition does not pose a severe problem, because the history of mathematics is replete with examples of non-physical mathematical objects that were later validated. The lesson of history is to separate the mainstream interpretation of symbols from new and useful formal mathematical systems. The rejection of mathematical symbols based on their current mainstream or common-sense interpretation has consistently impeded the acceptance of mathematical truths for years, decades, or even centuries; later on, it is a new interpretation that validates the once-objectionable mathematics. Examples of initially rejected but later established concepts include imaginary numbers, irrational numbers, and infinitesimals in mathematics, and heliocentrism, quantum mechanics, and relativity in science.

Despite any possible reluctance to develop these ideas, we will move ahead with the beauty and elegance of 6D mathematics. The purpose is to find a universal metric that all Umwelts can agree upon and, as such, to start the process of quantifying qualitative values. This was done by looking at the set of complex 4D dynamic behaviors in the universe and embedding them backwards to find the simple 6D laws from which they derive. Hence, complex behavior in a 4D world corresponds to much simpler behavior in the 6D world. The theory has proven its utility in just about any field of science, including physics, chemistry, biology, neuroscience, and social network theory; it is general enough that we could apply it to almost any problem while offering enhanced predictive insight. Although the mathematics is beyond the scope of this lay book, a note of historical interest lies with William Rowan Hamilton. Hamilton is known for spending years trying to discover a noncommutative algebra using triplets, and only discovering quaternions after he finally abandoned triplets. It is conceivable that, had Hamilton stuck with triplets of space and won the "vector wars," he might have argued, perhaps on symmetry grounds, for a triplet of time, creating a six-dimensional space.

#  THE UNIVERSAL METRIC APPLIED TO BIOLOGY

#### "...Can anyone doubt today that all the millions of individuals and all the innumerable types and characters constitute an entity, a unit? Though free to think and act, we are held together, like the stars in the firmament, with ties inseparable. These ties cannot be seen, but we can feel them." NIKOLA TESLA

Within biology, we find extraordinary expressions of symmetry. While nature has created great diversity in species, there are common threads that tie all species together in strange and unexpected ways. Why do so many animals take 21 seconds to urinate? Why do all mammals pass through similar morphological, anatomical shapes on their way from conception to birth? Moreover, why is it that no matter the lifespan or weight of a mammal, individuals of any species die after approximately 1.5 billion heartbeats? The mass of organisms, from a bacterium to a blue whale, spans a factor of 10^20, yet surprisingly all of these creatures share a number of fundamental common properties. Even though nature gave rise to increasing complexity, all of these animals follow fundamental laws, physiological invariants that span a diverse range of life forms. The new 6D mathematics is extremely useful in predicting these patterns.

One of these emergent invariant laws, which in our framework appears when 6D flat spacetime is projected down to 4D spacetime, is Kleiber's Law. It states that an animal's metabolic rate scales as the 3/4 power of the animal's mass. It is not a perfect law, but many animal species seem to follow it.
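These scaling relations are simple enough to sketch in code. Below is a minimal Python illustration of Kleiber's law and the heartbeat invariant; the normalization constants (`b0`, `c`, `k`) are illustrative assumptions chosen for the sketch, not fitted empirical values:

```python
def metabolic_rate(mass_kg, b0=3.4):
    """Kleiber's law: metabolic rate scales as mass^(3/4).
    b0 is an illustrative constant, not a fitted value."""
    return b0 * mass_kg ** 0.75

def heart_rate(mass_kg, c=241.0):
    """Resting heart rate (beats/min) scales as mass^(-1/4)."""
    return c * mass_kg ** -0.25

def lifespan_minutes(mass_kg, k=6.2e6):
    """Lifespan scales as mass^(+1/4); k is illustrative."""
    return k * mass_kg ** 0.25

# Doubling mass multiplies metabolic rate by 2^0.75 ~ 1.68, not 2:
# larger animals spend less energy per kilogram.
print(round(metabolic_rate(2.0) / metabolic_rate(1.0), 2))  # → 1.68

# The M^(-1/4) and M^(+1/4) factors cancel, so the total number of
# lifetime heartbeats comes out roughly mass-independent -- the origin
# of the ~1.5 billion heartbeat budget mentioned above.
for m in (0.03, 70.0, 100000.0):  # mouse, human, blue whale (kg)
    print(round(heart_rate(m) * lifespan_minutes(m) / 1e9, 2))  # each → 1.49
```

The point of the sketch is the cancellation of exponents, not the particular constants: any choice of `c` and `k` yields the same mass-independent heartbeat budget.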

Other interesting invariant patterns not only apply to individual members of a species but also to groups of such individuals. Some organisms live in colonies in which the division of labor is so specialized that individuals cannot survive by themselves for extended periods.

These organisms are called eusocial, and the collective is called a superorganism. The term was coined in 1789 by the father of geology, James Hutton, and was variously reincarnated in the Gaia hypothesis of James Lovelock and Lynn Margulis and in the biosphere theories of Guy Murchie and Vladimir Vernadsky. In the 17th century, political philosopher Thomas Hobbes was one of the first to popularize a human version of the superorganism in his classic, Leviathan. In the 19th century, Herbert Spencer coined the closely related term super-organic to focus on social organization in a sociological context. More recently, E.O. Wilson and Bert Holldobler popularized the term in their 2008 book "The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies," and it is mostly in this context that superorganisms have been studied ever since. Insect colonies notably display eusocial behavior: individuals of a species live much of their lives engaging in social behavior that resembles specific functions of multicellular organisms. This similarity goes beyond metaphor, however. The close resemblance to internal organs, combined with the ease of manipulating colonies, has inspired scientists to design experiments that use insect colonies to resolve outstanding problems of multicellular organisms. These superorganism colonies appear to have their own behavior, their own lives, and even their own types of death.

Fire ants are one example of how invariant symmetry applies at a larger scale. Individual fire ants cannot survive by themselves; each plays a specific role to serve the colony. For instance, some harvest food while others tend to the newborns, and they all need a queen to produce offspring for the entire colony. There are social structures and hierarchies, a division of labor into specialized functions such as resource production, task-based specialization, and "drones" being controlled by managers and queens. When faced with an existential threat, such as their nest flooding, the entire colony springs into action, each member fulfilling its unique role to defend the queen through an intricate and collective dance. Upwards of half a million worker ants will instinctively surround their queen and her eggs, linking legs and forming a floating raft with their waxy bodies, creating pockets of air for buoyancy to keep the queen alive. Such social responses are innate, suggesting that higher-level social functions are coded into their genes and direct their collective behavior as a superorganism when triggered by environmental conditions. Thus, individual actions can be seen as symmetrical, invariant pieces of the whole.

From insect populations, scientists and thinkers have begun to generalize the superorganism concept in many other directions. The more we look in nature, the more we see superorganisms everywhere, especially at a microbial scale.

The brain can be considered a superorganism. It consists of relatively simple neurons, but when they are networked together, highly complex behavior emerges.

One hot area of research sees multicellular organisms such as us as superorganisms composed of trillions of different microbe populations.

"For by Art is created that great LEVIATHAN called a COMMON-WEALTH, or STATE, which is but an Artificial Man; though of greater stature and strength than the Natural, for whose protection and defence it was intended; and in which, the Sovereignty is an Artificial Soul, as giving life and motion to the whole body"

##### Thomas Hobbes, Leviathan

Thomas Hobbes was the first person to popularize the idea of the state as an artificial man, a social superorganism. Today, the concept of superorganism has re-emerged. Like Hobbes before him, Gaia Vince, former features editor for the journal Nature Climate Change, has coined the word Homni to represent the superorganism composed of the entire population of humanity. Vince claims that Homo Sapiens is evolving into Homo Omnis, or Homni, a collective being that brings to mind the 17th century Leviathan of Thomas Hobbes.

Fig 10. Cover of Thomas Hobbes' classic Leviathan, showing the concept of the superorganism. Source: Hobbes, 1651

The cover of Hobbes' classic, Leviathan, shows a king, representing the commonwealth, with his body composed of all his subjects, who make up the cells of his body. Hobbes conceived of Leviathan as a mythical creature called the commonwealth, whose existence was motivated by the necessity of establishing rules by which individuals can live to avoid the brutish life of conflict and violence without them, a great concern in his time. Hobbes postulates a condition called the state of nature, in which each person would have a right to everything in the world, a condition that would lead to a war of all against all (bellum omnium contra omnes). To avoid this terrible state of existence, people need to agree to a social contract and establish a civil society. To do this, individuals have to cede some rights in exchange for protection. When a state is constituted this way, its laws will inevitably establish some symmetries.

Today's superorganism goes by a different name. Vince conceived of Homni, also a superorganism of society, but for a different reason: to bring attention to the outsized ecological footprint of humanity and the planetary-scale impacts that our civilization has created. The Homni of Vince is born out of the Anthropocene, a product of industrialization, population expansion, globalization, and internet communications. Homni is currently ravenously devouring planetary resources. Its insatiable appetite annually consumes 18 terawatts of power, 9 billion cubic meters of water, and 40% of global land area. It is also rapidly poisoning the biospheric, geospheric, hydrospheric, and atmospheric planetary commons. This is a very different superorganism from the one of yesteryear.

One aspect of Homni, modern industrial capitalism, has created large centralized systems supplying the needs of almost the entire population. From shipping and transportation, to communications via the internet, to industrial agriculture, highway systems, and hospitals, the superorganism is dependent on these large centralized systems. Human beings, the individual cells of Homni, are increasingly reliant on them for our lives. We can look at these transportation systems much as we would veins, supplying nutrients.

If one looks at these systems in this way, human civilization now shares fundamental properties with the more familiar biological superorganisms such as bee or termite colonies. For example, most people living in our modern world are like their eusocial animal counterparts in that each plays a highly specialized role to keep the system running smoothly. Similarly, a breakdown of Homni's central systems can have catastrophic consequences for each human being. Because of this shift to extreme specialization, very few people would have the skills to survive if a large-scale system breakdown were to occur.

This holds for most superorganisms; in small-scale tribes, by contrast, these systems are much more flexible.

The paleontologist Tim Flannery also believes that humanity can be represented by a superorganism model and, further, that we are evolving into a cooperative, interdependent species.

We can see elements of this from the communication system of the internet down to local social groups. However, like many superorganisms, humanity can expand to the limits of its system and run into trouble. If it does not understand that system and respond to it well, there can be severe consequences. The question is: will humanity become the cooperative, thinking, responsible brain and steward of the planet before it's too late?

Our modern society is undeniably global. The internet, a mere toy a few decades earlier, is now an integral part of our daily lives. Raw materials are mined in one part of the world, processed in another, and assembled in factories and sold to markets in yet other corners of the world. Just like an ant hill or a blind mole warren, there are many systems to ensure the raw materials get where they are going. Supermarket shelves display bananas from South America, mangos from Mexico or Senegal, rice from China, and cucumbers from Spain. The supply and production chains are heavily interwoven.

So, beneath the façade of individual autonomy, we are entirely and intimately dependent on each other's health and wellbeing, just like ants. Within this complex economy, most of the things that keep you and I alive are the results of other people's efforts. What this means is that each one of us has an invisible social contract with everyone else in society, regardless of class, gender, age, or culture. This means that we depend on everyone working together to make a better world, just like ants or blind moles.

Today, the individual in our culture is like a specialized cell in a multicellular organism. Though each of us appears to have an autonomous existence, none of us could survive without the larger social system, the superorganism's systems, to support us. In actuality, each one of us depends on our broader society for our autonomy.

As time goes by, individuals of an organization may die or leave, creating vacant positions that are filled by new individuals. The organization changes slightly, but to an outsider it looks the same. Human beings, as cells in the social superorganism, resemble the individual cells of the human body, which are continually dying and being replaced by new ones in an incredibly complex web.

Moreover, to our friends or family, each of us, the multicellular individual, is just a person, apart from this incredible global interconnection. The structure of our society, the human superorganism, remains alive while individuals are replaced continuously. In short, the superorganism of human civilization has a life of its own, regardless of the individual human beings who continually churn through its body.

How does this organism work? What disrupts its ideal functions? How can we predict what it needs to learn? These are the questions we will tackle.

When we look at the turmoil that the world is embroiled in today, we realize that something is not working. Many of us believe that Homni is sick because something is gumming up the societal metabolism. The superorganism is sick, and the disease is spreading throughout the body, affecting many of us, the individual cells.

Just as the failure of human organs can spell disaster for the entire body, the failure of a state can destroy the social superorganism.

When there is no longer cohesion in the social fabric of our society, our social body will begin to fall apart. If the laws of the commonwealth that Hobbes envisioned as necessary to keep the social superorganism alive fail, then the superorganism will start to disintegrate. Social scientist Robert Putnam, the man who popularized the term social capital, has been tracking its demise in the United States for decades, as summarized in his book Bowling Alone.

Drawing on nearly 500,000 interviews conducted between 1975 and the year of publication, 2000, Putnam traced the fragmenting of social capital in our families, friendships, neighborhoods, and democratic structures. Among the startling statistics his research produced are a 58% drop in attendance at club meetings, a 43% drop in family dinners, and a 35% drop in having friends over. We are having trouble trusting others.

How has modern technology, with the new variables and landscape it has created, affected social interaction? Has the internet increased or decreased social cohesion? Both. It used to take weeks for a physical letter to travel through the world's postal system and reach a destination halfway around the world. Today, email allows us to send a message in a few seconds, widening the social web we can cast. We have the potential to connect with billions of people on social media, and we can stay in touch with anyone, anywhere in the world. This has increased social cohesion. At the same time, social media has created filter bubbles, fueling polarization and identity politics and disrupting social cohesion. Indeed, wars are now fought by these means. Additionally, the virtual world may give us more relationships, but they are not as rich and satisfying as real-life ones. Dating websites allow people to meet, but some such sites promote casual sex while minimizing emotional intimacy.

In the end, the technology that enables Homni to exist is also hurting it. The internet knows no boundaries, and culture is starting to move seamlessly from continent to continent. Transportation is not yet real-time, but it compresses time enough to allow mass cultural migration and mixing. However, even as it offers freedom, for some it causes fear and hatred, which lead to violence.

However, one exciting feature of insect superorganisms, one that we may be able to apply on a larger scale, is that they seem to obey Kleiber-like power laws. This is demonstrated in a 2010 paper entitled "Energetic Basis of Colonial Living in Social Insects" by researchers James F. Gillooly, Chen Hou, and Michael Kaspari. The authors discovered that the essential features of the physiology and life history of colonies of eusocial insects (such as bees, termites, ants, and wasps) follow the same size dependencies as unitary organisms when a colony's mass is taken to be the total mass of its individuals. Colonies also grow, scale, and die (when the queen dies) much as unitary organisms do. The authors make other observations that support a superorganism view of the colony:

  * Whole-colony metabolic rate: this is not just the sum of individual metabolic rates but is approximately proportional to the 3/4 power of the total colony mass, M^(3/4), mirroring Kleiber's law in unitary insects.

  * Whole-colony growth rate: the egg production rate is also approximately proportional to M^(3/4). From the perspective of the superorganism, the queen is therefore seen as the ovary, while viewed as an individual she is an extreme outlier.

  * Colony lifespan: this is approximately the lifespan of the queen and scales with colony mass as lifespan scales with body mass in unitary insects, following a 1/4 power law: M^(1/4).

This has exciting implications for our purposes. Just what is the metabolic rate of the human superorganism? How much mass does it have?

Our intuition leads us to guess that power laws may play a key role in two areas we have just surveyed:

  * We hypothesize that we can apply theoretically derived power laws to describe the behavior of organized human populations by extrapolating the superorganism concept beyond eusocial insects, characterized by a queen, to human communities of sufficient size and complexity. As social capital is a critical parameter for gauging the health of the social superorganism, we propose a social capital metric along these power-law lines.

  * We hypothesize that we can apply theoretically derived power laws to psychometrics to obtain a measure for various forms of intuition.

While there is currently no proof to accept or refute these two hypotheses, a new applied mathematical theory developed by the authors, called 6-dimensional spacetime (6D) mathematics, lends strong support to both (Teeple & Himann 2018). The model of 6D mathematics was motivated by fundamental physics, especially the foundational problems plaguing the standard model of physics.
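The colony-scaling observations above translate directly into code. Here is a minimal sketch, with the 3/4 and 1/4 exponents taken from the Gillooly et al. findings and the normalization constants left as arbitrary placeholders rather than fitted values:

```python
def colony_metabolic_rate(colony_mass, a=1.0):
    """Whole-colony metabolic rate ~ M^(3/4), as in Kleiber's law.
    a is an arbitrary normalization, not a fitted constant."""
    return a * colony_mass ** 0.75

def colony_egg_rate(colony_mass, b=1.0):
    """Whole-colony growth (egg production) rate ~ M^(3/4)."""
    return b * colony_mass ** 0.75

def colony_lifespan(colony_mass, c=1.0):
    """Colony lifespan (~ queen lifespan) ~ M^(1/4)."""
    return c * colony_mass ** 0.25

# A colony 16x more massive has 16^(3/4) = 8x the metabolic rate
# but only 16^(1/4) = 2x the lifespan.
print(round(colony_metabolic_rate(16.0) / colony_metabolic_rate(1.0), 2))  # → 8.0
print(round(colony_lifespan(16.0) / colony_lifespan(1.0), 2))              # → 2.0
```

Because only ratios matter here, the placeholder constants cancel out; the sublinear exponents are what carry the superorganism analogy.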

##### A Brief Survey

Before we present the main idea of a metric for intuition, let us survey and summarize the vast landscape we have traveled across to arrive at this point:

  * First, we acknowledge that modernity is falling into a data progress trap. In the span of less than half a century, our modern IT systems have matured to create a ubiquitous global data-sharing network for humans and machines alike, one that our society and economy are now utterly dependent on.

  * The dawn of AI, the IoT, and Blockchain will create so much data that we will suffer problems of both the sheer volume and the quality of the information we receive.

  * Since information quality is critical to effective decision-making, these two data problems can have significant impacts on all aspects of society including business, policy, civil society, health, and ecology. This motivates us to find solutions for these problems.

  * We also found that human beings reason in two different and complementary ways (as described by Kahneman) – the fast, intuitive method of System 1 and the slow, deliberate, methodical and analytic method of System 2.

  * The slow, rational thinking method is plagued with hundreds of cognitive biases, drug-altered perspectives, and foundational flaws.

  * The bad rap given to intuition in cognitive science is undeserved, and intuition needs to be understood at a deeper level.

* Intuition is fast because it is a predictive information system acting on a lifetime of stored knowledge, inherited knowledge, extrasensory perception, and a mixture of other sources of existing knowledge.

* Future machine systems consisting of AI, dataset training, big data, and data analytics will parallel the corresponding human cognitive systems of brain and motor-sensory apparatus, but perhaps without the quality of human intuition and decision-making.

* Human beings are but one of many species, and the perspective of each species can differ remarkably. This implies that even knowledge of the world can differ radically from species to species. This relative experience of the world, and the knowledge a species considers meaningful, is called the Umwelt.

* Psychedelic and mind-altering drugs have been part of human civilization since its earliest days and are with us today. They can significantly alter how we see the world and the decisions we make. In many cases, they can expand our experience of reality and lead to insights that increase our problem-solving abilities.

* Symmetry in the universe is a source of inspiration for new theories to describe the world, giving rise to new symmetry mathematics such as 6D spacetime mathematics. These new types of mathematics can better solve problems in the world by proposing a new set of axioms that embed symmetry.

* Kleiber's law inspires us to look to symmetry to derive similar power laws from 6D mathematics that quantify aspects of geometrical human intuition and render it more useful in decision-making processes.

* Psychometrics and social capital could have vast areas of application if we could quantify intuitive knowledge in those fields. In psychology, we use such a derived law to develop a better psychometric, and in social networks, we use it to produce a way to measure social capital.

* While psychometrics weighs the individual human organism, we can also treat sufficiently large, organized groups of individual human beings as an organism in its own right, a superorganism.

* By recognizing this machine-to-human mapping, we can create an asymmetry-based algorithm that quantifies intuition, effectively mapping it to an equivalent numerical proxy that can be subjected to analytic reasoning.

* We can develop a metric for social capital, which allows us to measure the cohesion of the superorganism.

The tremendous opportunity offered by an algorithm that can quantify intuition is that it can bring vast amounts of hitherto qualitative, intangible knowledge into the quantitative, tangible realm, where we can use it to make better decisions. As has been noted previously, only 0.5 percent of all data has been analyzed and used for effective decision-making, and even that does not include applying a Democratic Quality Vector to it. So the potential economic and social benefits are enormous. To get a better idea of this, we can look at the following data visualizations.

Technology educator Dan Faggella offers a diagrammatic post-human view of consciousness. Faggella explores the intersection of AI, neuroscience, technology, and futures. In an article on his blog called "Exploration Post – Human Consciousness States – The Value of Psychedelics," he presents a diagram he has named "The space of all possible modes of being" to summarize possible states of consciousness. Think of it as the consciousness and sensory space of all living beings. Within this large permutation space, human experiences account for a tiny fraction of the totality of possible states.

The human Umwelt, how we perceive the world, is but one of many possible Umwelts contained within the union of all animal Umwelts. Undoubtedly plants have their own Umwelts as well. However, your Umwelt is not static. It can expand and change with the time of day, illness, or drug use. Faggella considers the effects of different ways of expanding our normal Umwelt using psychedelic drugs, cognitive implants, and AI.

Along the same lines, scientists Kevin Boyack and Richard Klavans work in scientometrics, the field that measures science itself.

They have developed maps of scientific research output to visualize patterns in large datasets of millions of scientific papers. The map reproduced here uses co-citation (two papers cited together) and bibliographic coupling (two papers referencing a common third paper) to reveal the pattern of commonly cited authors across a variety of scientific fields. The dataset behind this map is large: 20 million scientific research papers published between 1996 and 2011. In a sense, such a map can be interpreted as a proxy for the known areas of science. As with Faggella's map, our interest is in the crevices and spaces between the knowledge. Both views give us a glimpse of how much we don't know.

A novel method to expand the human Umwelt to almost unlimited sensory space is being engineered by neuroscientist David Eagleman and his team of researchers and engineers. His lab invented the Versatile Extra Sensory Transducer (VEST), a sensory substitution device which takes any input data stream and translates it into a tactile signal via a network of 32 portable solenoids mounted on the vest.

This creates a physical signal that the brain can respond to. The first application of the VEST tested whether deaf subjects could "feel" words by learning to associate the vibration patterns output by the VEST with words. The prototype translates a speech signal into a vibration pattern on the network of solenoids. After learning to interpret the vibration signals for a few hours a day over a few days, the deaf subjects were able to "feel" words correctly. As cochlear implants cost about a hundred thousand dollars to install, the roughly $500 VEST is a much more cost-effective solution. Eagleman's company, Neosensory, has produced a more compact wristwatch model with an array of 8 vibrating solenoids. This is but one example of how a person's sensory world can be expanded with modern technology.
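The core idea, mapping an arbitrary data stream onto a fixed array of vibration motors, can be sketched in a few lines. This is a hypothetical illustration only; the actual VEST signal processing is more sophisticated, and the function name and banding scheme here are invented for the example:

```python
import math

N_MOTORS = 32  # the VEST prototype drives 32 vibration motors

def spectrum_to_vibration(spectrum):
    """Collapse an audio magnitude spectrum into one intensity per motor.

    Each motor receives the root-mean-square energy of its slice of the
    spectrum, clipped to [0, 1]. A hypothetical sketch, not the
    device's published algorithm.
    """
    band = max(1, len(spectrum) // N_MOTORS)
    intensities = []
    for i in range(N_MOTORS):
        chunk = spectrum[i * band:(i + 1) * band]
        energy = sum(x * x for x in chunk) / len(chunk) if chunk else 0.0
        intensities.append(min(1.0, math.sqrt(energy)))
    return intensities

# A flat spectrum maps to a uniform vibration pattern.
flat = [0.5] * 64
print(spectrum_to_vibration(flat)[:4])  # → [0.5, 0.5, 0.5, 0.5]
```

The design point is that the brain, not the device, learns the mapping: any consistent transformation from data to tactile pattern can, with training, become a new sensory channel.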

Without devices like this, any animal can see the world only through its narrow window of reality. No matter how vast we perceive it to be, reality is experienced only through the lens of our limited biological sensors. A fish doesn't know its ocean is limited because it has been immersed in it its entire life. As we have shown, theoretical awareness of our Umwelt shows how limited our current knowledge of the world is. The gap between what we can see and hear and the secrets of the universe is vast. The innate Umwelt that we inherit at birth makes each member of a species a "blind being," limited to viewing nature from its own unique but narrow perspective. Every species is a "blind being," and each experiences nature in its own unique way.

The storyline of the popular 1967 fantasy comedy "Doctor Dolittle" centered on a doctor who could converse with animals. The theme song contained the lyric "If I could talk like the animals, walk like the animals, grunt and squeak and squawk like the animals..." The point is that we could be much more powerful if we could expand our ability to understand.

If we could extend our Umwelt, for example with technology such as the VEST, we could have a much richer experience of reality.

Just as the VEST offers the potential to expand our sense of the world and understand its hitherto invisible dimensions, the Democratic Quality Vector (DQV) allows us to understand the social capital of our societies in ways previously unseen. The DQV will enable us to access human intuition in a new way. Through the DQV, we can quantify what we once considered intangible, intuitive knowledge and transfer it over to the tangible world of cold, hard facts. Essentially, it is a process of making intuitive knowledge amenable to calculation for the very first time.

# THE DEMOCRATIC QUALITY VECTOR

#### "Including the value of natural resources and our social capital in national accounting is a vital step to achieve economic growth that is equitable and sustainable."

ACHIM STEINER

In a sense, this chapter is the heart of the book. What we have learned on our journey here is that facts are not always what they seem. While we know that good decisions rely on sound data, there appears to be a gap in our knowledge that results in sub-optimal decision-making. In other words, much valuable information remains inaccessible to us because we have not found a way to tap into our intuitive knowledge. We have new tools like big data, data analytics, and AI to help us navigate an increasingly complex world. However, none of these address the inaccessibility of intuition and other intangibles, and as a result they produce sub-optimal results. In this chapter, we propose a new way to tap into this intuition by creating a metric that can quantify it. In this way, we create a numerical proxy that transfers intuitive information from the inaccessible domain over to the mathematical realm that is the basis for so much decision-making in the modern world.

A brief glimpse back can help summarize and set the tone for the rest of this chapter. We began by realizing that modernity is confronted with a data problem, both in quantity and, perhaps more importantly, in quality. To understand what data quality means, we took a closer look at how science, primarily since the Enlightenment, has defined what facts are. However, a closer look at science, the discipline we turn to for truth, reveals something uncomfortable: science is in continual flux and never stands still, so in a strange sense all "facts" can be interpreted as provisional. An idea that fits a peer-accepted model today may be outdated tomorrow in light of new discoveries. Hence, what we consider truth today may be false tomorrow. We can validate this recursive pattern of the scientific process and build confidence in it by studying the history of science. We are led to the inescapable conclusion that scientific truth is impermanent and that all such scientific knowledge may have a shelf life. All experimental models may eventually turn out to be false, to be succeeded by more accurate models.

A further way knowledge may have been inadvertently distorted is human civilization's ongoing romance with psychoactive compounds. It seems plausible that the sheer volume of historical Euro-centric psychoactive drug consumption altered the course of human knowledge in significant and sophisticated ways, contributing to ideas that are now ingrained and normalized in our cultural institutions. From an epistemological perspective, drugs can both degrade and enhance cognitive function. Under many conditions cognitive impairment results, but in some cases, with specific types of compounds taken by particular types of users, they can stimulate the emergence of novel ideas. Indeed, the significant consumption of laudanum before the Enlightenment period may have influenced its outcome, through both cognitive impairment and the stimulation of new ideas. The success of the Enlightenment has placed a heavy emphasis on rational, analytic thinking over intuitive thinking. As a result, modern scientists frown upon fields of science that cannot quantify their key variables of study and subject them to rigorous mathematical analysis.

However, more and more, researchers are discovering that intuition is only vague because it has been vaguely understood. Indeed, modern research suggests that intuitive thinking is highly evolved for increasing fitness and emerges from a predictive brain model.

If appropriately used, intuition is an integral part of reasoning. Researchers like Daniel Kahneman have demonstrated that the limitations of intuition often lie in making common cognitive-bias mistakes and in drawing from a sparse set of experiences. The best intuitions are the result of a rich experiential set of data; hence experienced workers have much more accurate intuitions than novice employees. In Kahneman's theory, intuition makes up the fast reasoning System 1, while slow, analytic reasoning makes up System 2. Research is also beginning to reveal the mechanics of inner feelings, which are the distinguishing qualia of what we call intuition. Scientists label these feelings interoceptive signals. They emerge from internal organs, which send internal messages that we can sense.

These signals evolved over millions of years to warn us of such things as impending danger, increasing our fitness for survival. All in all, current research is slowly shedding light on the mysteries of intuition and revealing its true predictive nature.

Intuition was also explored from the perspective of the Umwelt, the subjective universe of a living organism. This concept is useful in demonstrating the relative nature of experience, and it questions the long-held positivist notion of an objective reality. It illustrates how intuitions are a natural part of all living organisms. Instinctive feelings are what every living organism, including Homo sapiens, uses to survive. As intuition is defined as a direct knowing that bypasses conscious, analytic reasoning, the subject of animal instinct becomes vital to understand.

Furthermore, we learned how different living organisms sense the world and which signals are meaningful to them. As far as we know, most other species lack the kind of abstract, analytic reasoning ability that humans have, which makes the gap between intuition/instinct and analytic reasoning quite noticeable. In species that lack the same power of symbolic logic, instinct and intuition dominate decision-making.

In light of the discoveries of modern science, we can no longer dismiss intuition in our decision-making processes. In the fields of democracy and economics in particular, everything depends on trusting relationships, and there are significant opportunities to improve decision-making by incorporating useful, intuitive information about social capital. We have already discussed the limitations of representative democracy, the most popular form of democracy, and it is clear that this form is not working. It suffers both from a lack of analytic and intuitive expertise and from a lack of channels through which that expertise could affect democratic and voting outcomes. More recently, we have seen the emergence of right-wing authoritarian leaders around the world who do not necessarily serve the best interest of the people. The representative system forces many voters to choose one representative with whom they typically have no genuine trust relationship. The only way to know them is through the media, and that message may be biased and manipulated.

As a result of this long-range voting structure, intuitive awareness is at a disadvantage. In such a system, politicians can use media manipulation to win elections. Whoever wins must decide on policy issues, but representative democracy allows popular non-experts to prevail. The winner-take-all approach can also leave large segments of the population underserved. The result is poor governance that does not benefit large sections of the public. Finally, there often do not seem to be enough experts to make effective decisions, yet there are parties that favor the smallest government possible. All these vulnerabilities are potential contributing factors to ineffective governance.

A new form of democracy is required to overcome the current systems' structural problems. To this end, we explored the concept of the superorganism as a metaphoric model for human society and social capital as a useful indicator of the superorganism's health.

To measure this, we must be aware that a healthy superorganism requires active participation from all its cells. Each must be supported to do its specific function optimally. The heart must be encouraged to beat, the kidney to cleanse the blood, the brain to perform cognition and master supervision, the stomach to process food and transform it into nutrient forms acceptable for body metabolism.

In light of the ideas mentioned above, the schema of proxy voting offers a means to begin to quantify and use social capital, to ameliorate other problematic methods of building and enhancing social capital, and to improve democracy at large. Proxy voting makes participation easy and rewarding. The concept is simple, and therefore appealing. If an eligible voter chooses to pass up the opportunity to vote, they can give their vote to someone else – someone they deem more qualified or better equipped for the ballot (i.e., with greater knowledge, experience, confidence, and more). This vote transfer is itself an instance of social capital: it exemplifies the very elements of social capital, as the transfer emerges from an assumed relation between the one who transfers and the one to whom the vote is assigned. Simply speaking, it is an outward display of trust, of participation, and the execution of a relationship, or an informal network. Thus, when a vote is transferred, it becomes more valuable for all that the transfer itself implies about social capital. The transfer of a vote is the creation of value – an act of added value – and one that may continue to increase as additional transfers ensue. Any such transfer between voters is referred to as a unit of social capital.

A proxy voting system creates an environment where individuals are encouraged to be involved: it is a method of increasing engagement, which is part of building greater social capital. In other words, the means meet the end; the instrument is part of the objective.

Individuals thus choose to participate in an election despite their potential lack of knowledge, knowing that someone else's expertise, experience, education, and so on, can be advantageous to the outcome of the election. Once all of these improvements start working together, proxy voting increases the reliability of the voting process while increasing the impact of every individual's vote, or voice.

Not only would social capital increase dramatically, but an individual's intuitive genius can also be better harvested and incorporated into decision-making. Assessing the people around you goes well beyond individual CVs and education history; it is their story. That story is bound up with the individual's Umwelt and interoception, which inform all of their "close range" decisions. It cannot be over-emphasized how valuable this intuitive knowledge is to the group, nor how valuable identifying and using it can be.

In proxy voting, this value is reflected primarily when the vote moves. A vote can be transferred multiple times until it reaches a final person in the sequence, who then casts it. When that person at the end of the transfer chain votes, the vote is worth much more than an ordinary one: it is a particular kind of summation of all the transfers before it. Hence, if 5 people were involved in a string of 4 vote transfers, the value of the fifth person's vote is much higher than that of a single voter.

Operating a company, an institute, a democracy, or an economy without social capital is impossible. Social capital is currently only a qualitative variable that remains intangible and has so far resisted quantification. As we have demonstrated, there can be enormous benefits in quantifying it. The Democratic Quality Vector (DQV) is a new function we define that captures the essence of trust in the vote transfer sequence of a proxy voting system. We can also think of the DQV as a number that represents the total social capital value of a group of proxy voters.

In most institutions, the relative value of tangible and intangible capital is poorly understood. Because intangibles are rarely quantified, tangible capital, which is quantified, immediately carries more veracity. The result is the typical situation in which tangible equity is seen as more valuable than intangible capital, when in fact the opposite is likely true. We rely upon others, and refer to them, to get through life intact. Is our food untainted? Will our children be safe at school? These are but two of the things we must trust others with.

Indeed, some have argued – particularly in the accounting world – that several characteristics of intangibles disqualify them from being counted as capital: the lack of verifiability of intangible assets that are not acquired through market transactions; the lack of visibility of intangible assets after their acquisition, which complicates efforts to track past vintages; the non-rivalness of some intangible assets (tangibles are rival because once one person uses them, another cannot); and the lack of appropriability of the returns from some intangibles (the returns cannot easily be captured by the owner). This is why many are afraid to start measuring intangibles: there is some risk involved.

It seems that economists and politicians face the same challenge in valuing intangibles as social scientists do, and so some of the tools economists use can be borrowed to design a social metric. The attraction of the standard mathematical quantification of value that our culture has accepted is a normative validation. "If we can't verify it, we can't trust it" is our current default, and this standard is unlikely to change rapidly. So we may have higher confidence in the predictive power of intangibles if they are significantly vetted. Qualitative motivations capture attention, but only quantitative results inspire investment; that is why we often do not pay attention to social metrics, and why a consistent metric for social capital would help determine which inputs or changes lead to positive results.

One refined approach to valuing intangibles in economics that we may look to as a model is the ratio of a change in national income to the change in government spending that causes it. More generally, the exogenous spending multiplier is the ratio of a change in national income to an autonomous change in spending (private investment spending, consumer spending, government spending, or spending by foreigners on the country's exports) that causes it. When this multiplier exceeds one, the enhanced effect on national income is called the multiplier effect. The mechanism that gives rise to it is that an initial incremental amount of spending leads to increased consumer spending, which increases income further and hence further increases consumption, resulting in an overall increase in national income greater than the initial incremental spending. In other words, an initial change in demand may cause multiple shifts in output and income.
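The re-spending mechanism above can be sketched as a geometric series. The marginal propensity to consume (MPC) of 0.8 used here is an illustrative assumption, not a figure from the text:

```python
# A minimal sketch of the Keynesian spending multiplier described above.
# The MPC value of 0.8 is a hypothetical illustration.

def multiplier(mpc: float) -> float:
    """Total effect of one unit of autonomous spending: 1 / (1 - MPC)."""
    return 1.0 / (1.0 - mpc)

def income_rounds(initial_spending: float, mpc: float, rounds: int) -> float:
    """Sum the first `rounds` rounds of re-spending (a geometric series)."""
    return sum(initial_spending * mpc**n for n in range(rounds))

mpc = 0.8
print(multiplier(mpc))                # each $1 of spending yields $5 of income
print(income_rounds(100.0, mpc, 50))  # converges toward 100 * 5 = 500
```

After enough rounds of re-spending, the partial sums approach the closed-form multiplier, which is the "overall increase in national income greater than the initial incremental spending" the text describes.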

We can leverage this ratio approach, along with its underlying mathematics, to design a social capital measurement. By investing in activities that strengthen social ties, we could look for the resultant revenue generation from the group and then use a weighted multiplier to connect the two. This also applies to political systems.

In economics, delayed discounting is another accepted mathematical tool that represents the intangible value of time, and it too can be referenced when making a social metric. Given two similar rewards, humans show a preference for the one that arrives sooner rather than later: we are said to discount the value of the later compensation by a factor that increases with the length of the delay. This process was traditionally modeled as exponential discounting, a time-consistent model in which the discount rate is constant. A large number of studies have since demonstrated that this constant discount rate is systematically violated in practice.

Delayed discounting is a mathematical model devised as an improvement over exponential discounting, in the sense that it better fits the experimental data about actual behavior. It is a time-inconsistent model, and that time inconsistency has some quite perverse consequences. Delayed discounting has been observed in both human and non-human animals.
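The contrast between the two models can be made concrete. The sketch below uses a standard hyperbolic form for delayed discounting; the discount rate k = 0.5 is an illustrative assumption, not a fitted value from the text:

```python
import math

# Sketch comparing the two discounting models discussed above: exponential
# (time-consistent) and the hyperbolic form of delayed discounting that
# better fits observed behavior. k = 0.5 is an illustrative assumption.

def exponential_discount(value: float, delay: float, k: float = 0.5) -> float:
    """Exponential model: value * e^(-k*t); the discount rate is constant."""
    return value * math.exp(-k * delay)

def hyperbolic_discount(value: float, delay: float, k: float = 0.5) -> float:
    """Hyperbolic model: value / (1 + k*t); the effective rate falls with delay."""
    return value / (1.0 + k * delay)

# At long delays the hyperbolic model discounts far less steeply, which is
# the source of the time inconsistency the text describes.
for t in (0, 1, 5, 10):
    print(t, round(exponential_discount(100, t), 1), round(hyperbolic_discount(100, t), 1))
```

With no delay both models return the full value; as the delay grows, the hyperbolic curve stays well above the exponential one, matching the experimental finding that a constant discount rate is violated.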

Another context we can look to for inspiration is networks in the natural sciences, where intangibles are measured using methods similar to those we are proposing. In biology and in the mathematical modeling of biological phenomena, the symmetrical and algorithmic properties of organic shapes have been extensively studied.

As we already know, knowledge and science change over time with new discoveries. Advances in science uncover certain patterns and invariant laws across all forms of life. The area of "social physics" – first coined by Auguste Comte and recently picked up by Alex Pentland – explores the parameters of human behavior and decision-making in groups. Pentland's work, for example, engages precisely "how social networks can make us smarter." Whether examining things from the cellular or molecular level, or upwards from the broader social and civil perspective, there are emergent laws and constraints operating that shape – and in some cases determine – the interactions and dynamics of individual and collective human behavior, on the basis of evolutionary and cultural developments.

It seems, through analysis of existing systems, that the best way to emulate a social network is to focus on the "big picture" mathematical structure rather than the individual variables. In multiple disciplines beyond those discussed here, research into networks has unveiled a similar mathematical pattern: exponential, logarithmic, or hyperbolic models are identified and subsequently justified as the best fit for the application. We have seen this same investigative pattern in the development of the mathematics of human vision, where many analytical formulas describe the same data set within the error band. Essentially, what they all have in common is that each is a curve that best matches the data on hand. These fields offer great ideas we will use to develop our own formula. Their weakness is that no one knows for sure which law is best, because they are all derived from curve fitting.

To develop the algorithm for the DQV, we apply new 6-dimensional spacetime mathematics that the authors have developed; it may become a theory of everything. Its predictive power has been verified over a vast swath of physical, biological, neurological, chemical, and social science data. We use it to derive a law describing social capital that emulates aspects of Kleiber's empirically derived power law relating an animal's metabolic rate to its mass. We prototype a relationship similar to Kleiber's law because social capital is a biological system indicator that seems to function in a similarly delay-discounted way.
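Kleiber's law itself, the template being emulated, can be sketched as a simple power law. The normalization constant below (roughly 3.4 watts for mass in kilograms) is an approximate illustrative value, not a parameter from the authors' derivation:

```python
# A minimal sketch of Kleiber's law: basal metabolic rate scales roughly
# with body mass to the 3/4 power. The constant a = 3.4 (watts, mass in kg)
# is an approximate illustrative value only.

def kleiber_metabolic_rate(mass_kg: float, a: float = 3.4) -> float:
    """Metabolic rate ~ a * M^(3/4)."""
    return a * mass_kg ** 0.75

# A 16-fold increase in mass yields only an 8-fold increase in metabolic
# rate (16^0.75 = 8) – the sublinear, "discounted" scaling being emulated.
print(kleiber_metabolic_rate(16.0) / kleiber_metabolic_rate(1.0))
```

The point of the analogy is this sublinearity: just as metabolism grows more slowly than mass, the trust contribution of each additional vote transfer grows more slowly than the chain length.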

The DQV itself is a vector function built using 6D symmetry mathematics; it measures the total degree of social capital represented by the entire chain of voters involved in the vote transfer. The 6D mathematics applied in this context creates 4D projections, constraining the particular types of power laws that fit the data points. This theoretical basis gives the DQV a strong, testable predictive power that existing curve-fitted laws lack. We have already shown that there is a universal basis for the symmetry of the world, and especially a human biological foundation for it. So we are confident that we have a fundamental basis for our system of social capital.

This is relevant for our purposes insofar as the logic of vote transference converges with the discipline of science and its drive toward quantification. We begin to quantify trust: the sum of all steps, or transfers of votes, can be formulated into an equation that produces a trust value, with trust diminishing at each step (Trust = A + ∑ e^(−αn), where "A" is the original vote's numeric value and the sum runs over the n steps, or transfers, in the chain).

The form of this metric is supported by both logic and research. It makes sense that if I trust a person, I trust that person's friend somewhat less than I trust the person directly. This is the beginning of a convergent series as the degree of separation from the source of trust increases.

However, trust is also relative. Some individuals and groups will value "the cold hard facts" more than trust, and as such diminish trust's value relative to the quantities they deem more important. So, although the comparable total value is less in some situations, the "shape" of the trust still follows an exponential pattern (a convergent series) – or, for the mathematically inclined, a hyperbolic pattern. For example, because GE has a strong value proposition in its brand and momentum, it may value its social capital (its personnel; trust) less and care less about employee turnover. This is in contrast to a new, unproven start-up, where the majority of value lies in people, social capital, and trust. Incorporating this relative perspective, the total value of an organization would be T = x·(conventional value) + y·(A + ∑ e^(−αn)), where x and y can be tuned to the organization or the beliefs of the group.

As the source voter's vote moves along a chain, the value added by each step decrements as the vote propagates through successive transfers, because each step is further and further from the original voter.

Hence, its worth is already 0.5 by the first vote transfer and almost 0.25 by the second. It decrements this way until it reaches a limit of four vote transfers; beyond this point, further transfers add almost nothing. The α is a parameter that determines the steepness of the curve and is a function of many variables of the particular social context, such as group size, age, gender, education, and more.
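The decrement pattern just described can be sketched numerically. This is an illustrative reading, not the authors' exact DQV formula: we assume the n-th transfer contributes e^(−αn) with α = ln 2, which reproduces the 0.5 and 0.25 values given above, and we cut the sum off after four transfers as the text specifies:

```python
import math

# Illustrative sketch of the decrementing vote-transfer value described
# above. ASSUMPTION: the n-th transfer contributes e^(-alpha*n), with
# alpha = ln(2) so that transfer 1 is worth 0.5 and transfer 2 is 0.25.

ALPHA = math.log(2)
MAX_TRANSFERS = 4  # the text caps meaningful transfers at four

def transfer_weight(n: int, alpha: float = ALPHA) -> float:
    """Trust weight contributed by the n-th transfer in the chain."""
    return math.exp(-alpha * n) if n <= MAX_TRANSFERS else 0.0

def chain_trust(num_transfers: int, a: float = 1.0) -> float:
    """Trust value of the final vote: the original vote A plus the
    decaying contribution of each transfer before it."""
    return a + sum(transfer_weight(n) for n in range(1, num_transfers + 1))

print(transfer_weight(1))  # 0.5
print(transfer_weight(2))  # 0.25
print(chain_trust(4))      # 1 + 0.5 + 0.25 + 0.125 + 0.0625 = 1.9375
```

A vote at the end of a four-transfer chain is thus worth nearly twice an ordinary vote, while transfers beyond the fourth contribute nothing, matching the limit described above.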

This is but one formula for quantitatively capturing trust and offering a metric for calculating social physics. Further developing this new science of social equations will open new vistas for analyzing groups as diverse as political parties and companies – for approaching the relational aspect of human life with a degree of mathematical accuracy.

Such developments in creative and technical thinking may soon offer ways of predicting erratic and undesirable behavior, as well as preventing catastrophic or destructive patterns from unfolding.

Beyond the rationale of cold hard facts, we give intuition a say in our revised voting system. With the new DQV, two measurable sources are combined when we make a final decision: the first is simply the common facts; the second is the output of the DQV function, which measures the qualitative, intuitive knowledge of the decision-makers in our voting chain.

This allows us to quantify intangibles and convert social capital into economic value. This conversion enables us to transfer intangibles over to the logical domain that is the mainstay of our modern society. The DQV can be used alongside cold hard facts to create a total solution that integrates the best of the intuitive and rational worlds. Users can dial in as much or as little of the DQV as they like and see how the final answer changes. Such a technique can add a quantified intuitive dimension to practically any problem we intend to solve or decision we need to make.
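One simple way to realize this "dial" is a weighted blend of the two sources. The function and score values below are illustrative placeholders, not the authors' published method:

```python
# A minimal sketch of "dialing in" the DQV alongside an analytic score.
# `facts_score`, `dqv_score`, and the weight are hypothetical placeholders.

def blended_decision(facts_score: float, dqv_score: float, dqv_weight: float) -> float:
    """Combine an analytic score with the DQV output; dqv_weight is in [0, 1]."""
    assert 0.0 <= dqv_weight <= 1.0
    return (1.0 - dqv_weight) * facts_score + dqv_weight * dqv_score

# Turning the dial from 0 to 1 moves the result between the two sources.
for w in (0.0, 0.25, 0.5, 1.0):
    print(w, blended_decision(facts_score=70.0, dqv_score=90.0, dqv_weight=w))
```

At weight 0 the decision is purely factual; at weight 1 it is purely the DQV; in between, the user sees exactly how much the intuitive dimension shifts the final answer.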

A shift in thinking is required to amalgamate the quantitative and qualitative aspects at hand. In the case of social capital, something complex but elegant in its simplicity at the foundation is proposed: a metric that is participatory, fluid, and intelligible. The equation we recommend for the measurement of social capital is one to be incorporated within a proxy voting system.

Such an equation would provide the quantitative basis for social capital to become another data system.

The Democratic Quality Vector is a new way to put a value on social capital. As we have learned, the debate over which is better, intuitive reasoning or analytic reasoning, is misleading, because both play an essential role in effective decision-making. Daniel Kahneman's categorization of system 1 (intuitive, fast decision-making) and system 2 (slow, analytical reasoning) is a useful way of distinguishing the two: system 1 for rapid access to our storehouse of accumulated knowledge, and system 2 for analytical reasoning.

Kahneman's theory is also useful in pointing out the nuances of system 1 reasoning, such as the need to be aware of the large number of cognitive biases that come with it, and the fact that its quality is a function of our experience. Once again, both systems are essential, but currently only the second is measurable.

Our continuing focus is to develop a new way to measure intuitive or intangible sources of knowledge and to convert what has traditionally been considered intangible information into tangible, hard numbers.

Thereby we will provide a means to quantify aspects of intuition for more effective decision-making, or to measure an organization's intangible value for a more accurate economic valuation. Trust is the defining quality of any effectively collaborating social group, whether a family, organization, community, or entire society. While social capital has many components, such as trust, civic norms, civic engagement, and political engagement, all of them fundamentally depend on trust. Without trust existing in the group at some fundamental level, there can be no basis for any collective activity. When was the last time two active enemies had a cup of coffee together? A measure of trust is, therefore, a measure of a healthy organization.

# THE VALUE OF SOCIAL CAPITAL

Who doesn't enjoy a good story? Human beings are natural storytellers and story-lovers. Our lives are constituted by overlapping stories, insofar as they consist of many layers, plots, meanings, and peoples. Our individual stories include the history that precedes us; they extend around us in the present to include the many interrelated stories of those around us, and beyond us into the future as the story of humanity continues to unfold. This is just one way of suggesting that we are relational beings: we are in constant relation to one another, as we are never the only actor in our own story. Stories are a record of our decisions, judgments, and actions, taken individually and together. Stories are, in short, the speaking or speeches about our deeds. And the story of relationships, or the relational account of human life, is what social scientists often refer to today as "social capital." An obvious economic consequence is that communities that have more social capital – more stories – have more trust among their members, and therefore perform better economically.

Trust translates to more income-earning potential. In a sense this is obvious, but science has shown time and again that the obvious can prove to be the most wrong. In this case, however, research studies have shown that as degrees of trust increase, so does GDP. For a business, there is no better proof that increasing social capital, or trust amongst employees, is good not only for building corporate morale but also for improving the bottom line.

If increased social capital is linked to increased economic prosperity, then it is in the clear interest of businesses, and not just government and civil society, to support it; a social employee is an effective employee. Within economics more broadly, however, intuition is highly prized but unquantified. Leading businessmen such as Warren Buffett, along with leading economists, understand that the intangible value of an organization may be its greatest asset. And yet, without the ability to quantify it, that intangible value cannot reflect the true worth of an organization.

Social capital is defined as the value derived from the total of one's social networks and community activity. Social capital, then, includes personal and professional relationships (in physical or virtual form), social networks and support, civic engagement, and belonging or membership to specific groups (from fraternities and societies to boards and neighborhood watch groups). Social capital also includes the benefits generated through these connections and actions. Since so much is known about social capital, there is no doubt that it exists. But can it be effectively measured? This is where humanity has struggled.

In the early days of social capital research, the concept was applied exclusively to the social potential of an individual (characteristics such as charm, sociability, affability, and usefulness to neighbors). More recently, the influential sociologist Robert Putnam has reframed social capital as an attribute of collectives, focusing on social norms and trust relationships as producers of social capital. For Putnam, social capital benefits the individuals who possess it as well as the wider community of which they form a part.

Also, social capital is germane to our present considerations, because of its positive contribution to a range of measured societal factors, such as personal well-being and crime rates. So, when we increase social capital, that leads to benefits on many levels: individual, community, regional, national, and global.

Furthermore, social capital has been recognized as a driver of economic growth. This is because an increase in social capital results in greater economic efficiency (Putnam, 2000, 1993; Fukuyama, 1995). At a macro-level, it is likely that higher levels of trust and cooperative norms reduce transaction costs, thereby driving productivity (Putnam, 2000, 1993). At an individual level, people with wider social networks are more likely to find employment (Aguilera, 2002), to progress in their career (Lin, 2001), and to earn high wages (Goldthorpe et al., 1987). World Bank efforts to estimate the "true wealth of nations" suggest that intangible capital, made up mainly of human and social capital, represents around 60-80 per cent of true wealth in most developing countries (World Bank, 2006). It is apparent that social capital is real, and valuable.

Although some researchers have tried to estimate the value of social capital assets as a proportion of total wealth (Hamilton and Liu, 2013), social capital differs from natural and human capital in that it is a broad concept, based largely on interpersonal relationships. That makes it very challenging to measure. In fact, without a comprehensive view of the underlying mathematics, previous attempts to quantify social capital have failed.

To accurately measure social capital, one has to understand that it is an aggregate concept addressing not only interactions within a group but also individual behavior, attitude, and predisposition. Of course, the problem of measuring or even estimating the presence of these, and of trust, in a social network is a challenging one. Trust, as an aspect of social capital, remains both undefined and poorly understood. If psychological attributes such as trust cannot be quantified, then the field of social science cannot benefit from the power of mathematical analysis that has proven so valuable in so many other fields of science. However, we do think trust can be quantified.

First we can quantify it through current definitions. The Merriam Webster dictionary defines trust as the belief that someone or something is reliable, good, honest, effective, etc. The Online Psychology Dictionary defines trust as the confidence a person or group of people has in relying on another person or group. In a social context, trust typically refers to a situation characterized by the following relationship. One party (the "trustor") consents to rely, in good faith, on the future actions of another party (the trustee). The trustor, then, transfers personal control to the trustee.

However, since trust is based in assumptions about personal character and competence, trust always contains an associated degree of risk. Always present in ideas of trust, one finds the opposite: the possibility that the trustee could fail and bring about disappointment or harm (distrust). The trustor's expectations can only be validated or dashed by experiencing or witnessing the results of the trustee's completed action. This is an important aspect of the trust relationship, but adds to the challenge of quantifying it.

Furthermore, trust and confidence are closely related terms in sociology, though confidence is perhaps the more appropriate term for levels of belief in the competence of another party. A failure in trust can be forgiven more easily if it is seen as a failure of competence rather than a failure of honesty. It was Warren Buffett who said, "It takes twenty years to build a reputation and five minutes to ruin it." The level of trust an individual is ready to commit depends upon their past experience as well as their projected expectations. These are currently unmeasurable.

Economically, trust is defined differently: it is associated with reliability in financial transactions. High levels of trust and reliability (i.e., confidence in a person's abilities) are beneficial because they reduce emotional stress and save time for the trustor. Without trust, each of us would complete necessary tasks ourselves, suffer emotional stress, and create additional processes to ensure that others meet their obligations. From another perspective, trust can be considered a heuristic rule that allows the trustor to accomplish a task with minimal effort, thanks to confident delegation. Without the aid of this heuristic, trustors would often face unrealistic levels of effort to complete basic tasks.

That is why modern democracies rely on the value of many intangible qualities, such as trust. Variables such as innovation, relationships, and trust are of intrinsic value to groups, yet they seem to resist quantification and thereby escape the directed attention of leadership. Unless qualitative values can be quantified, they are likely to be neglected in decision-making. This omission relegates the essential value of qualitative elements to a latent, often ignored, status. Developing an accurate measurement of social capital would unlock this vast potential, enabling qualitative data to contribute enormously to all aspects of society. Indeed, with a measurable system, the future health of democratic systems would improve through the correct identification and valuation of intangible assets.

Trust also functions as an economic lubricant, reducing the friction associated with non-trusting relationships. Entire layers of trusted third-party brokers – notaries public, lawyers, banks, and even the new cryptocurrencies – exist primarily to reduce the friction that slows down transactions when strangers with no social capital between them must transact with each other. Trusting relationships reduce the overhead costs and efforts that would otherwise be necessary to support and monitor untrusted persons. Hence, trust allows transactions to flow more freely, reducing the cost of transactions between parties.

Trust also enables new forms of cooperation and generally furthers business activity, employment, and prosperity. In a society of trusting individuals, economic activity will be greater and economic welfare higher than in a society in which trustworthiness is lacking (Tisdell, 2008). From this perspective, it is easy to see why higher social capital leads to greater economic performance, and why a metric for social capital has long been sought after – a metric that we aim to provide.

Effective social norms, developed by a robust civil society, serve to regulate behavior, lessening the requirements of law enforcement and judicial punishment. The extent to which social capital is embedded in social structures, however, limits the extent to which it can contribute to the public good (Narayan, 1998). When only powerful and tightly knit groups possess and exploit social capital, society as a whole suffers: such groups harm society by prioritizing their own gains over the common good.

Abuses of this kind come from individuals who do not feel accountable to the population as a whole; their actions accelerate social inequality and instability. As we already know, elitism and the centralization of power result in corruption in government, nepotism, and cronyism (Evans, 1989; Mauro, 1995; World Bank, 1997). An effective measure of social capital would put all of this out in the open, creating a significant new metric as a basis for this conversation.

#### "Economics does not predict the degree of spontaneous sociability... that exists in a society; rather, spontaneous sociability predicts economic performance, better even than economic factors by themselves." FRANCIS FUKUYAMA, POLITICAL ECONOMIST

The full story of social capital, as a critical aspect of the social sciences, ought to inform our solutions to significant governance and policy needs. Furthermore, community leaders of all stripes must attend to the relationship between confidence and participation. Many studies of voter turnout and democratic participation find a positive correlation between civic engagement and beliefs about the responsiveness of political authorities, or external efficacy (Rosenstone and Hansen, 1993; Brady, Verba and Schlozman, 1995). This is important because social networks of civic engagement are at the very core of social capital (Putnam, 1993), and strong networks enable strong communities: those that can solve collective action problems through cooperation and coordination. Again, with a strong, evidence-supported metric, these problems can be attacked much more effectively.

To tackle these problems, improving democracy requires enhanced trust in other citizens and politicians, both those who are known and those who are unknown to us. Furthermore, assurance facilitates cooperation whenever one feels relatively confident about the incentives and abilities of other actors. Trust then spreads through a community by reinforcing norms of reciprocity and self-interested cooperation (Putnam, 1993). These norms then become part of the community's social capital, allowing individuals to make inferences about the intentions of others, even when direct or absolute knowledge about them is unavailable. It is an incredibly useful metric.

In fact, this general atmosphere of trust creates a positive feedback effect. When we put trust in others, social capital increases.

Increased social capital ultimately heightens the quality and quantity of economic transactions. In uncertain economic climates, an increase in public trust could have wide-scale impact, inspiring not only increased confidence in democratic arrangements but also positive structural reforms of the same.

The strongest antidote to emotional distress (caused by periods of economic turbulence or political uncertainty) is the support of long-standing trust relationships. In their paper "Individual-Level Evidence for the Causes and Consequences of Social Capital," John Brehm and Wendy Rahn find a positive and reciprocal correlation (a "virtuous circle") between civic engagement and social trust (Brehm & Rahn, 1997). Specifically, their study finds that civic engagement is more likely to increase trust than vice versa. Brehm and Rahn also find that the correlation between engagement and trust is a precarious one: degrading either engagement or trust creates a "vicious circle" more easily than increased engagement or trust leads to a "virtuous circle."

Political and business leaders should find these results troubling because confidence is the currency in which they trade. The independent effect of interpersonal trust on confidence suggests that even improved performance of government may not be sufficient to obtain substantial levels of confidence from the public. Confidence in institutions, indicated by high levels of civic engagement, has been shown to bear a strong connection with interpersonal trust in fellow citizens (Brehm & Rahn, 1997). In summary, social capital lies at the very heart of our political and social institutions, and it must be integrated into policy and decision-making at all levels of society.

Yet, without a clear, accepted measure of social capital and trust, we are not going to be able to reliably include these measures in our plans. And with the tool we are developing, we believe that the effectiveness of institutions will be much more measurable and modifiable.
# THE NEED FOR A DEMOCRATIC QUALITY VECTOR

#### "Ten men shouting will control ten thousand who choose to remain silent." JOSEPH J. HAEGGQUIST

Politics has always been broken, and it is still broken today. Political leaders have lost a great deal of voters' trust, and our social superorganism has an autoimmune disease, with one part battling another. There is not only a lack of social capital but also a build-up of raw aggression pitting citizen against citizen. The solution is still not apparent, because the lines between the sides are often unclear, as are the barriers to solving these problems. This leads to further degradation of social cohesion and poor levels of interaction. A straightforward solution is only possible with a clear measurement.

A road to this solution lies in the biology of the human brain. It too can be conceived as a superorganism, consisting of billions of much simpler cells called neurons. Somehow, the trillions of connections between billions of individual neurons create the complex-system emergent behavior of consciousness. Although we don't know how neurons communicate and synchronize with each other to create consciousness, we do know that if disease strikes, the disruption of communication between networks of neurons can wreak all manner of havoc with consciousness.

Fig 16. The metaphor between a) social disruption within society and b) bodily disruption from malignant cells. Source: a) grondamorin.com, 2017; b) Wikipedia, 2006.

Recently, the European-led Blue Brain Project, led by Professor Henry Markram, made an interesting discovery. They discovered that cliques of neurons (complete, all-to-all connected networks of neurons) can represent enormous amounts of information. When a new thought occurs, a wave of activation sweeps through the neocortex, activating these cliques of neurons, which deactivate when the thought is finished. Similarly, in human societies, waves of information travel throughout our social networks. If this network coherence is disrupted, neither a brain nor a society can function optimally. Our infighting is like an autoimmune disorder in the social organism's body, in which one part of the body ends up attacking another part. Such extreme polarization is a breakdown of social capital at the most fundamental level. To unpack the dysfunctional governance of our social body, we need to understand the nature of the social capital between the "cells" of the social body. Then, we need to identify the nature of each cell. Better yet, since mathematics is such a powerful predictive force in human culture, if we could define a metric to measure social capital, the data generated might reveal underlying mathematical patterns that can answer the question: "Why is democracy failing us?"

The dominant form of democracy today, representative democracy, is regarded by its many proponents as an ideal form of governance. Some even go to the extreme of treating anti-democratic thought as a sin. It may surprise many, then, that some of the greatest philosophers of the Western tradition thought democracy was a danger. Voltaire, one of the greatest philosophers of the Enlightenment, whose many ideas were revered by the founding fathers of the United States, was among them. So were the classical philosophers Socrates, Aristotle, and Plato, who disliked democracy for many of the apparent flaws we still see every day.

Let's begin with Voltaire's concerns. He supported something called Enlightened Monarchism: a just king supported by a council of philosophers. Voltaire felt that democracy was dangerous because it could easily be gamed by a sly and charismatic leader who could say things the uninformed masses would be duped into believing, a situation that many critics of right-wing authoritarian governments would clearly concur with.

Voltaire was quoted as saying "I would rather obey one lion than two hundred rats of (my own) species" and "almost nothing great has ever been done in the world except by the genius and firmness of a single man combating the prejudices of the multitude." These quotes echo the concerns of the classical philosophers. Let's move on to the Greek classic, The Republic, in which Plato writes a Socratic dialogue featuring Socrates. In one part of the book, Socrates asks Adeimantus whom he would rather have sailing a ship at sea: a well-trained captain or some random passenger? Adeimantus chooses the obvious answer, and Socrates extends the metaphor to the state and the leader of a state. Socrates concludes his argument by establishing that the ideal form of governance is a totalitarian regime, where rulers have been educated in effective and fair governance for decades before taking absolute power. In another section of The Republic, Plato suggests that democracy appears during the later stages of the decline of the ideal state, when governance has become so deplorable that the people, in desperation, may even vote a tyrant into power to save them. The conclusion? Democracy can be so flawed that it inevitably leads to tyranny. Can our tool help prevent this problem?

The inherent vulnerability of representative democracy is that one person can be elected to represent a very large group of citizens, often without much prior education in the exercise of that power. The power concentrated in the hands of one such representative is significant, and a lack of integrity can result in corruption, incompetence, or both, which in turn requires significant resources to address. Further, dislodging corrupt individuals who hold powerful positions can prove difficult if they choose to usurp the tools of governance to establish policies, and to choose other elected officials, that protect them. As seen many times in Western history, there are many ways to game the current system: powerful lobbying interest groups, unlimited cash contributions to election campaigns, misleading advertising, and policy abuse. Can there be better protections against this? How can we use a measured social capital to better define these ideas?

From a social superorganism perspective, such actions represent a disease that threatens the superorganism's health. Imagine the commonwealth body of elected representatives who have taken a sworn oath to represent the interests of the people. Imagine it is infected with a malignant tumour that starts in one small, localized area, but then spreads through the entire body by making use of its transport system of arteries and veins: backroom deals, blackmail, promises of job security, even threats of violence to family or friends, conveyed in opaque conversations. In this way, entire governments can succumb to the corrupting influence of a few powerful individuals. The wealth and representation of many so-called democratic countries have been stolen through such corrupted government bodies. How can we measure these institutions in a way that represents their health?

#### "When the masses get involved in reasoning, everything is lost." VOLTAIRE

One way to represent their health in an utterly transparent way is through measuring social capital. Once we are able to do that, we will be able to accurately gauge how it underpins both wider societal dynamics and person-to-person interactions. This could be another barrier to corruption and backroom deals.

An accurate measurement of social capital would allow it to better inform policy and public decision-making processes, allowing citizens to identify areas of possible concern. Consistent measurement also allows for comparison over time and from place to place, and therefore helps identify what best practice might look like. Applying the same line of reasoning as with psychometrics earlier, it is possible to develop a way to measure social capital, thereby bringing the intangible into the sphere of the tangible. This would allow our society to clearly identify many more aspects of trust. In fact, quantifying the intuition behind social capital empowers us to apply rational analysis to it in decision-making, bringing more clarity to governance.

A well-designed social metric would also have to address the fundamental problem of citizen-to-government communication. Since democracy is premised on governments making decisions on behalf of the populace, we must ask how communications technology could facilitate this.

One of the first things we have to consider is how to measure social capital. Stanley Smith Stevens, the preeminent Harvard psychologist, was a leading figure in developing a theory of measurement unique to the social sciences. He first introduced his theory in a 1946 article in the journal Science entitled "On the Theory of Scales of Measurement." Stevens defined measurement as "the assignment of numerals to objects or events according to some rule." This definition contested the established definition guiding other scientific pursuits: the ascertainment of the weight, size, or temperature (attributes, in brief) of some object or event by comparison to a standard unit. This was highly unusual at the time, but it is now very useful, as it gives us a basis for proceeding to measure the otherwise unquantifiable.
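Stevens' "assignment of numerals to objects or events according to some rule" can be made concrete with a toy example of our own devising (the rule below is not Stevens'): an ordinal rule that maps survey responses onto numerals.

```python
# Stevens' definition of measurement, in its simplest form: a rule that
# assigns numerals to events. Here, an ordinal rule for survey responses.
# (This particular rule and scale are our invention, for illustration only.)

LIKERT_RULE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def measure(responses):
    """Assign numerals to responses according to the rule, then average them."""
    scores = [LIKERT_RULE[r] for r in responses]
    return sum(scores) / len(scores)

print(measure(["agree", "neutral", "strongly agree"]))  # 4.0
```

The point is not the specific scale but the shift in perspective: once a rule exists, a subjective quantity like agreement, trust, or anguish becomes something numbers can be attached to and compared.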

To understand Stevens' peculiar definition of measurement in the human sciences requires an understanding of his social context. Stevens' definition of measurement was a response to the British Ferguson Committee whose chair, A. Ferguson, was a physicist. The British Association for the Advancement of Science appointed the committee in 1932 to investigate the possibility of quantifying sensory events. Stevens belonged to a school of thought called logical positivism which attempted, in the tradition of Descartes, to exorcise all unverifiable ideas from science.

Today, any review of communication and decision-making systems should begin with how they gather substantive qualitative and rigorous quantitative data. Because social impact is so difficult to measure, such qualitative factors are often overlooked. Social dynamics are complex and non-linear; they may not be understood through rational interpretation, assumptions, or single-variable analyses. Ostrom and Ahn argue that "Social capital, with only a decade of history of empirical applications and attempts at measurement, does exhibit serious problems of measurement. But the concept is firmly placed in the context of major empirical and theoretical puzzles related to economic and political development. It would not be wise at all to dismiss the concept on the ground that it is difficult to measure." (Ostrom & Ahn, 2003, p. xxxiv). In fact, many things that are difficult to measure, such as a human being's worth, are still worthy of the attempt.

Currently, the prevailing method of measuring social capital is a standard set of questionnaires. The "amount" of social capital in a given community is then extracted from the results of the questionnaire. Different countries have different benchmark questionnaires that establish the value of the social capital. Some examples of social capital measurement guidelines include SOCAT (World Bank), the International Social Survey Program, the Canadian Index of Wellbeing, the Social Capital Index, and the Social Capital Measurement Tool (SCMT), as well as many others. These have been selected as representative of the various ways that social capital measurement is understood depending on context, scale, purpose, and scope. Yet, like all such tests, these methods are flawed.

In a subsequent effort to measure social capital, the Canadian government's Policy Research Initiative produced a 2005 study entitled Measurement of Social Capital. The research established social capital as a useful perspective for examining how public policies depend on social ties to achieve their objectives of wellbeing and prosperity. While the report recognizes the importance of social capital and establishes a social-network framework and methodology, it does not demonstrate any practical applications. In other words, it said that measuring social capital was going to be very difficult, even if it would be valuable to do so.

The value of bringing efficiency-related elements of human capital into a corporate context cannot be overstated. For example, we could mitigate the time lost when supporting an employee from another team. Furthermore, building up this knowledge base would allow developments from across the organization to be incorporated into improving the performance of team members. The proposed transferable vote, together with the system of gathering and storing knowledge, would be a powerful tool for any group. It would identify valuable information quickly. It could immediately structure more effective discussions. It could give a platform for immediate feedback. It would reduce time spent in almost all aspects of the organization.

Through this, it would support better decision-making by creating a store of shared knowledge that preserves the genius of all its members, even after they leave. This protects the organization, on the one hand, and makes onboarding significantly easier, on the other. By creating this simple tool to promote feedback from the entire group, it highlights the skills of members that may have previously gone unnoticed, thereby making a group's otherwise invisible assets visible.

Because social capital plays such a critical role in the success or failure of an organization and is currently unmeasured, it remains misunderstood and sometimes ignored. However, as we have argued, it is an extremely valuable idea that can significantly improve the performance of a group. Indeed, those companies that have attempted to understand it have prospered. Consequently, we are trying to measure it, and then apply it as an influential factor in the success of an organization. By breaking down this multi-dimensional concept into a singular unit, we gain access to information that can significantly improve an individual's or an organization's actions. This new metric demystifies social capital, making it intelligible, accessible, and a potent force for change. Read on, and you will discover some potential applications.
# AN EMERGENT PSYCHOMETRIC FROM A SIX-DIMENSIONAL WORLD VIEW FOR ECONOMIC DELAY DISCOUNTING

Delay discounting is a well-studied phenomenon in behavioral science and economics. It compares the value of a reward received in the present with the same reward received in the future. Most species, humans included, devalue a reward if it will only be received sometime in the future. This value depreciation over time is mathematically captured in a discount function. Delay discounting is an important phenomenon to study because it helps us understand how we make decisions about long-term goals, and it can be applied to many fields such as financial investment, life choices, and politics. In sustainability studies, it helps to explain ecological overshoot, because consumerism creates priorities that value consuming resources today over saving those resources for the next generation.

Some scientists have attempted to define mathematical relationships that describe delay discounting, such as the hyperbolic model, but the equations they choose can generate unpredictable results. The study "A Comparison of Four Models of Delay Discounting in Humans" compares four prominent models (each a psychometric): a one-parameter exponential decay, a one-parameter hyperbola (Mazur, 1987), a two-parameter hyperboloid in which the denominator is raised to a power (Green and Myerson, 2004), and a two-parameter hyperbola in which the delay is raised to a power (Rachlin, 2006). In the study, 64 undergraduate students were asked to choose between hypothetical monetary rewards, one immediate and one delayed, and the fit of the four discounting models to their data was assessed. The authors found that the agreement between the four models and the data was so good, especially for the Rachlin and the Green and Myerson models, that there was no way to determine which model was the better one, leaving their relative flaws undiscovered.

Table: Parameter estimates and fit statistics for the discounting functions (Eqs. 1 through 4). Values in regular font are based on fits to the group median data depicted in the graph; values in italics are medians based on fits to individual data.
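For readers who want to see the four candidate functions side by side, here is a minimal sketch. The functional forms are the ones named in the comparison study; the parameter values (k, s) in the example are invented for illustration, not the fitted values from the study.

```python
import math

# Four standard delay-discounting models compared in the study.
# V: present (subjective) value of a reward of amount A, delayed by D.
# k and s are free parameters fit to choice data; values below are illustrative.

def exponential(A, D, k):
    """One-parameter exponential decay: V = A * exp(-k * D)."""
    return A * math.exp(-k * D)

def mazur_hyperbola(A, D, k):
    """One-parameter hyperbola (Mazur, 1987): V = A / (1 + k * D)."""
    return A / (1 + k * D)

def green_myerson(A, D, k, s):
    """Two-parameter hyperboloid, denominator raised to a power
    (Green and Myerson, 2004): V = A / (1 + k * D) ** s."""
    return A / (1 + k * D) ** s

def rachlin(A, D, k, s):
    """Two-parameter hyperbola, delay raised to a power
    (Rachlin, 2006): V = A / (1 + k * D ** s)."""
    return A / (1 + k * D ** s)

# Example: subjective value of $100 delayed by 0, 30, and 365 days
# (hypothetical parameters k = 0.01, s = 0.8).
for D in (0, 30, 365):
    print(D,
          round(mazur_hyperbola(100, D, 0.01), 2),
          round(rachlin(100, D, 0.01, 0.8), 2))
```

All four curves start at the full amount at zero delay and fall monotonically, which is why, over the delays tested, they can fit the same choice data almost equally well.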

The new 6D mathematics can be used to analytically derive a 1/4 power law that agrees with the measured results of the four models investigated in the comparison study, and potentially provides an explanation for the four variations of delay discounting as well. Furthermore, if the derived power law that describes delay discounting uses the same 6D mathematics that provides a universal explanation for all the known laws of physics, chemistry, and biology, this serves as powerful validation of its consistency with universal laws.

In the previous chapter, we became familiar with the metaphor of the superorganism to describe society, which first appeared in the writing of the political philosopher Thomas Hobbes. Over the course of the last two centuries, the French and American revolutions ushered in a global living lab of experiments in democracy. Were Voltaire, Socrates, Aristotle, and Plato right in their concern that democracy can be bad for the people? Some of their concerns seem to be borne out today in a number of ways, as pointed out above. A potential solution to this is the Democratic Quality Vector.

The transferable voting system proposed in this book explores quite a different democratic experiment, one that may be closer to the hearts of these philosophical giants while still steering clear of enlightened monarchism and totalitarian dictators. It may just be the medicine that can treat the sick social superorganism and make it healthier. It is a delegative voting system; delegative democracy and liquid democracy are examples of transferable voting systems. The main difference between transferable voting systems and mainstream representative voting is the rule that allows any individual to legally "transfer" their vote to another person. This representative does not need to be running for office against other individuals in the normal sense of a political representative; they are simply another individual nominated by choice, as long as they meet some minimum criteria set out in the voting system's policies. Participating voters can nominate other participating voters for a variety of reasons. For instance, I could nominate you because I'm too busy, or a child of the minimum voting age may nominate a guardian to represent their vote. Ideally, a person nominates another because of their perceived expertise.

Basically, if you trust another person on a subject, or in some other circumstance, you can transfer your vote to them. In its purest form, transferable voting attempts to exercise meritocracy through a transparent system of trust. This transferable vote can be executed for any issue that arises. If I'm an expert in water engineering, I might feel competent to vote directly on a proposal for a new water treatment plant affecting my community, but if the issue concerns drug addiction treatment, I may feel out of my depth and transfer my vote to my friend, who is a researcher in drug addiction. The result is a "referral system" that can produce many representatives, not just one. Instead of having a few hundred representatives in a country the size of the United States, we could have many millions. The idea of a centralized house of representatives becomes redundant, needless to say. It is a radically different voting system, decentralized and far more participatory, and such decentralized systems are hard to game.

Although any system with explicitly defined rules can be gamed, it is many times more difficult to game such a distributed system. Lobbyists and influencers could not easily sway a large number of different representatives. By preventing the concentration of power, vote transfer systems lessen the opportunities for corruption and abuse of power. It would be far more challenging for a disease to take hold in the governing body when one cell cannot easily influence many other cells. The diffusion of vote transfers resembles the diffusion of information on social media, which follows Kleiber's-law-type power laws.

In our transferable voting system, when one vote is transferred to a second person, we define a unit of value to add to the vote. This value is called "one unit of social capital" and is a measure of trust as we transfer our vote through the social network. Creating a mathematical entity with such characteristics creates a new set of behaviors. With vote transfer, we may allow the transfer to happen without limit; in a large social network of voters, the transfer may happen many times. If the vote is passed from the second to a third person, the amount of added trust is smaller, because the first person, who initiated the transfer, may not have the same level of trust in the third person in the chain. But because the second person trusts the third, trust still accrues along the chain of transfers. This is reflected in a metric whose increments decrease monotonically as the transfers cascade through the network: the trust increases less with each successive person the vote is transferred to. The vote transfer has both a magnitude and a direction, and hence can be represented as a vector. In the case of transferable voting, the vector is a measurement of the collective trust in the directed network. It can also be interpreted as the collective quality of the final vote.
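The accumulation described above can be sketched in a few lines. The geometric fall-off used here is our own illustrative assumption, not the book's derived formula; it captures only the qualitative behavior: total trust grows with each transfer, but each hop adds less than the one before.

```python
# Toy sketch of the vote-transfer trust metric. Assumption (ours, for
# illustration): the first transfer adds 1 unit of social capital, and each
# further hop adds `decay` times the previous increment, so the increments
# decrease monotonically while the total keeps rising.

def chain_trust(num_transfers, decay=0.5):
    """Accumulated trust after a chain of vote transfers."""
    total, increment = 0.0, 1.0
    for _ in range(num_transfers):
        total += increment
        increment *= decay
    return total

# Totals after 1, 2, 3, 4 transfers: each hop contributes half the last.
print([round(chain_trust(n), 3) for n in range(1, 5)])  # [1.0, 1.5, 1.75, 1.875]
```

The magnitude computed here is one component of the vector; in the full scheme the direction is given by the path the vote takes through the directed social network.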

Proponents of delegative democracy believe that by leveraging existing trust relationships and social capital, vote delegation can help increase trust in government, and therefore its effectiveness.

Citizens in a liquid democracy would have a more direct engagement with government and would thus be in a better position to engage in cooperative endeavors as opposed to citizens of a representative system, setting in motion a "virtuous circle" through which trust promotes cooperation and cooperation promotes trust (Putnam, 1993). As history has demonstrated many times, compliance with societal expectations is inefficient when based on fear of authorities, rather than on internally regulated and positively enforced norms. A transparent and engaging method for encouraging this is explained in the next chapter.

# DQV APPLICATION TO THE LEGAL SYSTEM

#### "We have come to accept almost without question the monetary evaluation of the immeasurable perturbations of the spirit. But why should the law measure in monetary terms a loss which has no monetary dimensions? . . . To put a monetary value on the unpleasant emotional characteristics of experience is to function without any intelligible guiding premise." JAFFE 1953:222

Psychometrics is the science of measuring mental capacities and processes. While it is within the domain of psychology, it is interesting to note that economists and lawmakers publish a great deal of novel research in this area. Every single day, in courts all over the world, lawyers, judges, and juries not only decide who is right and who is wrong but must also decide how much mental anguish and suffering was caused, by whom, and to whom. Most importantly, they must decide what that anguish is worth in economic terms. Currently, this is all done subjectively. It is possible to imagine one day developing a standard psychometric scale to measure emotional anguish. We have a suggestion for that.

Psychological measurement is hard because, unlike the outer, physical world, the inner world is not easy to measure objectively. For example, a piece of art can elicit as many responses as there are viewers of the art. Psychometrics is the scientific field concerned with the measurement of psychological variables such as happiness, intelligence, perceived emotional pain, etc.

Psychometrics finds application in areas such as hiring in the business world, student assessment in education, personality profiling and the determination of mental health in psychiatry and psychological assessment, and sanity in criminal cases in law.

To turn art into a science, imagine taking the entire spectrum of human emotion and defining a new psychometric over it: a mapping that could unambiguously measure the degree of an emotion. What if we could all agree that X was worth 5 units of happiness while Y was worth 7.5 units of happiness?

With our new systems of math, we can make that happen.

The math that can convert intangibles to tangibles in many areas, including psychometric measurement, is a novel application of 6D-derived power laws. By developing a way to improve the objectivity of psychometric measurements, we can effectively transform what was once vague intuition into a measurable and standardized variable upon which rational decisions can be based.

Using the 6D spacetime mathematics, Teeple & Himann have identified new application areas where an objective scale of emotional pain and suffering can have important impacts. Such a scale can be used to implement a more objective financial compensation methodology in court cases and insurance claims. It can also be used to determine if goods or services reach minimum customer satisfaction levels, or the effectiveness of advertising in the feelings it evokes in its audience.

Our objective in developing such laws is, of course, to convert the intangible qualities of intuition into tangible ones and develop a psychometric tool that can improve the function of democracy.

Financial Compensation for Court Cases

In courts around the globe, countless judgments are made each day, with subsequent socio-economic ramifications. Lawyers, judges and juries not only decide who is right and wrong, but also how much to compensate for mental and emotional anguish caused by another party. Currently, this is all done subjectively, on a case-by-case basis. Certainly, legal precedent comes into play as well, but none of the precedents have ever been subjected to standardization on an objective psychometric scale of compensation. Because these damages are impossible to quantify by any conventional measurement, it is an area where the jury has substantial discretion that is not consistent with an otherwise highly structured and systematic legal system.

Subsequently, compensation is currently still a very subjective process. This subjectivity brings about large inefficiencies in the judicial system. Even when litigants and defendants agree to settle a case, they may still be embroiled in years of litigation afterwards in order to put a final agreed price on the anguish caused. Many of these legal challenges arise from the subjectivity of human emotions and experiences, and from our inability to establish independent and objective metrics to measure exactly how we feel, combined with the enormous variety of individual experiences and subjective realities.

Consider a common class of such court cases, divorce. For couples who ultimately file for divorce, emotional turmoil often takes years to build up before they reach a crisis level that requires a legal intervention.

Conflicts arise when partners no longer hear each other, that is, when they are not sensitive to each other's emotional suffering. A conflict can only occur when each partner feels their own emotional pain is more valid than their partner's. Which party's subjective reality is more valid? Is there such a thing as an objective reality that can nullify any individual claim of subjective priority? Noneconomic damages (which include pain, emotional anguish, humiliation, reputational damage, loss of enjoyment of activities, and the worsening of prior injuries) caused by emotional pain remain a form of relief that is woven into the fabric of law.

Despite the intangible quality of mental anguish, people going through a bitter separation consider it very real and concrete. Perhaps a new and dependable system could save them the pain and suffering of a long court battle? Today's lawyers play "the game" knowing that qualitative losses of value cannot easily be enforced in the courtroom, or at all. Some of these lawyers will leverage the defendant's qualitative losses to advance their position and unfairly increase their own financial gain. Such losses include the value of time embroiled in a lengthy court battle, emotional damage to the people in the litigants' social circles, and the negative stress that affects health and well-being, among many others. It is even more unfortunate that the Law Society and the Ministry of Justice allow this unethical behavior and destruction of value, given the massive effect it has on the social capital of the country. These "trusted" officers of the court are there to provide the lubrication that expedites efficiency and to advocate for fairness, despite the current inability to measure value accurately.

This unequal balance has led to multiple controversial outcomes; indeed, the subject of damages for noneconomic injuries has a long and contested history (O'Connell & Bailey, 1972). Although there is some debate about whether to stop the guessing and remove plaintiffs' ability to recover noneconomic damages (Calfee & Rubin, 1992; Geistfeld, 1995; Jaffe, 1953; Morris, 1959; Plant, 1958), agreement that psychological injury is real and measurable is now almost universal.

But because these damages are not easily quantifiable, this is an area where the jury has substantial discretion. Consequently, compensation is a very subjective process within an objective framework, leading to unpredictable outcomes. Many of these legal challenges derive from our inability to objectively measure feelings, combined with the enormous variety of individual experiences and subjective realities. Emotions are centered in subjective experiences that people represent, in part, with hundreds, if not thousands, of semantic terms. We aim to clarify this area with clear, evidence-supported equations.

Claims about the distribution of reported emotional states and the boundaries between affective categories – that is, the geometric organization of the semantic space of emotion – have sparked intense debate.

We argue that the best way to quantify feelings lies in health utility measurement, an approach developed in health economics for valuing health outcomes in public health and medicine. It holds considerable promise for bringing greater rationality and consistency to assessments of injury-related noneconomic loss. A health utility study is a comprehensive empirical investigation, conducted at significant scale, that gathers consensus on the subjective value of an injury. The implementation of our 6D mathematics in this area is modelled on this already successful approach. These proven metrics could then be used to set benchmarks for awarding the value of damages.
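As a rough sketch of how a health-utility benchmark could be monetized (the symbols below are our own illustration, not drawn from the cited literature): if a consensus study assigns an injury a utility weight $u$ between 0 (worst outcome) and 1 (full health), the noneconomic loss $L$ over a duration of $T$ years could be benchmarked as

```latex
L = V \times (1 - u) \times T
```

where $V$ is a per-year monetary value that the court system would calibrate once, and $(1 - u)$ is the consensus severity of the injury. This mirrors the quality-adjusted life year (QALY) accounting used in health economics.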

In essence, we plan to change the perceived value of emotional pain. One of the ways we plan to do that is by showing a comparison using the graph below. The authors of the paper "Feasibility of a Health-Utility Approach to Quantifying Noneconomic Losses from Personal Injury" empirically mapped the perceived intensity of many types of personal injury by polling over 4,000 participants and averaging their opinions. The graph below summarizes their research.

This information can be used to develop a severity-weight-versus-financial-compensation power law derived via 6D mathematics. Power laws are so common in biology that we hypothesize this law takes a form similar to Kleiber's Law, as discussed in a previous chapter; psychology is a biological function, after all. Generally speaking, the two extra dimensions (6D projecting to 4D) will enable tracking of "memory" discounting or "emotional" discounting, meaning more information can be tracked more accurately using 6D mathematics. The vector quantity employed would track a severity weight, a scalar tied to the specific subjective emotion. The "momentum" of the emotion can then be forecast using 6D predictions as it travels forward in time. It should also theoretically be possible to predict how an emotion affects an adjacent observer. For example, if a case involves a murdered child, the 6D mathematics could predict the associated level of trauma experienced by various family members and provide an objective measure for jurors to use in making a final decision.
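As a sketch of the hypothesized form (the constants $a$ and $b$ are placeholders to be fitted to data, not values given in the text), the compensation $C$ for a severity weight $s$ would scale as

```latex
C(s) = a\, s^{b}
```

in analogy with Kleiber's Law, $B = B_0 M^{3/4}$, in which metabolic rate $B$ grows as the 3/4 power of body mass $M$.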

The power law can be tested against a diverse dataset of actual court awarded financial compensations spanning a spectrum of health conditions, and the curve can be optimized for a best fit of the dataset. Then the relationship can be tested in new court cases. After successful testing, this metric could serve as a high accuracy legal tool to help judges, jurors, and lawyers reach an optimal court settlement for injury. As it is a flexible tool, the power law can be tweaked for cases involving a multiplicity of injuries. If successful, it can begin to establish a science of financial legal compensation, replacing the subjective court compensation with an objective methodology.
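The fitting step described above can be sketched in a few lines. The calibration data below are hypothetical placeholders, not actual court awards, and the helper `fit_power_law` is our own illustrative name; a power law becomes a straight line in log-log space, so ordinary least squares suffices:

```python
import math

def fit_power_law(severity, awards):
    """Fit awards = a * severity**b by ordinary least squares in log-log space."""
    xs = [math.log(s) for s in severity]
    ys = [math.log(c) for c in awards]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the exponent b.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    log_a = my - b * mx  # intercept gives ln(a)
    return math.exp(log_a), b

# Hypothetical calibration set: severity weights and awarded compensation.
severity = [0.1, 0.2, 0.4, 0.6, 0.8]
awards = [50_000 * s ** 1.5 for s in severity]  # exact power law, for illustration only
a, b = fit_power_law(severity, awards)
# On this synthetic data the fit recovers a close to 50,000 and b close to 1.5.
```

In practice the dataset would be noisy court awards rather than an exact curve, and the quality of the fit (not assumed here) would decide whether the power-law hypothesis survives testing.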

The Health Utility Graph (Fig. 19) shows how the health utility map might look if 6D mathematics is applied to derive a hyperbolic or exponential psychometric law for human perception that includes delayed discounting.
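For reference, the two standard forms of delay discounting from the behavioral literature (how they would be embedded in the 6D framework is left open here) are, for an undiscounted value $A$, delay $D$, and fitted discount rate $k$:

```latex
V_{\mathrm{exp}}(D) = A\, e^{-kD}, \qquad V_{\mathrm{hyp}}(D) = \frac{A}{1 + kD}
```

The hyperbolic form declines more slowly at long delays, which is why it typically fits human judgment data better than the exponential form.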

# RADICAL TRANSPARENCY IN CROWN CORPORATIONS / STATE-OWNED ORGANIZATIONS, THE LEGAL SYSTEM, AND THE LAW SOCIETY

Canadian Crown corporations first emerged in the 1870s and are flourishing today, with hundreds of such bodies delivering services and accounting for billions of dollars of revenue. Most often, the creation of semi-autonomous governmental bodies is a response to pressures that require action on the part of governments, and in many of those instances governments have formalized public-private partnerships. Crown corporations are defined as "institution[s] with corporate form brought into existence by action of the Government[s] of Canada to serve a public function." As such, Crown corporations reflect the values and practices of Canada's public policies while making use of the relatively more flexible and efficient practices of the business sector. As these bodies operate within, or parallel to, governmental processes, a prescribed legal framework dictates the way governments can set Crown corporations into motion.

The initial impetus for the CBC was to help combat what many perceived as harmful and dominating American cultural incursions into Canadian territory, geographical and abstract. Second, the public broadcasting system was to fill a cultural gap left open by private media companies whose mandates had more to do with immediate financial returns than with large-scale cultural and nation-building projects.

Government intervened in transportation by establishing the Canadian Pacific Railway (CPR) after private enterprise had assessed this epic project as too risky; indeed, it intervened after multiple railways failed. In establishing the CPR, the federal government employed a type of "defensive expansionism" to counteract the impact of the American empire within Canadian territories. A failure to deliver on the promise of a national rail line would have risked British Columbia becoming a sovereign entity competing with the confederated Canadian provinces, or joining the American federation.

Although construction of the CPR took place between 1881 and 1885, the Crown corporation that oversaw this project was not established officially until 1922. No fewer than two hundred railway companies became insolvent prior to its establishment, which was the prompt the national government needed to intervene. Thus, the Canadian National Railway company (CNR) emerged in the early 1920s, when fear of American intervention was at a comparatively low ebb. The creation of the CNR responded to the ineptitude and risk aversion of private corporations, as well as the enduring need for an efficient network for the transport of people and goods. In formalizing its relationship to the national government in 1919, the CNR became the first of Canada's many official Crown corporations.

In the preceding examples, the perception of large-scale public needs justifies the presence of Crown corporations in Canada's institutional landscape. These needs include essential services (such as a national rail-line), cultural information (reliably provided by the CBC), or, more rarely, emergency services.

But the definition of an emergency service should not be abused, as this has the potential to create serious problems.

Crown corporations, and other like entities, offer governments a relatively swift and self-sufficient platform for organizing public activities and enacting public policies. Passionate politicians frustrated by a slow-moving democracy that must adhere to old public policies are attracted to this easy way of engineering around a failing democratic structure. In principle, Crown corporations enjoy many of the efficiencies of the private sector while still being required to adhere to modest governmental standards of transparency and fairness. Crown corporations therefore undergo forensic accounting procedures, as governments do, and report directly to the legislative or executive branches of government. On most occasions, however, a narrow financial audit has simply not been enough to detect unethical behavior. Even so, this inadequate technique for distancing a company from old policy and engineering around a slow democratic process is trending.

In essence, the quickly turning world of technology has left democracy struggling to keep up. The difficulty of aligning our civic duties with the exponential change we see in the world has forced the growth of small pockets of strong leadership, and strong leadership, in some places, is a term equivalent to dictatorship. One of the parts of government where this outsourcing is most concerning is the legal system. Oversight of court officers is outsourced to a private organisation called the Law Society, a group directed through strong leadership rather than operating as a democratic decision-making organisation. In this case, democracy grants court officers special trust and responsibility to conduct business on behalf of the government, but also approves a dictatorship-like system to run a large portion of its governance. Unfortunately, this current system of governance is open to serious abuses.

Representative government is vulnerable because, by definition, it concentrates power in the hands of a few, and those few are still very humanly flawed. Thus, it operates ideally only when two conditions are met: first, that none of the people in power are corrupt; second, that none of the people in power are incompetent, even accidentally. To the extent that either condition fails, representative government will be proportionally less effective. In the same vein, no citizen would knowingly vote a corrupt or incompetent person into power, unless there was something to personally gain or they were misled. These are the ideal conditions, and they are often not met.

Therefore, actual oversight of institutions is rarely done in the manner to which we aspire. We want significantly more active oversight, in order to prevent the abuses of state. This is one place where new versions of governance can actually improve outcomes.

One place where the Democratic Quality Vector could find meaningful application is in Crown corporations and other organizations responsible to the Government. In Canada, the federal or provincial government can use an Act of Parliament or an Act of a provincial legislature, respectively, to establish a state-owned enterprise, owned by the Sovereign of Canada or the province, called a Crown corporation. Such organizations are shielded from constant government intervention and legislative oversight, and so enjoy great freedom from the direct political control that applies to normal government departments. They are created to provide a public service that the provincial or federal government deems necessary but which is not being met by the private sector. These may include services that aren't profitable, such as ferry or air services to remote communities in Canada. They now exist at every level: federal, provincial, and municipal. Other countries have similar institutions, often called state-owned enterprises (SOEs). They are all so omnipresent that they need systems of better oversight.

In fact, a global list of such companies shows just how influential they are in the economy of many countries. The 2018 OECD report State-Owned Enterprises and Corruption states that 22 percent of the world's largest companies are SOEs. However, like every other institution, these face many challenges, such as poor management and low morale.

Therefore, good corporate governance is extremely important to ensure there is a level playing field in the marketplace, and also that it is corruption free and supports quality public service delivery. The report analyzed data from 347 SOEs in 28 OECD countries (including Canada) and 9 non-OECD countries. In the Executive Summary, the main findings were:

  * In the face of known corruption risks, SOEs generally appear less risk-aware or less apt to take action than private companies.

  * SOEs report the greatest obstacles are the opportunistic behavior of individuals both inside and outside the SOE, as well as the perceived lack of integrity of SOEs from the public and political sector.

  * SOEs with public policy goals report higher risks of corruption than SOEs that have entirely commercial goals.

  * The risk of corruption is greatest in the oil and gas, mining, postal, energy, transportation, and logistics sectors.

  * Instances of corruption reported most often occurred at the non-management or mid-level management level.

The report goes on to say, "Opportunistic behavior leading to corruption may be derived from a 'too public to fail' mentality in which SOEs are protected by their state ownership, their market-dominant position or their involvement in the delivery of public services, and are insulated from the same threat of bankruptcy and hostile take-over that private companies face. Opportunistic behavior may also arise out of SOEs' operations in sectors with high value and frequent transactions or within complex regulatory frameworks that, unless well-designed, can provide a smokescreen for non-compliant behavior." Again, this shows how omnipresent SOEs are, and how badly they need some sort of enhanced oversight.

In Canada, chartered companies were the precursors of Crown corporations; the most famous was the Hudson's Bay Company, founded in 1670 by royal charter of King Charles II. When the British Parliament created the new country of Canada in 1867, part of the constitutional bargain was the creation of a national railway system linking the various British colonies. Hence the first state-owned enterprise, the transcontinental Canadian National Railway (CNR). Over the succeeding years, the CNR became a giant conglomerate that also invested in shipping, hotels, and media, spinning off famous Canadian Crown corporations such as Air Canada, the Canadian Broadcasting Corporation (CBC), Via Rail, and Marine Atlantic. Today, there are 49 federal Crown corporations and many subsidiaries, and an even larger number of provincial Crown corporations.

Collectively, they are a massive part of a nation's economy and, as shown above, are more prone to corrupt activities. As they are our public property, we are responsible for their actions. Delegative democracy may be the solution to this problem.

These entities have both benefited governments and created challenges for them, sometimes to their significant detriment. And despite some public resistance to Crown corporations, these semi-private institutions have significantly expanded in number, reach, and power since the 1990s.

Another criticism of these enterprises comes from a 2013 study of state-owned enterprises by Crisan and McKenzie, which criticized the 50 percent government ownership criterion. They noted that many corporations have a government stake lower than 50 percent, so this arbitrary rule leads to incomplete lists; subsequently, the lists may not represent the actual penetration of government into the economy, and there may be many more SOEs than appear on the typical lists. Another downside is that directors of Crown corporations are political appointees. Since shares of Crown corporations cannot be traded publicly (otherwise they lose their non-tax status), executive compensation is not based on corporate norms such as stock options or share price performance. And since Crown corporation executives' pay is not pegged to performance, it is very difficult to terminate an executive for poor performance. DQV implementation could promote a more ethical system of oversight, leading to improved performance of Crown corporations.

As mentioned, Crown corporations are shielded from legislative oversight. They are not required to submit corporate plans and budgets for government approval or to undergo examination. These exemptions are designed to shield them against potential political interference, according to the Treasury Board. But these measures can backfire and shield Crown corporations from necessary accountability, making them perfect targets for high-level corruption or recklessness. History shows that corruption indeed exists within such organizations. In 2003, Auditor General Sheila Fraser investigated the federal sponsorship program, revealing concerns that led the Gomery Commission to conduct a public inquiry. The Gomery Report concluded that $2 million was awarded in contracts without proper bidding. This led to a number of improvements designed to increase transparency:

  * Split the CEO and the chairperson of the board into two distinct positions

  * Make the CEO the sole representative of management before the board

  * Bar public servants from serving on the boards of Crown corporations

  * Appoint the auditor general as the sole or joint internal auditor for all Crown corporations except the Bank of Canada.

  * Institute new guidelines for appointing directors and CEOs that allow for greater input from the board of directors and members of Parliament

  * Make several Crown corporations subject to the Access to Information Act that previously were not

All of this means that these imperfect institutions have many possible layers of dysfunction, as do private institutions. These layers of dysfunction lead to sub-par performance and, ultimately, corruption. The Democratic Quality Vector can help prevent that.

Corruption has also been exposed in provincial Crown corporations. In 2006, two vice-presidents of Quebec's liquor control board were caught in a price-fixing scheme; as a result, provincial governments were forced to introduce new rules to increase transparency. In 2009, Ontario Auditor General Jim McCarter released a scathing report charging that the Ontario eHealth Crown corporation had wasted $1 billion of taxpayers' money. Sarah Kramer was hired by eHealth chair Alan Hudson as CEO, but she ignored normal procurement procedures and hired consultants without going through the normal screening process. The board of directors felt it had little power to critique her, however, since she had been hired by the chair. McCarter found that charges that favouritism was shown toward certain companies "without giving other firms a chance to compete were largely true." At one point, the eHealth program branch had fewer than 30 full-time employees but an army of 300 favored consultants. Again, a terrible outcome for a public institution, and another problem that delegative democracy could have avoided.

In 2017, Auditor General Michael Ferguson prepared a report for the Defence Construction Canada Crown corporation, recommending that strong fraud prevention systems be installed. While the Auditor General did not find a case of fraud, he found that the nature of the corporation's mission means it must be alert to the potential for fraud to occur and go undetected under present conditions. Auditor Marise Bedard wrote, "This weakness matters because no organization that safeguards public resources is immune to fraud risks," and "If undetected, fraud can divert public funds to unrelated private interests or allow competitions to favor suppliers who provide less value for money. Moreover, a lack of measures to monitor and mitigate fraud systematically can undermine public trust." As trust is the essential glue that holds our society together, a system that increases trust should have been installed.

In 2018, corruption in SOEs in two different countries intersected in the case of the South African Gupta brothers, involving a loan of CAD $41 million from the Canadian Crown corporation Export Development Canada (EDC) to the Gupta brothers to purchase a $52 million private Bombardier jet. A collection of the Guptas' emails, leaked in 2017, exposed years of corruption; among them were emails between EDC and the Gupta brothers going back to 2014. Subsequently, EDC has come under the spotlight for these embarrassing emails, which clearly implicate it. The Gupta brothers are the subject of a massive South African criminal investigation into their central role in "state capture", the Guptas' control of South African SOEs. This could have been avoided had there been some form of delegative democracy in the governance of these institutions.

On March 9, 2018, the Toronto Globe and Mail published an article entitled "Export Development Canada is the Death Star in the Canadian Economy". EDC is a behemoth Crown corporation: the second- or third-largest export development bank in the world. In 2016 its loans amounted to more than $100 billion, accounting for 5 percent of Canada's GDP. It is opaque and lacks oversight. The Globe and Mail reported that while EDC has been subject to the Access to Information Act since 2007, an exemption in the Export Development Act treats as confidential any information relating to a client, along with any documentation that may have been filed during or after the due diligence process. This allows EDC's year-end financials to omit the exact amounts of the loans it extends, the lending rates, and the precise terms of its financial transactions. The Globe and Mail story notes that Bombardier was hoping to secure part of a US $1.2 billion South African train contract, of which the Guptas were the gatekeepers, and raises questions about whether the loan for the airplane was related. At the time Bombardier approached EDC for the loan, the Gupta brothers were already so politically toxic that no other bank would lend them the $41 million. How, then, could EDC loan the Gupta brothers $41 million, when EDC's own regulations require due diligence to assess the profiles of companies or individuals against official lists covering terrorism, corruption, or sanctions? In late 2014, when the loan was approved, the Guptas were being investigated on at least three counts. With a more rigorous governance process these issues could have been avoided. While the Gupta brothers' loan is the latest debacle to come to light, EDC has had numerous scandalous clients.

A number of Crown corporations have since been privatized, including Air Canada, Canadair, Canadian National Railway, Petro-Canada, Telesat, and many others. Governments have sold these for a combination of reasons: to reduce the federal deficit, because the market failures that justified government ownership no longer existed, because of transparency issues, or because the private sector could turn a profit. Opponents of privatization argue that privatizing government responsibilities can be harmful in many respects: what was once a public service designed to assist the most vulnerable in society may be priced beyond the reach of those very citizens in a privately operated organization.

Many such Crown corporations were therefore never intended to turn a profit, but to provide an essential service, yet how well they provide that service is rarely measured effectively. A liquid-democracy style of governance may provide accurate, real-time measurement of these services.

Hence, for many such Crown corporations, selling them creates new social service problems. The solution may not be to privatize them, but to address the challenges they face. Where transparency is a factor, Crown corporations can improve by allowing for a greater measure of democratic accountability.

Delegative democracy can help to realize much-needed reforms to Crown corporations by making the current systems of governance more participatory, more transparent, and therefore revolutionary. A Canadian Assembly of proxy voters operating in a delegative democracy system holds the promise of a renewed relationship between governments and Crown corporations, so that the latter could become more accountable for their actions, something which should be of interest to every Canadian.

Elected representatives regularly scrutinize the actions of arm's-length public bodies and put hard questions to the government agents responsible for their creation and service delivery. Traditionally, cabinet ministers have held responsibility for Crown corporations and staked their reputations on successful performance. The close ties between ministries and Crown corporations bolster governmental oversight of these hybrid public-private institutions. Recently, some governments have not closely adhered to the principle of ministerial responsibility, which may produce negative effects on the future reporting and operations of these organizations. How should we tackle these less-than-ideal performances?

When seeking reforms specifically for the management of Crown corporations, reformers should ask, how can governance of these entities improve through an increase in democratic control? Delegative models of democracy will resolve the issues caused by the blurring of private and public realms that characterize Crown corporations.

How might the presence of thousands of proxy voters alter the reality and management of Crown corporations? I propose that Canada adopt a proxy voting schema that provides additional oversight of Crown corporations and helps to inform political actors about the myriad issues they must address using the principles of delegative democracy as a guidepost. Specifically, the formation of "a Canadian Assembly" can address these needs by gathering qualified proxy-voters together to question public policy as it relates to Crown corporations and to determine the scope and activities of such entities. Further, using a DQV within the organization itself will provide voting patterns that would highlight potential unethical behavior for continuous feedback.

Executives of Crown corporations may initially see the oversight processes of an internal proxy voting body as an inconvenience. However, boards of directors would still steer their proverbial ships, as long as they manage to stay on course. A Canadian Assembly would add a new layer of oversight to the activities of public-private entities, one that encompasses the views of a broader range of people than current, insular, hierarchical systems allow. Essentially, it is there to ensure that ethical rules are followed. This development would strengthen democratic systems by adding more voices to civic discourse and by ensuring that semi-public institutions act in a way that honours public trust.

Such a process could also enable participating delegates to speak as to whether structural changes in the management of the Crown Corporation need to be forthcoming. Labelled "structural heretics" by author J.E. Hodgetts for their opaque governance and arm's-length distance from ministers, Crown corporations could benefit from further direct feedback on their internal structures.

Despite recent declines in voter turnout across North America, Canadians remain passionate about civic issues. Issues surrounding Crown corporations elicit particularly lively debates. Their concerns can be adequately addressed by delegative democracy and proxy voting.

This will improve the relationship between government agencies and Crown corporations, thereby improving trust in public systems, and improving effectiveness of these corporations. Specifically, the proposed liquid democracy could benefit democratic institutions by providing rigorous, ongoing, and much-needed oversight.

If transferable voting shows a marked improvement in the governance of state-owned organizations, then the legal system would also seem likely to benefit from a modified governance structure, dramatically improving entire cultures.

# INFORMATION OVERLOAD

We live in an Age of Distraction, a state described by Crawford, Hassan, and others: distractions abound in modern culture and can reach you nearly everywhere. Today, solutions to societal ills, such as finding an agreed-upon truth and improving group decision-making processes, seem increasingly out of reach.

There are many causes of societal disengagement: economic dislocation, poor health, and information overload. Discussing each cause of alienation provides an opportunity to think through how proxy voting systems can alleviate voter apathy and turn what exists as distractions today into focused struggles for a more just society. The present argument models how rising above the muddled and maddening confusions of everyday life allows us to re-assess greater meaning in our role as community members and citizens.

Distractions create power vacuums wherein bad actors can profit, free from accountability and citizen oversight. Conversely, only an engaged citizenry can hold politicians and bureaucrats responsible for their words and actions.

Robert Putnam's ground-breaking research in Bowling Alone demonstrated that although we are as busy as ever, present life choices are detrimental to social capital. That means entrenching a more democratic ethic will be challenging.

One of the issues with imposing a democratic ethic is the lack of commonalities. Many of our ideas are not shared widely.

One reason for this is the internet. Google, for example, has placed a staggering abundance of information at our fingertips, but is more information necessarily better? Does too much information in fact work against the rational pursuit of truth?

The specialist may refine internet searches but the general reader may agree with the opinion of film critic Roger Ebert, who compared internet research to "using a library assembled by pack rats and vandalized nightly." The painful distraction the internet provides does not come from the variety of rich sources of information now available on the worldwide web, but rather the large amount of irrelevant or fraudulent data that one must parse through as well.

David Shenk has described the state of information overload as "data smog." Shenk's inquiry into data smog finds that, over a short period of time, our society has gone from a state of informational scarcity to an opposite and equally confounding age of informational overload.

Shenk decries information overload as "the noxious muck and druck" that "obstructs [...] contemplation" that "spoils conversation, literature and even entertainment." The author worries that alongside increasing stress levels, the distraction of digital data and messaging reduces the human capacity for skepticism, producing "less sophisticated [...] consumers and citizens." The harmful effects of information overload that Shenk traces do not stem from increased quantity of information so much as from reduced quality of information.

This degrading of value is not limited to the electronic realm. In response to funding cutbacks in the digital age, print media outlets have adopted the practice of disguising advertisements as articles, known as "advertorials." This disguises advertising as something more trustworthy and further decreases the quality of information. Here as elsewhere, the decreased quality of information available on the web forces an undercutting of professional standards in competing forms of media: the task of differentiating between content and advertising, formerly the purview of broadcasters and publishers, now falls to the individual user, who often does not have adequate tools for the job.

These effects also impede the efficiency of businesses. Information overload and digital smog introduce new sets of problems into the corporate environment, as Angela and Anne Morris have demonstrated, eroding workers' trust in the information circulating around them. The authors write that "an abundance of information, instead of enabling people to do their job, threatens to engulf and diminish his or her control over the [workplace] situation."

The ranks of the "information poor" have, in one sense, shrunk: by democratizing internet service through public access points, our society has put information within nearly everyone's reach. At the same time, the number of citizens who are "information poor" has grown in another sense, because the availability of quality information has decreased. By simultaneously increasing the quantity of data available and degrading its quality, we have made finding truth, and through it trust, very difficult.

As David Bawden observes, "new information and communication technologies, aimed at providing rapid and convenient access to information, are themselves responsible for the high overload effect."

The true democratization of information will only come to fruition if citizens are equipped with the tools necessary for navigating the vast sea of data, which we aim to provide.

If we follow the logic of researchers Speier, Valacich, and Vessey, who say that "information overload occurs when the amount of input to a system exceeds its processing capacity," today every type of human endeavor faces major hurdles. Diversity, normally a positive element in any system, degrades the quality of decision-making processes when overabundant, and extends the time and effort that decisions require. Information overload must be overcome if we are to find a reliable way, as individuals and as citizens, to receive, parse, and apply information in the time we are given.

As our solution to this problem, transferable voting strengthens civic participation through its emphasis on building and utilizing social capital. It also encourages a greater pluralism of opinion than representative systems do. If individuals feel that their ideas and opinions will be welcomed, they are more likely to participate in public decision-making processes. Moreover, its low barrier to participation is also conducive to furnishing greater civic literacy and combating information overload.

Today, the prevailing systems of government in the West are in need of reform. As a new approach that has thoughtfully anticipated such problems, transferable voting opens up greater potential for healing institutions than the current system allows.

# ENHANCED POLITICAL FREEDOM

Suffrage is political franchise, the right to vote in a public election. When modern democracy began approximately two centuries ago, only rich landowning white men held the privilege to vote.

Gradually, this right has been extended to other races, to women, and to the poor. There is still one group, however, that is notably missing: children (with their distinctive Umwelt and interoception). Is it plausible that a child has greater access to their intuitive genius precisely because they lack conditioned paradigms and flawed logic? Many view the lack of a political voice for children as a modern-day injustice, just as it was for minorities and women. And just as these historical minorities fought and eventually won their right to vote, advocates have been calling for voting reform for children. We could put this into practice through delegative democracy.

Children are the forgotten constituency of the modern era. In many developed countries around the world, a higher standard of living, modern health care, and improved diets have extended the average human lifespan, producing significant growth in the elderly population. Meanwhile, fertility rates in these same countries have plummeted. Consequently, more resources have been directed to supporting the elderly than to children; indeed, the resources devoted to children have dropped.

As a result, there is a close relationship between these major demographic trends and the rights of children, as the research below shows.

Since children are disenfranchised to the degree that they have no right to vote, whilst the elderly capture an increasing share of the adult votes, policy and the resultant allocation of public resources are skewed in favor of the needs of the elderly. In 1992, using United States data from 1959 to 1990, political scientist Paul Peterson demonstrated this connection between policy and impact on children and the elderly. His research clearly showed how the greater share of the elderly vote resulted in policies that favored the elderly, taking a bigger piece of the social welfare pie for their use, at the same time decreasing the share for children.

The net result of this demographic trend is to further erode the rights of children. The political system needs to change to reflect the real needs of the changing demographics. In spite of the noble voices of the elderly who proclaim concern for the younger generation, research shows that the opposite is in fact the case. The reality today is that children suffer immensely for lack of representation. For a country such as the United States, which built its constitution on taxation with representation, it speaks volumes that 75 million citizens, or 25% of the population, are in effect taxed without representation. Globally, the figure is even worse; children comprise a third of the world's population (UN, 2015), but they remain almost universally disenfranchised. Without a voice, they have to work extremely hard to be counted.

Issuing debt of any kind, whether financial, ecological, or both, is a form of taxing the future generation (Aoki and Vaithianathan, 2009). It leaves the future generation to deal with deficiencies in capital or the natural environment. The debt is incurred by the current political class, with little if any consultation with the next generation. In other words, the youth have no political representation on matters which will dramatically affect them for decades to come. They are the ones most affected by corruption, imperfect decision making, and abuse.

If children were allowed to vote, could their representation correct the current political disequilibrium that emerges from imbalanced representation? Some recent social experiments suggest that it could.

On September 18th, 2009, nine days before Germany's general election, a youth organization called the German Federal Youth conducted a voting experiment called the U18 (for "under 18") to determine whether children voting would have an impact on the outcomes of the general election. The experiment sought to answer the question: what impact would citizens under 18 years of age have if they were given the opportunity to vote in an election? 127,208 children cast their votes at 1,000 voting stations in an impressive demonstration of the high interest among children in participating in the democratic process. The youngest voter was nine years old.

The results speak for themselves. Leaders of the political parties were impressed by the turnout. The mock votes of the children were not counted in the general election, but if they had been, the outcome of the election could have been starkly different. The child voters' views and preferences were highly divergent from those of their older counterparts.

Even if the voting age were lowered, there is another condition that must be met if children are to make real gains: they must turn up to vote.

Unfortunately, recent research shows that the lowest voter turnout is consistently found in the young voters' age bracket, ages 19 to 29, the very ones who could make the most difference in correcting historical imbalances. In the 2014 US midterm elections, only 19.9% of Americans in that age bracket actually turned up to vote (CIRCLE, "2014 Youth Turnout and Youth Registration Rates Lowest Ever Recorded; Changes Essential in 2016"). This was the lowest rate of youth turnout ever recorded in the U.S.A.

This is a clear argument against the notion that some people are simply unfit to vote. Even children, when engaged, are capable of intelligent political action. Delegative democracy therefore has a strong foundation to build upon.

Critics commonly argue that children do not have sufficient life experience to appreciate the consequences of their decisions. This argument goes back all the way to Plato's Republic, written in approximately 380 BC, wherein Plato argues against democracy. In the Republic, Plato establishes that there are true answers as to how a state should be run. Next, he argues that these answers are not obvious and that, generally speaking, the general population will not have them. If leaders are chosen through a democratic process, a charismatic leader who can easily manipulate the public but lacks true knowledge of how to run the state could conceivably be voted into power by an equally ignorant electorate. This has been proven true over and over again. Plato's ideal republic, therefore, is governed by the knowledgeable: the philosophers. Here we see the introduction of the concept of the epistocrat, the ruler who has greater knowledge of normative political truths.

The argument for epistocracy, or rule by the knowledgeable, is set out in a series of four claims in David Estlund's Democratic Authority (Estlund, 2009):

  * There are true, procedure-independent normative standards by which political decisions ought to be judged. (The truth claim)

  * For any demos, it is true that there is a small group of people, the epistocrats, who know those normative standards better than others and, thus, know better what the decisions that conform to those standards are. (The privileged knowledge status claim)

  * For any demos, if it is true that the epistocrats know those standards better than others etc., then these people should have political authority over others. (The authority claim)

  * Thus, for any given demos, epistocrats should have political authority over others. (The epistocratic conclusion)

In this classic work on democratic theory, Estlund presents a theory called epistemic proceduralism, which avoids epistocracy, a system in which only the informed vote. He argues that while a few people probably do know best, this can only be used in political justification if their expertise is acceptable from all reasonable points of view.

These critics hold that children, lacking this maturity, could actually damage the voting process, bringing about undesirable results. Yet numerous research studies show that it is not possible to lump everyone under 18 years of age into a single category. Children's parliaments around the globe have achieved positive political outcomes comparable to adult parliaments (see the Children's Parliament section). In many cases, studies have shown that young teens have far more knowledge of current trends, especially in technology and social media, than some of their parents do.

Again, this shows that most people can be informed enough to be trusted with essential decision-making.

In 1986, demographer Paul Demeny wrote a passing commentary in a paper exploring ways to improve low fertility rates in countries around the world. Developed countries face a double threat of low fertility rates and an increasing proportion of elderly, resulting in a dwindling number of capable workers who need to support a growing population of the elderly.

In a disproportionately older population, voting is skewed towards the needs of the elderly. One way to begin to correct this result is to increase the number of young voters by dropping the voting age. Another way is to do what Demeny proposed in his paper: Giving parents a proxy vote for their children.

Demeny reasoned that such a proxy vote would support more policies that uphold the rights of children. Demeny held that children should not be in a position of having no voice for the first 18 years of their lives and suggested allowing parents to exercise their children's voting rights until the children come of age (Demeny, 1986). In effect, Demeny envisioned a proxy vote in which each parent would receive and exercise an additional half vote for each child under his or her guardianship. Demeny's proposal gained traction in the academic and political science community and has since come to be popularly known as Demeny voting.
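Demeny's arithmetic is simple enough to sketch. The short function below is an illustrative assumption only (the function name, and the default of splitting each child's proxy between two guardians, are ours, not Demeny's): it computes the weight of one guardian's ballot under such a scheme.

```python
def demeny_ballot_weight(num_children, num_guardians=2):
    """Weight of one guardian's ballot under a Demeny-style proxy vote.

    Each guardian casts their own vote plus an equal share of one
    proxy vote per child (half a vote each when two guardians share
    guardianship). Illustrative sketch, not Demeny's exact formula.
    """
    if num_guardians < 1:
        raise ValueError("at least one guardian is required")
    return 1.0 + num_children / num_guardians

# Two parents with three children: each casts 1 + 3/2 = 2.5 votes,
# so the household wields five votes in total, one per family member.
print(demeny_ballot_weight(3))  # 2.5
```

A single guardian of two children would cast `demeny_ballot_weight(2, num_guardians=1)`, i.e. three votes.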

In recent years, a number of attempts have been made to grant children voting rights through Demeny voting. Two cases stand out in particular, Germany and Japan. In 2003 and again in 2008, members of the German parliament introduced the "Kinderwahlrecht" bill (German for "children's voting rights") to the Bundestag (the national legislature of Germany), which would have given proxy voting rights to parents (Weimann, 2002).

However, both the 2003 and 2008 proposals were defeated.

Japan was motivated to seriously consider introducing Demeny voting for the same reasons that the United States and Germany have been exploring it; all three countries have a common problem of an increasingly older population and a shrinking youth population due to low fertility rates. Due to a lack of representation from children, policy trends in all three countries have been significantly skewed to favor the elderly and disadvantage the young. Noticing this, researchers Reiko Aoki of the Centre for Intergenerational Studies at Hitotsubashi University and Rhema Vaithianathan of the University of Auckland authored a research paper that proposed Demeny voting as a political solution in the face of low fertility rates in Japan (Aoki & Vaithianathan, 2009). Citing Japan's 2005 census, the researchers found that within the existing population, the share of the vote was skewed towards those over 55 years of age and that a Demeny vote would rebalance the share of the vote between the elderly and the young.

Without a voice at the table, it is easy for anyone, let alone children, to become disaffected with the democratic establishment. This may help explain low voter turnout once they do get the right to vote: they do not believe that their opinion matters. With this form of delegative democracy, that trend could reverse.

The debate surrounding Demeny voting is complex. Some critics argue that a Demeny vote giving parents additional votes could easily be abused; but every voting system currently suffers some abuse. Other critics have pointed out that certain cultural and religious groups have a much higher number of children on average and could use those votes to over-represent their own political agendas. Japanese researcher Reiko Aoki offers a contrasting view. When interviewed about the fairness of giving parents an additional vote, she replied: "Currently, the pension system (the relationship between premium and receipt) is independent of how many children the person has. With pay as you go, pensions are paid by the current generation. Even if you did not spend time changing diapers, helping them learn to read and write, driving them to piano and soccer lessons, losing sleep, or having to stay home when children get sick, you are paid the same amount as those who did. Is this fair?" (Sharp, 2011)

This indicates that democracy as a whole is predisposed to take care of everyone in it, whether old or young, sick or healthy, poor or rich. Since all of these people are part of the system, and in one form or another pay for the care of others, should we all not have a system where we all have a direct voice in the outcome?

Consider the case of children's parliaments. In a 2014 paper on the subject, researcher John Wall counted at least 30 countries which have some form of children's parliament, including India, Norway, Germany, Slovenia, Bolivia, Ecuador, Brazil, Nigeria, Congo, Burkina Faso, Liberia, New Zealand, England, Scotland, and a Children's United Parliament of the World (Austin, 2010; Cabannes, 2005; Children's United Parliament of the World, 2009; Conrad, 2009). Some of these parliaments have made responsible decisions equal in skill, finesse, understanding, and discretion to those of any adult politician (Wall, 2014). This is further evidence that age alone is not an indicator of wisdom and intelligence.

In the 1990s, one of the first children's parliaments, in Rajasthan, India, composed of 6 to 14-year-olds, had a significant positive impact on its community, improving educational policies, dismissing poor teachers, improving community services, and funding new utilities (Bajpai, 2003: 469; John, 2003: 235–9).

These unique insights could be used to improve the governance of higher institutions, like universities or public utilities.

In Bolivia, the children's parliament worked closely with the adult national assembly, making important key recommendations (Sarkar and Mendoza, 2005), and in 1998, the children's parliament in Barra Mansa, Brazil participated in the allocation of municipal funds, ensuring the city council addressed children's needs (Cabannes, 2005: 191). In another case in Brazil, three students presented a proposal to a plenary session of Brazil's Chamber of Deputies arguing about the hazards of using flatbed trucks to transport school children; their argument was accepted by Congress. These cases illustrate that children are capable of making responsible decisions that impact public policy, and offer another illustration of what any motivated population could share with a larger Umwelt.

We are not the only ones who have noticed this. Progressive governments around the world are enhancing governance by giving children a voice in setting policies. In 2011, the Inter-Parliamentary Union, together with UNICEF, issued A Handbook on Child Participation in Parliament, providing governments with guidelines on how to include children in the decision-making process. Successful children's participatory engagements in parliament include:

  * 2001: New Zealand developed an Agenda for Children based on an ambitious national consultative process in which children were asked to express their society-wide problems and desires (Brown and McCormack, 2005)

  * 2003: South Africa launched the Children in Action (Dikwankwetla) project to include children in some parliamentary hearings and public debates (Jamieson and Mukoma, 2010)

  * The Israeli Knesset now regularly invites children to participate in its child-related committees (Ben-Arieh and Boyer, 2005: 50)

  * The government of Rwanda holds a National Summit for Children and Youth every year around a particular theme (Pells, 2010)

  * 2004: The UK instituted four Children's Commissioners (for England, Scotland, Wales and Northern Ireland), whose purpose is to promote children's concerns in government legislation and policy (Williams and Croke, 2008: 184-7)

  * 2009: The Kazakhstan government worked with UNICEF to organize a political consultative process with youth aged 10–24, called the National Adolescents and Youth Forum (Karkara and Khudaibergenov, 2009)

There is plenty of evidence that children, and by extension adults of every background, can participate in political decision making. This is evidence that delegative democracy can work with any group, anywhere in the world.

Three years after Demeny's proposal, the UN Convention on the Rights of the Child was adopted in 1989, conferring inalienable rights on children, including the rights of free expression, free association, and peaceful assembly, bringing them closer in rights to the adults in their lives. The language of the document recognized children's agency and codified their freedom to be active participants in their own lives and to play a role in civic decisions that affect them.

Curiously, while 193 countries ratified this convention, two countries abstained, Somalia and the United States.

While Article 12 guarantees children the right to express their views in matters affecting them, the gap between theory and reality is large. In 2009, there were 2.2 billion children under the age of 18. Only a fraction of them, those aged 16 and over living in Brazil, Cuba, Indonesia, or Nicaragua, had voting rights (Tremmel, 2009). While the convention serves as an important framework for future work, there is a long way to go to ensure equity for children's rights, and therefore accurate and consistent enfranchisement around the world. If implemented, this would mean equity for the future, and delegative democracy has an important role to play in that scenario.

In a 2011 interview with CBC Radio, Demeny reiterated his position that extending rights to children was a natural progression of the democratic project and that there should be no bias against generational status. This supports the view that any group can be trusted with the vote, since most groups contain many people who trust one another and can make good decisions, yielding a pool of voters who will make the best decisions possible.

Indeed, the knowledge and experience gap argument may apply not only to the youngest of children, newborns, and toddlers; there are many adults who have trouble navigating the immense amount of knowledge needed to cast an informed vote.

But even in these cases, there is growing sympathy for granting them rights in specific circumstances. Children can make good decisions, and are often more intuitive and insightful than we are; these are skills that we want to incorporate into our growing democracy. In recent years children have launched lawsuits against various levels of government, claiming that inaction on climate change is endangering their future world, further showing that some of that plurality are engaged. Courts have begun to agree with them, granting them a space to hear their arguments. We are beginning to value the wisdom of children in this sense. Just as there are government policies that transfer economic resources to parents for the benefit of children, so there should be similar policies that transfer political resources to them, and to others. This reinforces the idea that the wider the range of people involved in democracy, the better the decisions made.

Corruption has occurred all across the political spectrum in almost all of the countries in the world. Could we counter it by giving more weight to the future? To our children's voices?

Human nature does not make experts of us all; some people are simply better at some topics than others. In fact, the more diverse the range of life experience in a room, the better the overall decisions made. We should embrace this in our politics, rather than leave decisions up to a small homogeneous group. In a large organization, a very powerful way to harness the decision-making power of the institution is to empower the people within it. This is what delegative democracy is about: empowering the diverse range of life experience in an institution, thereby creating a healthier and more flexible institution.

The same holds for the voting rights of most citizens. If we are able to tell the government whom we trust to make the best decisions, reflecting our real-world social networks, we will end up with significantly better government. We will only be limited by humanity's own natural failings, not the failings of individual systems.

# DQV EXPERIMENT: THE CORPORATION

Up to this point in the book, we have examined many intuitive theories and ideas. However, to speak the language of conventional science we still need proof, for example, evidence that success can be found in the transferable vote. While there are groups around the world exploring governance in pockets of closed-system groups, inventing a way to transfer votes will give us the ability to create a virtual testing laboratory for various governance designs. We call this technology TAGDit.

We created TAGDit because change is a constant in our lives: if there is one thing we can be sure of, it is that things around us will change. But what about us; how much do we change?

And how much of our change are we aware of? Do we want to change, and what causes us to change? If I asked you to change you would likely meet my request with resistance – you would want to know why. If I provided a reason, you might then offer a counter argument, justifying your current state of being, and defending it against an external imposition. After all, who am I to ask you to change? What authority does one individual possess to make such claims over another? And in addition to the problem regarding where a request for change originates, is not change itself difficult? To answer all of these potential questions, we created a system to collate knowledge. After all, we are trying to ensure that a positive outcome from change can be guaranteed.

One of the variables that needs to be accounted for is cost. We know that social change often comes with a cost. Every innovation, like ours, carries with it potentially negative, unintended consequences. As the French thinker Paul Virilio has argued, the Industrial Revolution's technological inventiveness has unleashed a string of new kinds of catastrophes: the invention of the automobile gave birth to the car accident, that of the boat to the shipwreck, the emergence of the airplane to the plane crash, and so on; to say nothing of the nuclear winter following upon the splitting of the atom.

Something similar can be said to take place in the political sphere. The French political philosopher Pierre Manent speaks of the phenomenon of the "organ-obstacle" or "instrument obstacle," whereby something that once allowed us to achieve a desired objective becomes the very obstacle to achieving our aim. The examples Manent provides include that of the law, which has the aim of protecting the weak from the strong but often results in privileging the strong over the weak, as well as that of the sovereign state, which was founded to guarantee peace among individuals but has become a major factor in modern warfare.

With all of this in mind we might ask about the Democratic Quality Vector itself and wonder whether it too will bring forth new kinds of political catastrophes – or at least certain inherent negative possibilities – not otherwise intended by its early advocates. Yet, is it not our duty to change, to seek betterment, to strive for what is greater – or for the good at large?

Surely, all human beings desire what is good by nature, as philosophers like Aristotle have long acknowledged. Do we not therefore have a responsibility to pursue it? We are the beings that not only can change, but are aware that we can change – both internally and externally – and consequently, we have a responsibility to initiate change for the better. This is the obligation of being human. Nevertheless, as suggested, we find much resistance to change. This is because there are many pressures generating momentum for the status quo; many factors and people adding their weight to the gravity of convention. Reasoned argument is not always successful in persuading individuals to change, or even to live up to their responsibilities and obligations. And without widespread societal change, progress remains trapped. We have but pockets of change, rather than progress for all. In our interconnected world today, we hear about global threats, or social calamities that will impact all of us. Contrarily, many theorize that there will be a "tipping point" or a point of "singularity," whereby humanity will undergo universal change. But thus far, the present moment outweighs future considerations, and these theories are imaginative longings.

So, what is to be done? How can one initiate change and convince others of the new direction? A tool is required. But not just any ordinary tool, rather a tool wrapped in an idea: an ideational tool.

Therefore, we need to tool up our ideas. By designing an idea so powerful that it will capture the imagination of individuals and the collective, we are granted access to the openness of change: reaching the imagination is key to unlocking the problem of resistance to change. Through inspiring and empowering the imagination we can answer the question, "why change?"

Getting out of the progress trap requires that one not act alone, but that we work together. Overcoming the resistance to change hinges on the design of a tool that embodies an idea and inspires the imagination. With a tool that incorporates both – one sufficiently basic so that all can access and use it, but one sufficiently powerful so as to effectuate the change desired – the roadblock of convention may be dismantled. Through the combination of simplicity and ubiquity, mixed with the prospect of genuine results, the perceived difficulty of great change will fade away.

In sum, the change we are talking about is nothing short of a cultural shift and an overhaul of the current democratic structure. And just such a tool is now being created.

The developers of TAGDit offer hope for updating democracy and minimizing the perverse effects of democratic practices today. TAGDit is software being tested in a closed-system trial in which influential variables can be controlled. Using business settings for this test, as planned, will provide tremendous value to organizations by facilitating the creation of a shared knowledge pool, alongside an algorithmic amalgamation and ranking process to produce desired outputs.

The TAGDit software, furthermore, offers a platform for individuals to both build and measure trust, or social capital, through a simple method where individuals can vote for anybody, not just predetermined representatives. In this regard, it retains the sense of a democratic process by the people insofar as all participate, but it especially works for the people by elevating the information, outputs, and skills of the people – to the benefit of all.

Enhanced trust within a group has been linked to greater economic performance by that group; in layman's terms, more trust means more jobs.

The trial process in a corporate setting will demonstrate the great potential of the Democratic Quality Vector, and it won't be long before its broader application in society will become apparent.

Groups thrive when members possess and contribute complementary skillsets. But groups flourish when they draw on the multiple skillsets of individual members to become flexible and strong. The transferable vote will cause your institution to flourish and, in the process, become much stronger.

Corporations, for example, are typically hierarchies made up of teams which perform specialized, but often separated, tasks. Individuals within effective teams often contribute value based on their critical expertise in a field. This expertise is essential to the team as a whole. At the same time, successful teams must be flexible in responding and adapting to shifting circumstances. These needs are often not complementary, and frequently pull against one another.

Amid this constant shifting, managers may come to recognize the value of a naturally attentive employee, one who senses opportunity and challenge before their peers do. Along the same lines, the information produced through delegative democracy can identify employee potential, allowing management to invest its time in capitalizing on that potential.

Adopting democratic structures will also give a much-needed boost to attitudes and practices while improving private industry. Entrusting one's employees with greater roles in decision making can enhance the decision-making process, boost workplace morale, and ultimately increase profits for corporations.

Through sharing valued information between staff members, institutional flexibility and therefore strength can be increased.

To solve many of these problems, liquid democracy offers hope that networking technology can facilitate a form of direct democracy in the midst of global political chaos. Compelling arguments in modern leadership studies claim that democratic systems are the best governance systems for large, modern societies (Slater & Bennis, 1990). The seminal work of Warren Bennis, founder of the field of leadership studies, argues that traditional, autocratic management structures are ill equipped to manage the rapid rate of change associated with the modern era. This gives more weight to the DQV, a flexible system that gives power to those who face problems daily so they can help solve those problems rapidly.

The best groups – no matter the situation – are those that communicate well (giving a voice to each member), that work together effectively (realizing and developing individual talents), and that harvest value from the competencies of all team members. A democratic system encourages a diverse group to learn how to accomplish many tasks, leading to a thriving system.

Find out who the champions of your organization are by using the transferable vote algorithm, operating seamlessly in the background of your cloud social media network, to create a reputation currency heat map of your organization.

Once these numbers are built, the transferable vote system scores the performance of organization members by applying transferable vote algorithms to semantic and sentiment analysis of social media conversation threads, task assignments, decisions and their results, and referrals. The most able individuals can thereby increase the weight of their vote for certain types of information. Spikes in information are then averaged and smoothed to continually enhance the quality of knowledge.
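As a rough illustration of the idea (not the TAGDit implementation itself, and with all names invented for the example), the following Python sketch accumulates transferable-vote weight per member and subject area; the resulting weights are the cells of a reputation "heat map" like the one described here.

```python
from collections import defaultdict

def reputation_map(members, transfers):
    """Accumulate transferable-vote weight per (member, subject).

    transfers maps (voter, subject) -> delegate; a missing entry
    means the voter keeps their own vote for that subject.
    """
    weights = defaultdict(float)
    subjects = {s for (_, s) in transfers}
    for subject in subjects:
        for voter in members:
            holder, seen = voter, {voter}
            # Follow the delegation chain until it terminates or cycles.
            while (holder, subject) in transfers:
                nxt = transfers[(holder, subject)]
                if nxt in seen:        # cycle: weight stays with the voter
                    holder = voter
                    break
                seen.add(nxt)
                holder = nxt
            weights[(holder, subject)] += 1.0
    return weights

# Hypothetical example: Ana is trusted on budgeting, Raj on design.
members = ["ana", "raj", "li"]
transfers = {("li", "budget"): "ana", ("raj", "budget"): "ana",
             ("ana", "design"): "raj", ("li", "design"): "raj"}
heat = reputation_map(members, transfers)
```

Each (member, subject) weight can then be plotted as one cell of the heat map; high-weight cells identify the organization's champions in that subject area.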

Hence, Slater and Bennis (1990) argue that democracy – which is inherently egalitarian, pluralistic, and liberal, at least in theory – is the only system of human organization capable of effectively governing a modern, technological society characterized by an accelerating rate of change. They also find tremendous value in citizens and societies who can continually learn about the conditions that shape their existence and who can refine group dynamics to respond to those ever-changing conditions. The DQV and delegative democracy have been designed to adapt to exactly these modern conditions, increasing the value for the whole organization.

These principles for governing large societies can also apply to the governance of smaller groups, organizations, and companies. The importance of meaningful belonging, open communication, and equality, then, ought to inform the structural organization of businesses and the teams within them. As Slater and Bennis (1990) put forth, companies benefit from democratic governance in much the same way that nations do.

Improved communication, at a minimum, leads to significant outcomes. Imagine what this system can do for you. Corporations are not merely drivers of technological innovation and growth in isolation; they are microcosms of the whole civilization, with similar needs and potential outcomes. As such, why do we tolerate such a profound gap between corporate ethics and individual ethics in the public and private sectors?

One attempt to integrate democratic decision making systems into the workplace has been proposed by James Whitehurst, president of Red Hat (the world's leading open source software vendor). Whitehurst, in his book The Open Organization: Igniting Passion and Performance, focuses on the principle of meritocracy as a core value for democratized companies. In a meritocracy, any employee (whether a CEO or a new hire) has equal opportunity to contribute their voice. This openness, Whitehurst argues, is a benefit to the employee and to the organization as a whole.


As this is such a large potential benefit, many have argued in favor of opening up workplaces to voting and for including options for voluntary vote delegation systems. The argument relies on two axioms. Firstly, a democracy must give citizens a right to vote on decisions that will affect the collective. The second axiom is that democratic voters ought to have the right to delegate their votes to others (and to revoke said delegations) as they see fit.

Several lines of reasoning support this second point. First, it preserves the voting power of those who are otherwise unable or unwilling to vote (due to time pressures or limitations of knowledge, for example). Such individuals may know of others who possess values similar to their own and who are known to be well-informed on the relevant issues.

Transferring votes allows all citizens to have their values accurately reflected, even those who choose not to cast a ballot themselves. The second axiom thus avoids the problem that those with less time participate less effectively. Implementing a "proxy" option into the structure of democracy would strengthen the nodes of our existing community.

Second, the specific implementation of a proxy voting social network outlined in this chapter would be able to harvest value and information from a group and, in the process, quantify the qualitative social value of that information. Any rejection of the second axiom, then, would suggest that organizations ought to restrict the flow of information, to their own detriment.
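A minimal sketch may make the two axioms concrete. The class below is a hypothetical toy, not any production voting system: every voter can cast a direct ballot (the first axiom), and any voter can delegate their vote to a peer and revoke that delegation at will (the second axiom).

```python
class ProxyVote:
    """Toy model of revocable proxy voting (all names hypothetical)."""

    def __init__(self, voters):
        self.voters = set(voters)
        self.proxy = {}                    # voter -> chosen delegate

    def delegate(self, voter, to):
        self.proxy[voter] = to             # hand the vote to a trusted peer

    def revoke(self, voter):
        self.proxy.pop(voter, None)        # take the vote back at any time

    def tally(self, ballots):
        """Count votes; each voter's ballot is cast by the end of their
        proxy chain. Chains that loop fall back to the voter."""
        counts = {}
        for voter in self.voters:
            holder, seen = voter, {voter}
            while holder in self.proxy:
                nxt = self.proxy[holder]
                if nxt in seen:            # cycle guard
                    holder = voter
                    break
                seen.add(nxt)
                holder = nxt
            choice = ballots.get(holder)   # abstaining holders cast nothing
            if choice is not None:
                counts[choice] = counts.get(choice, 0) + 1
        return counts

poll = ProxyVote(["ana", "raj", "li"])
poll.delegate("li", "ana")                 # li trusts ana on this issue
first = poll.tally({"ana": "yes", "raj": "no"})
poll.revoke("li")                          # delegation is revocable at will
second = poll.tally({"ana": "yes", "raj": "no", "li": "no"})
```

In the first tally li's vote travels with ana's ("yes" wins 2-1); after revocation li votes directly and the result flips, which is exactly the preserved voting power the second axiom argues for.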

Taken together, these two axioms show that a voluntary delegation system can be more democratic than either a traditional representative system or a direct voting system. Traditional democracy, by contrast, is highly vulnerable to arbitrary, skewed, and/or unfavorable results arising from poor decisions. How might a voluntary delegation system perform under the same conditions?

Fig 22. Reputation currency map showing reputation of individuals (x-axis) in various subject areas (y-axis) as a result of vote transfer.

# CONCLUSION

The digital information age has brought many benefits, but the unintended consequences that have resulted from it threaten to bury our gains. One of those problems is that our lives are today governed by information more than ever before, and the decisions we make using that information can affect us deeply. The quality of that information is essential to making clear decisions.

One illustration of that principle: in a simpler time, the speed at which information traveled was slow, and our entire lives were matched to that speed accordingly. We would wait weeks to receive a letter from a loved one overseas, and parsing that information wasn't very challenging. We now generate more recorded data in two years than in all the rest of history – so much data that we struggle to understand what to do with it all. We really do need AI just to start to make sense of the growing heaps of information.

On top of that, digital facsimile is becoming so powerful that we can replicate any kind of information with high fidelity, whether image, text, audio, or video. This has profound implications for the veracity of information.

As social media has well demonstrated, it is becoming easier every day to manufacture false information and mislead people into very harmful behavior that could damage all of society. The digital age, with poor information quality, can amplify once manageable decision-making problems into a crisis. Consequently, the problem of bad data quality is becoming one of the major issues of our times. And if information is what we need to make a decision, and good quality information is necessary to make a good decision, then one of the unintended consequences of mass consumption of psychoactive compounds may have been a collective, decades-long societal brain fog.

Since its inception, democracy has provided challenges for philosophers, politicians, and citizens alike. Although resolving these challenges requires intensive resources and large amounts of time, the effort is well worth the struggle, because only democratic governments offer the possibility of direct citizen participation. However, the failures of modern democracies are often linked directly to flaws in that participation, such as contemporary voting systems, voter apathy, demographic challenges (distances and diversity) and now, vote tampering through information distortion. A potential solution to these difficulties is to create voting systems that promote the engagement of citizens in their own governance. Proxy voting, one such system, encourages participation, decreases voter apathy, and integrates marginalized voices into debates. When successfully implemented, liquid democratic platforms employ flexible communication systems to transmit information between representatives and their constituents.

Without the ability to adequately analyze and shape decision-making processes, we will fall prey to dominant and untrustworthy narratives. Some organizations have systems to prevent this: autonomous organizations (businesses and governments alike) control not only their decisions but their process for making decisions as well. However, while decisions that address quantifiable data are repeatable, decisions that rely on qualitative or intuitive data are, in a sense, less decidable and require more nuance, and therefore another system of control.

Research into intuitive knowledge mechanisms such as interoception is one example of how we are slowly lifting the veil that has obscured intuition's mechanics, showing that our ability to sense the signals inside our bodies can scientifically explain how we arrive at correct intuitive decisions. "Factual information" might not be as trustworthy as we think, because science is rendering information obsolete at an ever-increasing rate. It is also because our drug culture may have distorted our decision-making processes, producing knowledge that is itself suspect, such as normative ethical knowledge that may have seemed acceptable in a drug-induced state. Perhaps, then, a stronger reliance on a mixture of intuitive (System 1) and analytic (System 2) decision-making may improve the quality of our decisions.

So, one way to increase system resiliency is to make room for fast, intuitive System 1 decision making. Although analytics and metrics provide hard data, they may perform better when balanced with human intuition and judgment.

One method of ensuring a flexible group structure that can navigate such changes is to install heterarchical relations between group members – a flat structure instead of a hierarchical one. An enhanced transferable voting structure lets us integrate strong leadership into our governance while maintaining individual political freedom; we can thereby balance some of the best parts of a dictatorship with the best parts of democracy. Doing so requires a shift in management approaches that democratically empowers everyone in the organization. Transferable voting that incorporates a democratizing framework and balances qualitative and quantitative decisions can improve the decisions any social network makes, making that shift easier. In government, it can form the basis of a truly delegative democracy. In both business and government, transferable voting can help participants make better choices, and it offers a promising means of curing the decision quagmire that ails our democracy. The 6D mathematics, based on symmetry in nature, promises to quantify intuition and bring it into the realm of analysis.

6D mathematics promises to turn intangibles into tangibles; it promises a way of turning System 1 intuitive knowledge into System 2 analytic knowledge. The 6D-based Democratic Quality Vector will then allow the user to select an appropriate mixture of System 1 and System 2 data for decision-making.
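The 6D mathematics itself is beyond the scope of a short example, but the final mixing step can be illustrated. The sketch below is a simplified, hypothetical stand-in for the DQV's mixture selection (the function name and the weighting scheme are invented for illustration): once intuitive knowledge has been quantified, it is blended with an analytic score using a user-chosen weight.

```python
def dqv_blend(intuitive, analytic, alpha):
    """Blend a quantified System 1 (intuitive) score with a System 2
    (analytic) score. alpha in [0, 1] is the user-selected weight on
    intuition -- a hypothetical stand-in for the DQV's mixture step.
    Both scores are assumed pre-normalized to the same 0-1 scale."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * intuitive + (1.0 - alpha) * analytic

# A mostly analytic decision that still admits some intuition:
score = dqv_blend(intuitive=0.8, analytic=0.5, alpha=0.25)  # ~0.575
```

Setting alpha to 0 recovers a purely analytic decision and 1 a purely intuitive one; the point of the DQV is that the user can choose any mixture in between.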

One problem of implementation is that delegative democracy is very different from the democracy we are familiar with, and attempting to replace the current system wholesale could be very disruptive. However, by gradually introducing 6D delegative structures into the current system in limited and well-defined circumstances, we can begin to experience the benefits of a new voting system. One such opening is appearing now: as users rebel against the centralized data ownership models of platform providers such as Facebook, the cries for individual data ownership increase. Delegative democracy will benefit from the 6D-based algorithms, which hold the promise of converting intangibles into tangibles and harvesting the power of intuitive decision-making across many sectors. This system will make once invisible intuitions visible so that solid decision-making can take place.

Data integrity will always be an issue and will only become a larger problem as we head into the future. Fake news and growing volumes of data require new tools that can recognize them and warn society. At the same time, we are beginning to recognize the important value of intuition in decision-making, and the value of intangible knowledge to any organization. With the new symmetry-based 6D mathematics, we have created tools such as the Democratic Quality Vector (DQV) that can measure and quantify intangibles, capturing what was once regarded as vague, intuitive knowledge. With this, we can provide decision-makers with a metric of intuitive knowledge that can be combined with already quantified information to produce better decisions. Voting, especially, is an intrinsic part of these human networks, and the DQV can significantly enhance voting efficiency and decision-making in politics and in business.

# REFERENCES

Aitken, Hugh G. J. "Defensive Expansionism: The State and Economic Growth in Canada." The State and Economic Growth. Ed. H. G. J. Aitken. New York: Social Science Research Council, 1959.

Ambady, Nalini. "The Perils of Pondering: Intuition and Thin Slice Judgments." Psychological Inquiry 21.4 (2010): 271-78. Web.

Anonymous. "Employment Equity in the Federal Sector: A Progress Report." The Worklife Report 9, 3 (1994): 1-3.

Ashley, C.A., and Smails, Reginald George Hampden. Canadian Crown Corporations: Some Aspects of Their Administration and Control. Toronto: Macmillan and Company, 1965.

Atkinson, Q.D. (2011) Phonemic Diversity Supports a Serial Founder Effect Model of Language Expansion from Africa, Science 15 Apr 2011: Vol. 332, Issue 6027, pp. 346-349

Bača, Miroslav, Markus Schatten, and Dinko Deranja. Autopoietic Information Systems in Modern Organizations. Organizacija 40, 2 (2007): 157-165.

Belanger, F., & Carter, L. (2010). The Impacts of the Digital Divide on Citizens' Intentions to Use Internet Voting. International Journal on Advances in Internet Technology, 203.

Bellamy, Matthew J. Profiting the Crown: Canada's Polymer Corporation, 1942-1990. Montreal, Quebec & Kingston, Ontario: McGill-Queen's University Press, 2005.

Bird, Malcolm J. "The Embedded Crowns: The Evolution of Three Provincial State-Owned Enterprises." Canadian Political Science Review 9, 2 (2015): 1-20.

Boardman, Anthony E., and Vining, Aidan R. "Public Service Broadcasting in Canada." The Journal of Media Economics 9, 1 (1996): 47-61.

Borins, Sandford F. "World War II Crown Corporations: Their Functions and Their Fate." Crown Corporations in Canada: The Calculus of Instrumental Choice. Ed. J.R.S. Prichard. Toronto: Butterworths, 1983. 447-475.

Branson, R. (n.d.). The Biography Channel. Retrieved from The Biography Channel Web Site: http://www.thebiographychannel.co.uk/biographies/richard-branson/quotes.html;jsessionid=566B4B36A0FF53F22A400F83B5F3311E

Brin, Sergey, and Lawrence Page. "The Anatomy of a Large-Scale Hypertextual Web Search Engine." World-Wide Web Conference. Brisbane, Australia. 14 Apr. 1998.

Buffett, W. (1983, March 14). Chairman's Letter 1983.

Canada. House of Commons. Debates. May 31, 1928.

Canada. The Federal Public Service. Blueprint 2020: Getting Started – Getting Your Views, Building Tomorrow's Public Service Together. N.p.: n.p., n.d. Print.

Caron, B. (2012, April 23). Cyber Social Structure. Retrieved from A Cyber Social Structure Web  Site: http://cybersocialstructure.org/2012/04/23/building-a-double-loop-for-liquid-innovation/

Cattell, J. (n.d.). Classics in the History of Psychology. Retrieved from A Classics in the History of Psychology Web site: http://psychclassics.yorku.ca/Cattell/mental.htm

Churchill, Winston. U.K. House of Commons. November 11, 1947.

Churchill, W. (2003, May 9). Democracy: Democracy and Churchill. (R. Hilton, Editor) Retrieved March 06, 2015 from A Stanford Web site: http://wais.stanford.edu/Democracy/democracy_DemocracyAndChurchill(090503).html

Craig, A.W. "Business, Globalization, and the Logic and Ethics of Corruption." Ethics and Capitalism. Ed. J.D. Bishop. Toronto: University of Toronto Press, 2000.

Crisan, Daria, and McKenzie, Kenneth J. "Government Owned Enterprises in Canada." The School of Public Policy Research Papers 6, 8 (2013): 1-30.

Dempsey, Allison, and Levesque, Jacques. "Governance of Crown Agencies." Proceedings of the March 2005 Conference, March 10-11, 2005, Vancouver, B.C.

Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. "Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err." Journal of Experimental Psychology: General 144.1 (2015): 114-26. Web.

Dunbar, Robin. "Coevolution of neocortical size, group size and language in humans." Behavioral and Brain Sciences 16.4 (1993): 681–735

Emery, J.C. Herbert, and McKenzie, Kenneth J. "Damned if you do, Damned if you don't: an Option Value Approach to Evaluating the Subsidy of the CPR Mainline." The Canadian Journal of Economics 24, 2 (1996): 255-270.

Ford, Bryan. Delegative Democracy. 2002. Viewed at: http://www.brynosaurus.com/log/2002/0515-DelegativeDemocracy.pdf

Ford, B. (2014, November 16). Delegative Democracy Revisited. Retrieved from A BfordGithub Web site: http://bford.github.io/2014/11/16/deleg.html

Friesen, Milton. "Is Social Capital Measurement Still Relevant? A Roundtable Policy and Social Impact Assessment White Paper." Cardus. 2014. 8 Jul. 2015. https://www.cardus.ca/store/4191/

"Flat, Flexible, and Forward-Thinking: Public Service Next." Canada's Public Policy Forum. 2014. 8 Jul.  2015. <http://ppforum.ca/sites/default/files/Flat%20Forward%20Flexible%20Final%20Report_0.pdf>

Ford, Bryan. "Working with Delegative Democracy." www.brynosaurus.com/Deleg/Deleg.pdf

Galton, F. (n.d.). Life of Francis Galton by Karl Pearson. Retrieved from A Life of Francis Galton by Karl Pearson Web site:  http://galton.org/cgi-bin/searchImages/search/pearson/vol2/pages/vol2_0393.htm

Gray, Tara. Crown Corporations and Governance and Accountability Framework: A Review of Recent- ly Proposed Reforms. Ottawa, Ontario: Library of Parliament, Parliamentary Research Service, 2006.

Green-Armytage, James. "Direct Voting and Proxy Voting." Constitutional Political Economy Jun. 2015: 190-220. Print.

Gregorius, J. (2013, December 03). Social 3.0 Mastering the Global Transition on Our Way to Society 3.0. Retrieved from Social 3.0 Web site: http://www.society30.com/liquid-democracy-future-democracy-digital-age-2/

Gordon, H.S. "The Bank of Canada in a System of Responsible Government." The Canadian Journal of Economics and Political Science. 27, 1 (1961): 1-22.

Gunzelmann, Glenn. "Unified Theories of Cognition: Newell's Vision After 25 Years." International Conference on Cognitive Modelling. Groningen, The Netherlands. 11 Apr. 2015. Address.

Habib, Adam, and Schultz-Herzenberg, Collette. "Accountability and Democracy: Is the Ruling Elite Responsible to the Citizenry." Democracy in the Time of Mbeki. Eds. R. Calland and P. Graham. Cape Town, South Africa: Institute for Democracy in South Africa, 2005.

Hammond, J. Daniel. "Paul Samuelson on Public Goods: The Road to Nihilism." History of Political Economy 47, 1 (2015): 174-198.

Hirst, Paul. "Representative Democracy and Its Limits." The Political Quarterly. 59, 2 (1988): 190-205.

Hood, Christopher. "The 'New Public Management' in the 1980s: Variations on a Theme." Accounting, Organization and Society 20, 2/3 (1995): 93-109.


Iacobucci, Edward M., and Trebilcock, Michael J. The Role of Crown Corporations in the Canadian Economy. Calgary, Alberta: School of Public Policy, University of Calgary, 2012.


Jones, B. (2007). Manuscripts, Books, and Maps: The Printing Press and a Changing World. Retrieved from http://communication.ucsd.edu/bjones/Books/printech.html

Kahneman, Daniel. Thinking, Fast and Slow. Anchor Canada. Print.

Kyriazis, N.; Emmanouil, M.L.E. (Jr); Loukas, Z. (2012) Direct Democracy and Social Contract in Ancient Athens, World Academy of Science, Engineering and Technology, Int'l Journal of Soc., Beh., Ed., Ec., Bus. and Eng. Vol: 6-11; 3086-3091.

Mahoney, James, and Thelen, Kathleen. Explaining Institutional Change: Ambiguity, Agency, and Power. Cambridge, England: Cambridge University Press, 2009.

Meyer, D. (2012, May 7). Techpresident. Retrieved from A Techpresident Website: http://techpresident.com/news/wegov/22154/how-german-pirate-partys-liquid-democracy-works

Michelman, Hans J., and Steeves, Jeffrey S. "The 1982 Transition to Power in Saskatchewan: The Progressive Conservatives and the Public Service." Canadian Public Administration 28, 1 (1985): 3-28.

Michell, J. (2009). Methodological Thinking in Psychology: 60 Years Gone Astray? United States of America: Information Age Publishing.

Michell, Joel. "History and Philosophy of Measurement: A Realist View." IMEKO TC7 International Symposium. Saint-Petersburg, Russia. 2 Jul. 2004. Address.

Michell, Joel. "The Quantity/Quality Interchange: A Blindspot on the Highway of Science." Methodological Thinking in Psychology: 60 Years Gone Astray? Ed. Aaro Toomela and Jaan Valsiner. Charlotte: Information Age Publishing, 2010. 45-68. Print.

Milner, Henry. Civic Literacy: How Informed Citizens Make Democracy Work. Hanover, New Hampshire: University Press of New England, 2002.

Mosaic White Paper: Communication Theory. https://mosaicprojects.com.au/WhitePapers/WP1066_Communcation_Theory.pdf

Mossberger, K., Tolbert, C., & Stansbury, M. (2003). Virtual Inequality: Beyond the Digital Divide. Washington D.C.: George Washington University Press.

Noever, R., J. Cronise, and R. A. Relwani. 1995. Using spider-web patterns to determine toxicity. NASA Tech Briefs 19(4):82. Published in New Scientist magazine, 29 April 1995.

Papadopoulos, Yannis. "Accountability and Multi-Level Governance: More Accountability, Less Democracy?" Paper presented at the "Connex" workshop on Accountability, European University Institute, Florence, Italy, April 21, 2008.

Pawluk, Joanne. An Introduction to Alberta's Crown Corporations. Legislative Internship Paper, June 1984.

Perl, Anthony. "Public Enterprise as an Expression of Sovereignty: Reconsidering the Origin of Canadian National Railways." Canadian Journal of Political Science 27, 1 (1994): 23-52.

Perron, Denis. "Roundtable: Patronage and the Scrutiny of Appointments." Canadian Parliamentary Review. 1986. Viewed at: http://www.revparl.ca/english/issue.asp?art=711&param=120

Plaunt Papers, E.H. Blake to Spry, March 13, 1931.

Prang, Margaret. "The Origins of Public Broadcasting in Canada." Canadian Historical Review 46, 1 (1965): 1-31.

Prichard, J. Robert S. Crown Corporations in Canada: The Calculus of Instrument Choice. Toronto: Butterworths, 1983.

Proc. Nat. Acad. Sciences. "How Social Influence Can Undermine the Wisdom of Crowd Effect." 2011.

Purves, D., Cabeza, R., Huettel, S. A., LaBar, K. S., Platt, M. L., & Woldorff, M. G. (2013). Principles of Cognitive Neuroscience. Sinauer Associates Inc.

Raboy, Marc. "The Role of Public Consultation in Shaping the Canadian Broadcasting System." Canadian Journal of Political Science 28, 3 (1995): 455-477.

Rosenberg, A. (2005) Thomas Hobbes: An English Philosopher in the Age of Reason. Rosen Publishing Group.

Rosenthal, Alan. The Decline of Representative Democracy: Process, Participation, and Power in State Legislatures. Washington, D.C.: CQ Press, 1998.

Saskatchewan. Debates and Proceedings. Fourth Session, Nineteenth Legislature, December 1st, 1981.

Siegler, Veronique. Measuring Social Capital. Measuring National Well-Being Programme, 18 July 2014. Web. 8 July 2015.

Simpson, Jeffrey. "The Two Trudeaus: Federal Patronage in Quebec, 1968-84." Journal of Canadian Studies 22, 2 (1987): 96-110.

Smith, Garry J., and Campbell, Colin S. "Tensions and Contentions: An Examination of Electronic Gaming Issues in Canada." American Behavioral Scientist 51, 1 (2007): 86-101.

Stevens, Douglas F. Corporate Autonomy and Institutional Control: The Crown Corporation as a Problem in Organizational Design. Montreal, Quebec and Kingston, Ontario: McGill-Queen's University Press, 1993.

Todorov, A. "Inferences of Competence from Faces Predict Election Outcomes." Science 308.5728 (2005): 1623-626. Web.

Thompson, Elizabeth. "Doomed Harper Government Made 49 'Future' Patronage Appointments." iPolitics November 23, 2015. Viewed at: http://ipolitics.ca/2015/11/23/doomed-harper-government-made-49-future-patronage-appointments/

Treasury Board of Canada Secretariat. Meeting the Expectations of Canadians: Review of the Governance Framework. Report to Parliament. Ottawa, Ontario: Treasury Board of Canada, 2005.

van der Gaag, Martin. Measurement of Individual Social Capital. Amsterdam: F&N, Boekservices, 2005. Print.

Van Zyl, Casper, and Taylor, Nicola. "Work Personality Index and Emotional Quotient Inventory." Psychometrics. 2010. 8 Jul. 2015. www.psychometrics.com/docs/wpi-eqi.pdf

Vining, Aidan R., and Botterell, Robert. "An Overview of the Origins, Growth, Size and Functions of Provincial Crown Corporations." Crown Corporations in Canada: The Calculus of Instrument Choice. Ed. J.R.S. Prichard. Toronto: Butterworths, 1983. 303-368.

Williams, G.L. (1950) Classics of International Law, Oxford: Oxford University Press. (De iure praedae commentarius, edited by G. Hamaker, The Hague: Nijhoff, 1868)

Whitaker, Reg. "Between Patronage and Bureaucracy: Democratic Politics in Transition." Journal of Canadian Studies 22, 2 (1987): 57-61.

Yi, S. K. M., Steyvers, M., Lee, M. D. and Dry, M. J. (April 2012). "The Wisdom of the Crowd in Combinatorial Problems". Cognitive Science 36 (3). doi:10.1111/j.1551-6709.2011.01223.x


link.springer.com/article/10.1007/s40435-016-0271-9
www.sciencedirect.com/science/article/pii/S0022103116300300
philosophymagazine.com/PM_determinismvsfreewill.html
eprints.lse.ac.uk/46931/1/lse.ac.uk_storage_LIBRARY_Secondary_libfile_shared_repository_Content_List,%20C_Free%20will_List_Free%20will_2015.pdf
www.thegreatdebate.org.uk/determinismandfreewill.html
philarchive.org/archive/COVEPAv1
onlinelibrary.wiley.com/doi/full/10.1111/j.1467-6494.2012.00799.x
psycnet.apa.org/record/2013-05714-001
books.google.ca/books?hl=en&lr=&id=_t4k_r7-2jgC&oi=fnd&pg=PR7&dq=related:nODNHvfKpvcBwM:scholar.google.com/&ots=I67BwBItcf&sig=CrzDCRXrUdTZdq7lSJO9CYviu3E#v=onepage&q&f=false
books.google.ca/books?hl=en&lr=&id=-SOSBgAAQBAJ&oi=fnd&pg=PT12&dq=related:nODNHvfKpvcBwM:scholar.google.com/&ots=y2VOPB04yX&sig=PWPENbkQbbvQZ3CJUA_u1OZNfFE#v=onepage&q&f=false
en.wikipedia.org/wiki/Social_determinism
www.psychologytoday.com/ca/blog/ulterior-motives/201106/stereotypes-and-social-determinism
www.simplypsychology.org/freewill-determinism.html
www.psychologytoday.com/ca/blog/the-consciousness-question/201402/destiny-determinism-versus-free-will
humancond.org/analysis/philosophy/determinism_vs_free_will
www-f1.ijs.si/~rudi/sola/prizmic.pdf

arxiv.org/pdf/cond-mat/0001118.pdf
www.pnas.org/content/pnas/97/21/11149.full.pdf

Wang et al. 2015.
onlinelibrary.wiley.com/doi/full/10.1002/bdm.1903
journals.sagepub.com/doi/abs/10.1177/0146167293193010

Figures & Tables

Fig 1. Andrae, Anders (2017) Total Consumer Power Consumption Forecast. Retrieved Feb 6, 2019 from https://www.researchgate.net/publication/320225452_Total_Consumer_Power_Consumption_Forecast

Fig 2. Baker, M. (2016) 1,500 scientists lift the lid on reproducibility. Retrieved on Dec 3, 2018 from: https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970

Fig 3. Science Museum London Empty bottle for opium tincture, London, England, 1880- 1940. Retrieved on Feb 10, 2019 from: https://wellcomecollection.org/works/pfzy2a4v

Fig 4. Ohler, N. (2015) Blitzed. Drugs in the Third Reich. London: Penguin Books.

Fig 5. Bateman, O. (2017) The Nazis were on meth, but that's not the whole story. Retrieved on Nov 3, 2017 from: https://theoutline.com/post/1103/the-nazis-were-on-meth-but-that-s-not-the-whole-story?zd=1&zi=k4o536fj

Fig 6. Noever, R. et al. (1995) Using spider-web patterns to determine toxicity. NASA Tech Briefs 19(4):82.

Fig 7. Mahler, K. (2016) Interoception. The Eighth Sensory System. Shawnee, Kansas: AAPC Publishing.

Fig 8. Price-James, S. (2019) The Original Homunculus Company. Retrieved on Dec 7, 2019 from: https://www.sharonpricejames.com/the-original-homunculus-company.html

Fig 9. Hobbes, T. (1651) Leviathan or the Matter, Forme and Power of a Common-Wealth Ecclesiasticall and Civil. London, Menston, Scholar P.

Fig 10a. Faggella, D. (2019) Exploration Post-Human Consciousness States - The Value of Psychedelics. Retrieved on Dec 12, 2018 from: https://danfaggella.com/post-human-consciousness-value-of-psychedelics/

Fig 10b. Boyack, K.W. and Klavans, R. (2013) Dynamic, Global Models and Maps of Science. ISSI 2013, Volume 1, pp. 361-376.

Fig 15. Our World in Data (2014) Trust vs GDP per capita, 2014 (or latest available data). Retrieved on Nov 15, 2018 from: https://ourworldindata.org/trust

Fig 16. Morin, G. (2017) Why Didn't I Know Anything About Antifa Before August 2017? (Charlottesville VA Events). Retrieved on Nov 18, 2018 from: https://grondamorin.com/2017/08/29/why-didnt-i-know-anything-about-antifa-before-august-2017-charlottesville-va-events/

Fig 16. Wikipedia (2006) High-power magnification (1000 X) of a Wright's stained peripheral blood smear showing chronic lymphocytic leukemia (CLL). The lymphocytes with the darkly staining nuclei and scant cytoplasm are the CLL cells. Retrieved on Nov 18, 2018 from: https://en.wikipedia.org/wiki/Chronic_lymphocytic_leukemia#/media/File:Chronic_lymphocytic_leukemia.jpg

Fig 17. McKerchar, T.L. et al. (2009) A Comparison of Four Models of Delay Discounting in Humans. Behav Processes. 2009 Jun; 81(2): 256-259.

Fig 18. Cowen, A.S.; Keltner, D. (2017) Self-report captures 27 distinct categories of emotion bridged by continuous gradients. PNAS 114(38).

Fig 19. Carvalho, N. et al. (2018) Feasibility of a Health Utility Approach to Quantifying Non-economic Losses from Personal Injury. Journal of Legal Studies, Volume 15, 2, pp. 278-319.

Fig 20. UN Department of Economic and Social Affairs, Population Division (2015) World Fertility Patterns 2015 - Data Booklet (ST/ESA/SER.A/370).

Fig 21. UN Dept of Economic and Social Affairs, Population Division (2015) World Population Prospects: The 2015 Revision. United Nations, New York.

Table 1. Wikipedia "List of cognitive biases" Retrieved Jan 3, 2019 from: https://en.wikipedia.org/wiki/List_of_cognitive_biases

Table 2. Wikipedia "List of memory biases" Retrieved Jan 3, 2019 from: https://en.wikipedia.org/wiki/List_of_memory_biases

Table 3. Sylvian & Reis, 2009

Table 4. Aoki, R.; Vaithianathan, R. (2009) Is Demeny Voting the Answer to Low Fertility in Japan? Center for Intergenerational Studies, Institute of Economic Research, Hitotsubashi University, No. 435 PIE/CIS Discussion Paper.

