I was absent-mindedly scanning the public transport vehicles parading in front of me when I noticed a new bus model. Not because it was dramatically different, but because it was red and it stopped in a place usually visited by blue buses. That got my attention! While my unconscious was busy observing colours, my conscious self stopped to examine an unusual video camera. How cute! Protruding from the exterior top edge of the bus, just above the mid-doors, the device was almost unnoticeable. Just a little rounded soft bump, the camera had its own electronic eye looking down towards the door and the back of the bus. It reminded me of a chameleon.
At that moment the whole picture – the bus, people moving slowly through the mid-section to take their seats, the cameras, the monitor hanging from the ceiling with its images silently switching at regular intervals from one view to another – struck me as a measure of how much our urban environment has changed, almost by stealth. The bus – that symbol of slow pace and simplicity – has become a sophisticated electronic beast. Video sensors are all over it, communicating continuously with a central base. The entire vehicle has become a data collector, not just a record keeper to be consulted only in case of public disorder. Think about how quickly the quality of the cameras has improved in the past decade. The video frames used to be a joke: who could identify anyone who committed a crime from a trail of rough pixels smudged over a screen with dubious colours? We now have HD. Faces are easily identifiable; everyone’s face.
Notice how this is a double-edged sword: it is easier to catch a few troublemakers, but it is also easier to monitor everyone else. The same system can track ordinary citizens who are blissfully unaware that someone is watching them. We now have the technology to raise the surveillance bar to dangerous levels, and no one knows for sure how this is going to end up. Face recognition technology means that someone with access to public data could trace your movements during the day by simply running a process that identifies your face and reconstructs your day. A commercial firm in the UK used facial images of people who ‘like’ a retailer’s Facebook page to identify, via video cameras installed on the premises, those who visit the shop in person. This works great for targeted advertising.
So what? This is better service, isn’t it? True – or at least not dangerous. But what if someone, or something, wants to gain by influencing the public to agree with an idea that is not necessarily good for the people, but certainly benefits a few? They could target all the individuals who share similar patterns and inoculate them with a message of deep emotional impact, in a way that involves no obvious advertising, only personal and ‘natural’ communication. This entity could be anything – anything able to conceive a goal that can be described in broad terms: patterns, data, large-scale behaviours. Such a goal leaves no room for the personal considerations of the many of us.
The bus I chose to pick on is chicken feed. You want something scarier? Meet ARGUS: the world’s highest resolution video surveillance platform.
With its 1.8-gigapixel video camera, ARGUS can reconstruct the movements, in context, of individuals, vehicles, or anything sizeable that moves, for the duration of any day, across the area it is ordered to scan.
I am not a fan of conspiracy theories at all. I am even sceptical that any individual or group of individuals would be able to exploit this vast network of sensors and data in which we are enmeshed, destined to become smaller (even though smart!) nodes stripped of any major significance as individuals. But I wonder if the sheer size of this nervous system isn’t bound to create a consciousness that may set goals beyond our grasp. The internet of things will definitely make this more interesting over the next decade.
I recall the day when I bought a Gateway PC (about 17 years ago, I think) that came with a Kodak camera (DC25) equipped with a 256-kilopixel sensor. A few months ago I got a 36-megapixel camera. That is an increase of roughly 140 times in pixel count. Imagine what kind of world we will live in twenty years from now, with software able to sift through mountains of data and find you wherever you are, no matter what you do. What is the meaning of privacy, security and individual freedom in a world like that?
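The growth rate implied by those two cameras can be checked with a quick back-of-the-envelope calculation. A minimal sketch, assuming the round figures above and that the growth continues at the same pace:

```python
import math

# Pixel counts of the two cameras mentioned above (rounded figures)
old_px = 256_000      # Kodak DC25, ~0.25 megapixels
new_px = 36_000_000   # a modern 36-megapixel sensor
years = 17            # rough interval between the two purchases

ratio = new_px / old_px                 # ~141x more pixels
doubling = years / math.log2(ratio)     # ~2.4 years per doubling

# If that pace held, twenty more years would multiply pixel
# counts by another 2 ** (20 / doubling), roughly 340x.
print(f"{ratio:.0f}x growth, doubling every {doubling:.1f} years")
```

The doubling period of about 2.4 years is close to the classic Moore’s-law cadence, which is what makes the twenty-year extrapolation plausible.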
While trying to explore the principles of the creation of knowledge, I wanted to run an overview of epistemology as a way of understanding knowledge from a philosophical perspective. As a result of this mini-exercise I am writing a brief overview with a few comments, followed by my brief personal critique of epistemology.
What Is Epistemology?
Philosophy as a thinking system likes to explore the universe through “rational investigation of the truths and principles of being, knowledge or conduct” (www.dictionary.com). Where does epistemology fit?
According to Peter D. Klein, “epistemology is concerned with the nature, sources and limits of knowledge”. This is where the trouble starts. What is knowledge? According to some, knowledge should include objective forms; others think that, in the context of epistemology, knowledge is only about beliefs that something is true, as opposed to knowledge of how to do things.
Epistemology ventures into areas where precise measurement is impossible. This is why there are many definitions of epistemology and heated debates have been going on for centuries.
Propositional Knowledge Epistemology
The focus of epistemology on knowledge analysed in terms of beliefs and truth takes this philosophy out of the natural philosophy branch, where I would have preferred it to be. In my view this limits the influence of the advances of science on the development of epistemology, simply because subjective knowledge is impossible to measure. The claim of traditional epistemology is that the quality of the reasons for our beliefs determines the conversion of beliefs into knowledge. This approach is called normative epistemology, and it is supported by theories of justification. Another tradition, naturalized epistemology, claims that the conditions in which beliefs are acquired determine the truthfulness of those beliefs.
The normative tradition has two views about the structure of reasons: foundationalism and coherentism. Foundationalism reminds me of Anglo-Saxon common law. Rulings can be made on the basis of precedent rulings gathered throughout the history of applying the law, over many centuries, in territories under the crown. Some rulings are unique, and they can form the basis of a new ruling for a case that may occur in the future. When such a case occurs and the reference to the precedent is accepted as similar, the court can rule without having to repeat the previous process. According to this view, then, beliefs can be based on other beliefs that have been proven true in the past; those do not have to be justified again, and together they form the basis of the aggregating belief and deem it true. In other words, if the new belief X is based on A, deemed true, then X is true. Of course, it gets complicated when deciding whether a belief is true while more than one hypothesis is available.
The basic beliefs can be of several types: empirical (Hume and Locke), rational intuition (Descartes, Leibniz and Spinoza), innate (Kant, Plato), or conversational-contextual.
Coherentism, by contrast, states that a belief is true if multiple beliefs can be inferred for its justification. But this is not very helpful either. Gettier formulated a scenario (the Gettier problem) in which the premises may be true, yet the justified belief inferred from them does not amount to knowledge. His example of Smith and Jones applying for a job – Smith deduces that “the man who has ten coins in his pocket will get the job”, a belief that turns out true only by accident, since Smith himself gets the job and happens to have ten coins – seems to me focused on semantics rather than facts. When Smith said “the man with ten coins in his pocket will get the job”, he was thinking of Jones; the actual belief was that Jones would get the job, because Smith knew Jones had ten coins in his pocket. Gettier tries to prove the point by focusing solely on the last sentence that went through Smith’s mind, not on the actual belief. The scenario makes no connection whatsoever between the ten coins and the job allocation, hence the premise is dubious anyway.
Naturalized Epistemology
Also called naturalistic epistemology, this tradition describes knowledge as produced in natural circumstances; beliefs are considered true based on conditions verified using the methods, results and theories of the empirical sciences. This type of epistemology tends to rely on cognitive psychology and its empirical methods to determine the quality of the conditions in which knowledge is acquired. Quine, a naturalistic epistemologist, considers epistemology part of psychology, while Thomas Kuhn thinks the social sciences should be applied to epistemology. This approach would address the Gettier problem by qualifying the source of the knowledge as not entirely reliable. Mind you, it is not bulletproof, because the method cannot be applied to what you do not know: Smith did not know that he himself had ten coins in his pocket, so his statement sounded true to him for the wrong reasons.
The fundamental issue I have with the proposition offered by epistemology, that knowledge is about beliefs and justification as an indication of truth, is that it is entirely subjective (even the empirical methods ultimately attempt to “guess” the quality of the subjective thought) and limited to human interpretation and mental storage of knowledge.
With the development of computers and large networked systems, the idea that knowledge is confined to the human brain and defined by individual beliefs is unsatisfactory. Traditional epistemology has two major blind spots: knowledge can be stored outside the human brain, in repositories accessible on demand or through gradual discovery; and knowledge can be distributed across large numbers of people and shared as a common source.
The first issue is a bit surprising. Epistemology seems to be stuck in a debate that has changed only marginally since Plato, built around a discourse on beliefs as mysterious reflections of the external environment or as outcomes of internal mental processes. At a time when information was a nonexistent concept and everything was mechanical – far more obvious and easier to recognise than thoughts – the fascination with the mind’s perceptions and deductions was understandable. But now, in my view, this approach is outdated because it does not recognise the possibility of knowledge created by and with computer systems in vast networks.
The second objection has to do with the failure to recognise socially created knowledge: knowledge acquired by large social groups through an iterative process of sharing, collaboration and collective action. The role of social networks is ignored completely, missing the opportunity to explore the creation of knowledge at a higher order and the implications of the availability of knowledge across large populations and geographical areas, including the whole planet.
Understanding how things work is an obsession and a necessity of ours as a human race. We explain how things work by trying to link facts in a logical sequence that builds and demonstrates the understanding.
The linear logic that has prevailed for centuries as the only reliable formal tool of thought has encountered a few challenges lately. The surprising aspect is that the challenges are most substantial right now, when we have more research data than ever before – data which supposedly should help us solve many problems through the application of systematic thought. However, we are finding that the admirable logic that worked so well in the world of mechanics stutters in the more fluid world of biology and social phenomena.
Industrial thinking taught us that if we break a system down into its smaller components, we can explain how it works by figuring out the relationships between those components. This works when dealing with systems of low complexity. In that situation it is easy to confuse correlation with causality, or facts with “logical” beliefs. That problem didn’t bother us too much, because the logic was sufficient and it worked. But in complex and dynamic systems, the confusion causes big problems. Jonah Lehrer explains beautifully the modern paradox caused by the abundance of information in Trials and Errors: Why Science Is Failing Us. Oops, I used the word “explains”!
Gathering data and identifying correlations works, up to a certain extent, as a way of demonstrating causality. Until that extent is reached, one can legitimately use the word “explain” to describe a causal relationship; beyond that level, the explanation is simply an illusory way in which our brain deals with complexity – a bit like voodoo science. We are generating a huge amount of new information captured in digital format. The social synapses that connect us in so many ways accelerate the creation process, which will boost the pace of generating new information. Very soon, the digital information stored on computer systems around the world will surpass the total information stored in the brains of the entire global population. The social, economic, cultural and political consequences will be vast and impossible to predict.
When the world is too complex, linear logic ceases to operate. We only have our own interpretations as good guesses at what happens. There may be a way to identify the boundary between the two worlds (the simple and the complex), but I am not aware of any such method or theory. It is all blurry. In A Brief History of Time, Stephen Hawking concludes that entire universes with their own distinct laws may exist in the space created by the Big Bang. Laws do not have to follow the linear system with which we are so familiar, and they certainly do not have to exist from the beginning of time. There are new laws and old laws. The laws that govern social phenomena on planet Earth did not exist five billion years ago, and we do not know whether they exist as such on other planets. Laws evolve; they change, adopting new patterns to accommodate the behavioural discontinuities of the systems in which they apply.
The difference between the realms of logical and non-logical thinking is similar to the difference between the art of Raphael (Raffaello Sanzio) and that of Pablo Picasso. The former was a leading actor in the Renaissance movement, dominated by the desire to bring the classics back to life by creating a perfect, rational world. Its painters perfected the use of perspective as a way of reflecting reality. Raphael’s paintings are “perfect”. With great attention to detail, Raphael produced studies of a perfect world in which geometry gives the viewer a sense of linear depth. The School of Athens, for example, contains architectural elements based on semicircles and lines that create a 3D perspective, drawing the viewer’s focus to a point in the centre of the painting. The image is symmetrical, with people occupying spaces of equal weight and what seem to be the important characters placed in the middle. Even more mundane moments of life, with architectural elements in ruin, are painted with carefully choreographed perspective lines – The Virgin with the Veil is an example.
Enter Pablo Picasso. In his early years he was a keen student of the classics, and his studies reflected that. Very soon, though, he broke with tradition. His paintings challenge the order we have been trained to accept. Perspective is abandoned completely, and when it appears it is only to be mocked. The faces of his characters have their parts represented in a multidimensional plane, as if several views were painted simultaneously. The Portrait of Dora Maar, for instance, is conceptually a world away from The Virgin with the Veil. For the typical viewer, Picasso’s art is hard to understand and accept. I am not trying to argue for or against the style; I am only observing that the rules for “liking” his art are different. For one, the viewer is an active consumer of the artistic product. Viewing is a personal experience, and the viewer’s imagination plays a key role in shaping it. Some may see beauty in the portrait of Dora Maar and imagine a woman of passion, with beautiful eyes and an elegant figure. Through optical perception as educated by our traditional upbringing, the portrait makes no sense at all: what are those hands, and what is that double nose doing in there? What Picasso did, though, was to multiply the possible interpretations of the visual design and create a variety of worlds based on individual rules. While Raphael sought to present one view, readily accepted by all consumers of his paintings, Picasso created something that triggers different representations in each viewer’s mind. Picasso captured the expression of our differences, zigzagged and opposed, while Raphael captured the essence of our common understanding, beautiful and uncontroversial. Picasso is “illogical”, in stark contrast with the “logical” Raphael.
Let’s imagine the game of chess designed by these two maestros.
The classical chess game is a construct that lives in a perfect world of logic. Everything is known. There are a few rules and the number of combinations is discoverable, although it takes good computational power to do it. With our increasingly capable computers we should be able to calculate the perfect chess match in which the white and the black make the optimum moves based on a library of a huge number of possible scenarios. This is a game Raphael would feel comfortable with.
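The kind of exhaustive calculation the classical game invites can be sketched on a toy cousin of chess. Here is a minimal game-tree search (my illustration, not from the original text) for Nim with moves of one or two stones, where the player who takes the last stone wins:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones: int) -> bool:
    """Exhaustive game-tree search: because the rules are fixed,
    every position's value can be computed once and cached forever."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # the player to move wins if any move leaves the opponent losing
    return any(not first_player_wins(stones - take)
               for take in (1, 2) if take <= stones)

# Positions that are multiples of 3 are lost for the player to move.
print([n for n in range(1, 10) if not first_player_wins(n)])  # [3, 6, 9]
```

The cache is the whole point: with stable rules, past analysis stays valid, which is exactly what makes the perfect chess library conceivable.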
The non-logical world is one in which the game of chess changes its rules and structure in unexpected ways. Imagine a chess board whose shape changes with the temperature of the environment, and whose rules are slightly altered with each move. If you move the Queen from C4 to F7, a corner of the board droops like melted chocolate, extending the affected squares, and the Knight can only move one square in a shortened “L” because of the increased distances. This makes analysis of scenarios based on past experience very difficult; the decision-tree algorithm becomes useless. A better strategy in this type of game is to experiment, see what changes occur and, based on that observation, decide the next move. Collecting data to identify correlations between temperature, the rules and the shape of the board will give you only a limited understanding of the game. Perhaps over time, collecting large amounts of data, one could build a collection of patterns and use them as a guide – but never as a certain how-to recommendation.
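To make the contrast concrete, here is a toy sketch (hypothetical names, drastically simplified) of a Nim-like game whose legal moves drift after every turn. Any table precomputed for the starting rules goes stale immediately; the only robust player re-reads the rules before each move:

```python
import random

class DriftingNim:
    """Nim-like game whose legal move sizes mutate after every move --
    a stand-in for the chess board that melts with the temperature."""
    def __init__(self, stones=21, seed=7):
        self.stones = stones
        self.legal = {1, 2}              # the rules as they stand right now
        self.rng = random.Random(seed)

    def move(self, take):
        if take not in self.legal or take > self.stones:
            raise ValueError("illegal under the current rules")
        self.stones -= take
        # the rules drift: legal move sizes are redrawn after each turn
        self.legal = {self.rng.choice((1, 2, 3)), self.rng.choice((1, 2, 3))}

def reactive_move(game):
    """No lookup table: observe the current rules, then decide."""
    options = [t for t in sorted(game.legal) if t <= game.stones]
    return options[0] if options else None

game = DriftingNim()
while game.stones > 0:
    take = reactive_move(game)
    if take is None:
        break                            # no legal move under the drifted rules
    game.move(take)
```

The experiment-then-decide loop at the bottom is the strategy described above: probe the current state, observe, and only then commit to a move.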
The non-logical chess game suits the world of Picasso: a world in which each game is unique, never to be repeated, and where the players influence the rules. A champion is one who has a lot of practice but also the ability to pick up new skills and an open mind. In fact, this is a game where entire teams play together, collaborating on the best moves. Because of the many possible interpretations, collective thinking – a sharing of ideas – works best in understanding the evolving game. It is a continuous adjustment of strategy and interpretation that requires many brains working together to solve the puzzle created by each turn of the game.
If we think of creating software capable of playing the two kinds of chess, we soon recognise that we need two different teams of programmers. The classical chess software requires massive calculations that are quite repetitive. The challenge is one of volume: optimising the software to make rapid decisions by navigating a large library of patterns. For the Picassonian chess game, the team is very different. Their programming must be fuzzy and social, allowing for the sharing of opinions, experience and expertise. The programmers must be creative and emphasise sharing, collaboration and collective action. The outcome will need to be software that can learn and adapt through the analysis of large data sets and parameters, as the number of scenarios is practically limitless.
One of the most magnificent aspects of a person’s character is the ability to overcome difficulties in the face of adversity, uncertainty and great disadvantage. Overcoming one’s condition is one of the most remarkable achievements in life. By “overcoming condition” I do not mean great acts of heroism, although they belong to this category, but those situations where the path of life’s episodes would normally lead to an outcome predicted by the collective past in linear fashion, yet somehow the action takes a sudden upward turn. I asked myself whether this qualitative jump is something an individual can claim in the name of personal free will, or something that became reality because the trait was in his DNA.
The mind is like a Star Wars opening screen: a dark space in which thoughts slowly come forward with clarity. You see what you have now, but you don’t see what is to come. Who or what creates those thoughts? Are they the result of quasi-random sequences of DNA programming, or are they created by something we call consciousness, which somehow has this magical property of self-organisation?
It is tempting to say we are, and we do, as the result of soulless deterministic laws. It is a clean logic, difficult to argue with. At the same time, here we are, bursting with self-awareness, producing these marvellous creations that come out on that black screen of projected consciousness. Can a fragile system produce rock-solid logic about itself? Because we have not completely solved this mystery of the black screen, we can never prove with perfect clarity that free will does not exist. The doubters will always be there.
So, going back to the original question: is it possible to overcome one’s personal condition through the virtue of free will? If you take the opposite view, in which our DNA and the environment entirely determine how we think, then human aspiration is a fake and a curse: a fake because, no matter what you do, you never achieve a higher condition; a curse because your life will be a string of failures as you attempt the impossible. You are trapped. Is it better to accept personal limitations and find satisfaction in a job as it comes? It should be easy. No effort is required, because your condition will not improve anyway beyond what you get from life by default.
What we have is this dark space in which a deterministic Darth Vader is pulling the strings and the colourful, lively and capricious consciousness that we were introduced to soon after our birth and which lures us into believing our will matters.
We live like fish in a fish tank. The tank’s boundary is a fine, sensitive field separating the enclosed space of free will (or the illusion of it) from the unmovable, stern deterministic universe. As depressing as it may be, if this boundary does exist, the best option is still optimism. If the wall exists, keep your aspirations alive and kicking: even if your potential is limited, you never know where the limit lies, and you have to test it to make sure you get the best version of your destiny. If the wall does not exist, that is the best news you can get: the opportunities are limitless.
The very action of theirs, that seems to them an act of their own free will, is in a historical sense not free at all, but in bondage to the whole course of previous history…
Lev Tolstoy, War and Peace
The total information digitally stored in the world in 2010 was about 1 zettabyte. The human brain can store an estimated 2.5 petabytes. This means 400,000 people could carry in their brains the entire digital data stored in the world in 2010.
When the digital information stored reaches 7.5 billion (assuming the population reaches this level in the next few years) times 2.5 petabytes = 18,750 zettabytes (that is, 18.75 yottabytes), the total digital information will equal the total information stored in people’s brains. When will that be?
If the information stored doubles every 18 months, the world needs about 14.2 doubling periods to reach that limit. That is roughly the year 2032.
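The arithmetic behind that estimate checks out to within a year; here it is spelled out (the 18-month doubling period, the 2010 baseline and the 2.5-petabyte brain capacity are the assumptions stated above):

```python
import math

brain_capacity_pb = 2.5     # petabytes per human brain (estimate above)
population = 7.5e9          # assumed future world population
digital_2010_zb = 1.0       # zettabytes stored worldwide in 2010

# 1 zettabyte = 1e6 petabytes, so total brain storage in ZB:
total_brain_zb = population * brain_capacity_pb / 1e6   # 18,750 ZB

periods = math.log2(total_brain_zb / digital_2010_zb)   # ~14.2 doublings
year = 2010 + periods * 1.5                             # 18 months each

# lands at ~2031, i.e. the early 2030s
print(f"{total_brain_zb:,.0f} ZB, {periods:.3f} periods, ~{year:.0f}")
```

Rounding the 14.195 periods up to whole doublings is what pushes the answer from 2031 to 2032.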
The computing power packed into microprocessors has followed the same growth rate for a long time, and it is highly probable that it will continue to do so for the next couple of decades. That means not only will computers store more data, they will also become significantly more intelligent.
We haven’t yet considered the networking effect, which dramatically increases the computing power of networked devices.
The digital ecology will look very different in 2032. Attempting detailed predictions of what will happen is fraught with the danger of missing the mark by a mile. However, we can try to anticipate some general changes based on past trends.
In the year 2032, a small personal device will have the smarts of today’s supercomputers. Computers will have sufficient intelligence to display quasi-human attributes: metaphoric meaning, low-level perception, complex meaning, natural voice recognition, real-time facial recognition, and so on. The last two attributes will probably be used heavily in super-high-definition video cameras for pervasive surveillance. The computing power will also be sufficient to create realistic special effects that simulate voices and images, helping troublemakers fool the surveillance cameras.
Drones will be smaller, faster and ubiquitous. They will be deployed by the thousands to cover designated areas, identifying and destroying strategic targets.
Cars will think and drive themselves even in busy urban districts.
Will we still use petrol? Maybe, but there will be a lot more green and smart energy by then.
How will people be?
Affluent society will thrive in creative environments where imagination will transform into usable, consumable outputs almost immediately. Creativity will be powered by work in collaborative and dynamic groups. Highly creative groups will be very fluid, surfing the wave of complexity and sophistication, enjoying privileges that come with success.
Robots will replace humans in repetitive, dirty and dangerous jobs, but it is not likely that this will bring the happiness many are hoping for. People who made a living from those jobs will find they have nowhere to go. They cannot cope, they do not know what to do, and the growing gap between the social and cognitive abilities of those who can and those who can’t will slowly push the unfortunate into ever larger enclaves.
This will be the biggest challenge of the decades ahead: what to do with those who cannot adapt to a complex and dynamic society. As computing devices become smarter, the mental health of humans becomes a bigger problem. The cost of health, education and civilian protection will not go down, but up.
This is not new; it follows a trend that started thousands of years ago, when cities were invented.
This problem will be the seed out of which a danger will arise, threatening the existence of the whole civilisation, for there will be those who use the ignorant and the desperate to commit crimes – a practice that evil born in wealthy milieus has known for a long time. Anger makes a very good recruiting agent, for all the wrong reasons.
The meaning of knowing has shifted from being able to remember and repeat information to being able to find and use it.
– Herbert Simon, Nobel Laureate