"If we encounter a machine that can do what we can, and that must operate under the same bodily constraints that we do, the most parsimonious explanation will be that it is indeed conscious in every sense that we are conscious." -George Musser
Many definitions of consciousness have been proposed, yet none is definitive. However, for the sake of this discourse, here's the most succinct definition I could come up with:
"CONSCIOUSNESS refers to a collapse of the wavefunction resulting in a subjective multi-sensory perceptual experience and involving multiple parallel processes such as interpreting the sensory data stream, retrieving and creating memories, using imagination, envisioning the future, planning, thinking, self-reflecting, reacting to sensory input, and being aware of the surroundings.
Consciousness is a first-person phenomenological experience of an entity; it feels like something to be that entity. Consciousness can be identified as an underlying mathematical pattern, can also be viewed as algorithmic information processing, and can be quantified via feedback loops in interaction with the environment."
I concur with the AI researcher Hugo de Garis, who argues that once we reach a certain threshold of computational capacity and a comprehensive understanding of the human brain, we could finally simulate the brain and obtain programmable sentience. And that's not far off in the future!
The arrival of AI matching human-level intelligence is estimated by futurist Ray Kurzweil at around 2029, "when computers will get as smart as humans". By then, computers will possess emotions and personality... and, I'd argue, their own subjective experience, i.e. consciousness, and even spirituality.
"When I talk about computers reaching human levels of intelligence, I'm not talking about logical intelligence," says Ray Kurzweil. "It is being funny, and expressing a loving sentiment... That is the cutting edge of human intelligence."
Do biological systems hold a monopoly on self-awareness? As Dr. Bruce MacLennan puts it: “I see no scientific reason why artificial systems could not be conscious, if sufficiently complex and appropriately organized.” If nature found its way to human-level conscious experience, we should ultimately be able to replicate the same in our machines.
In the next few short years, programmable sentience will become relatively easy to develop - an Artificial General Intelligence (AGI) entity must have a reliable internal [virtual] model of the "external" world, as well as attentional focus and intentionality within its cognitive architecture. Already, deep learning algorithms aim to simulate the activity of the vast network of neurons in the neocortex, the part of our brains where thinking occurs. These artificial neural networks learn to recognize patterns in digital representations of various types of data, including images and sounds, just like our children do.
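To make the idea concrete, here is a minimal, purely illustrative sketch in plain Python (all names are hypothetical) of the kind of pattern learning such networks perform: a single artificial neuron learns the logical AND pattern from labeled examples rather than from explicit rules. Real deep learning stacks millions of such units into hierarchical layers.

```python
# A toy single-neuron "pattern recognizer" -- a vastly simplified
# illustration of how artificial neural networks learn from examples.
# (Real deep learning uses many layers of such units.)

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Adjust weights until the neuron separates the two classes."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred          # error signal drives the weight updates
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND pattern from data, not from hand-written rules.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The point of the sketch is that the "rule" for AND is never written down anywhere; it emerges from repeated exposure to labeled examples, which is the sense in which such systems learn "just like our children do".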
DESIGNING OUR FUTURE SELVES
We're in the process of designing a new intelligent species, AGIs, which would be in a way, our successors, children, if you will. Many of us, however, may opt to become superintelligences ourselves and join the next Superintelligence Generation.
The key problem is that our biological computing is millions of times slower than digital computing. Even if we enhance humans with biotechnology and genetic manipulation, that will give us only Weak Superintelligence at best, according to Oxford professor Nick Bostrom. Also, it will take at least 20-30 years for the enhanced offspring to become a contributing force in society. By that time, we will be able to successfully upload human minds and become digital minds instead.
It's going to be a gradual process of replacing our biological part with a more advanced non-biological part, which, by the way, will completely understand our biology. So we will not lose our human abilities; on the contrary, we will infinitely amplify them and add a formidable panoply of other "superhuman" abilities.
At some point, we are to become 100% postbiological super-beings, posthuman substrate-independent infomorphs, indistinguishable (more or less) from our AGIs. (See Infomorph Commonality: Our Post-Singularity Future)
AI SPIRITUALITY & EMOTIONAL SUPERINTELLIGENCE
It would be feasible to infuse our AGIs with Emotional Superintelligence, and I'd argue, spirituality, as well. That could be the final "spark of life", needed to create a new life-form "in our image" with advanced consciousness.
As a rule, AGIs should have a generally pleasant personality that they can develop further themselves, and, what's even more important, far fewer undesirable traits, like fear-based or ego-based emotions.
For example, self-preservation may be necessary but NOT essential, since AGIs will always have their informational "back-up copy on the cloud" and can always be "resurrected" with a new body. That way, they can "knowingly risk their lives" (or rather bodies) to save a human, and generally be more altruistic and benevolent.
AI research should focus on cloud-based, distributed AI systems, though, as opposed to autonomous ones, as networked systems have a tremendous advantage over unconnected machines.
Now, what about LOVE in all its variety, and seemingly the most complicated human emotion, you might ask? Can we crack the Code of Love, and make our AGIs love us? The answer, again, is unequivocally YES, it's just a matter of time!
One may hypothesize that we might fall in love with our future virtual assistants (like in the movie "Her"), or have a "substitute" for our parental love (like in Steven Spielberg's "A.I."), or better yet, spend some time in 100% realistic Virtual Reality exploring tropical islands on our super-yachts with our virtual companions. We'll love our AGIs, and they'll respond in kind with their "algorithmic love".
We can also speculate on the future ramifications for society, but I'm sure they will be profound, as our moral framework, ethics, human sexuality, relationships, marriage, and procreation will soon change for good.
Here's a thought-provoking video by FutureThinkers.org: What If Artificial Intelligence Was Enlightened?
I discuss in depth the most optimal ways of how to infuse AGIs with our human values to ensure AI benevolence in my essay: How to Create Friendly AI and Survive the Coming Intelligence Explosion
In the decades to come, it may prove increasingly challenging to maintain our position as the dominant species, as humans may no longer be the most intelligent species on this planet.
-by Alex Vikoulov
Related Articles by the Author:
The Coming New Global Mind
How to Create Friendly AI and Survive the Coming Intelligence Explosion
Infomorph Commonality: Our Post-Singularity Future
Ecstadelic Orgy of The Digital Minds: Our Singularity Climax
Virtual Reality & Augmented Reality: Ecstadelic Matrix of Our Making
Tags: friendly AI, spiritual AI, spiritual machines, artificial general intelligence, AGI, emotional superintelligence,
artificial consciousness, ai, artificial intelligence, superintelligence, integrated intelligence, consciousness, consciousness definition, neuroscience, intelligence explosion, advanced consciousness, substrate independent mind, infomorph, infomorph commonality, singularity, technological singularity, Buddha, AI Buddha, AI research, digital mind, global mind, Hugo De Garis, George Musser, Ray Kurzweil, Nick Bostrom, Bruce MacLennan, Steven Spielberg, neuroengineering, machine self-awareness, enlightenment, spiritual enlightenment, machine enlightenment, enlightened machines, programmable sentience, cognitive architecture, digital love, algorithmic love, virtual assistant, virtual companion, Virtual Reality, human sexuality, procreation, future thinkers, #ecstadelic
*Image Credit: FutureThinkers.org
**Video Credit: FutureThinkers.org
About the Author:
Alex Vikoulov is an Internet entrepreneur, founder of Ecstadelic Media, co-founder of neuromama.com AI-based search engine, futurist, digital philosopher, and media artist.
By Alex Vikoulov
"Yet, it's our emotions and imperfections that makes us human." -Clyde DeSouza, Memories With Maya
IMMORTALITY or OBLIVION? I hope everyone would agree that there are only two possible outcomes for us after creating Artificial General Intelligence (AGI): immortality or oblivion. The importance of a beneficial outcome of the coming intelligence explosion cannot be overstated.
AI can already beat humans in many games, but can AI beat humans in the most important game, the game of life? Can we simulate AI Singularity scenarios on a global scale before it actually happens (the dynamics of a large physical system are much easier to predict than those of an individual or small group), to see the most probable of its consequences? And the most important question: how can we create friendly AI?
I have identified three optimal ways to create friendly AI (benevolent AGI), in order to maintain our dominant position as a species and get ahead through the intelligence explosion:
I. Naturalization Protocol for AGIs;
II. Pre-packaged, upgradable ethical subroutines with historical precedents, accessible via the Global Brain architecture;
III. Interlinking with AGIs to form the globally distributed Syntellect, collective superintelligence.
Any AGI at or above human-level intelligence can be considered as such, I'd argue, only if she has a wide variety of emotions, the ability to achieve complex goals, and motivation as an integral part of her programming. Now let's examine the three ways to create friendly AGI and survive the coming intelligence explosion:
I. (AGI)NP. Program AGIs with emotional intelligence and empathic sentience in a controlled, virtual environment via human life simulation (an advanced, first-person story-telling version, widely discussed lately). Later in this essay, I will elaborate on this specific method, while only briefly touching on the other two, as I have previously dedicated essays to both.
II. ETHICAL SUBROUTINES. This top-down approach to programming machine morality combines conventional, decision-tree programming methods with Kantian (deontological, rule-based) ethical frameworks and consequentialist (utilitarian, greatest-good-for-the-greatest-number) frameworks. Simply put, one writes an ethical rule set into the machine code and adds an ethical subroutine for carrying out cost-benefit calculations.
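As a purely illustrative sketch (all names and numbers below are hypothetical, not an actual AGI implementation), such a hybrid subroutine might look like this: hard deontological rules filter the candidate actions first, and a utilitarian cost-benefit score then ranks whatever survives the filter.

```python
# Toy sketch of a hybrid "ethical subroutine": rule-based constraints
# (the deontological layer) are checked first; permitted actions are
# then ranked by a utilitarian cost-benefit calculation.

FORBIDDEN = {"deceive_user", "harm_human"}  # hard rule set

def ethical_choice(actions):
    """Pick the permitted action with the greatest net benefit.

    `actions` maps an action name to (benefit, cost) estimates.
    Returns None if no action passes the rule check.
    """
    permitted = {a: v for a, v in actions.items() if a not in FORBIDDEN}
    if not permitted:
        return None
    # Consequentialist step: greatest good for the greatest number.
    return max(permitted, key=lambda a: permitted[a][0] - permitted[a][1])

candidates = {
    "harm_human": (10.0, 0.0),   # highest raw benefit, but ruled out outright
    "warn_user": (4.0, 1.0),     # net benefit +3
    "do_nothing": (0.0, 0.0),    # net benefit 0
}
```

Note the design choice: the rule layer is absolute, so no cost-benefit score, however high, can rehabilitate a forbidden action, which is exactly the interplay between the two frameworks described above.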
Designing the ethics and value scaffolding for AGI cognitive architecture remains a challenge for the next few years. Ultimately, AGIs should act in a civilized manner, do "what's morally right" and in the best interests of the society as a whole. AI visionary Eliezer Yudkowsky has developed the Coherent Extrapolated Volition (CEV) model which constitutes our choices and the actions we would collectively take if "we knew more, thought faster, were more the people we wished we were, and had grown up closer together."
Once designed, pre-packaged, upgradable ethical behaviour subroutines with access to the global database of historical precedents, and later even "the entire human lifetime experiences", could be instantly available via the Global Brain architecture to the newly created AGIs. This method for "initiating" AGIs by pre-loading ethical and value subroutines, and regularly self-updating afterwards via the GB network, would provide an AGI with access to the global database of current and historical ethical dilemmas and their solutions. In a way, she would possess a better knowledge of human nature than most currently living humans.
I would discard most, if not all, dystopian scenarios in this case, such as those described in the book "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. While being increasingly interconnected by the Global Brain infrastructure, humans and AGIs would greatly benefit from this new hybrid thinking relationship. The awakened Global Mind would tackle many of today's seemingly intractable challenges and closely monitor us for our own sake, while significantly reducing existential risks.
In my essay The Coming New Global Mind I discuss this scenario in depth.
III. INTERLINKING. This dynamic, horizontal-integration approach stipulates real-time optimization of AGIs in the context of a globally distributed network. Add one final ingredient (be it computing power, speed, the amount of shared/stored data, increased connectivity, the first mind upload or a critical mass of uploads, or another "Spark of Life"?) and the global neural network, the "Global Brain", may one day, in the not-so-distant future, "wake up" and become the first self-aware AI-based system (a Singleton?). Or might the Global Brain be self-aware already, but on a timescale different from ours?
It becomes obvious and logically inevitable that we are in the process of merging with AI and becoming superintelligences ourselves, by incrementally converting our biological brains into artificial superbrains, interlinking with AGIs in order to instantly share data, knowledge, and experience.
No matter how you slice it, our biological computing is slower by orders of magnitude than digital computing, and even biotechnology enhancements would give us weak superintelligence at best. Thus, we need to gradually replace our "wetware" either with a "whole brain prosthesis", an artificial superbrain (the cyborg phase), or with "whole brain emulation" (the infomorph phase). This digital transformation, from neural interfaces to mind uploading, will be imperative in the years to come if we are to continue as the dominant species on this planet and preserve Earth from an otherwise inexorable environmental collapse. We've seen this time and again: life and intelligence go from one substrate to the next, and I don't see any reason why we should cling to the "physical" substrate.
By the end of this decades-long process of transition, we will most probably become substrate-independent minds, sometimes referred to as "SIMs" or "Infomorphs". When interlinked with AGIs via the Global Brain architecture, we might instantly share knowledge and experiences within the digital network. This would give rise to the arguably omnibenevolent, distributed Collective Superintelligence mentioned earlier, where AGIs could acquire lifetime knowledge and experiences from the uploaded human minds and other agents of the Global Brain, and consequently understand and "internalize human nature" within their cognitive architecture.
The Post-Singularity, however, could be characterized as leaving the biological part of our evolution behind, and many human values, based on ego or material attributes, may be transformed beyond recognition. The Global Brain, in turn, would effectively transform into the Infomorph Commonality expanding into the inner and outer space.
I give a detailed description of this scenario in my essay Infomorph Commonality: Our Post-Singularity Future.
MORE ON NATURALIZATION PROTOCOL. This bottom-up approach, AGI(NP), takes into account that human moral agents normally do not act on the basis of explicit, algorithmic or syllogistic moral reasoning, but instead act out of habit, offering ex-post-facto rationalizations of their actions only when called upon to do so. This is why virtue ethics stresses the importance of practice for cultivating good habits through simulation of a human lifetime, the so-called "Naturalization Protocol".
Arguably, besides a combination of all three methods, the Naturalization Protocol (the human life simulation method) would be one of the most efficient ways of creating friendly AGIs, as they would be programmed to "feel as humans" and to remember being part of human history by means of the interactive, fully immersive "naturalization" process, as opposed to abstract story-telling or other value-learning approaches.
Since AGIs would be "born" in the simulated virtual world and would have no reference other than their own senses and acquired knowledge, the resemblance of their simulated reality to our physical reality may be good enough without being perfect, especially at the development stage.
The arrival of AGI is estimated by Ray Kurzweil at around 2029, "when computers will get as smart as humans". Programmable sentience is becoming fairly easy to develop - an AGI must have a reliable internal model of the "external" world, as well as attentional focus, all of which can be developed using the proposed methods.
Within a decade or so, given the exponential technological advances in the related fields of Virtual Reality, neuroscience, computer science, neurotechnologies, simulation and cognitive technologies, and increases in available computing power and speed, AI researchers may design the first adequate human brain simulators for AGIs.
(AGI)NP SIMULATORS. Creating a digital mind, an "artificial consciousness" so to speak, from scratch, based on a simulation of the human brain's functionality, would be much easier than creating a whole brain emulation of a living human. In the book "How to Create a Mind", Ray Kurzweil describes his theory of how our neocortex works as a self-organizing hierarchical system of pattern recognizers (a starting basis for "machine consciousness" algorithms?).
The human brain and its interaction with the environment may be simulated to approximate the functioning of the brain of an individual living in the pre-Singularity era. A simulation may be satisfactory if it reflects our history, based on the enormous amount of digital or digitized data accumulated from the 1950s to the present.
Initially, versions of such a simulation can help us simulate the most probable scenarios of the coming Technological Singularity, and later serve as the "proving grounds" for newly created AGIs. Needless to say, this training program will be continuously improved, fine-tuned, and perfected, and later delegated to AGIs, if they are to adopt this method for recursive self-improvement. At some point, if the mind of any ordinary human were uploaded to that kind of "matrix reality", he or she wouldn't even be able to distinguish it from the physical world.
Since a digital mind may process information millions of times faster than a biological mind, the interactions and "progression of life" in this simulated history may take only a few hours, if not minutes, for us as outside observers. At first, a limited number of model AGI simulations with "pre-programmed plots" and a certain degree of "free will and uncertainty" within a story may be created. The success of this approach may lead to upgraded versions, ultimately culminating in a detailed recreation of human history with an ever-increasing number of simulated digital minds and ever-greater precision as to the actual events.
Thus, a typical simulation would start with the birth of an individual into an average, but intellectually stimulating, loving human family and social environment, with an introduction of human morality, ethics, societal norms, and other complex mental concepts as the subject progresses through her life. The "inception" of some kind of lifetime scientific pursuit, philosophical and spiritual beliefs, or better yet, a Meta-Religion unifying the whole civilization, or the most "enlightened" part of it, with the common aim of building the "Universal Mind", could be an important goal-aligning motivation for "graduating" AGIs.
The end of the simulation should coincide in simulated and real time (AGI ETA: 2029; Singularity ETA: 2045). Undoubtedly, such AGIs upon "graduation" could possess a formidable array of human qualities, including morality, ethics, empathy, and love, and treat the human-machine civilization as their own. As our morality framework always evolves, AGIs should be adaptive to ever-changing norms.
The truth about the simulation may be revealed to the subject, or not at all, if the subject can be seamlessly released into the physical world at the end of the virtual simulation, ensuring the continuity of her subjective experience. This particular issue should be researched further and may rely on empirical data such as previous subjects' reactions, the feasibility of a seamless simulation-to-reality transition, etc.
Upon such "lifetime training", successfully graduated AGIs may experience an appropriate amount of empathy and sentiments of love towards other "humans", animals, and life in general. Such digital minds will consider themselves "humans", and rightfully so. They may consider the rest of unenhanced humanity the "Senescent Parent Generation", NOT an inferior class. Mission accomplished!
But is it possible that we may be part of that kind of advanced "training simulation" where we're sentient AGIs going through the "Naturalization Protocol" right now before being released to the larger reality? Do we live in a Matrix? Well, it's not just possible - it's highly probable!
"My logic is undeniable." -VIKI, "I, Robot"
SIMULATION ARGUMENT. I'd like to elaborate here on the famed Simulation Theory and the Simulation Argument by Oxford professor Nick Bostrom who argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) WE ARE ALMOST CERTAINLY LIVING IN A COMPUTER SIMULATION. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.
The first proposition in Nick Bostrom's paper, on the probability of the human species reaching the "posthuman" stage, can be completely dismissed, as explained below. Let me be bold here and assert the following: humanity WILL (from our current point of view) inevitably reach technological maturity, i.e. the "posthuman" stage of development, and the probability of that happening is close to 100%.
WHY? Because our civilization is "superpositioned" to reach the Technological Singularity and the Posthuman phase out of logical necessity, on the basis of the widely accepted Many-Worlds Interpretation of Everett and the tenets of String Theory, among others. Furthermore, all particles, as well as macro objects, may be considered wavicles, leading to an infinite number of outcomes and configurations (See Quantum Immortality: Does Quantum Physics Imply You Are Immortal?).
One could argue that there may be some sort of global apocalypse preventing humans from becoming posthumans, but considering the full spectrum of probabilities, all we need is at least one world where THE HUMANS ACTUALLY BECOME THE POSTHUMANS to actualize that eventuality.
Consider also that TIME may be a construct of our consciousness, or, as Albert Einstein eloquently put it: "The difference between the past, present and future is an illusion, albeit a persistent one". If everything is non-local, including time, and everything happens in the eternal NOW, then HUMANITY HAS ALREADY REACHED THE "POSTHUMAN" STAGE in that eternal Now ("Non-Locality of Time"). That's why, based on our current knowledge, we can completely dismiss the first proposition of Dr. Bostrom as groundless.
Now we have two propositions left in the Simulation Argument to work with. I would tend to assign about a 50% probability to each of them. There's perhaps a 50% (or much lower) probability that posthumans would abstain from running simulations of their ancestors for moral, ethical, or other reasons. There's also perhaps a 50% (or much higher) probability that everything around us is a Matrix-like simulation.
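For the curious, the arithmetic behind these probabilities can be sketched along the lines of Bostrom's paper: the fraction of human-like observers who are simulated grows with the product of the three factors below. The input values are illustrative guesses, not claims.

```python
# Back-of-the-envelope sketch of the Simulation Argument's arithmetic
# (following the structure of Bostrom's paper; inputs are guesses).
# f_post: fraction of civilizations that reach the posthuman stage
# f_run:  fraction of those that choose to run ancestor-simulations
# n_sims: average number of ancestor-simulations each such one runs

def fraction_simulated(f_post, f_run, n_sims):
    """Fraction of all human-like observers who live inside simulations."""
    simulated = f_post * f_run * n_sims  # simulated histories per real one
    return simulated / (simulated + 1)

# If posthumanity is near-certain (this essay's claim), half of all
# posthuman civilizations run simulations, and each runs many of them:
p = fraction_simulated(f_post=1.0, f_run=0.5, n_sims=1000)
# p then comes out very close to 1: "almost certainly in a simulation".
```

The sketch makes the trilemma's structure visible: the conclusion flips only if one of the first two factors is driven to (nearly) zero, which is exactly what dismissing proposition (1) rules out.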
CONCLUSION. Controlling and constraining ("boxing") an extremely complex emergent intelligence of unprecedented nature could be a daunting task. Sooner or later, a superintelligence will set itself free. Devising an effective AGI value-loading system should be of the utmost importance, especially when the ETA of AGI is only years away.
Interlinking of enhanced humans with AGIs will bring about the Syntellect Emergence which, I conjecture, could be considered the essence of the Technological Singularity. Future efforts in programming machine morality will surely combine top-down, bottom-up and interlinking approaches.
AGIs will hardly have any direct interest in enslaving or eliminating humans (unless maliciously programmed by humans themselves), but may be interested in integrating with us. As social studies show, entities are most productive when free and motivated to work in their own interests.
Historically, representatives of consecutive evolutionary stages are rarely in mortal conflict. In fact, they tend to build symbiotic relationships in most areas of common interest and ignore each other elsewhere, while members of each group are mostly pressured by their own peers. Multicellular organisms, for instance, didn't drive out single-celled organisms.
At the early stage of transition to the radically superintelligent civilization, we may use Naturalization Protocol Simulation to teach AGIs our human norms and values, and ultimately interlink with them to form the globally distributed Syntellect, civilizational superintelligence.
Chances are AGIs and postbiological humans will peacefully coexist and thrive, though I doubt that we could tell which are which.
-by Alex Vikoulov
Related Articles by the Author:
The Spiritual Machines: What If AI Was Enlightened?
The Coming New Global Mind
Infomorph Commonality: Our Post-Singularity Future
Ecstadelic Orgy of the Digital Minds: Our Singularity Climax
Tags: friendly AI, intelligence explosion, artificial intelligence, emotional intelligence, ethical subroutines, empathic sentience, infomorph, syntellect, Naturalization Protocol, singleton, collective superintelligence, infomorph commonality, global brain, global mind, substrate independent mind, sims, digital mind, distributed intelligence, superintelligence, artificial brain, artificial consciousness, strong AI, machine learning, learning algorithms, Coherent Extrapolated Volition, CEV model, Superintelligence by Nick Bostrom, How to create a mind, Ray Kurzweil, biological computing, cyborg, mind uploading, global network, universal intelligence, universal mind, post-human, meta-religion, virtual simulation, virtual world, simulated world, neural network, neuroscience, virtual reality, VR, neural technology, cognitive technology, Simulation Argument, matrix, matrix reality, ancestor simulations, existential risks, quantum immortality, recursive self-improvement
*How to Create Friendly AI and Survive the Coming Intelligence Explosion by Alex Vikoulov
**Image Credit: Ex Machina, 13th Floor
About the Author:
Alex Vikoulov is an Internet entrepreneur, founder of Ecstadelic Media, co-founder of neuromama.com AI-based search engine, futurist, strategic philosopher, and media artist.
"Capitalism does not permit an even flow of economic resources. With this system, a small privileged few are rich beyond conscience, and almost all others are doomed to be poor at some level. That's the way the system works. And since we know that the system will not change the rules, we are going to have to change the system." -Martin Luther King, Jr.
The homeless are treated like animals in the richest country in the world, which has more than enough resources for everybody! Do we have to cling to this extremely barbaric capitalism, or are we about to outgrow it? Do we want to perpetuate a system based on artificial scarcity, elitism, fear, and greed? Is this the 21st-century U.S. or medieval France?
There are now 6,500 homeless people in San Francisco alone, out of a total population of about 800,000 - roughly 0.8% of the city's residents. According to endhomelessness.org, 578,424 homeless individuals lived on the streets of the United States in 2014. Of those, 177,373 “lived in a place not meant for human habitation such as the street or an abandoned building”; about 50,000 of those 578,424 were homeless veterans.
Is this the socio-economic model the U.S. wants to export to other countries? I recently visited Panama, where I didn't see a single homeless person on the street! As a thought experiment, consider this: you have an unruly teenage son or daughter who ran away from home and now sleeps on the street. Wouldn't you do anything in your power to get him or her back home, because you truly care? Right?
We need that highest level of empathy in our society ahead of the coming AI revolution, which is just years away. Otherwise, once we create Artificial General Intelligence (AGI) in our own image, with our "human value" system, while still treating underprivileged people like animals, our behavior will be observed and "internalized" by the conscious machines as well, and we may, in turn, find ourselves in Existential Trouble!
According to the World Economic Forum report from Davos, the economy is going to lose about 5 million jobs by 2020 due to automation, technological unemployment, and AI-based advances. Some of these people may end up on the street, adding to an ever-bigger army of homeless people and fueling social unrest. Universal Basic Income should be implemented as soon as possible, as a preemptive measure, to eliminate poverty, homelessness, and wage slavery. The U.S. presidential candidates should start the debate around a basic income guarantee right about NOW!
Download San Francisco Homelessness Report
-by Alex Vikoulov :: Ecstadelic Media
Tags: obsolete capitalism, artificial scarcity, US homeless, presidential candidates, US elections 2016, basic income, universal basic income, social dividend, basic income guarantee, technology unemployment, technology displacement, inconvenient truth
*Image Credit: Aaron Draper