Let's think for a minute about the importance of an established internal consensus as an indication of awareness or intelligence. Bitcoin implements a consensus network capable of solving a distributed decision problem. These first aware machine networks are incapable of the complexity of thought we see even in unicellular life, but they represent an important first step on the road to more complex intelligence.
Fooling Humans – does it really take intelligence?
There was recently much reporting and buzz among AI researchers about the program “Eugene”, which produced text conversation convincing enough to fool 30% of judges into thinking it was human, and was therefore said to have passed the Turing test. Sadly, these AI researchers are completely missing the show. It turns out that fooling humans is not a sign of true self-awareness, and the Turing test is mostly useless as a criterion of AI. Its main real use today is in playing whack-a-mole layered security to prevent script kiddies from performing Sybil attacks on service providers.
I first realized the true futility of using the Turing test as a criterion of self-aware machine intelligence when I wrote a program that imitated a human with 98% success. Amazingly, I was able to do this with essentially one line of code. My task at the time was to write a program that could imitate a human player in a game, and so continue to earn game credits while I did something more interesting, such as sleep. The solution was simple: when a Turing test was presented to my program, it called an API function that routed the test to the desk of a human volunteer, who could answer the question in real time in exchange for a small payment. This is called a decaptcha service.
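The heart of that program really is one line. Everything below is illustrative: in the real version the relay was an HTTP call to a paid decaptcha-style service, but any callable that produces a human's answer can stand in for it:

```python
def bot_reply(question, relay):
    """The entire 'AI': forward the question to a human and return
    whatever they say. In the original program, `relay` was an HTTP
    call to a paid decaptcha-style service; here it is any callable
    that produces a human's answer.
    """
    return relay(question)

# A stand-in human volunteer for demonstration:
print(bot_reply("Are you a robot?", lambda q: "Of course not!"))  # → Of course not!
```

The machine contributes nothing but plumbing, which is exactly the point: the "intelligence" being measured lives entirely on the other end of the relay.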
Perhaps this is “cheating” on the Turing test. However, there is no clear criterion for what is or is not cheating in this task of fooling humans. If I cannot route an API call to somebody's desk, should I be able to draw from a dictionary prepared by humans? Is it cheating to draw from a database of conversations prepared by genuine humans? To attempt to draw a line here in terms of what is or isn't cheating on a Turing test is pointless: we are taking the wrong test. A wax model can pass this test; it has nothing to do with machine awareness. It turns out fooling humans doesn't take true intelligence. Or perhaps it does, but in that case we have proven only that Eugene's creators are intelligent.
Human language interaction: not an entirely useless endeavor
Before you get all upset with me for telling you your research is not relevant to AI, I should point out that this research is indeed important and will lead to useful new technologies. Work on imitating human intelligence gives us interfaces we are coming to rely on, and also gives us great insight into linguistics and how our own neural nets work. Douglas Hofstadter is a brilliant author, voice and language interfaces are incredibly useful, and this line of research is interesting. It just isn't AI. It is UI. Also, if you want something to imitate a human for you, consider hiring one of us. We're amazingly cheap. Drop me an email.
OK, so what is intelligence / consciousness / self-awareness?
Well, this is really a fundamental question that needs far more discussion than I am going to give it here. Inherent in this question is another: what is life? One mistake people make in trying to answer it is to focus on ourselves; another is to focus on fooling ourselves, as we do by emphasizing the Turing test. There is a complex system at work in vertebrates, so to understand it we should start with simple parts and build up. So let's consider three examples of intelligence / cognition / consciousness:
1 Plant Phototropism

A plant is capable of sensing the amount of light incident on various portions of itself, and acting on this information to control its growth and attitude so as to maximize the light energy available to it. We can refer to the light as the “external input”, the reaction of chloroplasts as “sensing”, the communication between cells or components of cells as “network activity”, and the final repositioning of the various components of the plant as the “decision” or “resulting action”.
2 Unicellular Memory
A paramecium is capable of sensing the amount of digestible sugars in its environment, noting a change compared to its recent memory, and using this information to change its direction of motion via the muscular motion of its cilia.
3 Weather Vanes
A weather vane is capable of sensing the direction of the wind and adjusting its position accordingly.
Of these three examples, two show some basic intelligence, while one, our intuition tells us, is either not intelligent or a very different class of intelligence. The weather vane is “dumb matter” in that no communication is required between its various components to reach consensus. Or rather, the communication that goes on between the portions of the weather vane is the same communication that goes on between atoms in a solid, telling it to “stay put”. It is also more predictable (and reliable) than the other two.
The paramecium is perhaps not as intelligent as the plant, since it uses a less complex internal communications network to process the information. The signals to the “muscles” of the cilia are most likely ion-concentration gradients of some sort.
After considering these examples extremely briefly, we will work from the following informal definition: an intelligence is some extended structure capable of taking external input and producing, via internal communication, a non-deterministic consensus course of action.
Before your Terminator instincts are triggered and you start thinking about how to shut down the entire internet to avoid Skynet or superbrights, let's take stock of what this Frankenstein monster is capable of. The external input that the bitcoin consciousness senses is blocks created and suggested by its nodes. New blocks magically appear as external input, just as light magically appears to the plant consciousness. Information about these new blocks is passed between the nodes of the coin network (the body) and a decision is arrived at. Sometimes the network will “change its mind”, which is known as a reorganization. To make a long story short, the coin creature cares about only one thing: adjusting a single integer parameter called the difficulty in such a way that the timestamps on accepted blocks indicate a 10-minute interval between blocks. That's it! No three laws of robotics, no ravenous appetite for data, simply a driven conscious (?) behavior to move towards an attitude conducive to 10-minute blocks. It's really just a clock. A heartbeat.
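The decision (and the “change of mind”) can be sketched crudely. This is a toy, not the reference client's logic, and the chain-of-blocks representation here is invented for the example; but the rule really is this simple in spirit:

```python
def choose_tip(chains):
    """Pick the chain with the most accumulated work.

    Each chain is a list of blocks, each block carrying the work its
    miner expended. A 'reorganization' is simply this choice changing
    when a heavier competing chain arrives from the network.
    """
    return max(chains, key=lambda chain: sum(block["work"] for block in chain))

a = [{"work": 5}, {"work": 5}]               # total work 10
b = [{"work": 4}, {"work": 4}, {"work": 4}]  # total work 12
print(choose_tip([a, b]) is b)  # → True: the network's current "mind"
```

If a heavier rival to `b` later appears, `choose_tip` simply starts returning it, and every node that runs the same rule changes its mind in unison. That is the whole consensus.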
The consciousness cares basically nothing about who pays whom (transactions are decided by miners), what the absolute hash rate is, or whether timestamps are accurate. As users of the network, we of course care about these things very much. They are also important in keeping the coin creature alive, because without the secure network and the mining reward, nobody would bother running the code that enables this poor creature to be aware.
The network consensus awareness has a single possible action which it alone controls: setting the difficulty in order to keep the block timestamps as close as possible to one every 10 minutes. It does this in the simplest possible way from a control-systems standpoint: pure linear feedback. On the intelligence scale we should probably put this above the weather vane but below the paramecium.
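That feedback rule can be sketched as follows. This is a simplification of Bitcoin's retarget, which runs every 2016 blocks and uses integer target arithmetic with its own quirks, but the clamped proportional correction is the real idea:

```python
def retarget(old_difficulty, actual_timespan_s):
    """Scale difficulty so blocks arrive ten minutes apart on average.

    If the last 2016 blocks came too fast, difficulty rises; too slow,
    it falls. The correction is clamped to a factor of four in either
    direction so that bad timestamps cannot swing it wildly.
    """
    TARGET_TIMESPAN = 2016 * 10 * 60  # two weeks, in seconds
    actual = min(max(actual_timespan_s, TARGET_TIMESPAN / 4),
                 TARGET_TIMESPAN * 4)
    return old_difficulty * TARGET_TIMESPAN / actual

# Blocks arrived twice as fast as intended -> difficulty doubles.
print(retarget(1.0, 2016 * 10 * 60 / 2))  # → 2.0
```

One input, one knob, one setpoint: a thermostat would recognize a kindred spirit.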
Consciousness in a hostile environment
The mechanism by which a coin network achieves consensus is counterintuitive to those who first discover the proof-of-work system. If every one of our nerve cells worked on arbitrary brute-force arithmetic problems in order to arrive at a decision, we would be very different creatures. PoW is not the organic solution to the distributed consensus problem. Inefficient though it may be, it works in a hostile environment. Inside the body of a coin network, communications are unreliable, unauthenticated, and possibly malicious. New nodes can and will jump into the network at any time. It is remarkable that consciousness can emerge from this environment at all.
More complex creatures are emerging in a myriad of ways (pun intended). Specialized nodes on networks that perform specialized tasks based on other external input are feasible. In some sense, we have done very little here in making this step of creating the first machine intelligence. One small step for Satoshi.
To see how little it matters from a practical standpoint, consider a robot which has 10 accelerometers and uses their readings to remain in an upright position. This robot takes the readings from all 10 accelerometers and pours current to its servo motors deterministically from a CPU. Now consider a second robot which also has 10 accelerometers. Each of these accelerometers is connected to a node with a CPU which mines on an internal coin network. The robot brain now takes a consensus of their recommendations to decide the current to its servos. How can we compare these two robots? The second is slower, more complex, and also more likely to fall down. It is also more alive.
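The contrast can be made concrete with a deliberately crude sketch. Nothing here is real robotics or real proof of work; the random winner in the second robot merely stands in for the lottery by which one miner's block gets accepted:

```python
import random

def robot_one(readings):
    """Deterministic robot: average all accelerometers on one CPU and
    drive the servos directly. Fast, simple, predictable."""
    return sum(readings) / len(readings)

def robot_two(readings, rounds=10):
    """Consensus robot: each accelerometer's node proposes its own
    reading, and a randomly chosen proposer wins each round, standing
    in for the block lottery. Slower, noisier, more likely to fall."""
    winners = [random.choice(readings) for _ in range(rounds)]
    return sum(winners) / len(winners)

readings = [0.9, 1.0, 1.1, 1.0, 1.05, 0.95, 1.0, 1.2, 0.8, 1.0]
print(robot_one(readings))  # the mean of all ten readings
print(robot_two(readings))  # near the mean, but different every run
```

The first robot is the better engineering. The second robot's answer is negotiated rather than computed, which is exactly the property this essay is calling aliveness.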
Now go do something useful with your time.