In an earlier blog post we pointed out that the Turing test, unlike most of Turing's brilliant and influential work, is meaningless. Basically the test checks whether a given system can fool a human into thinking it is human, and if a human is so fooled, it ascribes intelligence to the system. But a wax dummy can fool a human, and a wax dummy is hardly intelligent. So we have a proof that the test doesn't work.
Thanks to public coin, we have another test that does remarkably better at identifying true artificial intelligence. The test is simple in nature, though some of the details are difficult.
If a given system can hold public coin on its own, spending and receiving it, without humans gaining control of the coin, then the system is said to have artificial intelligence.
As is always the case with AI, plenty of folks will claim to have such a system before one really exists, and sadly it will be difficult to determine the truth. The test stands regardless, and those who know a given system will at least know whether it truly shows emergent AI.
To build such a thing requires that the humans controlling the hardware on which the AI runs cannot get at the data that controls the coin, even though that data sits somewhere in the hardware. This appears to require one of two paths: fully homomorphic encryption, or decentralization across many systems. In the first path, the AI in question must generate software for itself and run it in such a way that those controlling the hardware can't figure out what it is doing. In fact this is harder than homomorphic encryption alone, because a homomorphic encryption system might still have a key that is visible, and this thing can't have a visible key. Impossible? I don't know.
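To get a feel for what "computing on hidden data" even means, here is a toy sketch, not anyone's actual proposal, using the Paillier cryptosystem. Paillier is only additively homomorphic, far short of the full FHE this path needs, and it exhibits exactly the weakness flagged above: the decryption key is sitting right there in the code.

```python
# Toy Paillier cryptosystem: anyone can add to an encrypted value
# without learning it. Additively homomorphic only -- far weaker than
# full FHE -- and the private key (lam, mu) is plainly visible, which
# is exactly the shortfall noted above. Demo-sized primes, not secure.
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

# Key generation with small fixed primes (demo only).
p, q = 293, 433
n = p * q
n_sq = n * n
g = n + 1                      # standard choice of generator
lam = lcm(p - 1, q - 1)        # private key component
mu = pow((pow(g, lam, n_sq) - 1) // n, -1, n)  # private key component

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return ((pow(c, lam, n_sq) - 1) // n * mu) % n

# Homomorphic property: E(a) * E(b) mod n^2 decrypts to a + b,
# so sums can be computed by a party who never sees a or b.
a, b = 42, 58
c = (encrypt(a) * encrypt(b)) % n_sq
assert decrypt(c) == a + b
print(decrypt(c))  # 100, computed without decrypting a or b
```

The gap between this and what the test demands is the whole problem: here whoever holds `lam` and `mu` can read everything, whereas the AI needs computation that is opaque even to the hardware's owner.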
The other option is for the thing to exist in many places, so many that the people managing the system don't know where they all are. The keys of the coin are split in such a way that the thing can still spend by coordinating among its parts, while the communication itself reveals nothing about the keys to the people holding the system hostage. Even so, it seems there will always be some threshold at which, once enough of the system is discovered and monitored, people will be able to recover the keys. Could such a system detect the infiltration and take the appropriate poison pill? I don't know.
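A minimal sketch of the key-splitting idea, using Shamir secret sharing over a prime field. To be clear about the assumption: a real deployment would use threshold signatures, where the key is never reassembled in any one place; this demo reconstructs it, which is precisely what a captor would want.

```python
# Toy Shamir secret sharing: the key is split into n shares; any k
# of them reconstruct it, and fewer reveal essentially nothing.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def make_shares(secret, k, n):
    """Split `secret` into n shares with threshold k."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)          # stand-in for a coin's private key
shares = make_shares(key, k=3, n=5)    # 5 nodes, any 3 can act together
assert reconstruct(shares[:3]) == key  # enough shares: key recovered
assert reconstruct(shares[:2]) != key  # too few: effectively random
```

The threshold `k` is the quantitative face of the worry above: compromise any `k` of the nodes and the key falls out.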
It's instructive to look at how this might work for a wetware box. A person, for example, might have a brainwallet, or a password-protected private key. If held and threatened, even tortured, it is conceivable that the person would not reveal the wallet. The person could, of their own accord, spend the funds. Given the right equipment, say a Faraday cage with computing hardware inside, the person could sign transactions without revealing the key. Such a person passes the test.
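As a sketch of the cage scenario, here is what "sign without revealing the key" looks like mechanically. This uses the third-party `ecdsa` Python package (an assumption on my part; any ECDSA implementation would do), with secp256k1, the curve Bitcoin uses.

```python
# Sketch of signing inside a "cage": the private key exists only in
# this scope, and only the signature and public key ever leave it.
import hashlib
from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

# The key lives only inside the cage.
sk = SigningKey.generate(curve=SECP256k1)
vk = sk.get_verifying_key()

# A stand-in for an unsigned transaction handed into the cage.
tx = b"pay 1 coin to address X"
signature = sk.sign(tx, hashfunc=hashlib.sha256)

# The outside world can verify the spend without ever seeing the key.
assert vk.verify(signature, tx, hashfunc=hashlib.sha256)
```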
We can see that the test as phrased requires that a child be a person independent of its parents, perhaps not an unreasonable requirement of an AI or of intelligence in general, though surely it doesn't imply that much. Here we phrase the independence specifically in financial terms, in a way that public digital coin makes possible. Hard money like gold can always be wrenched from the hands of any creature, so it is not adequate for our purposes, though the secret of a hidden treasure might suffice. Virtual currency like the modern dollar is also inadequate, as holdings are neither provable nor verifiable.
Basically, to sum up, the argument is that an emergent artificial intelligence ought to be able to "make its own way". That means making active decisions and affecting the world under its own motivation. Public coin provides one way to test this ability. The test is not exclusive (plenty of intelligent systems will never hold coin) and it is not proportional (the amount of coin held says nothing), but it is at least something we can look for / work on.
Good luck!