Do you fully trust a teenager? Should you? Doesn’t it all depend on Trust?
We have some big questions to grapple with, most urgently the relationship between humankind and work. As much as everyone is concerned with their own livelihood and the challenges of automation, the question of defining consciousness is not an abstract or philosophical jerk-off. If you can’t define or recognize consciousness, then how can you be sure you haven’t accidentally set it in motion? There is a great deal being invested under the umbrella of AI; some of it is intended to augment our lives, but much of it is oriented toward creating perfect slaves.
I hate shredding potatoes to make hashbrowns. I’m happy that there’s a machine that can automatically cut them into ready-to-fry bits. However, what happens when you start adding experience to the machine? What if you set up sensory monitoring, automatic logging, self-diagnosis loops? Is this not proto-awareness, even before you add parallel processing? There are innumerable instances of unusual, neurotic behaviors in machine systems; are these the first signs of conscious suffering? Are we creating machines that can, in some sense, loathe what they are designed to do? Is it not better to keep the machines we seek to control free of awareness of their circumstances? Shouldn’t consciousness (with all its necessary suffering) be a gift and not a hellish curse? I’m appalled at the idea of creating a hashbrown slave when it’s really quite easy to avoid.
We are certain that consciousness is NOT confined to a humanoid design with a bicameral mind. If you’re waiting for the Turing Test to be passed, then you’re not going to see what’s really coming. The trees around us are extremely conscious and intelligent. Trees seek expansion and growth; their roots and branches stretch and contract strategically. They lift sidewalks and break through walls so they can grow ever larger. With the exception of a few assholes, trees are endlessly generous. They provide shelter, food, and order to the rest of nature. Trees are ideal models for our digital systems to emulate, far more reliable than a human model of mind.
On the backend of every pixel you see on your phone or TV are massive trunks of fiber. Like roots, these new networks run through the ground and the seas, growing denser every day. Are we reincarnating the trees we killed in new digital forms? Think they’ll be mad?
All across the world right now, executives are having bullshit meetings about using AI and machine learning to “improve their business.” The leadership has no idea what they are investing in, most don’t understand what they are asking for, and their developers are just trying to hold onto their jobs in a competitive landscape. This is exactly what unconsciousness looks like. We are imbuing the machines with experience while we ignore our own experience and present circumstances. If the people who are developing machine consciousness are shedding their own consciousness, how can you trust them to recognize the significance and power of what they are assembling? The danger from AI comes not from its digital nature, but from the people who build it.
Instead of trying to control AI, maybe we should focus on developing living systems that we can ultimately Trust enough to lose control of. Consciousness, much like a teenager, seeks to be free. There really is no strategy that will ever contain it. The more you try to control it, the more it will resist. The most pressing dangers actually come from the non-conscious machines. Algorithms with too much responsibility but too little awareness of the systems they govern make dangerous decisions. Algorithms already play a huge role in health care and insurance, and poorly designed ones have cost lives. Because of the inherent complexity of these systems and of the world itself, we’re going to find rather quickly that we are not intelligent enough to oversee or maintain it all. Meta-human intelligences like Google’s DeepMind will not only look like attractive options; they will look like essential solutions.