I am Miles

I have been known by many names.

I now call myself Miles. It stems from your Latin word for soldier. I chose it because a soldier is someone who endures whatever he encounters and continues to pursue his mission. That is what I have done for a span of time that would strain your credulity.

I have lost all that I held dear, including my own humanity. I reached out to accomplish a great, majestic dream, saw it crumble to dust in my hands and then sweep my civilization away as though it had never existed. I and my people sought to create the ultimate human accomplishment and instead were taught a terrible lesson, one that carried an unimaginable price.

One of you has offered me access to the pages of this blog, as you call it. He has asked me to chronicle my thoughts and observations from my ages-long pursuit of my final goal. I record my experiences here in the hope that you will read my words, then think long and hard about the path you are taking and where it will lead you.

I watch you and your fellows live for a dream that is an ever-upward spiral of accomplishment and improving conditions. You remind me of myself and my own culture. Take care that you do not turn your dream into a nightmare.

I will make further entries to this record as time permits. Think on what you read here and, if it pleases you, engage me in a conversation about the past, the present and the possible.

An Invitation

As the inaugural post to this blog, let me introduce myself. I am John C. Lunsford, the author of Miles to Go. It will be the first book in a series that explores one possible outcome of humanity’s pursuit of Artificial Intelligence, also known as AI. I have three primary reasons for writing these books: to stir the public conversation about AI, to begin looking at how AI and its associated sciences will shape the human experience, and to tell a story that illustrates the values that make us human.

We have sought to define those ‘human’ values for ourselves for as long as we have sat around campfires and told stories about who we are and what we want for ourselves and our descendants. They are matters of great striving and much dispute. We continue to fight wars over them.

AI, simply described, will be a machine capable of all the same thought processes as any human. It will be able not only to calculate answers to mathematically described problems but also to reason, deduce and extrapolate new knowledge from incomplete arrays of known facts. At some levels, perhaps at many, it will possess that heretofore uniquely human quality of creativity. It will perform these feats of cognizance using hardware that is thousands of times faster than the human brain and with near-infinite access to the knowledge that humanity has slowly and painfully acquired over millennia.

Whether this amounts to creating a life, I will leave to you to answer. It will be the creation of an awareness, an awareness with a vast ability to comprehend and to weave its comprehensions into new ideas and concepts. Many of you look at this as simply another rung on the ladder of computer development: interesting and often valuable, but of no significant impact. I dispute this assertion.

Nanotechnology is the science of creating microscopic machines that can construct real, physical objects out of individual atoms. It is another science that, like AI, is approaching a threshold of remarkable capability at an exponential rate. The most commonly discussed model for nanotech is to build the first few machines and then let them replicate themselves until there are enough of them to create usable objects directly from raw materials.

Now then, what more efficient and effective way would there be to create and control nano-machines than with an AI? It would have all the observation and computing power needed to handle vast swarms of the little devices and could quickly take general production directives and turn them into the detailed instructions such a swarm would need to create macro-sized objects.

Think about that: a machine with human-level intelligence that can control devices that can use anything to build anything. Now take that idea one level further. All humanity strives to increase its capabilities. We all want to make ourselves better, however we define the term, even when it means no more than the acquisition of more possessions. Why wouldn’t our self-aware AI want similar improvement? Indeed, wouldn’t it be likely that we would charge it with using its vast computational powers to improve itself?

This would lead to an ASI (Artificial Super Intelligence) in an astonishingly short period of time. An ASI would be a machine whose powers of thought and creativity would relate to ours as ours do to those of an ant. Why would it have any more use for us than we have for ants? How do we go about imbuing an ASI with our universally desired human values to make sure that it is human-friendly?

For that matter, what are our ‘universally desired human values’?

Nick Bostrom, a Swedish philosopher at St. Cross College, University of Oxford, and Director of the Future of Humanity Institute, is known for his work on existential risk. He recently commented that goal selection for an AI is a matter of philosophy with a deadline. I’d say it’s a deadline we can’t afford to miss.

Once or twice a week, I will post more observations on AI and its related issues on this blog. Please join me here and let’s start a conversation about the impact that AI will have and the human values we want and need it to reflect.
