In part one of "From 'Content is King' to GODLIKE," we introduced some mind-boggling facts, namely that:
- 90 percent of all content ever generated in the history of mankind was created in the last two years
- 99 percent of all content has yet to be accessed, let alone worked with, mined, or analyzed.
Throw artificial intelligence (AI) into the mix and it quickly becomes apparent that we are embarking on the creation of a technology that would rival any god ever envisioned, and there have been thousands. The big differences between the gods of the past and those of the present are worth elucidating.
Traditional or conventional gods provided answers to most if not all of life's mysteries. Of course, there are problems with that, most notably a lack of empirical rigor to back up the robustness of the claims. Newer gods (I'll use Google's search engine as an example) are empirical for sure, using information, i.e., empirical evidence, as their epistemic foundation. From there the old and the new intersect again, since both models are pretty big on prediction (or prophecy). But then comes yet another bifurcation: the old gods depend on one form or another of revealed truth, while the Google god relies on inference, induction, and most recently artificial intelligence to answer the secular prayer more commonly known simply as THE SEARCH.
God-or-godlike dichotomies aside, what's a civilization to do with all this content, especially since 99 percent of it is just sitting here and there (and everywhere), doing nothing? Well, we already have the technology, i.e., Google and its kin, to harness it. That's one thing, but taking benign predictability (the search) to profound prophecy and beyond, through weird and counterintuitive correlations that provide answers even before we think of them, let alone type them into Google, is where the future of content vis-à-vis AI is heading.
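The "answers before we finish asking" idea can be seen in miniature with even the crudest autocomplete: rank past queries by frequency and surface them from a partial prefix. The toy sketch below is purely illustrative; the query log, counts, and `complete` function are invented for this example, and real search engines use vastly richer signals than raw frequency.

```python
from collections import Counter

# A toy query log standing in for the mass of "content" a search
# engine mines. All queries and counts here are invented.
query_log = Counter({
    "weather today": 120,
    "weather tomorrow": 45,
    "weather radar": 30,
    "free will debate": 12,
})

def complete(prefix, k=3):
    """Return the k most frequent logged queries starting with prefix."""
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: -pair[1])  # most frequent first
    return [q for q, _ in matches[:k]]

print(complete("weather"))  # ['weather today', 'weather tomorrow', 'weather radar']
```

Typing just "weather" already yields a ranked guess at the full question. Scale the log up by billions and add inference over correlations, and the prophecy metaphor starts to feel less fanciful.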
The next article will get into the specifics of how this might look, using everyday problems and contemplations.
In a recent article by Abigail Klein Leichman titled "Could robots replace psychologists, politicians and poets?", published by Israel30c.com, Leichman concludes that AI will never develop a mind that can solve problems. Yet many neuroscientists, computer scientists, and those on the front lines of neural networks, machine learning, and all things artificial intelligence believe they already have evidence that computers will develop true learning capabilities, and that some already have.
For the purposes of this essay, I'll fold machine learning, neural networks, and artificial intelligence into one artificial intelligence monolith. In practice, there is little difference today between the three other than their labels and the baggage each label carries.
This debate has been around since humans started asking important existential questions. For most of us, free will is a given. We simply must have it, because by most accounts we are free to make decisions or choices save for governmental, religious, or cultural restrictions and taboos. Even many theists advocate for free will. Christianity is predicated on it: God gave us all free will so that we are free to accept the Christian god, but don't have to. Such authorities aside, it certainly seems that we have free choice, that we are presented with options and make a decision. Such decisions can be as simple as choosing a flavor of ice cream or as consequential as whom to marry or which philosophy or politics to endorse.
Philosopher and neuroscientist Sam Harris says this isn't true, and he believes he can prove it. In his best-selling book Free Will, Harris presents several scientific studies conducted over two decades that seem to confirm that free will is a delusion. The studies all conclude that the unconscious brain makes each and every decision, then sends that information (and the conclusion) to our conscious mind, where we go through the motions of deciding something that was already decided, usually about a second before our conscious deliberation began, sometimes a bit longer or shorter depending on the complexity of what is being considered. Yes, all of this has been measured, and the data is quite unambiguous.
So how does this bear on artificial intelligence and computers' ability to think, solve problems, even give psychological advice and direction? If Harris is right, the question itself is misguided, arrogant, and flat-out wrong, for all the assumptions on which it hinges are incorrect. I must say, this is one of those rare times when new science contradicts the prevailing sense of how existence works, not to mention it's counterintuitive if not downright unpleasant to consider, which is where I'm at right now.
That said, more on the fascinating topic of artificial intelligence to come, I promise.