Posts Tagged “artificial intelligence”
While at times the evolution of technological innovation may seem chaotic, with no clear purpose, goal, or objective — many new technologies seem to come out of nowhere — there is an unseen hand at play. Adam Smith’s Wealth of Nations describes this somewhat mystical phenomenon whereby markets have their own agency, filling in market gaps as if a transcendental being were overseeing our economy. What Smith and many who followed him neglected to notice, whether intentionally or not, is that real people with a keen awareness of present conditions, coupled with a sense of future need, are these mysterious beings. In other words, we have discovered this unseen hand, and it’s us!
I’ll use applications that just about everyone is aware of to demonstrate how all of this works. It’s a bit of an oversimplification but appropriate for this demonstration.
- Microsoft Word released as a standalone application
- Microsoft Excel released as a standalone application
- Microsoft PowerPoint released as a standalone application
Then Microsoft Office was released, comprising all three with true integration that made interfacing among them rather easy. In other words, innovations begin as separate entities but eventually and naturally consolidate into one integrated application.
I mention this because we are witnessing the same sort of stovepipe development and consolidation happening right now. However, this phenomenon is no longer constrained to applications but extends to vaguer concepts such as content and speed.
On the content side, artificial intelligence (AI) is the driving force. Note that there is real category confusion about AI, which is to be expected. Legacy labels such as neural networks and machine learning are becoming meaningless because they overlap and can each rightly be called AI, which reflects a consolidation of applications in real time. Now couple that with what AI needs to enable greater capabilities, the ones science fiction describes, and there we have it: speed. And this increased speed, actually 20 to 50 times faster than its predecessor, is coming through 5G wireless.
Long story short, if you want to see the future of tech innovation, keep your eyes on AI and 5G. Throw in the peripheral technologies of the Internet of Things (IoT), and the picture becomes clear.
In this segment on artificial intelligence (AI), part four, we’ll look at the augmented human being, part human, part machine. And don’t laugh. Computer or robotic-assisted devices are being used to augment the human condition right now. For but one example, see my previous story on exoskeleton technology.
Another slick wearable allows legally blind people to read newspapers and magazines, product labels in a grocery store, even the money they take out of their pocket to pay the cashier, using artificial visualization technology.
As neuroscientists unlock the mysteries and power of the human brain while, at the same time, AI researchers build programs that get smarter and smarter, even to the point of becoming autonomous learners, human anatomy and robotics, along with AI software, will converge into human/machine hybrids, some of which will have more human characteristics than others. In other words, if we live long enough, say twenty more years, we may actually meet Mr. Spock, or a reasonable facsimile thereof.
Some of my academic friends who are working on this exciting future are not as enthusiastic as you would think. Many fear that ethics will not keep pace with the technology, that we will create, arguably, a new species whose rights and freedoms will not comport with our justice system as it stands today. Others are concerned about the economic value of people in an age when machines and computers will do almost all of the work. What are we going to do with 5 billion people in surplus labor for whom there will never be a job? Without income potential, yet still needing to consume goods and services, how will they contribute to the betterment of our species and our world? There are no good answers for any of this yet. But there certainly are many grave concerns over these questions and many others.
But with all the ethical, economic, and social concerns over AI, what most scientists are most anxious about is the notion of the singularity. The singularity, as it pertains to AI, is the moment in the future when computers become not only smarter than humans (and their programmers) but autonomous as well. If you haven’t guessed by now, they’re talking about the master/slave relationship between man and machine flipping. How exactly this would happen, nobody really knows. The anxiety of such a time is difficult to imagine. But it may well be only a few decades away. And while I may be naive, if a bunch of Mr. Spocks started running our world, it’s hard to imagine how that wouldn’t be an improvement.
In this segment, we’ll take a look at the practical applications of artificial intelligence (AI) today and what is right around the corner.
Recall the days on the evening news when reporters would interview a trader on the floor of the New York Stock Exchange to cover the impact of a trade war, a real war, or a disappointing earnings report from a blue-chip company? Remember all those middle-aged men in the middle of the pit performing every imaginable histrionic while shouting in desperation in secret Wall Street code, flashing hand signals resembling gang signs, with perspiration spraying everywhere?
You may not have realized it, but those days are gone. Wall Street today is basically a working museum. The trading pits are gone. The only piece of the good old days that remains is the opening and closing bells, which still air on TV on occasion. The pit trader’s job has been lost to automation. Computer algorithms now do that job, and more efficiently too. But now that all (or most) human emotion has been jettisoned from the trade, when unexpected events happen it’s all up to the computers to determine the best moves to make, all of which have been decided beforehand.
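To make "decided beforehand" concrete, here is a minimal, purely illustrative sketch of a rule-based trading response. The symbol, thresholds, and actions are all invented for demonstration; real trading systems are vastly more sophisticated, but the principle is the same: humans encode the rules in advance, and the machine applies them without emotion.

```python
# Illustrative sketch of predetermined, rule-based trading responses.
# All thresholds and actions are hypothetical.

def react_to_price_move(symbol: str, pct_change: float) -> str:
    """Return a predetermined action for an unexpected price move.

    The rules are decided by humans beforehand; at trade time the
    computer simply applies them, with no emotion involved.
    """
    if pct_change <= -10.0:
        return f"halt trading in {symbol}"       # circuit-breaker-style rule
    if pct_change <= -3.0:
        return f"sell partial position in {symbol}"
    if pct_change >= 5.0:
        return f"take profits in {symbol}"
    return f"hold {symbol}"

print(react_to_price_move("XYZ", -4.2))  # sell partial position in XYZ
```

However the market moves, the response was written down long before the event happened, which is exactly what replaced the shouting in the pit.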
The same thing is true when you order up an Uber. The nearest pool of drivers is automatically polled, and the driver who gets the bid is simply notified of the task via a computer algorithm. Pickup and destination logistics, as well as price, are all determined by computers. As I write this, Uber has a new algorithm, what I’ll call the “drunk algorithm,” to determine the likelihood of a customer having had a few too many when ordering an Uber. The algorithm looks for, among other things, common typing mistakes, language used, location, and who knows what else, to determine the customer’s mental state and recommend an Uber driver more experienced with unsober customers.
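Uber hasn’t published how this works, so here is only a hypothetical sketch of the general technique: score a handful of weighted signals and compare against a threshold. Every feature name, weight, and cutoff below is invented for illustration.

```python
# Hypothetical signal-scoring sketch; features, weights, and the
# threshold are all invented, not Uber's actual algorithm.

def sobriety_risk(typo_rate: float, late_night: bool, near_bar: bool) -> float:
    """Combine a few weighted signals into a rough 0-to-1 risk score."""
    score = 0.0
    score += 0.5 * min(typo_rate, 1.0)   # unusually many typing mistakes
    score += 0.3 if late_night else 0.0  # time of the request
    score += 0.2 if near_bar else 0.0    # pickup location
    return score

def needs_experienced_driver(score: float, threshold: float = 0.6) -> bool:
    return score >= threshold

risk = sobriety_risk(typo_rate=0.8, late_night=True, near_bar=True)
print(needs_experienced_driver(risk))  # True
```

A production system would learn the weights from data rather than hard-coding them, but the shape of the decision, observable signals in, recommendation out, is the same.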
Advanced online marketing techniques use data analytics and other big data to find predictive correlations between a consumer’s marital status and his likelihood to drink beer, on what day, and what kind of beer. Men living in Nevada who have been recently divorced (say, for less than two years) are more likely to buy a six-pack of beer on a Thursday, for instance. Knowing this, a Miller Lite advertisement will appear between 2 and 3 pm when they hit CNN.com to check the latest news.
These are but a few examples of where AI is today. Imagine where it will be in five years! The only thing that can stop it is consumer insistence upon greater control over their privacy. But don’t hold your breath on that one. Most of us have already decided, albeit unwittingly, that the conveniences of the digital age outweigh the costs of giving up a bit of our privacy. In other words, we have traded away some of our privacy for its exchange value. And this is something we do a lot more than we would like to admit. You may recoil at the idea of having a microchip inserted into your person right now, but in five years you may find yourself opting into such a voluntary program. Why? Because you may no longer need to remember your wallet, your keys, your passport, credit cards, rewards cards, PINs, or passwords. Pretty convenient, huh?
In the recent article by Abigail Klein Leichman titled “Could robots replace psychologists, politicians and poets?” published by Israel30c.com, Leichman concludes that AI will never develop a mind that can solve problems. Yet many neuroscientists, computer scientists, and those on the front lines of neural networks, machine learning, and all things artificial intelligence believe they already have evidence that computers will develop true learning capabilities, and that some already have.
For the purpose of this essay, I’ll combine machine learning, neural networks, and artificial intelligence into the artificial intelligence monolith. In fact, today there is little difference between the three other than their labels and the baggage each label carries.
This debate has been around since humans started asking important existential questions. For most, free will is a given. We just must have it, because by most accounts we are free to make decisions or choices save for governmental, religious, or cultural restrictions and taboos. Even many theists advocate for free will. Christianity is predicated on it: God gave us all free will so that we are free to accept the Christian god but don’t have to. Yet aside from these authorities, it certainly seems that we have free choice, that we are presented with options and make a decision. Such decisions can be as simple as choosing a flavor of ice cream or as consequential as whom to marry or what philosophy or politics to endorse.
Philosopher and neuroscientist Sam Harris says this is not true, and he believes he can prove it. In his best-selling book, Free Will, Harris presents several scientific studies conducted over two decades that seem to confirm that free will is a delusion. The studies all conclude that the unconscious brain is what makes each and every decision, then sends that information (and the conclusion) to our conscious mind, where we go through the motions of deciding something that was already decided, usually about a second before we began our conscious deliberation, sometimes a bit longer or shorter depending on the complexity of what is being considered. Yes, all of this has been measured, and the data is quite unambiguous.
So how does this impact artificial intelligence and computers’ ability to think, solve problems, even give psychological advice and direction? If Harris is right, the question itself is misguided, arrogant, and flat-out wrong, for all the assumptions on which it hinges are incorrect. I must say, this is one of those rare times when new science tends to contradict the prevailing reality of existence, not to mention it’s counterintuitive if not downright unpleasant to consider, which is where I’m at right now.
That said, more on the fascinating topic of artificial intelligence to come, I promise.