Archive For The “Artificial Intelligence” Category

From ‘Content is King’ to GODLIKE, Part 5


In part 4 of “From ‘Content is King’ to Godlike,” we examined how determinism, or at least deterministic factors and tendencies, exposes free will, at least as the term is colloquially used, as a myth. In this segment, we’ll look at the futuristic applications of artificial intelligence (AI) and their many technological antecedents, including the many algorithmic applications that inform AI and make those futures probable. But before we proceed, it’s worth reiterating what algorithms are designed to do:

  • Solve problems in the most efficient way, that is, accomplish a complex task faster than any alternative approach.
  • Analyze data in a way that yields a degree of certainty or predictable outcomes. Note that these predictions are not absolute but probabilistic.
  • Reason through a variety of data points toward the goal of sense-making.
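
To make those properties concrete, here is a minimal, illustrative Python sketch (not any production system): an efficient lookup and a frequency-based probability estimate, with the function names invented here for illustration.

```python
# A toy illustration of the properties above: an efficient O(log n) lookup and a
# probabilistic (never absolute) estimate derived from observed data points.
from bisect import bisect_left

def efficient_lookup(sorted_items, target):
    """Binary search: answers membership far faster than scanning every item."""
    i = bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target

def probability_of(event, history):
    """Frequency-based estimate: a probability, not a certainty."""
    return history.count(event) / len(history) if history else 0.0

purchases = sorted(["milk", "beer", "bread", "beer", "eggs"])
print(efficient_lookup(purchases, "beer"))                        # True
print(probability_of("beer", ["milk", "beer", "beer", "eggs"]))   # 0.5
```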

Tackling the 99 percent of content that’s just sitting there.

Inferential search engines like Google are already quite capable of quickly indexing nearly limitless quantities of content and data. But it’s worth noting what is actually taking place in this indexing process: a variety of sorting and ranking algorithms work “under the hood” to produce “on-the-fly” search results that answer our query with high precision.
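
As a rough sketch of what “under the hood” can mean here, an inverted index is built once and then queried on the fly, with results ranked by a simple relevance score. This is an illustration of the general idea only, not Google’s actual pipeline, and the scoring rule is a hypothetical placeholder.

```python
# A minimal inverted-index sketch: build the index once, then answer queries
# "on the fly" by ranking documents on how many query words they contain.
from collections import defaultdict

def build_index(documents):
    """Map each word to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Rank documents by the number of query words they match."""
    hits = defaultdict(int)
    for word in query.lower().split():
        for doc_id in index.get(word, set()):
            hits[doc_id] += 1
    return sorted(hits, key=lambda d: (-hits[d], d))

docs = {1: "AI and search engines", 2: "search algorithms under the hood", 3: "content is king"}
idx = build_index(docs)
print(search(idx, "search algorithms"))  # [2, 1]
```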

From there, the future of AI is really a plethora of algorithms stacked on top of one another and, at other times, complex hierarchies invoked by the determinations of other algorithms further upstream.

Another way of looking at this is not through the algorithms per se, but through metadata, and lots of it. It is metadata, combined with multiple layered algorithms, that results in almost bizarre predictability. Some examples follow, with a toy sketch of the stacking idea after the list:

  1. You have been divorced from your wife for two to three years and find yourself buying Bud Light at the grocery store, probably not realizing that your purchase was prompted by AI-targeted ads for that very product. Welcome to the world of semi-spooky correlation.
  2. You purchased a car (offline, not online) and find yourself buying a smartphone dashboard holder from an unsolicited email. Did marketers know that you bought a car, and even its model year, in order to determine your need for said product? Answer: Indeed they did.
  3. You’re 54 years old and should be at a stage in your life where you are preparing for retirement, but you are being served ads to go back to school for a graduate degree. Why on earth would that be happening? Answer: Based on your online behavior, these handy algorithms have determined (there’s that word again) that it is probable you are in the market to go back to school, and, here’s the kicker, they knew it even before you did.
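
Here is that toy sketch of the stacking idea: an upstream algorithm turns metadata into a segment, and a downstream algorithm is invoked with that determination to pick an ad. The rules, segments, and catalog below are entirely hypothetical, invented for illustration; no real ad platform’s logic is being described.

```python
# A toy sketch of "algorithms stacked on algorithms" over metadata.
def segment_user(metadata):
    """Upstream algorithm: turn raw metadata into a coarse segment."""
    if metadata.get("recently_divorced") and metadata.get("age", 0) > 40:
        return "life-transition"
    if metadata.get("recent_car_purchase"):
        return "new-car-owner"
    return "general"

def choose_ad(segment, browsing_topics):
    """Downstream algorithm: invoked with the upstream determination."""
    catalog = {
        "life-transition": "light beer promotion",
        "new-car-owner": "smartphone dashboard holder",
        "general": "seasonal sale",
    }
    ad = catalog[segment]
    # A further behavioral tweak based on observed browsing topics.
    if "education" in browsing_topics:
        ad = "graduate degree program"
    return ad

meta = {"age": 54, "recent_car_purchase": True}
print(choose_ad(segment_user(meta), ["sports", "education"]))  # graduate degree program
```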

And more of this is in store for all of our futures. The question is whether this is a good thing or something much more sinister, which will be the topic of our next post.

Read more »

From ‘Content is King’ to GODLIKE, Part 6


In part 5 of “From ‘Content is King’ to Godlike,” we looked at the methods and metrics used today to predict future outcomes based on contextual control of the environment. In this segment, we look at how Behavioral Economics is impacting artificial intelligence (AI) algorithms, machine learning, and inferential logic.

What is Behavioral Economics?

Behavioral Economics is the study of human choice and decision-making. It diverges from its classical economics predecessor in several key ways:

  • Whereas classical economic theory assumes that both markets and consumers are rational actors, Behavioral Economics does not. In fact, BE practitioners demonstrate the irony that humans behaving badly, or irrationally, do so in predictable ways.
  • Building from that point, BE has developed a series of scientific methods to test, measure, and, yes, predict human irrationality.
  • As the name somewhat suggests, BE bridges the academic disciplines of economics and psychology as well as other social sciences, neuroscience, and even mathematics. That’s quite a lot.
  • It incorporates heuristics: the observation that humans make an estimated 95 percent of their decisions using mental shortcuts or rules of thumb.
  • It incorporates framing: the collection of anecdotes and stereotypes that make up the mental filters individuals rely on to understand and respond to events.
  • It addresses market inefficiencies that classical economics does not, including mispricing and non-rational decision-making.

Behavioral Economics is a relatively new academic discipline whose roots go back to the 1960s, largely in cognitive psychology. To many, Richard Thaler (Nobel Prize, 2017) is considered the father of BE. The University of Chicago professor has written six books on the subject.

Behavioral Economics Applications

While still considered a fledgling discipline, BE is growing rapidly and is arguably the most consequential new area of study in academia. BE has already been put to practical use in marketing, public policy, and, of course, AI, although its presence is often rather opaque.

1. PLAYING SPORTS
Principle: Hot-Hand Fallacy—the belief that a person who experiences success with a random event has a greater probability of further success in additional attempts.

Example: In basketball, players who are making shot after shot feel like they have a “hot hand” and can’t miss.

Relation to BE: Human perception and judgment can be clouded by false signals. There is no “hot hand”—it’s just randomness and luck.
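
A quick simulation makes the point: even a purely random shooter produces long streaks of makes, which is exactly what the fallacy misreads as a “hot hand.” This is an illustrative sketch, not a model of any actual player.

```python
# Simulate a 50% shooter taking 100 independent shots and report the longest
# run of makes. Long streaks appear even though every shot is pure chance.
import random

def longest_streak(n_shots=100, p_make=0.5, seed=None):
    rng = random.Random(seed)
    best = run = 0
    for _ in range(n_shots):
        if rng.random() < p_make:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

# The expected longest run of makes in 100 fair shots is around 6.
print(longest_streak(seed=7))
```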

2. TAKING AN EXAM
Principle: Self-handicapping—a cognitive strategy where people avoid effort to prevent damage to self-esteem.

Example: In case she does poorly, a student tells her friends she barely reviewed for an exam even though she studied a lot.

Relation to BE: People put obstacles in their own paths (and make it harder for themselves) in order to manage future explanations for why they succeed or fail.


3. GRABBING COFFEE
Principle: Anchoring—the process of planting a thought in a person’s mind that will later influence this person’s actions.

Example: Starbucks differentiated itself from Dunkin’ Donuts through its unique store ambiance and product names. This allowed the company to break the anchor of Dunkin’ prices and charge more.

Relation to BE: You can always expect a grande Starbucks hot coffee ($2.10) to cost more than a medium one from Dunkin’ ($1.89). Loyal Starbucks consumers are conditioned, and willing, to pay more even though the coffee is more or less the same.

4. PLAYING SLOTS
Principle: Gambler’s Conceit—an erroneous belief that someone can stop a risky action while still engaging in it.

Example: When a gambler says “I can stop the game when I win” or “I can quit when I want to” at the roulette table or slot machine but doesn’t stop.

Relation to BE: Players are incentivized to keep playing while winning to continue their streak, and to keep playing while losing so they can win back money. The gambler continues to perform risky behavior against what is in this person’s best interest.

5. TAKING WORK SUPPLIES
Principle: Rationalized Cheating—when individuals rationalize cheating so they do not think of themselves as cheaters or as bad people.

Example: A person is more likely to take pencils or a stapler home from work than the equivalent amount of money in cash.

Relation to BE: People rationalize their behavior by framing it as doing something (in this case, taking) rather than stealing. The willingness to cheat increases as people gain psychological distance from their actions.

These behavioral economics principles have major consequences for how we live our lives. By understanding the impact they have on our behavior, we can actively work to shape our own realities. As such, it’s becoming increasingly difficult, if not impossible, to decouple BE from AI and its related technologies.

Read more »

Israel AI Medical Startup Aidoc Gets FDA Approval


Israeli startup Aidoc, a developer of artificial intelligence (AI)-powered software that analyzes medical images, announced on Wednesday that it received Food and Drug Administration (FDA) clearance for a solution that flags cases of Pulmonary Embolism (PE) in chest scans for radiologists.




Portions of this article were originally reported in NoCamels.com


Aidoc has CE (Conformité Européenne) marking for the identification and triage of pulmonary embolism (PE) in CT pulmonary angiograms, and FDA approval to scan images for brain hemorrhages.

The latest approval came a month after Aidoc secured $27 million in a Series B round led by Square Peg Capital. Founded in 2016 by Guy Reiner, Elad Walach, and Michael Braginsky, the company has raised some $40 million to date.

Aidoc’s technology assists radiologists in expediting problem-spot detection through specific parameters such as neuron-concentration, fluid-flow, and bone-density in the brain, spine, abdomen, and chest. Aidoc says its solutions cut the time from scan to diagnosis for some patients from hours to under five minutes, speeding up treatment and improving prognosis.

“What really excites us about this clearance is that it paves the way towards scalable product expansion,” Walach, who serves as Aidoc CEO, said in a statement. “We strive to provide our customers with comprehensive end-to-end solutions and have put a lot of effort in developing a scalable AI platform.”

Walach said the company has eight more solutions in active clinical trials.


Read more »

Facebook Creates New Tel Aviv-Based AI Team


Apparently, Facebook is of the opinion that what it needs is more artificial intelligence (AI). The social media giant just announced the formation of the Data.AI group, a Tel Aviv-based AI team that will focus on machine learning and on developing tools for deeper analytical insights.




According to a recent report, AI is a major source of growth for the Israeli tech industry.

Israel is home to over 1,000 companies, academic research centers, and multinational R&D centers specializing in AI, including those that develop core AI technologies, as well as those that utilize AI technologies for their vertical-related products such as in healthcare, cybersecurity, automotive, and manufacturing among others, according to the Start-Up Nation Central report.

Israeli companies specializing in artificial intelligence raised nearly 40 percent of the total venture capital funds raised by the Israeli tech ecosystem for 2018, despite accounting for just 17 percent of the total number of technology companies in the country, the report noted.

The SNC report noted that a number of events in 2018 boosted the AI ecosystem in Israel, including the launch of a new Center for Artificial Intelligence by Intel and the Technion-Israel Institute of Technology, and the announcement by US tech giant Nvidia (which acquired Israel’s Mellanox Technologies last month for $6.9 billion) that it too was opening a new AI research center.

A number of high-profile AI products developed by Israeli teams working for multinationals were also unveiled this year. In May, Google came out with Google Duplex, a system for conducting natural sounding conversations developed by Yaniv Leviathan, principal engineer, and Yossi Matias, vice president of engineering and the managing director of Google’s R&D Center in Tel Aviv. And in July 2018, IBM unveiled Project Debater, a system powered by artificial intelligence (AI) that can debate humans, developed over six years in IBM’s Haifa research division in Israel.

Earlier this year, the Israel Innovation Authority (IIA) warned that despite industry achievements, Israel was lagging behind other countries regarding investment in AI infrastructures and urgently needed a national AI strategy to keep its edge. The IIA called for the consolidation of all sectors – government, academia, and industry – to establish a vision and a strategy on AI for the Israeli economy.


Read more »

Using Artificial Intelligence and Facial Recognition to Detect Rare Genetic Disorders


A new technological breakthrough is using AI and facial analysis to make it easier to diagnose genetic disorders. DeepGestalt is a deep learning technology created by a team of Israeli and American researchers and computer scientists for the FDNA company based in Boston. The company specializes in building AI-based, next-generation phenotyping (NGP) technologies to “capture, structure and analyze complex human physiological data to produce actionable genomic insights.”


Portions of this article were originally reported in NoCamels.com


DeepGestalt uses novel facial analysis to study photographs of faces and help doctors narrow down the possibilities. While some genetic disorders are easy to diagnose based on facial features, with over 7,000 distinct rare diseases affecting some 350 million people globally, according to the World Health Organization, it can also take years – and dozens of doctor’s appointments – to identify a syndrome.

“With today’s workflow, it can mean about six years for a diagnosis. If you have data in the first year, you can improve a child’s life tremendously. It is very frustrating for a family not to know the diagnosis,” Yaron Gurovich, Chief Technology Officer at FDNA and an Israeli expert in computer vision, tells NoCamels. “Even if you don’t have a cure, to know what to expect, to know what you’re dealing with helps you manage tomorrow.”

DeepGestalt — a combination of the words ‘deep’ for deep learning and the German word ‘gestalt’ which is a pattern of physical phenomena — is a novel facial analysis framework that highlights the facial phenotypes of hundreds of diseases and genetic variations.

According to the Rare Disease Day organization, 1 in 20 people will live with a rare disease at some point in their life. And while this number is high, there is no cure for the majority of rare diseases and many go undiagnosed.

“For years, we’ve relied solely on the ability of medical professionals to identify genetically linked disease. We’ve finally reached a reality where this work can be augmented by AI, and we’re on track to continue developing leading AI frameworks using clinical notes, medical images, and video and voice recordings to further enhance phenotyping in the years to come,” Dekel Gelbman, CEO of FDNA, said in a statement.

DeepGestalt’s neural network is trained on a dataset of over 150,000 patients, curated through Face2Gene, a community-driven phenotyping platform. The researchers trained DeepGestalt on 17,000 images and watched as it correctly labeled more than 200 genetic syndromes.

In another test, the artificial intelligence technology sifted through another 502 photographs to identify potential genetic disorders.

DeepGestalt provided the correct answer 91 percent of the time.
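
For readers curious what “training a deep learning model on labeled face images” looks like in code, here is a heavily simplified sketch of the general approach: a small convolutional classifier trained to map an image to one of a couple hundred syndrome labels. It is not FDNA’s DeepGestalt architecture or data; the random tensors below are stand-ins so the snippet runs end to end.

```python
# A toy CNN classifier over face-image tensors, for illustration only.
import torch
import torch.nn as nn

NUM_SYNDROMES = 200  # the article mentions more than 200 syndromes

class TinyFaceClassifier(nn.Module):
    def __init__(self, num_classes=NUM_SYNDROMES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyFaceClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)               # stand-in face crops
labels = torch.randint(0, NUM_SYNDROMES, (8,))   # stand-in syndrome labels

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one toy training step, loss = {loss.item():.3f}")
```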




Indeed, FDNA, a leader in artificial intelligence and precision medicine, in collaboration with a team of scientists and researchers, published a milestone study earlier this year, entitled “Identifying Facial Phenotypes of Genetic Disorders Using Deep Learning” in the peer-reviewed journal Nature Medicine.

Read more »

Fast-Tracking Pharmaceutical Development with Artificial Intelligence


Pharmaceutical R&D has huge barriers to entry. The cost is simply too high for ambitious startups to even consider: on average, about $2.5 billion just to bring a new drug to market. And there’s also a public-interest cost that is seldom discussed, namely the cynical fact that drugs that can’t make much money are never developed. Hopefully, both of these impediments will lessen in the near future. Once again, artificial intelligence (AI) and machine learning (ML) are at the forefront of the drive to reduce the time and cost of developing new drugs. How? By letting the molecules do the talking.




The following article was originally published in NoCamels.com

The cost of developing a new pharmaceutical drug, from the research and development stage to market approval, runs at about $2.6 billion, according to a 2014 report published by the Tufts Center for the Study of Drug Development (CSDD) cited by Scientific American. It also takes between 10 and 15 years.

Israeli scientists say they have developed a revolutionary smart method to discover and develop new drugs, based on artificial intelligence and machine learning, that will dramatically shorten preparation time and reduce costs. Dr. Kira Radinsky, a renowned data scientist and a visiting professor at the Technion – Israel Institute of Technology, and Shahar Harel, a PhD student in the university’s computer science department, presented their system late last month at the KDD 2018 conference in London, an annual event on big data and machine learning that draws prominent academics and industry leaders.

Radinsky and Harel’s system seeks to tap into the modern, computerized process of screening and selecting the molecules with the greatest therapeutic potential – a space containing more candidates than there are stars in the galaxy, which makes this an enormous task.

Their working hypothesis is that drug development “vocabulary” is similar to that of a natural language.

Harel said in a university statement that the system he and Radinsky developed, founded on artificial intelligence (AI) and deep learning, “acquired this language based on hundreds of thousands of molecules.”

“We are essentially presenting here an algorithm which addresses the creative stage of drug development – the molecule discovery stage,” said Harel. “This capacity leans upon our mathematical innovation, which enables the computer to understand the chemical language and to generate new molecules based upon a prototype.”
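
As a crude stand-in for the “chemical language” idea, the sketch below learns character-to-character transitions from a handful of SMILES-style strings and samples new ones. The real system uses deep generative models trained on hundreds of thousands of molecules, and the strings here are illustrative, with no guarantee of chemical validity; only the analogy to generating text is the point.

```python
# A toy character-level "chemical language" model: learn bigram transitions
# from a few SMILES-like strings, then sample a new string from them.
import random
from collections import defaultdict

smiles_corpus = ["CCO", "CC(=O)O", "c1ccccc1", "CCN(CC)CC"]  # illustrative only

# Learn bigram transition counts, with "^" and "$" as start/end markers.
transitions = defaultdict(list)
for s in smiles_corpus:
    chars = ["^"] + list(s) + ["$"]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

def sample_molecule(max_len=20, seed=None):
    """Sample a new character string from the learned transitions."""
    rng = random.Random(seed)
    out, current = [], "^"
    for _ in range(max_len):
        current = rng.choice(transitions[current])
        if current == "$":
            break
        out.append(current)
    return "".join(out)

print(sample_molecule(seed=1))
```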

The researchers instructed the system to propose 1,000 drugs based on older, existing drugs and were surprised to discover that 35 of the molecules it generated are existing FDA-approved drugs developed and approved after 1950. Radinsky said in a statement that the system “is not only a means of streamlining existing methods but also entirely new drug development and scientific practice paradigms.”

“Instead of seeking out specific correlations based upon hypotheses we formulate, we allow the computer to identify these connections from within a massive sample size, without guidance. The computer is not smarter than man, but it can cope with huge amounts of data and find unexpected correlations,” she added.

Radinsky indicated that a similar computerized process is how, in another study, the scientists managed to find the unknown side effects of various drugs and drug combinations.

“This is a novel type of science which is not built upon hypotheses tested in an experiment, rather, upon data that generated the research hypothesis,” she said.

The Technion said in a statement that the breakthrough is particularly significant in light of Eroom’s Law, which observes that the number of new drugs approved by the FDA per billion dollars of R&D spending has declined by roughly 50 percent every nine years. The term was coined in 2012 in an article published in Nature Reviews Drug Discovery and is “Moore” spelled backwards, after Gordon Moore, one of the founders of Intel, who observed that the number of transistors in a dense integrated circuit doubles roughly every two years. Eroom’s Law, by contrast, notes that fewer and fewer new drugs reach the market per research dollar spent.
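
A back-of-the-envelope comparison, using the rates quoted above (transistor counts doubling every two years versus drug-development productivity halving every nine years), shows just how sharply the two curves diverge:

```python
# Moore's Law vs. Eroom's Law with the rates quoted above (both normalized to 1).
def moore(value, years):   # transistor count: doubles every 2 years
    return value * 2 ** (years / 2)

def eroom(value, years):   # drugs approved per R&D dollar: halves every 9 years
    return value * 0.5 ** (years / 9)

for years in (9, 18, 27):
    print(years, round(moore(1, years)), round(eroom(1, years), 3))
# After 27 years, transistor counts are up roughly 11,000x while drug-development
# productivity has fallen to about 1/8 of where it started.
```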

Dr. Radinsky projects that “this new development will accelerate and reduce costs of development of new and effective drugs, thereby shortening the time patients will have to wait for the drugs. In addition, this breakthrough is expected to lead to the development of drugs that would not have been generated with the conventional pharmacological paradigm.”

The system is currently being deployed for use in collaboration with pharmaceutical companies to further analyze the additional generated molecules, the scientists said.

Read more »

The Precarious Union of Artificial Intelligence And Blockchain


I have written extensively about artificial intelligence (AI), noting its far-reaching tentacles, diverse applications, and ubiquity. But there’s a companion platform that has been raging for a few years now, and that platform is Blockchain.

Unless you’re a tech geek, you probably have, at best, a cursory understanding of it from news reports. The best way to put Blockchain into context is to understand its most popular application: Bitcoin. This cryptocurrency has caused volatile swings in the financial markets and prompted the biggest banks and lending institutions to take notice, even throwing millions into R&D to bone up on the technology.




What’s important to understand is the relationship between Bitcoin (and other lesser known cryptocurrencies) and Blockchain. Bitcoin runs on blockchain and currently needs blockchain to function.

As such, the marriage between AI and Blockchain is a natural one, although not a necessary one. The appeal, however, lies in the security features that blockchain promises and in the egalitarian way in which information is stored and distributed.
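
To see where the security claim comes from, here is a minimal hash-chain sketch of the core idea: each block commits to the hash of the previous block, so tampering with history anywhere breaks every later link. This is an illustration of the data structure only, not Bitcoin’s actual implementation, and it says nothing about mining, consensus, or anonymity.

```python
# A minimal hash-chain: each block stores the previous block's hash, making the
# chain tamper-evident. Illustration only, not Bitcoin's real data structures.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev_hash})
    return chain

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain = []
add_block(chain, "genesis")
add_block(chain, {"from": "alice", "to": "bob", "amount": 1})
add_block(chain, {"from": "bob", "to": "carol", "amount": 1})
print(verify(chain))            # True
chain[1]["data"] = "tampered"   # rewrite history in the middle...
print(verify(chain))            # False -- the next block's link no longer matches
```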

The Israeli Connection

Israeli startups are at the forefront of both of these sectors, receiving notable attention and investments from global players which are further propelling Israeli development in these industries.

As recently reported in NoCamels.com:

In Israel, the average investment per deal in AI grew five times in value, from $2 million in 2016 to $10.2 million in 2017. Subsequently, the growth in this sector is reflected in the overall investment numbers for AI in Israel, with the market growing from $55 million in 2016 to $472 million in 2017, according to the Geektime Annual Report 2017: Startups and venture capital in Israel, published in January.

A major setback.

For a few years now, financial institutions have been experimenting with Blockchain for its “very strong” security features. Not so fast, however. What was lost on most people was special counsel Robert Mueller’s recent charges against Russian military intelligence officers, many of whom take their orders directly from Vladimir Putin. What the filing showed, in great detail, is how Mueller’s team was able to deconstruct the way the Russians used Bitcoin to pay for everything from website hosting and domain registration to Facebook ads. According to many Bitcoin experts, the advantage of Bitcoin and Blockchain is the ability to remain anonymous throughout the transaction process. And while we still don’t know how Mueller’s team was able to attribute a Bitcoin account directly to its true owner, the fact that they could is proof positive that the technology is vulnerable.

Netting it out.

What this means is simple. Blockchain may have some appealing features for the marketplace, but security isn’t one of them. As such, the fledgling marriage between AI and Blockchain may be short-lived; it was never a necessary one to begin with.

Read more »

Tech Emergence And Convergence Demystified


While at times the evolution of technological innovation may seem chaotic, with no clear purpose, goal, or objective — many new technologies seem to come out of nowhere — there is an unseen hand at play. Adam Smith’s Wealth of Nations describes this somewhat mystical phenomenon, whereby markets seem to have their own agency, filling in market gaps as if a transcendental being were overseeing the economy. What Smith and many who followed him neglected to notice, whether intentionally or not, is that these mysterious beings are real people with a keen awareness of present conditions coupled with future needs. In other words, we have discovered the unseen hand, and it’s us!

I’ll use applications that just about everyone is aware of to demonstrate how all of this works. It’s a bit of an oversimplification but appropriate for this demonstration.

  1. Microsoft Word released as a standalone application
  2. Microsoft Excel released as a standalone application
  3. Microsoft PowerPoint released as a standalone application

Then Microsoft Office was released, comprising all three with true integration that made moving among them rather easy. In other words, innovations begin as separate entities but eventually, and naturally, consolidate into one integrated application.

I mention this because we are witnessing the same sort of stovepipe development and consolidation happening right now. However, this phenomenon is no longer constrained to applications; it now extends to vaguer concepts such as content and speed.

On the content side, artificial intelligence (AI) is the driving force. Note that there is real category confusion about AI, which is to be expected. Legacy labels such as neural networks and machine learning are becoming less meaningful because they overlap and can each rightly be called AI, which itself reflects a consolidation of applications in real time. Now couple that with what AI needs to unlock the greater capabilities science fiction describes, and there we have it: speed. And this increased speed, roughly 20 to 50 times faster than its predecessor, is coming through 5G wireless.

Long story short, if you want to see the future of tech innovation, keep your eyes on AI and 5G. Throw in the peripheral technologies of the Internet of Things (IoT), and the picture becomes clear.

Read more »

IBM’s AI Legacy Continues With PROJECT DEBATER


IBM may have missed the mark on becoming a PC giant after its poorly calculated entry into the market a few decades ago. The computer giant from before there were personal computers was also a bust with its ill-fated entry into PC operating systems (OS/2, RIP!), and, quite frankly, it was more of the same with its forays into web servers, e-commerce, and content management; it’s a very long list of failures.

But in artificial intelligence (AI), IBM has found its niche, which is itself ironic, since a computer giant is not supposed to be a niche player. It started with Deep Blue, which in 1996 became the first computer to beat reigning world chess champion Garry Kasparov in a game under tournament conditions, before winning a full match against him the following year.

Since then, IBM’s AI has gotten a lot smarter, and with the help of its IBM Haifa division in Israel, it is now debating humans in situations where the rules are not nearly as structured as those of chess, as the following article excerpt from NoCamels.com explains.


Dubbed PROJECT DEBATER, it was developed over six years in IBM’s Haifa research division in Israel.

At the unveiling two weeks ago in San Francisco, the system engaged in its first-ever live, public debate. Its opponents were two Israeli debate champions. Israel’s 2016 debate champion Noa Ovadia took on the system for a discussion on whether space exploration should be subsidized by the government. Dan Zafrir, a professional debater, faced Project Debater on the value of telemedicine and whether it should be used more widely.

Each side delivered a four-minute opening statement, a four-minute rebuttal, and a two-minute summary, according to a June 18 post by IBM Research Director Arvind Krishna.

The humans were said to have won, but it was a close call. According to an audience survey cited by Krishna in an interview with Fox News, the computer lacked the persuasive speaking nuances of the debate champs but possessed more knowledge on the topics. Krishna wrote that IBM “selected from a curated list of topics to ensure a meaningful debate. But Project Debater was never trained on the topics.”

This week, Project Debater performed once again against two human debaters, this time in Israel where the team behind the project proudly displayed it.

At the event at IBM’s Givatayim offices held for local press, the system this time challenged Israeli professional debaters Yaar Bach and Hayah Goldlist-Eichler on mass surveillance methods, and genetic engineering, respectively.

IBM’s Israel CEO and country manager Daniel Melka told the audience that the company developed “very special technology” that is “a significant milestone in the development of Artificial Intelligence technology,” according to the Times of Israel.

In a video presentation ahead of the unveiling, Noam Slonim, the principal investigator of Project Debater and senior technical staff member (STSM) at the IBM Haifa Research Lab, said the goal of the project was “to demonstrate that we can have a meaningful and viable discussion between men and machine.”

Project Debater, Krishna wrote, “moves us a big step closer to one of the great boundaries in AI: mastering language. It is the latest in a long line of major AI innovations at IBM, which also include “Deep Blue,” the IBM system that took on chess world champion Garry Kasparov in 1997, and IBM Watson, which beat the top human champions on Jeopardy! in 2011.”

IBM’s recent developments in machine thinking surpass those of existing products such as Apple’s Siri and Amazon’s Alexa. Those assistants primarily recite rote information, whereas Project Debater uses facts to reason and construct arguments on topics that have no right or wrong answers. According to IBM, the technology accomplishes this by first recognizing opponents’ arguments through Watson Speech to Text. Then, it identifies relevant expressions in its database of hundreds of millions of articles from trusted journals and magazines. Lastly, it eliminates redundancies, prioritizes arguments, and composes coherent English speech.
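
Mapped onto code, that pipeline (retrieve, de-duplicate, prioritize, compose) might be sketched as below. The scoring rules are toy placeholders invented for illustration, not Project Debater’s actual methods.

```python
# A highly simplified retrieve -> de-duplicate -> prioritize -> compose pipeline.
def retrieve(corpus, topic):
    """Pull sentences that mention the topic (stand-in for corpus search)."""
    return [s for s in corpus if topic.lower() in s.lower()]

def deduplicate(sentences):
    """Drop verbatim repeats (here: exact duplicates after lowercasing)."""
    seen, unique = set(), []
    for s in sentences:
        key = s.lower()
        if key not in seen:
            seen.add(key)
            unique.append(s)
    return unique

def prioritize(sentences):
    """Toy priority: longer sentences first, as a stand-in for argument strength."""
    return sorted(sentences, key=len, reverse=True)

def compose(sentences, limit=2):
    return " ".join(sentences[:limit])

corpus = [
    "Space exploration drives investment in science education.",
    "Space exploration drives investment in science education.",
    "Critics argue space exploration is costly.",
    "Telemedicine widens access to care.",
]
args = prioritize(deduplicate(retrieve(corpus, "space exploration")))
print(compose(args))
```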

“Subsidizing space exploration is like investing in really good tires,” Project Debater rebutted Ovadia in the government-sponsored space research debate in San Francisco. “It may not be fun to spend the extra money, but ultimately you know both you and everyone else on the road will be better off.” It further argued that space exploration also inspires the younger generation to pursue careers in science and technology.

The computer also attempted to make jokes during the debate. “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry,” Project Debater told Ovadia.

Up against Zafrir in the telemedicine debate, the system admonished its opponent saying: “Fighting technology means fighting human ingenuity.” And in the debate this week against Goldlist-Eichler, who, for the sake of argument expressed her suspicions of the safety of technological advancement, Project Debater said: “I can’t say this is getting on my nerves, because I don’t have any.”

The project is being hailed as the onset of a new era for human-machine interaction. Krishna says IBM’s mission was to develop broad AI that learns across different disciplines to augment human intelligence.

And Krishna said Project Debater could become “the ultimate fact-based sounding board without the bias that often comes from humans.”

The limitations

Project Debater has its limitations. The system is currently programmed to follow a strict 20-minute debate format for 100 topics, according to The New York Times.

Furthermore, Wired magazine reported that Project Debater sometimes failed to address certain points, to construct rebuttals in line with its opponents’ arguments, and to provide real-life context for its own.

Krishna acknowledged that building the system was a “remarkably difficult and complex challenge,” and that it makes mistakes, “just like people.”

Though the Israeli team built Project Debater with three ground-breaking AI capabilities (data-driven speech writing and delivery, listening comprehension that can identify key claims hidden within long continuous spoken language, and modeling human dilemmas in a unique knowledge graph to enable principled arguments), the system must still learn to “adapt to human rationale and propose lines of argument that people can follow.”

“Debate rules stem from a human culture of discussion and are not arbitrary, and the value of arguments is often inherently subjective…In debate, AI must learn to navigate our messy, unstructured human world as it is – not by using a pre-defined set of rules, as in a board game,” he wrote.


While PROJECT DEBATER technology lost the debate, it demonstrated more knowledge than its two human counterparts. So what caused PD to lose? Influence and persuasion. PD simply lacked the subtleties of language and nuanced delivery to maximize its influence. In other words, it lost on style points, which tells us a lot. Indeed, style does matter when it comes to persuading others to accept the knowledge being presented.

Read more »

Artificial Intelligence: Part 4


In this segment on artificial intelligence (AI), part four, we’ll look at the augmented human being, part human, part machine. And don’t laugh. Computer or robotic-assisted devices are being used to augment the human condition right now. For but one example, see my previous story on exoskeleton technology.

Another slick piece of wearable technology allows legally blind people to read newspapers and magazines, product labels in a grocery store, even the money they take out of their pocket to pay the cashier, using artificial visualization technology.

As neuroscientists unlock the mysteries and power of the human brain while, at the same time, AI researchers build programs that get smarter and smarter, even to the point of becoming autonomous learners, human anatomy and robotics, along with AI software, will converge into human/machine hybrids, some with more human characteristics than others. In other words, if we live long enough, say twenty more years, we may actually meet Mr. Spock, or a reasonable facsimile thereof.

Some of my academic friends who are working on this exciting future are not as enthusiastic as you would think. Many fear that the ethics will not keep pace with the technology, that we will create, arguably, a new species whose rights and freedoms will not comport with our justice system as it is today. Others are concerned about the economic value of people in an age when machines and computers will do almost all of the work. What are we going to do with 5 billion people in surplus labor for whom there will never be a job? Without income potential, yet still needing to consume goods and services, how will they contribute to the betterment of our species and our world? There are no good answers to any of this yet. But there certainly are many grave concerns about these questions and many others.

But with all the ethical, economic and social concerns over AI, what most scientists are most anxious over is the notion of singularity. Singularity, as it pertains to AI, is the moment in the future whereby computers will become not only smarter than humans (and their programmers) but autonomous as well. If you haven’t guessed by now, they’re talking about the master/slave relationship between man and machine flipping. How this would exactly happen, nobody really knows. The anxiety of such a time is difficult to imagine. But it’s almost definitely only a few decades away. And while I may be naive, if a bunch of Mr. Spocks started running our world, it’s hard to imagine how that wouldn’t be an improvement.

Read more »