Archive For The “Artificial Intelligence” Category

Israel AI Medical Startup Aidoc Gets FDA Approval


Israeli startup Aidoc, a developer of artificial intelligence (AI)-powered software that analyzes medical images, announced on Wednesday that it received Food and Drug Administration (FDA) clearance for a solution that flags cases of Pulmonary Embolism (PE) in chest scans for radiologists.




Portions of this article were originally reported in NoCamels.com


Aidoc has CE (Conformité Européenne) marking for the identification and triage of pulmonary embolism (PE) in CT pulmonary angiograms, and FDA approval to scan images for brain hemorrhages.

The latest approval came a month after Aidoc secured $27 million in a Series B round led by Square Peg Capital. Founded in 2016 by Guy Reiner, Elad Walach, and Michael Braginsky, the company has raised some $40 million to date.

Aidoc’s technology assists radiologists in expediting problem-spot detection through specific parameters such as neuron-concentration, fluid-flow, and bone-density in the brain, spine, abdomen, and chest. Aidoc says its solutions cut the time from scan to diagnosis for some patients from hours to under five minutes, speeding up treatment and improving prognosis.

“What really excites us about this clearance is that it paves the way towards scalable product expansion,” Walach, who serves as Aidoc CEO, said in a statement. “We strive to provide our customers with comprehensive end-to-end solutions and have put a lot of effort in developing a scalable AI platform.”

Walach said the company has eight more solutions in active clinical trials.



Facebook Creates New Tel Aviv-Based AI Team


Apparently, Facebook is of the opinion that what it needs is more artificial intelligence (AI). The social media giant just announced the formation of the Data.AI group, a Tel Aviv-based AI team that will focus on machine learning and on developing tools that deliver deeper analytical insights.




According to a recent report, AI is a major source of growth for the Israeli tech industry.

Israel is home to over 1,000 companies, academic research centers, and multinational R&D centers specializing in AI – both those that develop core AI technologies and those that apply AI to vertical products in healthcare, cybersecurity, automotive, and manufacturing, among other fields – according to the Start-Up Nation Central report.

Israeli companies specializing in artificial intelligence attracted nearly 40 percent of the total venture capital raised by the Israeli tech ecosystem in 2018, despite accounting for just 17 percent of the country’s technology companies, the report noted.

The SNC report noted that a number of events in 2018 boosted the AI ecosystem in Israel, including the launch of a new Center for Artificial Intelligence by Intel and the Technion-Israel Institute of Technology, and the announcement by US tech giant Nvidia (which acquired Israel’s Mellanox Technologies last month for $6.9 billion) that it too was opening a new AI research center.

A number of high-profile AI products developed by Israeli teams working for multinationals were also unveiled in 2018. In May, Google came out with Google Duplex, a system for conducting natural-sounding conversations developed by Yaniv Leviathan, principal engineer, and Yossi Matias, vice president of engineering and the managing director of Google’s R&D Center in Tel Aviv. And in July, IBM unveiled Project Debater, a system powered by artificial intelligence (AI) that can debate humans, developed over six years in IBM’s Haifa research division in Israel.

Earlier this year, the Israel Innovation Authority (IIA) warned that despite industry achievements, Israel was lagging behind other countries regarding investment in AI infrastructures and urgently needed a national AI strategy to keep its edge. The IIA called for the consolidation of all sectors – government, academia, and industry – to establish a vision and a strategy on AI for the Israeli economy.



From ‘Content is King’ to GODLIKE, Part 6


In part 5 of “From ‘Content is King’ to Godlike,” we looked at the methods and metrics used today to predict future outcomes based on contextual control of the environment. In this segment, we look at how Behavioral Economics is impacting artificial intelligence (AI) algorithms, machine learning, and inferential logic.

What is Behavioral Economics?

Behavioral Economics is the study of human choice and decision-making. It diverges from its classical economics predecessor in several key ways:

  • Whereas classical economic theory assumes that both markets and consumers are rational actors, Behavioral Economics does not. In fact, BE practitioners demonstrate that humans behave badly, or irrationally, in ironically predictable ways. 
  • Building from that point, BE has developed a series of scientific methods to test, measure, and yes, predict human irrationality.
  • As the name somewhat suggests, BE bridges the academic disciplines of economics and psychology as well as other social sciences, neuroscience and even mathematics. That’s quite a lot.
  • Incorporates Heuristics, i.e., humans make 95% of their decisions using mental shortcuts or rules of thumb.
  • Incorporates Framing: The collection of anecdotes and stereotypes that make up the mental filters individuals rely on to understand and respond to events.
  • Addresses market inefficiencies that classical economics does not, including mis-pricing and non-rational decision making.

Behavioral Economics is a relatively new academic discipline whose roots go back to the 1960s, largely in cognitive psychology. To many, Richard Thaler (Nobel Prize, 2017) is considered the father of BE. The University of Chicago professor has written six books on the subject.

Behavioral Economics Applications

While still considered a fledgling discipline, BE is growing rapidly and is arguably the most consequential new area of study in academia. BE has already been put to practical use in Marketing, Public Policy, and of course, AI, although its presence is rather opaque.

1. PLAYING SPORTS
Principle: Hot-Hand Fallacy—the belief that a person who experiences success with a random event has a greater probability of further success in additional attempts.

Example: In basketball, when players are making shot after shot and feel like they have a “hot hand” and can’t miss.

Relation to BE: Human perception and judgment can be clouded by false signals. There is no “hot hand”—it’s just randomness and luck.
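
To see why streaks alone are weak evidence of a “hot hand,” here is a minimal simulation sketch in Python (purely illustrative): a shooter with a fixed 50 percent make probability still produces long runs of consecutive makes by chance alone.

```python
import random

def longest_streak(n_shots: int = 500, p_make: float = 0.5) -> int:
    """Simulate independent shots and return the longest run of makes."""
    longest = current = 0
    for _ in range(n_shots):
        if random.random() < p_make:   # the shot goes in
            current += 1
            longest = max(longest, current)
        else:                          # a miss resets the run
            current = 0
    return longest

# Across many simulated games, streaks of 7+ consecutive makes are routine,
# even though every shot here is an independent coin flip.
games = [longest_streak() for _ in range(1000)]
print(sum(s >= 7 for s in games) / len(games))
```

The point is not that skill never varies, only that impressive-looking streaks are exactly what independent random events already produce.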

2. TAKING AN EXAM
Principle: Self-handicapping—a cognitive strategy where people avoid effort to prevent damage to self-esteem.

Example: In case she does poorly, a student tells her friends she barely reviewed for an exam even though she studied a lot.

Relation to BE: People put obstacles in their own paths (and make it harder for themselves) in order to manage future explanations for why they succeed or fail.


3. GRABBING COFFEE
Principle: Anchoring—the process of planting a thought in a person’s mind that will later influence this person’s actions.

Example: Starbucks differentiated itself from Dunkin’ Donuts through its unique store ambiance and product names. This allowed the company to break the anchor of Dunkin’ prices and charge more.

Relation to BE: You can always expect a grande Starbucks hot coffee ($2.10) to cost more than a medium one from Dunkin’ ($1.89). Loyal Starbucks consumers are conditioned, and willing, to pay more even though the coffee is more or less the same.

4. PLAYING SLOTS
Principle: Gambler’s Conceit—an erroneous belief that someone can stop a risky action while still engaging in it.

Example: When a gambler says “I can stop the game when I win” or “I can quit when I want to” at the roulette table or slot machine but doesn’t stop.

Relation to BE: Players are incentivized to keep playing while winning to continue their streak, and to keep playing while losing so they can win back money. The gambler continues to perform risky behavior against what is in this person’s best interest.

5. TAKING WORK SUPPLIES
Principle: Rationalized Cheating—when individuals rationalize cheating so they do not think of themselves as cheaters or as bad people.

Example: A person is more likely to take pencils or a stapler home from work than the equivalent amount of money in cash.

Relation to BE: People rationalize their behavior by framing it as doing something (in this case, taking) rather than stealing. The willingness to cheat increases as people gain psychological distance from their actions.

These behavioral economics principles have major consequences for how we live our lives. By understanding the impact they have on our behavior, we can actively work to shape our own realities. As such, it’s becoming increasingly difficult, if not impossible, to decouple BE from AI and related technologies.


How the Death of ‘Content is King’ May Take On Godlike Proportions: Part 1


To say I was “blown away” by a recent editorial in NoCamels.com by Yaniv Garty, General Manager of Intel Israel, is a frustratingly tired cliché, given the poverty of English usage as it exists today. And it wasn’t Garty’s predictions of what the world could look like by 2025 that captured and downright agitated my imagination (in a way I enjoyed). Sure, his IT prophecies are all plausible and widely echoed by pundits, evangelists, and visionaries. Nope. It wasn’t that.

It was the data, specifically the vast quantities of data being generated, even right now. Consider these three incredible facts:

  1. Of all the data created since the beginning of civilization, 90 percent of it has been generated in the last 2 years.
  2. By 2025, total data will reach 163 zettabytes. You have probably never heard of a zettabyte, and you may want to pause before you attempt to digest it: one zettabyte is a billion terabytes, so 163 zettabytes is 163 billion terabytes (see the quick conversion after this list). Even with the comparison, I still find it incomprehensible.
  3. Only 1 percent of all data has been accessed in any meaningful way.
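
For scale, the conversion is just powers of ten; here is a quick back-of-the-envelope check (a minimal Python sketch, assuming decimal SI units):

```python
# Decimal SI units: 1 terabyte = 10**12 bytes, 1 zettabyte = 10**21 bytes
TB = 10**12
ZB = 10**21

total_zb = 163
total_tb = total_zb * ZB // TB
print(f"{total_zb} ZB = {total_tb:,} TB")  # 163 ZB = 163,000,000,000 TB
```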

Garty, who is charged with growing Intel’s hardware for the IT ecosystem of the future, has a lot to think about, namely…

Artificial Intelligence (AI), and how it can begin to mine the 99 percent for, among other things, greater insights and predictive measures. Intel already has its eyes on the medical field, with aspirations to provide tailor-made solutions for each patient based on unique biological and genetic characteristics, and perhaps beyond.




Another good example is the interface between data and transportation: The potential to save lives by lowering the number of accidents through autonomous driving is incredible. But to reduce accidents we need a combination of technologies working together – from computer vision to edge computing, mapping, cloud, and of course AI. All these, in turn, require a systematic change in the way the industry views data-focused computing and technology.

My personal take is that the IT ecosystem of the future will more and more resemble the different executive and subordinate functions of the human brain, with neuroscientists and computer scientists conspiring to construct the greatest monster ever seen: one giant decentralized and interdependent mega-brain.

In the next segment of this series, we will consider the moral and religious implications of this almost godlike monstrosity.


Using Artificial Intelligence and Facial Recognition to Detect Rare Genetic Disorders


A new technological breakthrough is using AI and facial analysis to make it easier to diagnose genetic disorders. DeepGestalt is a deep learning technology created by a team of Israeli and American researchers and computer scientists for FDNA, a company based in Boston. The company specializes in building AI-based, next-generation phenotyping (NGP) technologies to “capture, structure and analyze complex human physiological data to produce actionable genomic insights.”


Portions of this article were originally reported in NoCamels.com


DeepGestalt uses novel facial analysis to study photographs of faces and help doctors narrow down the possibilities. Some genetic disorders are easy to diagnose based on facial features, but with over 7,000 distinct rare diseases affecting some 350 million people globally, according to the World Health Organization, it can take years – and dozens of doctor’s appointments – to identify a syndrome.

“With today’s workflow, it can mean about six years for a diagnosis. If you have data in the first year, you can improve a child’s life tremendously. It is very frustrating for a family not to know the diagnosis,” Yaron Gurovich, Chief Technology Officer at FDNA and an Israeli expert in computer vision, tells NoCamels. “Even if you don’t have a cure, to know what to expect, to know what you’re dealing with helps you manage tomorrow.”

DeepGestalt — a combination of the words ‘deep’ for deep learning and the German word ‘gestalt’ which is a pattern of physical phenomena — is a novel facial analysis framework that highlights the facial phenotypes of hundreds of diseases and genetic variations.

According to the Rare Disease Day organization, 1 in 20 people will live with a rare disease at some point in their life. And while this number is high, there is no cure for the majority of rare diseases and many go undiagnosed.

“For years, we’ve relied solely on the ability of medical professionals to identify genetically linked disease. We’ve finally reached a reality where this work can be augmented by AI, and we’re on track to continue developing leading AI frameworks using clinical notes, medical images, and video and voice recordings to further enhance phenotyping in the years to come,” Dekel Gelbman, CEO of FDNA, said in a statement.

DeepGestalt’s neural network is trained on a dataset of over 150,000 patients, curated through Face2Gene, a community-driven phenotyping platform. The researchers trained DeepGestalt on 17,000 images and watched as it correctly labeled more than 200 genetic syndromes.

In another test, the artificial intelligence technology sifted through another 502 photographs to identify potential genetic disorders.

DeepGestalt provided the correct answer 91 percent of the time.
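
FDNA has not released DeepGestalt’s code, but the general approach the article describes – a deep network that maps a cropped face photograph to a ranked list of candidate syndromes – can be sketched in a few lines of Python with PyTorch. This is a minimal illustrative sketch only; the layer sizes and the 200-class output head are assumptions, not the actual model.

```python
import torch
import torch.nn as nn

NUM_SYNDROMES = 200  # the article cites ~200 syndromes learned from ~17,000 images

class FacePhenotypeNet(nn.Module):
    """Toy CNN classifier: face image in, scores over candidate syndromes out."""
    def __init__(self, num_classes: int = NUM_SYNDROMES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)   # (batch, 128) feature vector
        return self.classifier(feats)         # raw scores; softmax gives a ranking

model = FacePhenotypeNet()
face = torch.randn(1, 3, 100, 100)            # stand-in for a cropped, aligned face image
top10 = model(face).softmax(dim=1).topk(10)   # "top-10 suggestions" style output
print(top10.indices)
```

A real system would add face detection and alignment, a far deeper backbone, and training on labeled patient photographs; the sketch only shows the overall shape of the pipeline.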




Indeed, FDNA, a leader in artificial intelligence and precision medicine, in collaboration with a team of scientists and researchers, published a milestone study earlier this year, entitled “Identifying Facial Phenotypes of Genetic Disorders Using Deep Learning” in the peer-reviewed journal Nature Medicine.


The Implications Of Artificial Intelligence: Part 1


In a recent article by Abigail Klein Leichman titled “Could robots replace psychologists, politicians and poets?” published by Israel21c.org, Leichman concludes that AI will never develop a mind that can solve problems. Yet many neuroscientists, computer scientists, and those on the front lines of neural networks, machine learning, and all things artificial intelligence believe they already have evidence that computers will develop true learning capabilities – and that some already have.

For the purpose of this essay, I’ll combine machine learning, neural networks, and artificial intelligence into one artificial intelligence monolith. In fact, today there is little difference among the three other than their labels and the baggage that each label carries.

Free Will

This debate has been around since humans started asking important existential questions. For most, free will is a given. We just must have it because, by most accounts, we are free to make decisions or choices, save for governmental, religious, or cultural restrictions and taboos. Even many theists advocate for free will. Christianity is predicated on it: God gave us all free will so that we are free to accept the Christian god but don’t have to. Yet aside from these authorities, it certainly seems that we have free choice, that we are presented with options and make a decision. Such decisions can be as simple as choosing a flavor of ice cream or as consequential as whom to marry or which philosophy or politics to endorse.

Philosopher and neuroscientist Sam Harris says that is not true, and he believes he can prove it. In his best-selling book, Free Will, Harris presents several scientific studies conducted over two decades that seem to confirm that free will is a delusion. The studies all conclude that the unconscious brain makes each and every decision and then sends that information (and the conclusion) to our conscious mind, where we go through the motions of deciding something that has already been decided – usually about a second before we begin our conscious deliberation, sometimes a bit longer or shorter depending on the complexity of what is being considered. Yes, all of this has been measured, and the data is quite unambiguous.

So how does this impact artificial intelligence and computers’ ability to think, solve problems, even give psychological advice and direction? If Harris is right, the question itself is misguided, arrogant, and flat-out wrong, for all the assumptions on which it hinges are incorrect. I must say, this is one of those rare times when new science tends to contradict the prevailing reality of existence, not to mention that it’s counterintuitive, if not downright unpleasant, to consider – which is where I’m at right now.

That said, more on the fascinating topic of artificial intelligence to come, I promise.


The Implications Of Artificial Intelligence: Part 2


In this second part of the implications of artificial intelligence segment, we take a look at what intelligence is and whether computers can think, reason, and learn.

It was 1996. IBM’s “Deep Blue” supercomputer was to do battle with reigning world chess champion Garry Kasparov, using standard rules of chess. Spoiler alert: Deep Blue defeated Kasparov in the first game of their six-game match on February 10, 1996. Kasparov, however, rallied, winning three and drawing two of the remaining five games to defeat Deep Blue by a score of 4 to 2. A year later, an upgraded Deep Blue narrowly defeated the reigning world champion in a rematch.

Bear in mind, this was over twenty years ago. Computer processing capabilities were in the stone ages compared to where they are today, and today’s computing will look just as primitive from the vantage point of the future.

Was the computer thinking? Well, if we consider thought to be the ability to reason correctly for the purpose of achieving some future goal, then yes. But was this reasoning ability trained by humans to reason well? Sure enough. But isn’t that the same thing our parents and teachers do when we are in our formative years (and beyond)? Perhaps the biggest difference is that humans teach other humans a mix of rational and irrational thought processes, whereas computers are taught “pure reason,” much in the spirit of Mr. Spock in Star Trek.

For me what’s interesting is whether teaching a computer or teaching a human is fundamentally different. And whether the hardware and software used to think, reason, and learn is fundamentally different in how it processes information. What neuroscientists and computer scientists tell me — and I must defer to them for I am neither — is that computers were created by people and therefore the human way of reasoning was built into each of them. Whether this was conscious or not is a matter of debate and an important one. What both have in common is that they both learn through induction and inference. The chief difference is how the emotional content and influence on human behavior can cloud our ability to learn and reason. Computers have no such capability. Just imagine if they did! There would likely be no reason for this conversation if they could. And would we really want to rely on them to run our power grid, airline traffic, or our satellite communications? Then again, if all satellite communications crashed for a few hours it all could be attributable to the computers having a “bad day”.

I think we’re better off with computers as purely intellectual devices.


Artificial Intelligence: Part 3


In this segment, we’ll take a look at the practical applications of artificial intelligence (AI) today and what is right around the corner.

Recall the days on the evening news when reporters would interview a trader on the floor of the New York Stock Exchange to cover the impact of a trade war, a real war, or yet another disappointing earnings report from a blue-chip company? Remember all those white middle-aged men in the middle of the pit, gyrating through every imaginable histrionic while shouting in desperation in secret Wall Street code, flashing hand signals resembling gang signs, with perspiration spraying everywhere?

You may not have realized it, but those days are gone. Wall Street today is basically a working museum. The trading pits are gone. The only piece of the good old days that remains is the opening and closing bells, which still air on TV on occasion. The pit trader’s job has been lost to automation. Computer algorithms now do that job, and more efficiently too. But now that all (or most) human emotion has been jettisoned from the trade, when unexpected events happen it’s all up to the computers to determine the best moves to make, all of which have been decided beforehand.

The same thing is true when you order up an Uber. The nearest pool of drivers is automatically polled, and the driver who gets the bid is simply notified of the task via a computer algorithm. Pickup and destination logistics, as well as price, are all determined by computers. As I write this, Uber has what I’ll call a new “drunk algorithm” to determine the likelihood that a customer has had a few too many when ordering an Uber. The algorithm looks for, among other things, common typing mistakes, the language used, location, and who knows what else, to gauge the customer’s state and match them with a driver more experienced with intoxicated customers.
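
Uber has not published how such a system works, so the following is a purely hypothetical scoring sketch in Python, just to make the idea concrete. The feature names and weights are invented stand-ins for the kinds of signals mentioned above (typing mistakes, language, location, time of request).

```python
# Hypothetical "impairment" score; feature names and weights are invented for illustration.
def impairment_score(typo_rate: float, late_night: bool, near_bars: bool) -> float:
    """Return a 0-1 score; a higher score suggests matching a more experienced driver."""
    score = 0.0
    score += min(typo_rate, 0.5) * 1.2        # frequent typos push the score up
    score += 0.25 if late_night else 0.0      # requests in the small hours weigh more
    score += 0.20 if near_bars else 0.0       # pickup in a nightlife district
    return min(score, 1.0)

# Example: lots of typos, a 2 a.m. request, pickup outside a bar
if impairment_score(typo_rate=0.4, late_night=True, near_bars=True) > 0.6:
    print("Match with a driver experienced with intoxicated riders")
```

In production such a score would presumably come from a model trained on historical trips rather than hand-set weights, but the inputs and the decision it drives would look broadly similar.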

Advanced online marketing techniques use data analytics and other big data sources to find predictive correlations between a consumer’s marital status and their likelihood to drink beer, and on what day, and what kind of beer. Men living in Nevada who have been recently divorced (say, for less than two years) are more likely to buy a six-pack of beer on a Thursday, for instance. Knowing this, a Miller Lite advertisement will appear between 2 and 3 pm when they hit CNN.com to check the latest news.

These are but a few examples of where AI is today. Imagine where it will be in five years! The only thing that can stop it is consumer insistence upon greater control over their privacy. But don’t hold your breath on that one. Most of us have already decided, albeit unwittingly, that the conveniences of the digital age outweigh the costs of giving up a bit of our privacy. In other words, we have traded away some of our privacy for its exchange value. And this is something we do a lot more than we would like to admit. You may recoil at the idea of having a microchip inserted into your person right now, but in five years you may find yourself opting into such a voluntary program. Why? Because you may no longer need to remember your wallet, your keys, your passport, credit cards, rewards cards, PINs, or passwords. Pretty convenient, huh?


Artificial Intelligence: Part 4


In this segment on artificial intelligence (AI), part four, we’ll look at the augmented human being, part human, part machine. And don’t laugh. Computer or robotic-assisted devices are being used to augment the human condition right now. For but one example, see my previous story on exoskeleton technology.

Another slick piece of wearable technology allows legally blind people to read newspapers and magazines, product labels in a grocery store, and even the money they take out of their pocket to pay the cashier, using artificial visualization technology.

As neuroscientists unlock the mysteries and power of the human brain while, at the same time, AI researchers build programs that get smarter and smarter, even to the point where they become autonomous learners, human anatomy and robotics, along with AI software, will converge into human/machine hybrids, some of which will have more human characteristics than others. In other words, if we live long enough, say twenty more years, we may actually meet Mr. Spock, or a reasonable facsimile thereof.

Some of my academic friends who are working on this exciting future are not as enthusiastic as you would think. Many fear that the ethics will not keep pace with the technology, that we will create, arguably, a new species whose rights and freedoms will not comport with our justice system as it is today. Others are concerned about the economic value of people in an age when machines and computers will do almost all of the work. What are we going to do with 5 billion people in surplus labor for whom there will never be a job? Without income potential, yet still needing to consume goods and services, how will they contribute to the betterment of our species and our world? There are no good answers for any of this yet. But there certainly are many grave concerns over these questions and many others.

But with all the ethical, economic and social concerns over AI, what most scientists are most anxious over is the notion of singularity. Singularity, as it pertains to AI, is the moment in the future whereby computers will become not only smarter than humans (and their programmers) but autonomous as well. If you haven’t guessed by now, they’re talking about the master/slave relationship between man and machine flipping. How this would exactly happen, nobody really knows. The anxiety of such a time is difficult to imagine. But it’s almost definitely only a few decades away. And while I may be naive, if a bunch of Mr. Spocks started running our world, it’s hard to imagine how that wouldn’t be an improvement.


From ‘Content is King’ to GODLIKE, Part 2


In part one of “From ‘Content is King’ to GODLIKE,” we introduced some mind-boggling facts, namely that:

  • 90 percent of all content ever generated in the history of mankind, was created in the last two years
  • 99 percent of all content is yet to be accessed, let alone worked, mined or analyzed.

Throw artificial intelligence (AI) into the mix and it quickly becomes apparent that we are embarking on the creation of a technology that would rival any god ever envisioned — and there have been thousands. The big differences between the gods of the past and those of the present are worth elucidating.

Traditional or conventional gods provided answers to most if not all of life’s mysteries. Of course, there are problems with that, most notably a lack of empirical rigor to go along with the robustness of the claims. Newer gods — I’ll use Google’s search engine as an example — are empirical for sure, and use information, i.e., empirical evidence, as an epistemic foundation. From there the commonalities between the old and new intersect again, since both models are pretty big on predictions (or prophecy). And before I can complete that sentence we experience yet another bifurcation, with the old depending on one or another form of revealed truth and the Google god relying on inference, induction, and most recently, artificial intelligence to answer the secular prayer more commonly known simply as THE SEARCH.

God or godlike dichotomies aside, what’s a civilization to do with all this content, especially since 99 percent of it is just sitting here and there (and everywhere), doing nothing? Well, we already have the technology, i.e., Google and similar technology, to harness it. That’s one thing, but taking benign predictability — the search — to profound prophecy and beyond, through weird and counterintuitive correlations that provide answers even before we think of them, let alone type them into Google, is where the future of content vis-à-vis AI is heading.

The next article will get into the specifics of precisely how this might look, using everyday problems and contemplations.
