Why Can’t Artificial Intelligence/Machine Learning Explain Itself?

Photo by Matan Segev on Pexels.com

Gregory (2018) argues that artificial intelligence (AI) must learn to explain itself if it expects to be trusted. Gregory explained that “deep-learning programs,” or neural networks, learn and reason by processing data they can arrange into patterns. He cited a Georgia Institute of Technology study that trained AI to assign “snippets” of human language to activities during video game play, along with other studies that have “taught” AI to appear to explain its decision logic. Even so, Gregory noted that neural network designers usually cannot understand or explain how their own systems use machine learning (ML) algorithms for reasoning or progressive self-improvement. As such, Gregory argues it is risky to trust AI technology to run critical infrastructure or make life-or-death medical decisions at this stage of development (Gregory, 2018).
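
To make Gregory’s point concrete, here is a minimal sketch, assuming the scikit-learn library and a hypothetical toy data set (neither comes from the cited article): even a tiny neural network that learns a simple pattern stores what it has “learned” as weight matrices that do not read like an explanation of its reasoning.

```python
# A minimal sketch, assuming scikit-learn and NumPy are installed; the XOR
# toy data and model settings are hypothetical, not from Gregory (2018).
import numpy as np
from sklearn.neural_network import MLPClassifier

# XOR pattern: the output is 1 only when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# A small neural network with one hidden layer of eight units.
net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict(X))  # the network's answers for the four inputs
print(net.coefs_)      # the learned weight matrices: numbers, not a "why"
```

The second printout is the crux of the argument: the model’s “knowledge” is a pile of floating-point weights, and nothing in it tells a human operator why a particular answer was given.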

What is a “Recommender” System?

Portugal, Alencar, and Cowan (2018) attempted to shed light on the issue by reviewing 121 peer-reviewed journal articles that examined the segment of ML called “recommender systems.” After defining recommender systems (RS) as, essentially, AI that provides users with recommendations, Portugal, Alencar, and Cowan went on to distinguish among the different types of algorithms programmed for these popular neural networks. For instance, based on the literature, the researchers explained that the three most popular RS machine learning approaches are:

  • Collaborative, which exposes AI to a plethora of user-specific data that it can use to create patterns based on shared characteristics (sketched in the code example after this list);
  • Content-based, which gathers information from multiple databases with similar attributes, organizes patterns, then makes recommendations; and
  • Hybrid, which uses a combination of the other two strategies to create patterns and make recommendations.
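
As referenced above, here is a minimal sketch of the collaborative approach in Python. The tiny ratings matrix, the function names, and the similarity-weighting scheme are hypothetical illustrations, not methods taken from the reviewed studies; real recommender systems work from far larger data and more sophisticated models.

```python
# A minimal sketch of user-based collaborative filtering, assuming NumPy is
# installed; the ratings matrix below is hypothetical.
import numpy as np

# Rows = users, columns = items; 0 means "not yet rated."
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_similarity(a, b):
    """How alike two users' rating vectors are (0 = unrelated)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def recommend(user_idx, ratings, top_n=1):
    """Score each item by the similarity-weighted ratings of the other users."""
    target = ratings[user_idx]
    scores = np.zeros(ratings.shape[1])
    for other_idx, other in enumerate(ratings):
        if other_idx != user_idx:
            scores += cosine_similarity(target, other) * other
    scores[target > 0] = -np.inf  # never re-recommend items already rated
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0, ratings))  # for user 0, only item 2 is unrated, so it is suggested
```

Notice that the system never looks at what the items are; it relies entirely on which users behave alike, which is exactly what the collaborative definition above describes.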

Portugal, Alencar, and Cowan explained that other ML approaches are also in use, including risk-aware recommendations that take critical information into consideration, such as the beneficiary’s vital signs, before making recommendations that could threaten life or cause damage. They also delineated ML algorithm categories, including: (1) supervised learning based on programmed training data sets; (2) unsupervised learning based on real-world data sets that AI must process to uncover hidden logic patterns; (3) semi-supervised learning based on data sets with missing information that forces AI to draw its own conclusions; and (4) reinforcement learning that provides feedback on right or wrong decisions. Even so, the researchers cautioned that gaining this level of knowledge has not resolved the open problems with RS and algorithm trends: software engineers are still challenged when deciding which algorithms or development tools to use for which situations (Portugal, Alencar, & Cowan, 2018). So can AI/ML be trusted to make decisions that impact human life? Current literature advises caution.
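
The difference between the first two algorithm categories can be shown with a short, hypothetical sketch; the data and model choices below are illustrative assumptions using scikit-learn, not drawn from the review. In supervised learning the “right answers” are supplied with the training data, while in unsupervised learning the system must find groupings on its own.

```python
# A minimal sketch, assuming scikit-learn and NumPy are installed; the toy
# data and models are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Hypothetical feature vectors: two measurements per example.
X = np.array([[1.0, 1.2], [0.9, 1.1], [4.0, 4.2], [4.1, 3.9]])

# (1) Supervised: labels are provided, so the model learns a known mapping.
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.1, 1.0]]))  # classifies a new point using the given labels

# (2) Unsupervised: no labels, so the model must infer the groupings itself.
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))
```

The unsupervised case is the one that worries the explainability literature most: the groupings the model discovers are hidden logic patterns its designers never specified.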

How Jennie Feels About It

I argue that AI/ML lacks common sense. Common sense, defined as “sound judgment based on experience instead of study” (Taylor, 2012), is something that cannot be programmed. Man is an intuitive creature. Like other organic organisms, man begins to learn and to form his defined truths, beliefs, and attitudes from the moment of birth (Vygotsky, 1978). Therefore, man can draw new conclusions based on speculation, not mathematics. The body of knowledge must accept that AI/ML neural networks were programmed with yesterday’s probabilities, or the knowledge that was known by their creators at that time. Therefore, any learning AI uses to expand upon or improve beyond its programming comes from unsupervised algorithms. By contrast, human neural networks can go beyond what is already known to test suppositions and probabilities grounded in a life of structured and unstructured learning and observation. The human brain can therefore engage in higher-order abstract and analytical thinking, using deductive and inductive reasoning to solve previously unknown problems (Goldstein, 2010). Thus, in my opinion, if AI/ML is unable to explain the mathematics or logic it used to make recommendations or determinations that fall outside what was known by its programmers, it can become a dangerous machine.

References

Goldstein, E. (2010). Cognitive psychology: Connecting mind, research and everyday experience. Nelson Education.

Gregory, O. (2018, February 15). For artificial intelligence to thrive, it must explain itself. The Economist. Retrieved from https://www.economist.com/science-and-technology/2018/02/15/for-artificial-intelligence-to-thrive-it-must-explain-itself

Portugal, I., Alencar, P., & Cowan, D. (2018). The use of machine learning algorithms in recommender systems: A systematic review. Expert Systems with Applications.

Vygotsky, L. (1978). Interaction between learning and development. In Mind and society (pp. 79–91). Cambridge, MA: Harvard University Press.
