How AI Can Achieve a Profound Grasp of Human Language
Understanding Human Flexible Thinking
The foundation of human cognitive flexibility lies in our ability to make analogies.
Despite significant advances in word representation from deep learning models like Word2vec, AI still falls short in how it learns. Algorithms that rely solely on mathematical relationships between words often fail to grasp semantic meaning. Hofstadter and Sander, in their work "Surfaces and Essences," argue that machines lack the human capacity for analogy. Humans can adapt words to new contexts and draw diverse meanings from them. Words are not confined to strict categories; they are dynamic and evolve over time.
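Word2vec's "mathematical relationships" are literally vector arithmetic. A minimal sketch with hand-crafted two-dimensional vectors (hypothetical values chosen so that the classic king − man + woman ≈ queen analogy works) shows how geometry alone produces an "analogy" without any grasp of what the words mean:

```python
import math

# Toy 2-dimensional "embeddings" (invented for illustration):
# axis 0 ~ royalty, axis 1 ~ gender. Real Word2vec vectors have
# hundreds of dimensions learned from co-occurrence statistics.
vocab = {
    "king":  (1.0,  1.0),
    "queen": (1.0, -1.0),
    "man":   (0.0,  1.0),
    "woman": (0.0, -1.0),
}

def cosine(a, b):
    """Cosine similarity between two 2-d vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# The classic analogy: king - man + woman ≈ queen.
target = tuple(k - m + w for k, m, w in
               zip(vocab["king"], vocab["man"], vocab["woman"]))
best = max((word for word in vocab if word != "king"),
           key=lambda word: cosine(target, vocab[word]))
print(best)  # "queen"
```

The answer falls out of nearest-neighbor search in vector space, not out of any understanding of kinship or royalty, which is exactly the shortcoming the paragraph above describes.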
This remarkable adaptability of meaning enriches language with a multitude of narratives. These stories, often referred to as "common sense" (for instance, understanding that an apple can be eaten by biting, cutting, or juicing), are absent in AI, which hinders its ability to achieve a deep understanding of language.
Here are four types of semantic elements that AI could learn to identify and comprehend.
Video: Purdue Expert: AI and Processing Human Language
In this insightful video, an expert discusses how AI processes human language and the implications for understanding semantics.
The Semantic Foundation of Analogies
Every day, we use words to articulate our thoughts, but how do we acquire and interpret their meanings? It begins with our ability to apply a word in various contexts and expand its meanings through association.
For instance, a child learns that "mother" refers to the person who nurtures them. Gradually, through analogy, they recognize that the man who cares for them is not a mother but a "father," and that other nurturing women are mothers to other children. The concept of "mother" then serves as a metaphor for various similar yet distinct relationships (like grandmother or sister).
Children excel at creating analogies; adults do as well, but they tend to categorize these analogies more precisely (e.g., distinguishing between a woman, a girl, or a daughter) or to connect them to broader concepts (like comparing the American Revolution to the French Revolution). This process allows new ideas and fresh perspectives on situations to develop.
Analogies also impart meaning to sentences, transforming familiar phrases like "What's new?" or "Are you out of your mind?" into rich expressions that imply deeper sentiments beyond their literal meanings. For example, "What's new?" not only solicits updates but may also convey concern or an invitation to converse. These nuanced meanings arise from the analogical connections that words evoke.
Machines should be trained to identify and utilize these various analogical structures to comprehend language effectively.
Proverbs, Fables, and Everyday Wisdom
Other significant semantic elements, such as proverbs and fables, offer personal interpretations of life experiences.
Expressions like "You can't judge a book by its cover" or "The grass is always greener on the other side" are not mere platitudes; they encapsulate profound meanings that resonate with speakers. By enriching our language with new expressions, we create shared narratives that reflect common life experiences. For example, "the grass is always greener" may be invoked in scenarios involving dissatisfaction with a job or a beloved but costly city, framing a narrative of cognitive dissonance.
Fables provide even deeper insights into shared human experiences. Aesop's tale "The Fox and the Grapes" illustrates how a fox, unable to reach grapes, dismisses them as sour. This narrative offers a framework for interpreting personal experiences: it gave us the idiom "sour grapes" and prefigures concepts like self-deception and the rationalization of decisions.
Different languages offer unique ways of expressing particular feelings. For instance, French speakers may easily convey ideas related to fashion and society, leading English speakers to adopt terms like "chic" and "haute couture."
To grasp how speakers utilize their language, machines must delve into these narratives, allowing them to comprehend the common sense embedded in specific languages.
The Value of Language Abstractions
When individuals use language, the meaning of words varies according to the level of abstraction.
Take the word "man"; its meaning shifts depending on context. In discussions of "universal rights," it encompasses all genders, while in "the men's basketball team," it refers specifically to males. These abstract distinctions are inherent in every language.
While words are interpreted differently based on context, humans possess an intuitive understanding of which level of abstraction is relevant. For example, someone may say they are having coffee with friends while ordering tea, with "coffee" indicating any hot beverage. However, when specifying a type of coffee, the term takes on a more restrictive meaning.
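The coffee/tea example can be pictured as movement up and down a taxonomy of word senses: "coffee" in "having coffee with friends" resolves to the shared abstraction "hot beverage." A toy sketch, with an invented hypernym chain standing in for a real lexical resource such as WordNet:

```python
# Hypothetical hypernym chains, most specific first. A real system
# would pull these from a lexical database rather than hard-code them.
HYPERNYMS = {
    "espresso": ["espresso", "coffee", "hot beverage", "drink"],
    "coffee":   ["coffee", "hot beverage", "drink"],
    "tea":      ["tea", "hot beverage", "drink"],
}

def common_abstraction(a: str, b: str) -> str:
    """Return the most specific category shared by both words."""
    for level in HYPERNYMS[a]:
        if level in HYPERNYMS[b]:
            return level
    return "thing"  # fallback: the maximally abstract category

print(common_abstraction("coffee", "tea"))       # "hot beverage"
print(common_abstraction("espresso", "coffee"))  # "coffee"
```

The hard part, which this sketch sidesteps entirely, is choosing *which* level the speaker intends; humans do that from context, as the paragraph above notes.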
What advantages do these semantic abstractions provide? They allow for flexibility in expression without fear of misunderstanding. Moreover, abstractions facilitate broader thinking by revealing systemic analogies in our shared realities. For example, physicists have adopted the concept of "wave" from its original meaning related to water waves to explain sound and light, showcasing how a single term can encompass multiple phenomena.
Machines must learn to navigate these layers of abstraction to enhance their understanding of language.
Human Translation and Familiar Associations
So, how do speakers choose one level of abstraction or analogy over others? This ability stems from their innate capacity to identify expressions that feel more "familiar" and "aesthetic."
Given any situation, countless analogies can be drawn. For instance, the letter-string puzzle abc → abd :: iijjkk → ? has multiple defensible solutions. Yet humans tend to select the most symmetrical and aesthetically pleasing answer, such as iijjll.
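One way to see what is at stake is to hard-code a single reading of the puzzle, say "replace the last group of letters with its alphabetic successor," which maps abc to abd and iijjkk to iijjll. The sketch below implements only that one rule; Hofstadter's Copycat program instead searches among many competing rules, and it is in that competition that the aesthetic judgment lives:

```python
from itertools import groupby

def successor_of_last_group(s: str) -> str:
    """Apply one hand-picked rule: bump the last run of identical
    letters to its alphabetic successor."""
    groups = ["".join(g) for _, g in groupby(s)]  # "iijjkk" -> ["ii","jj","kk"]
    last = groups[-1]
    bumped = chr(ord(last[0]) + 1) * len(last)    # "kk" -> "ll"
    return "".join(groups[:-1]) + bumped

print(successor_of_last_group("abc"))     # "abd"
print(successor_of_last_group("iijjkk"))  # "iijjll"
```

Other answers (iijjkl, iijjdd, ...) follow equally valid rules; a fixed rule cannot explain why people converge on iijjll, which is the point of the example.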
In facing such challenges, individuals naturally gravitate towards associations that resonate with their perception. This sensitivity also equips speakers with the skill to translate expressions meaningfully into other languages.
While deep learning has advanced translation, it often resorts to literal interpretations, relying on correlated words between languages. Human translators, however, recognize that expressions do not always translate directly based on word associations. For example, while "sky" in English and "ciel" in French seem analogous, "ciel" can also connote fate, which "sky" does not. Thus, translating the French phrase "Le ciel nous a imposé son ordre" literally as "the sky has forced its order upon us" loses its intended meaning. A more appropriate translation might involve concepts like "fate" or "heaven."
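A crude approximation of what the human translator does is sense selection: let the surrounding words vote for one of several candidate translations. The sketch below uses a made-up sense inventory for "ciel" and a Lesk-style overlap heuristic (an illustration, not how production translation systems work):

```python
# Hypothetical sense inventory: each candidate English translation of
# "ciel" is associated with French context words that suggest it.
SENSES = {
    "ciel": {
        "sky":    {"bleu", "nuage", "étoile"},
        "heaven": {"dieu", "âme", "prière"},
        "fate":   {"ordre", "imposé", "destin"},
    }
}

def translate(word: str, context: list[str]) -> str:
    """Pick the sense whose cue words overlap most with the context."""
    senses = SENSES[word]
    return max(senses, key=lambda s: len(senses[s] & set(context)))

sentence = ["le", "ciel", "nous", "a", "imposé", "son", "ordre"]
print(translate("ciel", sentence))  # "fate"
```

Here "imposé" and "ordre" pull the choice toward "fate" rather than the literal "sky," mimicking in miniature the contextual judgment described above.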
To achieve superior translation, AI must learn to select the most contextually relevant associations, considering the cultural and social elements that shape the speaker's environment.
In conclusion, AI can gain a deeper understanding of language by recognizing that it involves more than mere literal associations; it is fundamentally a matter of analogy. Machines must learn to form analogies, navigate various levels of abstraction, and assess the context and surrounding meanings that speakers convey.
Video: AI Interpretability, Neural Processing of Language, and Animal Communication
This video explores the intricacies of AI's understanding of language and its implications for interpreting not just human but also animal communication.