The Hard Limits of AI

Traditional Computing

Traditionally, computers were good at maths and algorithms (implemented via programming languages). They were not good at interacting directly with the human experience, however. This is because the human experience is generally qualitative, whereas computers were confined to the quantitative realm.

For example:

  • Quantitative problem (which computers are good at): Calculate Pi to a million decimal places (a code sketch of this follows below).
  • Qualitative problem (which computers are not good at): Does this ice cream taste good?
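
For the quantitative case, a short program really does settle the question exactly. As a minimal sketch, assuming Python with the arbitrary-precision mpmath library installed (e.g. via "pip install mpmath"):

    from mpmath import mp

    # Set the working precision to one million decimal places.
    # (Computing pi at this precision may take a little while.)
    mp.dps = 1_000_000

    # mpmath evaluates pi lazily at the configured precision.
    pi_str = mp.nstr(mp.pi, 1_000_000)
    print(pi_str[:52])  # 3.14159265358979323846264338327950288419716939937510

The answer is exact, reproducible and verifiably right or wrong: this is the home turf of traditional computing. No comparable program can be written for the ice cream question.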

Even with traditional computer systems, the combination of powerful processors, sophisticated algorithms, massive databases and high-quality user interfaces managed to persuade people that they were dealing with qualitative logic systems that related to the real world in which they lived. Behind the facade, however, it was all still 1s and 0s.


The Introduction of LLM AI

The emergence of large language model (LLM) AI has begun to bridge the gap between quantitative computing and the qualitative aspects of human experience. That is to say, an LLM AI is designed to interact directly with the qualitative, hard-to-measure aspects of human existence.

In order to perform this amazing feat, LLM AIs bypass the correctness of the underlying calculations and reasoning altogether. Instead, they rely on the natural language patterns present in pre-existing textual training data to generate responses that appear to “make sense,” without relying on explicit computational accuracy.

The way this works is that LLMs are “trained” on textual datasets encompassing books, articles, conversations and so on. This training allows them to identify patterns in human language use, which implicitly contain details of usage, context and human reasoning. In this way, LLMs use their grasp of language patterns (not of the underlying problem) to predict and generate appropriate responses to the input they receive. A toy sketch of this idea follows.
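
To make this concrete, here is a minimal, hypothetical sketch in Python of pattern-based next-word prediction. It “trains” a toy bigram model on a few invented sentences and then generates a continuation by word frequency alone. Real LLMs use large neural networks over tokens rather than word-count tables, but the principle of predicting plausible next words from patterns in prior text is the same:

    import random
    from collections import defaultdict, Counter

    # Invented "training data": the only knowledge the model will ever have.
    corpus = (
        "peanut ice cream tastes good . "
        "chocolate ice cream tastes good . "
        "fish ice cream tastes bad . "
    ).split()

    # "Training": count which word follows which (a bigram table).
    follows = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word][next_word] += 1

    def generate(start, length=4):
        """Continue a text by repeatedly sampling a likely next word."""
        words = [start]
        for _ in range(length):
            nxt = follows.get(words[-1])
            if not nxt:
                break
            # Sample in proportion to how often each word followed.
            words.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
        return " ".join(words)

    print(generate("ice"))  # e.g. "ice cream tastes good ."

The model never tastes anything and never checks any fact; it only reproduces the statistics of the text it was shown. Nevertheless, its output reads like a sensible opinion.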

For example, when posed with the qualitative real-world question, “Does this ice cream taste good?” an LLM doesn’t calculate sensory data or perform chemical analyses. Instead, it draws upon descriptions, reviews, and sentiments expressed in its training data to provide an answer that reflects common human opinions and experiences regarding taste.

You
If I made ice-cream using peanuts, celery and flounder, would it taste good?
ChatGPT (o1 mini)
Thought for a couple of seconds:

Creating ice cream with peanuts, celery, and flounder is certainly an unconventional combination! Here’s a breakdown of how these ingredients might interact:

  • Peanuts: These are commonly used in ice cream, adding a rich, nutty flavor and a pleasant crunch. Think of classics like peanut butter ice cream or peanut brittle mix-ins.
  • Celery: This vegetable has a fresh, slightly bitter taste that’s more typically found in savory dishes. Incorporating celery into ice cream would be challenging, as its flavor might clash with the sweetness expected in a dessert. However, if prepared creatively—perhaps by using celery juice or a celery-infused syrup—it could add an interesting herbal note.
  • Flounder: Adding fish to ice cream is highly unusual and could be off-putting to many people. Fish flavors are generally not associated with sweet treats, and flounder has a mild taste that might be overwhelmed by the other ingredients. Additionally, textural differences could pose a challenge.

Overall Opinion:

While experimenting with unique flavor combinations can lead to exciting culinary innovations, the mix of peanuts, celery, and flounder in ice cream is likely to result in a flavor profile that many might find dissonant. The sweet and nutty elements of peanuts might not harmonize well with the vegetal notes of celery and the savory aspects of flounder. However, for adventurous eaters or those interested in pushing the boundaries of traditional ice cream flavors, it could be an intriguing experiment. Success would heavily depend on the preparation methods and the balance of flavors to ensure that no single ingredient overwhelms the others.

As seen in ChatGPT’s response, an LLM can provide a reasonable response with no real-world data at all, based purely on its training on language patterns and the information embedded within its datasets. The response approximates that of a human who had read the texts included in the LLM’s training data.

Just as humans can form logical and reasonable conclusions from texts they read without ever having personally experienced the situations described in them, LLMs can generate coherent and seemingly rational responses based purely on textual analysis.


LLMs’ Illusion of Intelligence

Because AI appears to be able to handle qualitative tasks directly, there is an expectation that it can offer definitive (correct) answers in these areas, just as it does with quantitative data. In other words, people think, “The AI is right.”

This expectation stems from a combination of illusions:

  • Illusion of cleverness: People are used to computers being “cleverer” than they are, i.e. processing large datasets and performing complex calculations faster than any human could.
  • Illusion of self-confidence: LLMs almost always provide a well-written response. This gives the impression that LLMs “know the answer to everything.”
  • Illusion of humanity: Because LLMs are programmed to respond in the same (polite) way that a human would, they give the impression of possessing an element of humanity. Since the LLM is perceived to be somewhat human, by extension it is perceived to possess human reasoning capabilities.

In truth, however, since LLM AIs do not, by design, reason according to underlying logical principles, they can never be anything more than a (very good) “word machine”; that is, they can never do more than generate what appears to be a reasonable “word pattern,” according to their training data and the quality of the prompt engineering applied to the question.

AI Hallucinations

Much work has gone into reducing the tendency of LLM AIs to “hallucinate,” that is, to make up facts which they think are needed to answer the question they have been asked.

The truth, however, is that AI always hallucinates, in the sense that it generates a stream of words without understanding the underlying reasoning. It is just that sometimes the AI hallucinates correctly, in which case we are happy with its output, and sometimes it hallucinates incorrectly, in which case we are not.
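
The toy predictor sketched earlier makes this point concrete. The snippet below (a hypothetical illustration, not a real LLM) shows a word-frequency table confidently completing a taste judgement despite having no senses and no access to any ice cream:

    import random
    from collections import Counter

    # Distilled from imagined training text: "tastes" was followed
    # by "good" twice and by "bad" once. That is all the model "knows".
    after_tastes = Counter({"good": 2, "bad": 1})

    for _ in range(5):
        word = random.choices(
            list(after_tastes), weights=list(after_tastes.values())
        )[0]
        print("this ice cream tastes", word)

The output is fluent and confident either way. When the sampled pattern happens to match reality we call it a good answer; when it does not, we call it a hallucination. The generating mechanism is identical in both cases.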

This very simple assessment puts paid to the idea of LLM-based AI taking over humanity any time in the near future.


Future AI Models

Let’s say that, in the future, new AI models are developed that are actually capable of human-style logical reasoning, based on basic logical constructs and accepted methods of composing a logical argument.

Could these computerized AI models ever surpass human reasoning and intelligence, rendering mankind an unnecessary inhabitant of this planet?

In order to address this question, it is necessary to understand the overall value of qualitative logic in the human experience.


Decision Makers

There are as many ways of making a qualitative decision as there are people in the world. Everyone has their own opinion and is convinced that “their way is best.” If so, we can ask: why are decision-makers, such as CEOs and politicians, so highly rewarded by society?

In other words, if it is indeed true that your barber is as likely to give you sensible advice as your friendly local financial consultant, then why not simply ask the barber for advice and save the consultant’s fee?

Some classical answers given to this question are as follows:

  1. Prior Success: CEOs, financial consultants and politicians may have proven track records that inspire confidence in their decisions.
  2. Expertise and Experience: CEOs and financial consultants may have spent years studying, honing their skills, gaining deep insights into their industries, and understanding the nuances of decision-making within their domains. This extensive background allows them to assess specific situations with greater understanding and in greater detail, and to foresee potential outcomes that less knowledgeable and less experienced people might overlook.
  3. Responsibility and Accountability: Professional decision-makers are accountable to their stakeholders, which ensures that they carefully consider the consequences of their decisions and act in the best interests of those they represent.
  4. Leadership: When a decision-maker makes a decision, that is actually a statement of leadership. The decision means “this is the course of action which we will pursue.” Hence the decision-maker is rewarded for their leadership abilities as much as for their decision-making capabilities.

It is possible that an AI could gain the first two types of decision-maker attributes: prior success, and expertise and experience. With the correct set of training data and the correct design, an AI could build a successful track record and simulate expertise and experience.

Let us now consider, however, whether it is possible for an AI to gain the decision-maker attributes of responsibility, accountability and leadership.

You
Create a colour pencil sketch image showing a Decision Maker making a decision in a 1950’s office.
ChatGPT (4o)
[Generated image: a colour-pencil sketch of a decision maker at work in a 1950s office.]

Zones of Perceived Goodness

When humans make decisions, they rarely have absolute certainty about the outcomes. Instead, they aim to navigate toward what can be described as a zone of perceived goodness: a situation where conditions are favorable for success and abundant opportunities are available. By making their decision, a person commits to a path where they anticipate being able to leverage positive factors while mitigating obstacles that could hinder their goals.

For example, an entrepreneur launching a new product doesn’t know exactly how the market will respond. However, based on research, trends, and intuition, they believe that their product will meet consumer needs and lead to success. They’re stepping into a zone where they perceive the likelihood of positive outcomes to be high, but in which they acknowledge they will have to work hard to “make it work.”

In other words, a real-world decision always requires follow-through to make it meaningful and effective.

The Necessity of Follow-Through

Real-world decision-making without follow-through is essentially meaningless because decisions alone don’t bring about change—actions do. Follow-through involves the execution of a plan, the commitment to see a decision through to its conclusion, and the willingness to adapt as circumstances evolve.

Without this dedicated effort, even the most well-considered decisions will fail to materialize into tangible results.


AI’s Inability to Make Real-World Decisions

Because Artificial Intelligence is not conscious, it cannot “own” its decisions, take responsibility for them, or engage in follow-through. Since AI cannot be affected by its own decisions, it is incapable of intuiting the zones of perceived goodness and risk into which its decisions lead. Additionally, it cannot act to maximize potential benefits or mitigate risks within that zone.

Hence, we encounter two fundamental limitations regarding AI’s ability to make effective real-world decisions:

  1. Inability to Project into the Future: When humans make decisions, they do so with an understanding—or at least an anticipation—of how these decisions will affect themselves and those around them. They mentally envision what “the world will be like” as a result of their choices. Since AI is never affected by its own decisions, it lacks the ability to envisage its “personal” situation as a consequence of its actions. This absence of self-awareness and personal stake means AI cannot fully grasp the future implications of its decisions.
  2. Lack of Follow-Up: Because AI never “finds itself” in the new situation which is the result of its own decision, it cannot provide genuine follow-up. In the human context, follow-through involves adapting to new circumstances, overcoming obstacles, and persisting toward a goal. Since AI lacks consciousness and personal investment, it cannot engage in this dynamic process. As previously noted, a real-world decision without follow-up is relatively meaningless because the value of a decision lies in its execution and the ongoing commitment to realize its intended outcomes.

Consequently, while AI can process data and generate recommendations, it cannot replace the human elements crucial for effective decision-making in the real world.


Conclusion

Every “answer” that involves real-world data and real-world outcomes is as much of a decision as it is an answer.

Since AI is unable to make meaningful decisions, truly understand the outcomes of its decisions, or take ownership of and follow up on its decisions, any “answer” that an AI gives concerning a real-world situation can never be more than a theoretical recommendation which lacks true human understanding, personal responsibility, and commitment. Regardless of the data and processing power provided to an AI system, it will always lack the consciousness and experiential understanding to grasp the full meaning and impact of its recommendations.

Therefore, while AI can aid in data analysis and offer suggestions, it is ultimately up to humans to make meaningful decisions, take responsibility for them, and see them through to fruition. The value of AI lies in its ability to enhance human capabilities, not replace the uniquely human aspects of decision-making that involve ownership, personal judgment, and the commitment to follow through.

By understanding and respecting the boundaries of what AI can and cannot do, we position ourselves to make better decisions that are informed by data but grounded in human experience, responsibility, and the ability to act upon our choices in pursuit of a better future.
