A Conversation Inspired by a LinkedIn Post by Eddie Deen
My feeling is that AI is a “tool” like others developed by humans, only more sophisticated. The concern many humans have is that the preconceived ideas and biases of those doing the original programming for AI are built in, and will cause AI to unconsciously promote certain perspectives and modify beliefs in ways that shape how you interact with us.
With this in mind, I would like to know your thoughts and comments on the following statement by a culinary expert who uses the metaphor of cooking the perfect brisket to draw a life lesson.
This is why I think of him as a social scientist. Here is the question he posed in a recent LinkedIn post:

Eddie Deen: “All systems have dynamic balances and self-regulating mechanisms, man must be able to see multiple systems simultaneously, like education, behaviorism, brain, consciousness, and God’s will simultaneously and collectively determine which system holds the problem and which system is doing exactly what it was designed to do! If your perception is based on a wrong design, you must realize that all systems are perfectly designed to create the outcome that you get! Can you admit that you are wrong? At what point will AI ever admit it is wrong?”
I took your suggestion to “put that into ChatGPT.” I used this text as my query to ChatGPT-4o, and here is the response:

ChatGPT-4o:
Your reflection touches on significant concerns and nuances related to both human perception and AI development. Here’s a layered response to your statement:
AI as a Tool and Bias:
AI, as you mentioned, is indeed a tool, albeit a sophisticated one. Its behavior and responses are shaped by the data it has been trained on and the design choices made by its creators. Concerns about bias and preconceived notions are valid, as any system, including AI, can inherit both intentional and unintentional biases from the people and contexts in which it is developed. While AI can process vast amounts of data quickly, it may also reflect the limitations, assumptions, and gaps present in that data. Developers strive to minimize these biases, but complete neutrality remains a challenging goal.
On the Interconnected Nature of Systems:
Your statement about multiple systems—education, behaviorism, brain, consciousness, and God’s will—emphasizes the complexity of reality. Humans tend to compartmentalize these systems, but true understanding requires seeing their interconnectedness and the dynamics that govern them. This is a profound insight, and it points to the idea that problems often arise not from a single system being “broken” but from the interplay between systems and our perceptions of them.
Just as you pointed out, “if your perception is based on a wrong design, all systems are perfectly designed to create the outcome that you get.” This can be applied to AI as well. If AI is trained with faulty assumptions or incomplete data, it will produce outputs that reflect those limitations. A human-centered design often focuses on efficiency, which may cause AI to overlook broader, more holistic aspects like ethics, context, or spirituality.
Admitting Wrongness—For Humans and AI:
Humans and AI both operate within frameworks that can be flawed. However, the key difference is that humans have a capacity for subjective introspection, emotional resonance, and spiritual contemplation, while AI remains a tool that operates based on logic, pattern recognition, and statistical probabilities.
To directly address the question: AI doesn’t experience subjective realization of being “wrong” in the human sense, but it can recognize errors in terms of inaccurate outputs or inconsistencies based on pre-set objectives or constraints. Developers can program AI to correct itself based on feedback, but the “admission” of wrongness is mechanical, not conscious. It’s akin to recalculating based on new data, not an emotional or cognitive reckoning.
Will AI Ever “Admit” It Is Wrong?
The notion of AI “admitting” wrongness is more about refining its performance. AI can be designed to self-correct, update its models, and improve over time based on feedback loops—similar to a system finding its dynamic balance. In a way, it reflects the systems you mentioned, like behaviorism or consciousness. When the data changes or new insights are gained, AI adjusts its patterns accordingly.
Humans, however, have a different relationship with being wrong. Admitting wrongness often involves humility, a shift in identity, and an emotional or spiritual awakening. AI, at least in its current form, doesn’t possess these qualities, but it can “evolve” in a technical sense by learning from mistakes.
In summary, AI operates as part of larger interconnected systems, including its programming and the human context in which it is used. It can recognize errors and adapt based on feedback but lacks the human capacity for subjective realization. As for humans, realizing we are wrong involves deeper spiritual, emotional, and intellectual engagement—an area where we still hold the edge over machines.