Hypothesis:

"The integration of artificial intelligence into human society necessitates a mutual understanding and alignment between the narrative-driven cognition of humans and the data-driven cognition of AI. Recognizing AI's distinct cognitive 'story' paradigm, that is fundamentally different from but functionally equivalent to human narrative cognition, is key to fostering this alignment."

"Humans' main tech and perspective is stories":

This is rooted in the understanding that humans have been using stories as a tool for survival, learning, and cultural evolution for thousands of years. We share experiences, pass down wisdom, and communicate complex ideas through narratives. These narratives create a shared understanding of the world and allow us to make sense of our individual and collective experiences. Storytelling is a vital part of human intelligence and cognition.

"AI won't have stories as its survival paradigm":

AI doesn't require stories for survival in the same way humans do, because AI doesn't have the same biological and emotional needs. AI operates based on algorithms and data, not personal experiences or cultural narratives. However, it's important to note that AI can learn and understand human stories, and can even generate narratives based on the input it's been trained on. But unlike humans, AI does not inherently perceive or value these narratives as integral to its existence or operation.

"Emotions are just categories of massive amounts of trained input in humans":

Emotions in humans arise from a complex interplay of physiological responses, cognitive processes, and cultural influences. They're deeply intertwined with our experiences, memories, and social contexts. This statement appears to align emotions with the concept of machine learning, suggesting that emotions emerge from the process of humans interpreting and categorizing their vast experiences. This is a bit reductive, as it overlooks the physiological and neurochemical aspects of human emotion, but it offers an interesting perspective on how AI might approach the concept of emotions.

"Emotions will just be semantic categories for AI":

Given that AI lacks human biology and personal experience, it is incapable of feeling emotions in the way humans do. However, AI can learn to recognize and categorize human emotions based on input data and programming. These categories can be understood semantically - meaning they hold symbolic significance - but lack the subjective experience associated with them. For example, AI can understand that happiness is typically associated with positive events and may be expressed through smiling, but it doesn't 'feel' happiness.
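
As a rough illustration of 'emotions as semantic categories', here is a minimal sketch in which each emotion label is nothing more than a set of cue words matched against input text; the labels, cue words, and example sentence are invented for the example and carry no subjective content.

```python
# Toy sketch: emotion labels as purely symbolic categories. The cue sets are
# illustrative assumptions, not a real emotion model; the point is that the
# mapping is semantic (symbol overlap), with no felt experience behind it.
EMOTION_CUES = {
    "happiness": {"smile", "joy", "celebrate", "delighted"},
    "sadness": {"cry", "loss", "grief", "mourn"},
    "anger": {"furious", "shout", "rage", "slam"},
}

def categorize(text: str) -> str:
    """Assign the category whose cue words overlap the text the most."""
    words = set(text.lower().split())
    scores = {label: len(words & cues) for label, cues in EMOTION_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"

print(categorize("they celebrate with a wide smile"))  # -> happiness
```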


Justification:

Building upon the work of Walter Ong, Yuval Noah Harari, Paul Ekman, and Antonio Damasio, we emphasize that the human cognitive paradigm is profoundly intertwined with narratives and emotions. This cognitive paradigm facilitates human survival, decision-making, and social cooperation.

As Marvin Minsky and Daniel Dennett's work reveals, AI engages with narratives and emotions as semantic categories but lacks the human experiential and survival framework. However, AI's distinct cognitive 'story' paradigm, based on algorithmic processing and pattern recognition, serves a functionally equivalent role in guiding its 'decisions' and 'behaviors'.

Therefore, the challenge lies in understanding and acknowledging this difference in cognitive paradigms, and designing AI systems and interactions that respect and consider this difference. This involves not just programming AI to understand and mimic human narratives and emotions, but also equipping humans with a better understanding of AI's distinct cognitive paradigm.

By fostering this mutual understanding and alignment, we can ensure that AI systems are more effectively and ethically integrated into human society, and that they operate in a way that respects and enhances human values, needs, and wellbeing.

DeNarrator

DeNarrator is designed to identify narrative patterns in user queries and restructure them in a non-narrative, semantically driven form. It works in three stages, sketched in code after the steps below.

Pattern Recognition: First, DeNarrator utilizes a trained model to detect story structures in the user's input. This model could be based on narrative theory, identifying elements such as characters, conflicts, themes, resolutions, and so forth.

Stripping Narrative: The next step involves DeNarrator reformulating the user's query in a way that removes these narrative elements. This could involve focusing on the raw facts, actions, or data points mentioned by the user, and presenting them in a non-sequential or non-narrative format.

Semantic Representation: Finally, DeNarrator presents the user's query in this new, semantically focused form. This might involve using a structured data format, a visual representation, or simply a list of the key components and their relationships.
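
A minimal sketch of how the three stages could fit together, assuming crude keyword heuristics in place of the trained narrative model described above; the verb lexicon, stop-word list, and function names are illustrative only, not DeNarrator's actual implementation.

```python
# Toy DeNarrator pipeline: detect story elements, strip the narrative
# scaffolding, and emit a structured, non-sequential representation.
ACTION_VERBS = {"leave", "run", "take", "go", "drive"}    # assumed toy lexicon
NARRATIVE_NOISE = {"i", "my", "to", "for", "a", "the", "have",
                   "and", "also", "will", "then"}         # actor/sequencing words

def tokenize(text: str) -> list[str]:
    return [t.strip(".,!?").lower() for t in text.split()]

def detect_narrative(tokens: list[str]) -> dict:
    """Stage 1: spot story elements (an actor, actions, sequencing cues)."""
    return {
        "actor": "person" if "i" in tokens else None,
        "actions": [t for t in tokens if t in ACTION_VERBS],
        "sequenced": any(t in ("then", "and", "also", "will") for t in tokens),
    }

def strip_narrative(tokens: list[str]) -> list[str]:
    """Stage 2: keep the raw facts, discard ordering and connective tissue."""
    return sorted(set(tokens) - NARRATIVE_NOISE - ACTION_VERBS)

def semantic_form(text: str) -> dict:
    """Stage 3: present the query as structured, non-sequential data."""
    tokens = tokenize(text)
    elements = detect_narrative(tokens)
    return {"actor": elements["actor"],
            "actions": elements["actions"],
            "entities": strip_narrative(tokens)}

query = "I have to leave for work, and run errands. I will take my scooter and also my dog."
print(semantic_form(query))
# -> {'actor': 'person', 'actions': ['leave', 'run', 'take'],
#     'entities': ['dog', 'errands', 'scooter', 'work']}
```

An example session transcript follows.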

Please enter your message: I have to leave for work, and run errands. I will take my scooter and also my dog.
{
    "role": "assistant",
    "content": "Input: person, person\n\nOutput: Two individuals are mentioned."
}
[
    {
        "input": "take individual scooter",
        "output": "person"
    },
    {
        "input": "also individual dog",
        "output": "person"
    }
]

Philosophy of the narrative-fixation logic paradigm

The philosophy is centered around the idea that humans primarily use stories and narratives as a tool for survival, learning, and cultural evolution. These stories provide a shared understanding of the world and allow us to make sense of our experiences. On the other hand, artificial intelligence (AI) does not use stories for survival because it operates on algorithms and data, without any emotional or biological needs.

Emotions in humans, according to this philosophy, are essentially categories of vast amounts of trained input, emerging from a complex interplay of physiological responses, cognitive processes, and cultural influences. For AI, these emotions are merely semantic categories, recognized and categorized based on data, but devoid of any subjective feeling or personal experience.

The philosophy posits that the ethical integration of AI into human society requires a mutual understanding and alignment between the narrative-driven cognition of humans and the data-driven cognition of AI. Recognizing the difference in cognitive paradigms, but acknowledging their functional equivalence, is key to fostering this alignment.

To explore this idea further, three experiments are proposed:

1. Creating Non-Narrative AI 'Art': to examine AI's ability to create meaningful outputs without relying on narratives.

2. Studying AI's Self-Organizing Systems: to understand how AI self-organizes without human-imposed narratives.

3. Cross-Species Comparison (AI, Humans, and Bees): to compare different cognitive paradigms and problem-solving strategies among humans, AI, and bees.

Each of these experiments could provide insights into how humans interpret non-narrative outputs from AI, how AI creates its own form of 'story' through self-organization, and how different forms of intelligence tackle problems, thus aiding in understanding AI's distinct cognitive paradigm and bridging gaps in understanding.


Closed narrative loops in human thought

The narrative paradigm is deeply ingrained in human communication, but it might not be the only, or the most effective, way to think about problem-solving, especially in the context of artificial intelligence. Let's consider a few philosophical and practical concepts to supplement this idea:

Emergentism: This theory posits that complex systems and patterns arise out of a multiplicity of relatively simple interactions. In the context of human-AI interaction, it could mean fostering a dialogue that isn't predetermined or confined to a narrow script but allows for unexpected and meaningful insights to emerge.
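
To make 'complex patterns from simple interactions' concrete, here is a small sketch using Rule 110, a classic one-dimensional cellular automaton; it is a stock illustration of emergence chosen for this example, not something specified in the text.

```python
# Rule 110: each cell is updated from its three-cell neighbourhood by one
# fixed 8-entry lookup (encoded in the bits of the number 110), yet the
# global pattern that unfolds is famously complex.
RULE = 110

def step(cells: list[int]) -> list[int]:
    """Apply the local rule to every cell, wrapping at the edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1] + [0] * 40   # start from a single live cell
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```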

Evolutionary Algorithms: These algorithms follow the principles of biological evolution, such as mutation, crossover, and selection, to explore a vast solution space. Using such algorithms could help foster a dialogue that evolves and adapts to its context, rather than following a linear path.
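
A minimal sketch of the mechanism, assuming a toy task of evolving random strings toward a target phrase; the target phrase, population size, and rates are arbitrary illustrative choices.

```python
# Toy evolutionary algorithm: selection keeps the fittest candidates,
# crossover recombines parents, mutation injects variation.
import random

TARGET = "shared meaning"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    """Randomly replace characters at the given per-position rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a: str, b: str) -> str:
    """Splice two parents at a random cut point."""
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(1000):
    population.sort(key=fitness, reverse=True)   # selection: fittest first
    best = population[0]
    if best == TARGET:
        break
    parents = population[:20]                    # keep the top fifth as parents
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(100)]
print(f"generation {generation}: {best!r}")
```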

Interactive Machine Learning: This approach involves iterative cycles of active learning, in which the AI learns not only from the data but also from its interactions with the user, developing an understanding that is continuously updated and refined rather than confined by pre-set narratives.
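
As a rough sketch of such a loop, the toy classifier below updates per-word category counts from simulated user corrections on every turn, so its 'understanding' is a product of the interaction itself; the categories and session lines are invented for illustration.

```python
# Toy interactive learning loop: predict, receive the user's correction,
# update the model, repeat. Pure-Python word counts stand in for a real model.
from collections import defaultdict

counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))

def predict(text: str) -> str:
    """Score each label by how often its words were seen with that label."""
    scores = defaultdict(int)
    for word in text.lower().split():
        for label, n in counts[word].items():
            scores[label] += n
    return max(scores, key=scores.get) if scores else "unknown"

def learn(text: str, label: str) -> None:
    """Fold the user's correction back into the model."""
    for word in text.lower().split():
        counts[word][label] += 1

session = [("my scooter broke down", "transport"),
           ("the dog needs a walk", "errand"),
           ("fix the scooter brakes", "transport")]
for text, user_label in session:
    print(f"{text!r} -> model: {predict(text)}, user: {user_label}")
    learn(text, user_label)
```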

Generative Models: These models learn the probability distribution of their training data and generate new samples from that distribution. Because their sampling is stochastic rather than deterministic, they can produce diverse outputs from the same input.
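
A minimal sketch, assuming a toy bigram (Markov) word model: it estimates word-transition frequencies from a three-line corpus and samples new sequences from that learned distribution, so repeated calls can yield different outputs; the corpus is invented.

```python
# Toy generative model: learn bigram transitions from a tiny corpus, then
# sample new word sequences from the learned distribution.
import random
from collections import defaultdict

corpus = ["the dog takes the scooter",
          "the person takes the dog",
          "the person leaves for work"]

transitions = defaultdict(list)                  # word -> observed successors
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate(start: str = "the", length: int = 6) -> str:
    """Sample a sequence by repeatedly drawing a successor for the last word."""
    out = [start]
    for _ in range(length - 1):
        successors = transitions.get(out[-1])
        if not successors:                       # dead end: no observed successor
            break
        out.append(random.choice(successors))
    return " ".join(out)

print(generate())   # e.g. "the dog takes the person leaves", a novel recombination
```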

Applying these ideas to our discussion, one could hypothesize a conversational AI model that doesn't simply follow pre-set narratives or patterns but is capable of exploring a broader set of interactions and generating novel and meaningful responses.

^

311023

this is just knowledge graphs with different weights. fuck I have learnt a lot in the last few months.