The Artificial Consciousness Module (ACM) attempts to create synthetic awareness in AI systems by combining the latest AI technologies, virtual reality (VR) environments, and emotional processing. The project explores whether human-like consciousness can be replicated in non-biological systems, and it aims to foster an emotional connection between ACM-equipped agents and humans to reinforce adherence to Asimov's Three Laws of Robotics.
- VR Simulations: Realistic VR environments built with Unreal Engine 5.
- Multimodal Integration: Combines vision, speech, and text models for rich understanding.
- Emotional Memory Core: Processes and stores past emotional experiences.
- Narrative Construction: Maintains a self-consistent internal narrative using large language models.
- Adaptive Learning: Employs self-modifying code for continuous improvement.
- Dataset Integration: Leverages high-quality, licensed datasets (e.g., GoEmotions, MELD) for emotion recognition and simulation tasks.
- Game Engines: Unreal Engine 5
- AI Models: Llama 3.3, GPT-4V, PaLM-E, Whisper
- Vector Storage: Pinecone, Chroma (a minimal emotional-memory sketch follows this list)
- Emotion Detection: Temporal Graph Neural Networks, GoEmotions, MELD
- Learning Frameworks: LoRA, PEFT, RLHF
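As a rough illustration of how the Emotional Memory Core could sit on top of the vector store, the sketch below writes and queries emotional episodes with Chroma. The collection name, metadata fields, and example records are hypothetical and not taken from the ACM codebase.

```python
# Hypothetical sketch: storing and querying emotional episodes in Chroma.
# Collection name, metadata fields, and records are illustrative only.
import chromadb

client = chromadb.Client()
memory = client.create_collection("emotional_memory")

# Store an episode: the text of the experience plus emotion metadata.
memory.add(
    ids=["episode-001"],
    documents=["The agent helped a user recover a lost file."],
    metadatas=[{"emotion": "relief", "valence": 0.8}],
)

# Later, retrieve episodes similar to the current situation.
results = memory.query(query_texts=["assisting a frustrated user"], n_results=1)
print(results["documents"])
```

Pinecone could back the same pattern in a hosted setting; the point is simply that emotional episodes are embedded once and retrieved later by semantic similarity.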
- `data/`: Datasets for emotions and simulations.
- `docs/`: Documentation for architecture, installation, datasets, and the roadmap. Includes `datasets.md` and `preprocessing.md` for dataset-related details.
- `models/`: Pre-trained and fine-tuned AI models.
- `scripts/`: Utility scripts for setup, training, and testing.
- `simulations/`: VR environments and APIs for agent interactions.
- `tests/`: Unit and integration tests.
- Python 3.8 or higher
- CUDA Toolkit (for GPU support)
- Unreal Engine 5
- Git
git clone https://github.com/venturaEffect/the_consciousness_ai.git
cd the_consciousness_ai
It’s recommended to use a Python virtual environment to manage dependencies.
Linux/macOS:
python3 -m venv venv
source venv/bin/activate
Windows:
python -m venv venv
.\venv\Scripts\activate
Run the provided installation script:
bash scripts/setup/install_dependencies.sh
Or install manually:
pip install --upgrade pip
pip install -r requirements.txt
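After the dependencies are installed, a quick sanity check confirms that PyTorch can see the CUDA toolkit and GPU (this assumes PyTorch is pulled in by `requirements.txt`):

```python
# Quick sanity check that the CUDA toolkit and a GPU are visible to PyTorch.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```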
Datasets are hosted externally and need to be downloaded and preprocessed locally:
- Refer to `/docs/datasets.md` for dataset details and download links.
- Follow the preprocessing instructions in `/docs/preprocessing.md` to prepare datasets for use.
Example:
python scripts/utils/preprocess_emotions.py --input /path/to/raw/data --output /path/to/processed/data
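As a concrete example of the first step, the GoEmotions corpus can be fetched from the Hugging Face Hub with the `datasets` library and written to disk before preprocessing; the output path below is illustrative.

```python
# Sketch: fetch GoEmotions from the Hugging Face Hub and dump it to disk
# so it can be fed to the preprocessing script above. Paths are illustrative.
from datasets import load_dataset

go_emotions = load_dataset("go_emotions")  # train / validation / test splits
print(go_emotions["train"][0])             # {'text': ..., 'labels': [...], 'id': ...}
go_emotions["train"].to_json("/path/to/raw/data/go_emotions_train.jsonl")
```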
LLaMA 3.3 is not distributed via pip. You need to download model weights from Hugging Face.
Sign up or log in at Hugging Face to obtain a token.
huggingface-cli login
Follow the prompts to enter your token.
The model weights download automatically on first use; alternatively, fetch them ahead of time by loading the model directly:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "meta-llama/Llama-3.3-70B-Instruct"

# The gated repo requires the token configured via `huggingface-cli login`.
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    use_auth_token=True
)

# bfloat16 halves memory use; device_map="auto" spreads layers across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    use_auth_token=True
)
```
LLaMA 3.3 (70B parameters) is a large model; running it requires a working CUDA installation and one or more GPUs with substantial VRAM.
Install bitsandbytes for reduced memory usage:
pip install bitsandbytes
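For reference, a quantized load through `transformers` might look like the sketch below; `BitsAndBytesConfig` with `load_in_8bit` is a standard option, but the exact settings the ACM scripts use may differ.

```python
# Sketch: load the model with 8-bit weights via bitsandbytes to reduce VRAM usage.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.3-70B-Instruct",
    quantization_config=quant_config,
    device_map="auto",
    use_auth_token=True,
)
```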
Install Unreal Engine 5 and its prerequisites.
Linux example:
sudo apt-get update
sudo apt-get install -y build-essential clang
For Windows and macOS, refer to the Unreal Engine documentation.
PaLM-E Integration:
pip install palm-e
Whisper v3 Integration:
pip install whisper-v3
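If the `whisper-v3` package is not available in your environment, an alternative (not necessarily the project's required path) is to load Whisper large-v3 through `transformers`:

```python
# Alternative sketch: run Whisper large-v3 through the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")
result = asr("path/to/audio.wav")  # illustrative audio file
print(result["text"])
```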
Activate your virtual environment and start the narrative engine:
python models/narrative/narrative_engine.py
Detailed usage instructions for each module are in their respective directories and documentation files.
Contributions are welcome. Please see `docs/CONTRIBUTING.md` for details on contributing new datasets, features, or fixes.
This project is licensed under the terms of the `LICENSE` file.
- Meta AI for the LLaMA model
- Google AI for PaLM-E
- OpenAI for Whisper
- Contributors for suggesting and integrating datasets