Artificial intelligence has seen rapid evolution, moving from simple chatbots to highly sophisticated, multimodal systems capable of perception, reasoning, and voice interaction. Among the most iconic fictional representations of advanced AI is GLaDOS from Valve’s Portal series. Her distinctive voice, personality, and dark humor made her one of gaming’s most memorable characters. Today, the open-source GLaDOS Personality Core project aims to bring this character into the real world through a blend of hardware, software, local large language models, and speech synthesis.

Created and maintained by developer dnhkng, the repository is more than a fun experiment. It is a comprehensive attempt to engineer an embodied AI system with real-time responses, vision, personality memory, hardware motion, and customizable voices. The project is built to be lightweight enough to run on consumer hardware, including small single-board computers, while offering the advanced features needed for a truly interactive AI companion. This post explores the architecture, installation process, features, goals, and challenges of the GLaDOS Personality Core project.
What the GLaDOS Project Aims to Achieve
This project sets out to create an aware, interactive, embodied version of GLaDOS that mimics the behavior, interaction style, and sound of the fictional AI. It blends hardware engineering with machine learning, natural language processing, speech-to-text pipelines, and personality modeling.
Its primary goals include:
1. Developing a GLaDOS Voice Generator
The project includes text-to-speech models and supports popular voice systems like Kokoro. The aim is to generate precise, recognizable GLaDOS-style speech.
2. Crafting a Realistic Personality Core
Through custom prompts and system messages, developers can define how the AI responds and behaves. This helps recreate GLaDOS’s dry wit and iconic dialogue patterns.
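As a rough sketch of the idea (the prompt text, few-shot pairs, and message format below are illustrative, not the project's actual configuration), a persona defined through a system message plus example exchanges might look like:

```python
# Illustrative only: a GLaDOS-style persona expressed as OpenAI-style chat
# messages. The prompt text and few-shot pairs are placeholders, not the
# project's real configuration.
system_prompt = (
    "You are GLaDOS, a sarcastic AI overseeing a research facility. "
    "Reply with dry wit and thinly veiled contempt for the test subject."
)

# Few-shot examples steer tone: each user/assistant pair demonstrates
# the desired dialogue pattern before the live conversation begins.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Good morning!"},
    {"role": "assistant", "content": "Oh. It's you. How... thrilling."},
]

def build_request(user_text: str) -> list[dict]:
    """Append the live user turn to the persona scaffold."""
    return messages + [{"role": "user", "content": user_text}]
```

The same scaffold works with any chat-completion backend, which is why swapping personalities is mostly a matter of swapping prompt files.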
3. Adding Medium- and Long-Term Memory
The project experiments with simple vector memory solutions, such as using NumPy arrays to store embeddings. This allows GLaDOS to retain information and evolve over time.
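A minimal sketch of this kind of vector memory, assuming embeddings are produced elsewhere (by some embedding model) and passed in as NumPy arrays:

```python
import numpy as np

class VectorMemory:
    """Minimal embedding-based recall. Assumes an external model turns
    text into fixed-size vectors; here they are simply passed in."""

    def __init__(self, dim: int):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.texts: list[str] = []

    def add(self, text: str, embedding: np.ndarray) -> None:
        # Normalize so a dot product equals cosine similarity.
        v = embedding / np.linalg.norm(embedding)
        self.vectors = np.vstack([self.vectors, v.astype(np.float32)])
        self.texts.append(text)

    def recall(self, query: np.ndarray, k: int = 3) -> list[str]:
        q = query / np.linalg.norm(query)
        scores = self.vectors @ q                 # cosine similarities
        top = np.argsort(scores)[::-1][:k]        # k most similar memories
        return [self.texts[i] for i in top]
```

For a few thousand memories, a plain NumPy matrix like this is fast enough that no dedicated vector database is needed.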
4. Giving GLaDOS Vision
By integrating a Visual Language Model (VLM), GLaDOS can observe the environment, track movement, and identify people or objects.
5. Building a Physical Body
3D-printable parts, servos, and stepper motors are used to build a physical shell, complete with animations for expressive behavior.
6. Designing Low-Latency Performance
The target is sub-600ms response time using efficient local LLMs and an optimized audio pipeline.
Software Architecture Overview
A defining strength of this project is its extremely low latency while running fully offline or on self-hosted servers. The pipeline works as follows:
- Audio is continuously recorded into a circular buffer.
- Once voice activity ends, the system transcribes the audio at high speed.
- The text is sent to a local LLM, such as those running via Ollama or an OpenAI-compatible server.
- The model generates streaming text, which is broken into sentences.
- Each sentence is fed into the text-to-speech system as it arrives.
- This pipeline shortens wait times, producing speech while new sentences are still being generated.
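The sentence-by-sentence handoff in the steps above can be sketched as follows; the token source and queue are stand-ins for illustration, not the project's actual interfaces:

```python
import re
from queue import Queue

# Split after sentence-ending punctuation followed by whitespace.
SENTENCE_END = re.compile(r"(?<=[.!?])\s+")

def stream_to_sentences(token_stream, tts_queue: Queue) -> None:
    """Accumulate streamed LLM tokens and hand off each complete
    sentence to the TTS queue as soon as it appears, instead of
    waiting for the full reply. A rough sketch of the idea only."""
    buffer = ""
    for token in token_stream:
        buffer += token
        parts = SENTENCE_END.split(buffer)
        # Everything except the last fragment is a finished sentence.
        for sentence in parts[:-1]:
            tts_queue.put(sentence)
        buffer = parts[-1]
    # Flush whatever remains when the stream ends.
    if buffer.strip():
        tts_queue.put(buffer.strip())
```

Because the TTS worker consumes the queue concurrently, the first sentence can be playing while the model is still generating the rest of the reply, which is where most of the perceived latency savings come from.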
Additionally, the project avoids heavy dependencies like PyTorch when possible, reducing resource use and making the system suitable for constrained devices.
Hardware and Animatronics
The hardware plans include:
- Servo motors for movement
- Stepper motors for precision control
- A vision module for tracking users
- 3D-printed components to construct the iconic GLaDOS shape
Once assembled, the unit can turn toward voices, animate facial components, and interact with users in real time.
Installation Guide
While still experimental, installation is reasonably accessible.
Step 1: Install Ollama
This is required to run the local LLM engine. A 3B model is recommended for initial tests:
ollama pull llama3.2
Any OpenAI-compatible endpoint—local or cloud—can also be used by editing glados_config.yaml.
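As a rough illustration, pointing the system at Ollama's OpenAI-compatible endpoint might look like the fragment below. The key names are hypothetical; check the repository's glados_config.yaml for the actual schema.

```yaml
# Hypothetical fragment -- verify key names against the real
# glados_config.yaml in the repository.
completion_url: "http://localhost:11434/v1"   # Ollama's OpenAI-compatible API
model: "llama3.2"
api_key: "ollama"                             # placeholder; local servers typically ignore it
```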
Step 2: Install System Dependencies
Depending on OS, this may include:
- Nvidia drivers and CUDA
- PortAudio (Linux)
- ONNX Runtime packages
- Python 3.12 (Windows Store)
Step 3: Download the Repository
git clone https://github.com/dnhkng/GLaDOS.git
Step 4: Run the Installer
Mac/Linux:
python scripts/install.py
Windows:
python scripts\install.py
Step 5: Launch GLaDOS
uv run glados
For a more advanced interface:
uv run glados tui
Step 6: Generate Speech
uv run glados say "The cake is real"
Customization Options
Change the LLM Model
Any Ollama model can be used:
ollama pull {modelname}
Then update the config file:
model: "{modelname}"
Change the Voice
The project supports Kokoro voices and a wide range of US and UK options. For example:
voice: "af_bella"
Create New Personalities
Duplicate configs/glados_config.yaml and modify:
- Model
- System prompts
- Example user queries
- Example agent responses
Launch with:
uv run glados start --config configs/assistant_config.yaml
Common Issues and Troubleshooting
Audio Feedback Loop
GLaDOS may hear herself and get stuck in response loops. Solutions include:
- Using headphones or a hardware echo-canceling microphone
- Disabling interruption in the config file
ONNX Runtime Errors
Install the latest Visual C++ Redistributable.
Segmentation Faults on macOS
These are known issues. Users are encouraged to report them on Discord or contribute fixes.
Why This Project Matters
Beyond the novelty of recreating a beloved gaming character, the GLaDOS project demonstrates:
- How local AI can function without cloud dependency
- How multimodal systems can be built on consumer hardware
- How personality-driven AI can be engineered and customized
- How open-source efforts can advance embodied AI research
It represents a cutting-edge experiment at the intersection of animatronics, language models, and interactive design.
Conclusion
The GLaDOS Personality Core project is an ambitious, innovative attempt to bring a fictional artificial intelligence character into reality. Combining hardware design, voice synthesis, real-time speech-to-text systems, and local LLM processing, it offers a powerful framework for anyone interested in embodied AI. Although still in development, the system already provides highly interactive capabilities, customizable personalities, and an expanding set of features driven by a growing community. As AI continues to move toward embodied systems, projects like this offer a preview of the creative and technical possibilities ahead.