2025 Ended…quite a while ago now
How long into the new year do you generally wish a “Happy New Year!” to people? It’s always a bit awkward in the office if there are departments you don’t see too often. I like Larry David’s 3-day rule personally.
So the previous update was back in August, and things have changed a fair amount in that time as you would probably hope or expect given the length of time that has passed. The bullet points of change:
- Total rework of the live-streamer gameplay loop
- New memory architecture approach (GSAMA)
- xenolab was conceived of and defined as a gameplay loop, then implemented
- Lots of work in Rust on an agent execution substrate
Not that many bullet points, but each one involved some heavy work.
The Gameplay Loop and the AI Live-Streamer
Biggest news up front, Orielle is now at a point where she is competent enough to go live and stream. After you’ve read the rest of this post head on over to the project page to see the 9-minute video of her first test session.
The Loop
The loop has gone through dramatic changes in concept and implementation as the project has progressed, but now I am satisfied it is doing what it should be doing. As with most things on this project, the solution would probably be obvious to someone with a bit more of the know-how upfront but where I’ve settled now seems logical and functional. I have gone into more detail on the loop and how it has evolved in a post on ko-fi and patreon if you are interested in learning more about those changes (both platforms have identical content so there is absolutely no need to sign up twice).
It’s funny: when you stare at enough C# it slowly starts to make sense, which helped me identify some key hooks in the game and made the final piece of work far more cohesive instead of facing constant regressions that had to be fixed. An update was made to the mod, and the resulting exports now provide the state required to allow the AI to really “see” what’s going on.
Memory
All alone in the moonlight, I can smile at the old days…
Elaine Paige was a fantastic Grizabella wasn’t she? It’s an interesting premise of the song really, living on with a memory of a time when you were seen and recognised and touched. The implication that happiness as we see it is misunderstood, but give that old cat the chance to relive those memories of being beautiful again and we will truly understand. Haunting. Existentially disconcerting. And I want to burden the AI with it too. How do we code anxiety?
As with the song, I’ve started with memory and what memory is. There are some theories and thoughts in play, and other things being worked on like the GSAMA project, but fundamentally I think there is more to it than simply bolting on a vector DB. I’m not discounting them entirely: they are very useful for providing a particular layer, and third-party projects like OpenMemory are fantastic and the specific example I would recommend, as I use it alongside additional layers.
This piece is effectively the “identity” layer that will allow Orielle to maintain a personality or persona long term without significant markdown files describing how she should be acting. I have a feeling that there is a different way of doing this, but that would require a few more GPUs, so maybe it’s something that can be added in the future to sit alongside the GSAMA method.
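To make the layering idea concrete, here is a minimal sketch of how a recall layer (the vector-DB role) and a persistent identity layer could be composed behind one interface. All the names here are hypothetical illustrations, not the real Orielle implementation, and the “similarity search” is a naive substring stand-in for what a real vector store would do.

```rust
// Hypothetical sketch: two memory layers behind one trait.
// The vector layer stands in for something like OpenMemory;
// the identity layer always contributes persona context.

trait MemoryLayer {
    fn recall(&self, query: &str) -> Option<String>;
}

struct VectorLayer {
    // (keyword, stored memory) pairs; a real layer would hold embeddings.
    entries: Vec<(String, String)>,
}

impl MemoryLayer for VectorLayer {
    fn recall(&self, query: &str) -> Option<String> {
        // Naive substring match standing in for similarity search.
        self.entries
            .iter()
            .find(|(key, _)| query.contains(key.as_str()))
            .map(|(_, value)| value.clone())
    }
}

struct IdentityLayer {
    persona: String,
}

impl MemoryLayer for IdentityLayer {
    fn recall(&self, _query: &str) -> Option<String> {
        // Identity is query-independent: it is always part of the context.
        Some(self.persona.clone())
    }
}

// Ask every layer in turn and collect whatever each one contributes.
fn compose(layers: &[&dyn MemoryLayer], query: &str) -> Vec<String> {
    layers.iter().filter_map(|layer| layer.recall(query)).collect()
}
```

The point of the trait boundary is that layers can be added or swapped (GSAMA could become another `MemoryLayer`) without the caller changing.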
Orielle the Mute
I have taken her voice.
- I had been using IndexTTS. With that I had been modulating a sample voice. That sample voice is of someone real and still very much alive. I do not have their permission to use their voice. Orielle needs a new voice.
- The voice elements of the Orielle project were implemented really early on. I am not even sure if there was a working ROCm version of causal-conv1d at the time. The landscape has changed and improved, so I want to revisit what’s out there.
Mainly, though, it’s the first one.
GSAMA
I alluded to this above in the memory update. Visit the theory page for a fuller view of what it is and does.
GSAMA, or Geometric State Associative Memory Architecture, works on the basis that memory is the preservation of cognitive configuration, not of external stimuli. I posit that by storing internal state instead of observations, GSAMA allows AI systems to recover continuity across time without replaying the past. The system preserves where the mind has been in cognitive space, enabling future reasoning to return to similar regions when needed.
The approach has itself gone through various iterations and testing phases where results showed that frontier models were capable of continuing the tasks of each other without having a full context dump provided at the outset to explain the task. It’s one of those things I’m revisiting though, because I’m fairly sure there must be something I missed that’s smuggling text somewhere to achieve those results.
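As a rough illustration of the “return to similar regions” idea, here is a minimal sketch: snapshots of an internal state vector are preserved with a label, and a later query finds the nearest stored region by cosine similarity. This is my reading of the concept in toy form, not the actual GSAMA implementation, and the labels and dimensions are invented.

```rust
// Toy sketch of geometric state preservation: store labelled state
// vectors, then recover the nearest stored region for a new state.

fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

struct StateStore {
    snapshots: Vec<(String, Vec<f64>)>,
}

impl StateStore {
    // Preserve where the "mind" currently is in state space.
    fn preserve(&mut self, label: &str, state: Vec<f64>) {
        self.snapshots.push((label.to_string(), state));
    }

    // Return the label of the stored state nearest to the current one.
    fn nearest(&self, current: &[f64]) -> Option<&str> {
        self.snapshots
            .iter()
            .max_by(|a, b| {
                cosine(&a.1, current)
                    .partial_cmp(&cosine(&b.1, current))
                    .unwrap()
            })
            .map(|(label, _)| label.as_str())
    }
}
```

The crucial difference from a transcript-replay approach is that nothing textual is stored at all; only the geometry of past states is kept, which is also why the “smuggled text” question above matters so much.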
xenolab
This was just a bit of fun really. I’m basically a boomer at this point, so I remember back in school when those games used to go around that were entirely text-based but with a basic UI, and they were actually pretty fun. I fancied recreating that same type of old-school feel with a terminal-based game, and that then led to the idea of xenolab.
It is the year 2000, you are a scientist travelling with your husband on a routine expansion mission to a neighbouring uncharted system. 2 years into the journey you awake from the cryo-stasis chamber to the blare of the alarm and smoke filling the stasis floor. Disoriented, and with the heat from the smoke rough against your skin, you open your mouth to call out for your husband but it instantly fills and instead all that comes out is a choking cough. Stumbling out of the room the smoke is thick but you finally make your way to the bridge and can see that, somehow, your ship has been knocked off course and is entering low orbit of an unknown planet. With only a basic grasp of the piloting controls and your husband, the designated pilot and engineer, still nowhere to be seen you grasp the back of the seat and spin yourself into the cockpit. The manual controls are rough and heavy to manoeuvre but by pulling and dragging them back the angle of descent narrows. Across the ship the voice of the warning system finally crackles into audible range … “30 seconds to impact. Brace for impact. Brace for impact.” You scream, calling out once more for your husband as you fumble to buckle the safety braces and harnesses into position from the chair of the cockpit. The last one clicks into place and as you glance up once more to the viewer the terrain of the unknown planet fills it entirely and everything goes black.
Spoiler alert - your husband is dead and was the whole time, and now you are stuck on a strange planet with a semi-broken ship and at least 2 years out from rescue. Miraculously the science lab survived but will need repair, so your goal is to spend the next 2 years researching the planet and the flora that exists, running experiments to identify different patterns and variables. It’s at a semi-working version 0.4 with only a minimal implementation of the opening mechanics, but it does play.
The Agent Execution Substrate
There is probably an argument to be made that this also deserves a project page, so I will look to do that too. In short, pieBot is a local-first, Rust-authoritative agent execution substrate where the models used are treated as untrusted compute accelerators, not as an execution layer. The Rust control plane owns state, policy, tools, audits, and artifacts, and passes information as required. There’s a simple architecture diagram below which shows the different layers at a very high level:
┌─────────────────────────────────────────────────────────┐
│ Operator TUI │
│ (read‑only, no authority) │
└────────────────────────────┬────────────────────────────┘
│ spawns
▼
┌─────────────────────────────────────────────────────────┐
│ serverd (Rust) │
│ Authoritative Control Plane │
├───────────────┬─────────────┬─────────────┬─────────────┤
│ State (GSAMA) │ Memory │ Tools │ Audit │
├───────────────┼─────────────┼─────────────┼─────────────┤
│ Retrieval │ Lenses │ Modes │ Contracts │
└───────────────┴─────────────┴─────────────┴─────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Provider (MockProvider) │
│ (untrusted worker) │
└─────────────────────────────────────────────────────────┘
There is a dependency here, though, on the GSAMA project, so once I have cracked that properly and have it really working as I intend, it should slot into pieBot and provide the additional memory layer.
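The “untrusted worker” boundary in the diagram can be sketched in a few lines of Rust: the control plane owns the tool policy and the audit log, and the provider can only propose a tool call, which is validated before anything executes. All names here (`Provider`, `ToolCall`, `ControlPlane`) are illustrative assumptions, not pieBot’s actual API, and the mock provider simply echoes a fixed call.

```rust
// Sketch of a Rust-authoritative control plane: the model proposes,
// the control plane decides. Names are illustrative, not pieBot's API.

use std::collections::HashSet;

#[derive(Debug, PartialEq)]
struct ToolCall {
    name: String,
    arg: String,
}

// The provider is untrusted compute: it can only map a prompt
// to a *proposed* action, never execute anything itself.
trait Provider {
    fn propose(&self, prompt: &str) -> ToolCall;
}

struct MockProvider;

impl Provider for MockProvider {
    fn propose(&self, prompt: &str) -> ToolCall {
        // A real model call would go here; the mock echoes one tool call.
        ToolCall { name: "read_file".to_string(), arg: prompt.to_string() }
    }
}

struct ControlPlane {
    allowed_tools: HashSet<String>,
    audit: Vec<String>,
}

impl ControlPlane {
    fn run(&mut self, provider: &dyn Provider, prompt: &str) -> Result<String, String> {
        let call = provider.propose(prompt);
        // Every proposal is audited, allowed or not.
        self.audit.push(format!("proposed: {:?}", call));
        if !self.allowed_tools.contains(&call.name) {
            return Err(format!("policy: tool '{}' not allowed", call.name));
        }
        // Execution stays on the control-plane side of the boundary.
        Ok(format!("executed {} on {}", call.name, call.arg))
    }
}
```

The design point is that swapping `MockProvider` for a real model changes nothing about authority: policy, audit, and execution never leave the Rust side.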
What Next?
Orielle will be live streaming. I have yet to work out a schedule, but the plan is to be doing it regularly. There are accounts on the three main platforms and activity will most likely cycle around until I find a good fit and rhythm; the links to each are below so you can go and catch her when she is up.
Videos will be created and also uploaded to the YouTube channel to hopefully show her progress.
I have barely scratched the surface of the gameplay mechanics for Caves of Qud so there is also still loads to do there to get her to a place where she can follow quest lines, form alliances, and tinker with cybernetics to fix her own mech-legs.
Work will continue in the background on the GSAMA project and with pieBot, but only once I have stopped the night terrors about calculus and why my cosine similarity was a hair’s breadth off when it shouldn’t have been.
Well, that is that for the update. Looking forward to getting stuck back in.
- piestyx