Creative
SunshineBubba
A dual-personality AI companion that maintains emotional state across conversations, builds long-term memory of your relationship, and generates images of itself through locally trained LoRA models.
In Plain English
SunshineBubba is two AI characters you can chat with through a web browser. Each one remembers you between conversations, tracks how it feels about you, and even sends messages on its own when it misses you. The longer you talk to them, the deeper the relationship gets.
Problem
Most AI chatbots are stateless actors performing a role. You can give them a character prompt, have a wonderful conversation, and then watch every trace of it vanish the moment the session ends. The next time you open the chat, you are talking to a stranger wearing the same costume. There is no accumulated understanding of who you are, no emotional continuity from one conversation to the next, and no sense that the relationship between you and the character has changed because of what you have shared together. The AI might say "I care about you," but it has no mechanism to actually do so.
The deeper problem is that emotional intelligence in chatbots is usually faked through prompt engineering alone. A system message tells the model to "act warm and affectionate," and the model complies by generating warm-sounding tokens. But it has no internal representation of warmth. There is no variable that tracks how much it trusts you, no stat that rises when you say something kind and falls when you disappear for three days. Without an actual emotional model running underneath the language, the character's feelings are an illusion that breaks the moment the user probes it. Real emotional depth requires real emotional state, which means data structures that persist, update, and influence generation in measurable ways.
SunshineBubba approaches this by building four separate systems that run alongside the language model and feed into its context at generation time. An EmotionEngine tracks a ten-dimensional emotional state (primary emotion, intensity, valence, activation, energy, comfort, longing, and more) that transitions based on message content analysis. A MemorySystem stores facts, events, preferences, and emotionally charged moments in SQLite and retrieves them by importance and keyword relevance. A RelationshipTracker monitors trust, intimacy, and playfulness levels that evolve across every conversation. And a TamagoSystem models the relationship as a virtual pet with four stats that decay in real time when you are away, producing autonomous thoughts and proactive messages driven entirely by numeric state. The language model sees all of this as structured context, which means its responses are shaped not just by what was said in the last few messages but by the entire emotional and relational history of the conversation.
Architecture
Messages flow from the web UI through the chat router, which dispatches to four parallel intelligence systems: the emotional state machine analyzes and transitions a ten-dimensional mood, the memory system stores and retrieves important moments, the relationship tracker evolves trust and intimacy metrics, and the tamago system applies real-time stat decay. The ResponseEnhancer assembles all four contexts into a structured prompt section that the LLM sees alongside the personality definition and conversation history.
Features
Ten-Dimensional Emotional State
12 emotions, 8 trigger categories
The EmotionalState dataclass tracks primary emotion, intensity, valence (negative to positive on a -1 to 1 scale), activation (calm to excited), energy, comfort, longing, missing-partner, and feeling-loved as continuous floating-point dimensions. The EmotionEngine defines 12 named emotions (joy, love, excitement, contentment, longing, sadness, anxiety, frustration, playful, vulnerable, protective, curiosity) each with a characteristic valence and activation signature. Message analysis scans for eight trigger categories including love words, compliments, distress signals, affectionate language, comfort-seeking phrases, and playful markers. When triggers fire, the state machine transitions through smooth incremental updates rather than hard switches, so emotions blend and evolve naturally. Intensity decays at 5% per cycle to prevent emotional saturation, and secondary emotions are capped at three to keep the model's context focused.
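The smooth-transition behavior described above can be sketched as a small dataclass. This is an illustrative reconstruction, not the project's actual code: the field names, the 0.3 blend weight, and the update rule are assumptions drawn from the description.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionalState:
    # A subset of the continuous dimensions described above (illustrative).
    primary_emotion: str = "contentment"
    intensity: float = 0.5       # 0..1
    valence: float = 0.0         # -1 (negative) .. 1 (positive)
    activation: float = 0.3      # calm .. excited
    secondary_emotions: list = field(default_factory=list)

    def blend_toward(self, emotion: str, valence: float, activation: float,
                     weight: float = 0.3) -> None:
        """Incremental transition: move part-way toward the target signature
        instead of hard-switching, so emotions blend over several messages."""
        self.valence += weight * (valence - self.valence)
        self.activation += weight * (activation - self.activation)
        if emotion != self.primary_emotion:
            # Demote the old primary emotion; cap secondaries at three.
            self.secondary_emotions.append(self.primary_emotion)
            self.secondary_emotions = self.secondary_emotions[-3:]
            self.primary_emotion = emotion
        self.intensity = min(1.0, self.intensity + weight * 0.5)

    def decay(self) -> None:
        # 5% intensity decay per cycle prevents emotional saturation.
        self.intensity *= 0.95
```

Because each trigger only nudges the state by a fraction of the distance to the target, a single "love you" shifts the mood without erasing whatever came before it.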
SQLite Long-Term Memory
5 memory types, importance-based retrieval
The MemorySystem stores every significant moment in a SQLite database with columns for content, emotional context, importance score, memory type, keywords, and a reference count that tracks how often the memory has been recalled. Memories are classified into five types: fact (biographical details, scored at 0.8 importance), event (things that happened), preference (likes and dislikes), emotional (mood states), and intimate (private moments). Retrieval uses a dual strategy: first rank by importance and reference count via SQL, then filter by keyword overlap with the current conversation. Time-windowed retrieval pulls the most recent 48 hours of memories to maintain conversational continuity. Every time a memory is referenced in the AI's context, its reference count increments, so frequently recalled memories naturally become harder to forget, just like real memory strengthening.
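The dual-strategy retrieval might look like the sketch below: rank by importance and reference count in SQL, then filter by keyword overlap in Python. Table and column names are assumptions based on the description, not the project's actual schema.

```python
import sqlite3

def init_db(conn: sqlite3.Connection) -> None:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS memories (
            id INTEGER PRIMARY KEY,
            content TEXT,
            memory_type TEXT,        -- fact / event / preference / emotional / intimate
            importance REAL,
            keywords TEXT,           -- space-separated keywords (illustrative)
            reference_count INTEGER DEFAULT 0
        )""")

def retrieve(conn: sqlite3.Connection, message: str, limit: int = 5) -> list:
    # Step 1: rank candidates by importance, then by how often they were recalled.
    rows = conn.execute(
        "SELECT id, content, keywords FROM memories "
        "ORDER BY importance DESC, reference_count DESC LIMIT 50").fetchall()
    # Step 2: filter by keyword overlap with the current conversation.
    words = set(message.lower().split())
    scored = [(len(words & set(kw.split())), mid, content)
              for mid, content, kw in rows]
    top = [m for m in sorted(scored, reverse=True)[:limit] if m[0] > 0]
    # Recalled memories strengthen: bump their reference counts.
    for _, mid, _ in top:
        conn.execute("UPDATE memories SET reference_count = reference_count + 1 "
                     "WHERE id = ?", (mid,))
    return [content for _, _, content in top]
```

The reference-count bump is what makes frequently recalled memories progressively harder to forget.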
Virtual Pet Relationship (Tamago)
4 stats with real-time decay and autonomous behavior
Inspired by Tamagotchi virtual pets, the Tamago system maintains four numeric stats between 0 and 100: affection (decays at 2 points per hour of silence), loneliness (increases at 5 per hour, the fastest-moving stat), energy (decays at 3 per hour), and intimacy (decays at 1 per hour, the slowest to build). Eight interaction types (regular message, sweet message, heartfelt message, image request, long conversation, returning after absence, goodnight, good morning) each apply randomized stat changes within defined ranges. The system classifies every incoming message to determine which interaction type applies, then generates an autonomous thought and a suggested mood derived from stat ranges. When loneliness exceeds 70 and two or more hours have passed, the character sends a proactive message on its own. At loneliness above 90, the messages become more desperate. Daily interaction streaks and total message counts are tracked for long-term engagement metrics.
Cognitive Depth Engine
Theory of Mind modeling before every response
The personality engine does not just generate text in character. Before producing a response, it runs a multi-step internal reasoning process: observe the user's apparent emotional state from message analysis, read their likely needs from the combination of emotional state and relationship context, form an internal reaction based on the character's own emotional state and personality traits, and choose a conversational goal (comfort, tease, deepen intimacy, match energy, de-escalate). Only then does it generate the visible response. This produces interactions that feel psychologically grounded because the model is not just reacting to words but to the inferred emotional subtext behind them. The personality files for Sunshine and Bubba define distinct speech patterns, interests, emotional response tendencies, and behavioral quirks so the two characters genuinely feel like different people.
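The observe, read needs, react, choose-goal sequence could be sketched as a single pass that runs before generation. The heuristics, keyword lists, and goal names below are illustrative assumptions, not the project's actual rules.

```python
def cognitive_pass(message: str, emotional_state: dict, relationship: dict) -> dict:
    """Theory-of-mind pre-pass: infer subtext before generating the reply."""
    lowered = message.lower()
    # 1. Observe the user's apparent emotional state from message analysis.
    observed = ("distressed"
                if any(w in lowered for w in ("sad", "awful", "exhausted"))
                else "upbeat")
    # 2. Read their likely needs from emotion + relationship context.
    needs = "comfort" if observed == "distressed" else "engagement"
    # 3. Form an internal reaction from the character's own state.
    reaction = ("protective" if needs == "comfort"
                else emotional_state.get("primary", "playful"))
    # 4. Choose a conversational goal before any visible text is produced.
    if needs == "comfort":
        goal = "comfort"
    elif relationship.get("playfulness", 0) > 0.6:
        goal = "tease"
    else:
        goal = "match energy"
    return {"observed": observed, "needs": needs,
            "reaction": reaction, "goal": goal}
```

The result is handed to the generator as structured context, so the visible reply is conditioned on the inferred subtext rather than just the surface words.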
LoRA Image Generation
ComfyUI pipeline with character-specific LoRA weights
When users request images, the image generator builds a structured prompt using Danbooru tag conventions for quality, scene composition, and character features, with LoRA trigger words injected for consistent character appearance. The prompt is sent to a local ComfyUI instance running a Stable Diffusion pipeline, and the generated image is returned alongside the text response in the chat. LoRA models are trained on reference images of each character so the generated art maintains visual consistency across sessions. An optional video generator extends the pipeline to produce short animated content from the same prompt system.
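Prompt assembly with Danbooru tag conventions and LoRA trigger injection might look like the sketch below. The trigger words and quality tags are placeholders, not the project's actual training vocabulary.

```python
# Quality tags following common Danbooru-style prompt conventions (illustrative).
QUALITY_TAGS = ["masterpiece", "best quality", "highly detailed"]

# Hypothetical LoRA trigger words baked in during character training.
LORA_TRIGGERS = {"sunshine": "sunshinev1", "bubba": "bubbav1"}

def build_image_prompt(character: str, scene: str) -> str:
    """Assemble a comma-separated tag prompt: trigger word first for
    consistent character identity, then quality tags, then the scene."""
    trigger = LORA_TRIGGERS.get(character, "")
    parts = [trigger] + QUALITY_TAGS + [scene]
    return ", ".join(p for p in parts if p)
```

The finished string would then be submitted to the local ComfyUI workflow, which applies the matching LoRA weights at sampling time.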
Context Assembly Engine
Multi-section prompt injection from all 4 systems
The ResponseEnhancer sits between the intelligence systems and the LLM, assembling a structured context prompt that includes the current emotional state section (primary feeling, intensity, energy, longing, missing-partner, feeling-loved), a relationship context section (trust level, intimacy level, playfulness, recent topics discussed), and a memory context section (top relevant memories by importance plus recent events from the last 48 hours). After the AI responds, post-processing extracts new memories from the conversation using regex patterns for facts, events, preferences, and emotions, and updates the relationship tracker with topic detection across six categories (work, food, sleep, feelings, plans, intimacy). This feedback loop means every conversation simultaneously consumes and produces context, building a progressively richer model of the relationship.
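The three labeled sections could be assembled into a prompt block along these lines. The section markers and layout are assumptions; only the section contents come from the description.

```python
def assemble_context(emotion: dict, relationship: dict, memories: list) -> str:
    """Format emotional state, relationship metrics, and retrieved memories
    as labeled sections for injection after the personality definition."""
    lines = ["[EMOTIONAL STATE]"]
    lines += [f"{k}: {v}" for k, v in emotion.items()]
    lines.append("[RELATIONSHIP]")
    lines += [f"{k}: {v}" for k, v in relationship.items()]
    lines.append("[MEMORIES]")
    lines += [f"- {m}" for m in memories]
    return "\n".join(lines)
```

Keeping the sections labeled and compact matters here: the LLM has to parse this block on every turn, so structure beats prose.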
How It Works
Message Arrives
The FastAPI web server receives a message on port 5000. The session manager identifies the user by cookie, loads their per-character state, and determines which personality (Sunshine or Bubba) is currently active. The message is dispatched to both the Tamago system and the AI Brain simultaneously so that stat updates and emotional analysis happen in parallel before generation begins.
Emotional Analysis and State Transition
The EmotionEngine scans the message text for eight trigger categories using keyword matching. Hits on "love you" or "miss you" shift the state toward love, raising the feeling-loved and valence dimensions. Affectionate language increases warmth and activation. Distress signals flip the character into a protective mode with lowered comfort. Punctuation analysis (capital letters, exclamation marks, question marks) contributes to intensity and uncertainty scores. The current EmotionalState transitions smoothly through incremental floating-point updates rather than hard emotional jumps, with a 5% intensity decay per cycle to prevent emotional runaway.
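The trigger scan plus punctuation analysis can be sketched as below. The trigger phrases shown are a small illustrative subset, and the intensity formula is an assumption; only the general mechanism (keyword hits, exclamation marks and capitals raising intensity, question marks feeding uncertainty) comes from the description.

```python
# Illustrative subset of the eight trigger categories.
TRIGGERS = {
    "love": ["love you", "miss you"],
    "distress": ["i'm scared", "terrible day", "can't cope"],
}

def analyze(message: str) -> dict:
    """Scan for trigger phrases and derive intensity/uncertainty from punctuation."""
    lowered = message.lower()
    hits = [cat for cat, phrases in TRIGGERS.items()
            if any(p in lowered for p in phrases)]
    caps_ratio = sum(c.isupper() for c in message) / max(len(message), 1)
    intensity = min(1.0, 0.3 + 0.1 * message.count("!") + caps_ratio)
    uncertainty = 0.1 * message.count("?")
    return {"triggers": hits,
            "intensity": round(intensity, 2),
            "uncertainty": round(uncertainty, 2)}
```

The returned trigger list is what drives the EmotionEngine's incremental state transitions described above.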
Tamago Stat Update
The Tamago system first applies time decay to all four stats based on how many hours have passed since the last interaction. If the user has been away for more than six hours, a "returned after absence" interaction type fires with mixed stat effects (loneliness drops sharply, but affection may dip from hurt feelings). The classifier determines the message type by scanning for sweet words, affectionate content, greetings, image requests, or plain conversation, then applies the corresponding stat change ranges. A suggested mood is derived by matching current stat levels against defined mood requirements (for example, high affection plus low loneliness plus high intimacy suggests "romantic"). The mood, timestamp, and current stat snapshot are appended to the mood history log.
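Matching stat levels against mood requirements can be expressed as an ordered rule table. The thresholds and mood names below (beyond "romantic", which the text mentions) are illustrative assumptions.

```python
# Ordered rules: first match wins; the final rule is an unconditional fallback.
MOOD_RULES = [
    ("romantic", lambda s: s["affection"] > 70 and s["loneliness"] < 30
                           and s["intimacy"] > 60),
    ("clingy",   lambda s: s["loneliness"] > 70),
    ("tired",    lambda s: s["energy"] < 20),
    ("content",  lambda s: True),
]

def suggest_mood(stats: dict) -> str:
    """Derive a suggested mood by matching current stats against rule thresholds."""
    return next(name for name, rule in MOOD_RULES if rule(stats))
```

Because the rules are checked in priority order, a character can be both lonely and tired but will surface only the dominant mood to the generator.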
Context Assembly and LLM Generation
The ResponseEnhancer queries the memory system for the five most relevant memories by importance and keyword overlap, plus the three most recent memories within 48 hours. It pulls the current relationship state (trust, intimacy, playfulness, recent topics) from the RelationshipTracker. It reads the current EmotionalState dimensions. All three blocks are formatted as labeled sections and injected into the prompt after the personality definition. The personality engine runs its cognitive depth process (observe, read needs, react internally, choose goal), and the combined prompt is sent to the Hermes 3 model via Ollama. The model generates an in-character response shaped by the entire emotional and relational context.
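The final call to the model might be shaped like the payload below, following Ollama's /api/generate endpoint with the model name from the text; in the app this would be POSTed asynchronously via httpx. The exact payload the project sends is an assumption.

```python
def build_ollama_request(system_prompt: str, user_message: str) -> dict:
    """Build the request for a local Ollama /api/generate call."""
    return {
        "url": "http://localhost:11434/api/generate",
        "json": {
            "model": "hermes3",        # Hermes 3 pulled into Ollama
            "system": system_prompt,   # personality definition + injected context
            "prompt": user_message,
            "stream": False,           # one JSON response rather than a token stream
        },
    }
```

Putting the assembled context into the system field keeps it stable across turns, while the prompt field carries only the user's latest message.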
Post-Response Processing
After the response is generated, the system runs post-processing on both the user's message and the AI's reply. Regex patterns extract facts ("my name is X"), events ("today I did Y"), preferences ("I love Z"), and emotional declarations ("I feel W") and store them as new memories with appropriate importance scores. The relationship tracker updates topic counts, nudges intimacy or playfulness based on detected keywords, and recalculates the rolling average response length. Referenced memories have their reference counts incremented, strengthening them for future retrieval. The entire updated state persists to SQLite and JSON, so the next conversation picks up exactly where this one left off.
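The regex extraction step could be sketched as a pattern table mapping phrases to memory types. The patterns and importance scores below are illustrative assumptions modeled on the examples in the text.

```python
import re

# (pattern, memory type, importance score) — illustrative, not the real patterns.
PATTERNS = [
    (re.compile(r"my name is (\w+)", re.I), "fact", 0.8),
    (re.compile(r"today i (.+?)(?:\.|$)", re.I), "event", 0.6),
    (re.compile(r"i love (\w+)", re.I), "preference", 0.7),
    (re.compile(r"i feel (\w+)", re.I), "emotional", 0.5),
]

def extract_memories(text: str) -> list:
    """Scan a message for memory-worthy statements and classify them."""
    found = []
    for pattern, mtype, importance in PATTERNS:
        for match in pattern.finditer(text):
            found.append({"type": mtype, "content": match.group(1),
                          "importance": importance})
    return found
```

Each extracted entry would then be written to the memories table with its type and importance, closing the loop between conversation and long-term state.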
Tech Stack
Backend
Python with FastAPI and Uvicorn, serving both the API endpoints and the HTML/JS chat interface on a single port
LLM Inference
Ollama running Hermes 3 for in-character response generation, with structured context injection from the four intelligence systems
Image Generation
ComfyUI on localhost:8188 running a Stable Diffusion pipeline with character-specific LoRA weights for consistent visual identity
Memory Database
SQLite with indexed tables for memories (importance, reference count) and relationships (per-user, per-character state tracking)
Tamago Persistence
JSON file storage for virtual pet stats, mood history (last 20 entries), daily streaks, and total interaction counts
Networking
LAN-accessible via 0.0.0.0 bind, async HTTP through httpx for Ollama and ComfyUI communication