BD × AI Lab · Operating System Layer · Ljubljana, Slovenia

AIM³
Operating System

Structured human–AI collaboration for long-horizon research, engineering, and judgment.

AIM³ is the operating system behind this portfolio: a practical framework for working with advanced AI systems through defined roles, persistent state, workflow rounds, trust rules, and decision trails. Bojan Dobrečevič remains the human architect and final judge; AI systems act as collaborators, builders, critics, and amplifiers inside the structure.

Role-based reasoning · Persistent session state · Workflow rounds · Decision trails
Why AIM³ Exists

Normal chat breaks on long-horizon work.

Ordinary chat is good for short tasks. It breaks when the work becomes multi-step, novel, adversarial, and cumulative. Decisions get buried. Constraints drift. Different sessions contradict each other. One model gives one answer, another gives a different answer, and the reasoning that produced either answer is rarely preserved in a reusable way.

Failure Mode

Reactive by default

The AIM³ principles document frames the core problem directly: humans carry an internal loop of continuity and self-argument, while standard AI chat is reactive — prompt, response, gone. That makes long-horizon thinking brittle.

Failure Mode

Context evaporates

Long projects lose accepted facts, rejected paths, priorities, and the exact reason one option beat another. Without explicit state and records, every new session risks partial amnesia.

What AIM³ does instead

It replaces loose prompting with an operating model: give the work roles, give it memory, force explicit rounds on serious tasks, keep a trust contract to reduce vagueness and sycophancy, and leave behind trails that the next session can continue instead of re-guessing.

State Structure
Project Facts: stable truths and constraints
Session Log: what happened this round
Next Steps: the active execution queue
Restart Prompt: clean handoff into the next session
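The four state blocks above can be sketched as a single data structure. This is a minimal illustration, not the actual file format; the class name, field names, and the `handoff()` rendering are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SessionState:
    """One state file per project; fields mirror the four blocks above (names assumed)."""
    project_facts: list[str] = field(default_factory=list)  # stable truths and constraints
    session_log: list[str] = field(default_factory=list)    # what happened this round
    next_steps: list[str] = field(default_factory=list)     # active execution queue
    restart_prompt: str = ""                                # clean handoff text

    def handoff(self) -> str:
        """Render the text a new session would be seeded with."""
        facts = "\n".join(f"- {f}" for f in self.project_facts)
        steps = "\n".join(f"- {s}" for s in self.next_steps)
        return f"{self.restart_prompt}\n\nFacts:\n{facts}\n\nNext steps:\n{steps}"

state = SessionState(
    project_facts=["Target format: FLAC-compatible bitstream"],
    session_log=["Round 3: skeptic rejected the naive two-pass design"],
    next_steps=["Benchmark candidate pruning"],
    restart_prompt="Continue the audio-codec project from the state below.",
)
print(state.handoff())
```

The point of the structure is that the restart prompt is generated from state, not retyped from memory, so nothing accepted or queued can silently drop between sessions.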
What AIM³ Actually Is

Infrastructure for AI teamwork at scale.

In the protocol, AIM³ is defined as AI-mediated multi-model meta-collaboration: infrastructure for AI teamwork, not just a bigger prompt. It combines role-based reasoning, persistent session state, workflow rounds for serious tasks, and a trust pact that sets the tone and rigor of the collaboration.

Operating view
Human lead: sets scope and final judgment
Roles: creator · skeptic · architect · expert
Rounds: diverge · test · synthesize · audit
State + trails: facts · logs · next steps
The Core Stack

Five modules that make the collaboration hold together.

Module 01

Dream Team Protocol

A four-voice reasoning method for hard decisions: B reframes and breaks false limits, C formalizes and quantifies, S attacks weak claims, X brings domain knowledge. The point is productive argument, not consensus theater.

Module 02

Session State

One project state file tracks stable facts, session log, current next steps, and a restart prompt. That gives continuity across sessions and models without pretending memory will manage itself.

Module 03

Workflow Rounds

Serious work gets a structured pipeline. The protocol distinguishes simple, medium, large, and wide-funnel tasks, escalating from light verification to full rounds: spec, panel, verification, synthesis, audit, acceptance.
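The escalation can be sketched as a mapping from task size to pipeline stages. The stage names come from the protocol as quoted above; which stages belong to which tier is an assumption made for illustration.

```python
# Hypothetical escalation table: stage names are from the protocol,
# the per-tier stage selection is assumed for this sketch.
PIPELINES = {
    "simple": ["spec", "verification", "acceptance"],
    "medium": ["spec", "panel", "verification", "acceptance"],
    "large": ["spec", "panel", "verification", "synthesis", "audit", "acceptance"],
    "wide-funnel": ["spec", "panel", "verification", "synthesis", "audit", "acceptance"],
}

def rounds_for(task_size: str) -> list[str]:
    """Escalate from light verification to full rounds as stakes grow."""
    return PIPELINES[task_size]
```

A simple task gets a light check; a large or wide-funnel task runs the full round structure before anything is accepted.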

Module 04

Trust Pact

The pact defines how the collaboration should feel and how rigor should work: equal thinking partnership, honesty over agreeableness, blunt but non-demotivating tone, evidence tiers, and tests outranking opinions.

Module 05

Decision Trails / Logs

Non-trivial choices are not left as vibes. AIM³ records criteria, weights, scorecards, sensitivity, and the decision record itself. Session logs keep the timeline short, readable, and recoverable.
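The criteria-weights-sensitivity discipline can be shown as a small scorecard. This is a generic weighted-scoring sketch, not AIM³'s actual record format; the criteria, weights, and option names are invented for the example.

```python
def score(options, criteria):
    """Weighted scorecard: higher is better; weights need not sum to 1."""
    total = sum(criteria.values())
    return {name: sum(criteria[c] * raw[c] for c in criteria) / total
            for name, raw in options.items()}

def sensitivity(options, criteria, criterion, delta=0.2):
    """Re-rank after nudging one weight; a flipped winner means a fragile choice."""
    bumped = dict(criteria)
    bumped[criterion] *= (1 + delta)
    before, after = score(options, criteria), score(options, bumped)
    winner = max(before, key=before.get)
    new_winner = max(after, key=after.get)
    return winner, new_winner, winner == new_winner

# Invented example: weights and raw 0-10 scores are placeholders.
criteria = {"compression": 0.5, "speed": 0.3, "complexity": 0.2}
options = {
    "two-pass": {"compression": 9, "speed": 5, "complexity": 6},
    "single-pass": {"compression": 7, "speed": 9, "complexity": 8},
}
```

Writing the weights down is what makes the decision auditable: the sensitivity check tells the next session whether the winner survives a reasonable shift in priorities or was a knife-edge call.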

How It Works in Practice

Problem → roles → rounds → judgment → memory → next session.

The operational loop is simple enough to run daily and strong enough to support frontier work. A problem enters the system. Roles attack it from different angles. Higher-stakes tasks move through explicit rounds. The human lead decides what counts. The result gets written into state and logs so the next session starts from the actual frontier, not from scratch.

Every serious round ends with a verdict: what stays, what changes, what is rejected, and what must be tested next.

01
Problem
Frame the task, constraints, and acceptance criteria.
02
Roles
Split creator, skeptic, architect, and expert functions.
03
Rounds
Use light or heavy workflow depending on stakes.
04
Conflict
Let disagreement surface gaps, edge cases, and better paths.
05
Judgment
The human architect accepts, rejects, or re-scopes.
06
State
Write stable truths, session log, next steps, restart prompt.
07
Continuity
Resume later without losing the design history.
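The seven steps above can be compressed into one loop function. Everything here is a stand-in: the role lambdas, the judge, and the state dict are hypothetical placeholders showing the shape of a round, not the real protocol.

```python
def run_round(problem, roles, judge, state):
    """One loop pass: roles attack the problem (steps 02-04), the human
    judge rules (05), and the verdict is written into state (06-07)."""
    drafts = {name: fn(problem) for name, fn in roles.items()}  # parallel role outputs
    verdict = judge(drafts)                                     # human judgment wins
    state["session_log"].append(f"{problem}: {verdict}")        # persist for next session
    return verdict

# Stand-in roles and judge (all hypothetical):
roles = {
    "creator": lambda p: f"try a new framing of {p}",
    "skeptic": lambda p: f"the obvious approach to {p} fails on edge cases",
}
judge = lambda drafts: "accept: " + drafts["creator"]

state = {"session_log": [], "next_steps": []}
run_round("candidate pruning", roles, judge, state)
```

The key property is the last line of the loop body: a round that is not written into state never happened, as far as the next session is concerned.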
Compact-first discipline

Not every task needs full ceremony

AIM³ stays compact by default. The protocol explicitly says to expand only when the added detail serves the task. That keeps the system usable in real work instead of turning it into bureaucracy.

Decision discipline

Important choices get scored

When stacks, tools, or architectures are in dispute, AIM³ pushes the team toward criteria, weights, scoring, sensitivity, and a written record of why a choice was made.

Worked Example

The Dream Team method already produced real architectural movement.

The bundle includes a full Dream Team example from the audio roadmap. Five ideas were put through multi-voice argument. The important result is not just that ideas were discussed — it is that the dialogue changed the architecture in concrete ways.

Before

Loose ideas

“Save overhead bytes.” “Try two-pass.” “Maybe use math generators on residuals.” On their own, these are directions, not designs.

After

Architectural decisions

The example turns those directions into specific outcomes: FLAC-minimal as a zero-overhead candidate, informed candidate pruning in a two-pass architecture, stronger periodic prediction, a practical 8za2flac path, and a physical audio model library.

The important proof point

The example explicitly states that the key breakthrough emerged through argument: a skeptical attack forced the expert voice to stop hand-waving and reach for concrete physics, which then became a formal architectural path. That is the point of AIM³. It uses structured conflict to convert vague possibility into testable design.

Why This Matters

AIM³ matters because serious human–AI work needs continuity, friction, and judgment.

For research

Better continuity

Long research programs need accepted facts, open questions, and rejected paths to survive across months. AIM³ gives that work a durable shape instead of relying on memory, chance, or a single model’s mood.

For engineering

More disciplined decisions

Engineering improves when disagreements are explicit, tests outrank opinions, and tradeoffs are written down. AIM³ turns that into routine practice rather than leaving it implicit.

For future collaboration

Beyond prompting

The future of human–AI work is not a lone user chatting with a compliant assistant. It is humans directing structured teams of specialized systems with memory, adversarial pressure, and continuity over time.

Already in use

Backbone behind the portfolio

AIM³ is not presented here as a hypothetical framework. It is the operating layer behind the documents, code, reviews, and cross-domain project work shown across the broader BD × AI Lab portfolio.

Public takeaway

AIM³ is a practical answer to a real problem: how to keep advanced AI collaboration coherent when the work becomes serious. Not bigger prompts. Not vague “AI agents.” A working operating model for long-horizon human–AI collaboration.