# AALang Probabilistic Turing Completeness Analysis

## Executive Summary
**Conclusion:** AALang is a computational language with a unique LLM-based execution model. This analysis evaluates whether AALang meets Santos' formal definition of a probabilistic Turing machine. AALang exhibits non-deterministic behavior through LLM reasoning, but whether it strictly satisfies Santos' formal requirements depends on whether LLM probabilities can be formalized as transition probabilities that satisfy probability normalization and the stay transition constraint. In practice, AALang programs executed by LLM agents can achieve substantial computational capabilities: file I/O provides unbounded persistent storage, significantly mitigating memory constraints, though active computation remains bounded by context window size.
## What is a Probabilistic Turing Machine?
A probabilistic Turing machine (PTM) is an extension of classical Turing machines that incorporates probabilistic behavior. In his 1969 paper “Probabilistic Turing Machines and Computability,” Santos defined a PTM as a tuple \( M = (Q, \Sigma, p) \), where:
- \( Q \) is the set of states
- \( \Sigma \) is the set of symbols
- \( p: Q \times \Sigma \times V \times Q \to [0, 1] \) is a probability function, with \( V = \{R, L, T\} \) representing the possible head movements: Right, Left, and Stay (denoted as T)
The probability function \( p \) satisfies the following conditions:
1. **Probability normalization**: For every state \( q \in Q \) and symbol \( \sigma \in \Sigma \), the sum of probabilities over all possible transitions equals 1:

\[ \sum_{v \in V} \sum_{q' \in Q} p(q, \sigma, v, q') = 1 \]

2. **Stay transition constraint**: For every symbol \( \sigma \in \Sigma \), if \( q \neq q' \), then \( p(q, \sigma, T, q') = 0 \).
The function \( p \) represents the conditional probability of the machine's next action, given its current state \( q \) and the symbol \( \sigma \) it is reading. This model allows the machine to exhibit stochastic behavior, enabling it to make random choices during computation.
Unlike deterministic Turing machines, which always produce the same output for a given input, probabilistic Turing machines can produce different outputs for the same input due to their probabilistic transitions.
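To make the definition concrete, here is a minimal Python sketch of Santos' model as stated above. The machine, its states, and its symbols are illustrative inventions, not anything drawn from AALang; the validator simply encodes the two conditions.

```python
import random

R, L, T = "R", "L", "T"  # head movements: Right, Left, Stay

def check_santos_constraints(p):
    """Validate normalization and the stay-transition constraint."""
    for (q, sigma), dist in p.items():
        # 1. Probability normalization: outgoing probabilities sum to 1.
        total = sum(dist.values())
        assert abs(total - 1.0) < 1e-9, f"not normalized at ({q}, {sigma})"
        # 2. Stay constraint: p(q, sigma, T, q') = 0 whenever q' != q.
        for (v, q_next), prob in dist.items():
            assert not (v == T and q_next != q and prob > 0), \
                f"stay constraint violated at ({q}, {sigma})"

def step(q, sigma, p):
    """Sample one probabilistic transition from state q reading sigma."""
    dist = p[(q, sigma)]
    moves, probs = zip(*dist.items())
    return random.choices(moves, weights=probs)[0]  # -> (v, q')

# Example: a two-state machine that flips a fair coin on symbol "0".
p = {
    ("q0", "0"): {(R, "q0"): 0.5, (R, "q1"): 0.5},
    ("q0", "1"): {(T, "q0"): 1.0},  # staying keeps the state
    ("q1", "0"): {(L, "q0"): 1.0},
    ("q1", "1"): {(R, "q1"): 1.0},
}
check_santos_constraints(p)
print(step("q0", "0", p))  # ('R', 'q0') or ('R', 'q1'), each with prob 0.5
```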
## AALang Architecture Overview
AALang (Actor-based Agent Language) is fundamentally different from traditional programming languages:
- **Syntax**: JSON-LD graph format (declarative, not imperative)
- **Execution Model**: LLM agents interpret and execute graph-based prompts
- **Core Principle**: Bounded non-determinism
- **Architecture**: n-mode-m-actor pattern (n≥1 modes, m≥1 actors)
- **Communication**: Three-layer architecture (gossip-based P2P, local graph routing, internal reasoning)
- **State Management**: Natural language text storage (context-window native)
## Analysis: Does AALang Meet Santos' Probabilistic Turing Machine Definition?
### Mapping AALang to Santos' Formal Model
To evaluate whether AALang is a probabilistic Turing machine according to Santos' definition, we must map AALang's components to Santos' formal model:
**States (Q):**
- AALang has well-defined states: modes, actor states, mode transitions, actor active modes
- Each mode represents a distinct behavioral state
- Actors have active modes that define their current state
- Mode transitions represent state changes
**Symbols (Σ):**
- Messages in agent shared state can serve as symbols
- State values (confidence scores, satisfaction indicators, etc.) can serve as symbols
- Natural language content in messages and state can be interpreted as symbols
- File content (when file I/O is used) can serve as symbols
**Transitions and Head Movements (V):**
- Mode transitions can be mapped to state transitions
- Message passing can represent head movements (reading/writing to different parts of state)
- File I/O operations can represent head movements (moving to different storage locations)
- The "Stay" movement (T) could represent remaining in the same mode or not moving the computation head
**Probability Function (p):**
- This is the critical component that determines whether AALang meets Santos' definition
- LLMs have internal probability distributions over token sequences
- LLM reasoning produces different outputs probabilistically
- However, these probabilities are implicit in the execution model, not explicitly defined in AALang's specification (see the skeletal sketch below)
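The mapping above can be summarized as a skeletal data structure. This is an illustrative assumption, not anything AALang defines; notably, the probability table is left empty because it is exactly the component the following requirements examine.

```python
from dataclasses import dataclass, field

@dataclass
class AALangAsPTM:
    states: set[str]                    # Q: modes / actor active modes
    symbols: set[str]                   # Sigma: messages, state values
    movements: tuple = ("R", "L", "T")  # V: read/write/stay analogues
    # p: (state, symbol) -> {(movement, next_state): probability}
    # Left empty: AALang's spec gives no explicit probability function.
    p: dict = field(default_factory=dict)

m = AALangAsPTM(
    states={"Clarification", "Discussion", "Formalization", "Generation"},
    symbols={"message", "confidence_score", "satisfaction_indicator"},
)
print(m.p)  # {} -- the unformalized component
```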
### Requirement 1: Probability Normalization
**Santos' Requirement:**
```math
\sum_{v \in V} \sum_{q' \in Q} p(q, \sigma, v, q') = 1
```

For every state \( q \in Q \) and symbol \( \sigma \in \Sigma \), the sum of probabilities over all possible transitions must equal 1.
**Does AALang Satisfy This?**

**Arguments FOR:**
- LLMs have probability distributions over token sequences that sum to 1
- When an LLM processes a prompt, it assigns probabilities to all possible next tokens
- These probabilities are normalized (sum to 1) at the token level, as the sketch below illustrates
- Mode transitions could theoretically be mapped to token probabilities
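As a concrete illustration of the token-level normalization these arguments rely on, here is a minimal sketch of the softmax an LLM applies to next-token logits; the logit values are made up.

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.1, 0.3, -1.5, 0.8]  # hypothetical next-token scores
probs = softmax(logits)
print(sum(probs))               # 1.0 (up to floating-point error)
```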
**Arguments AGAINST:**

- AALang's specification doesn't define an explicit probability function \( p \)
- LLM probabilities are over tokens, not over state transitions in Santos' formal model
- The mapping from LLM token probabilities to AALang state transitions is not formalized
- Mode constraints describe variance boundaries qualitatively, not as probability distributions
- There’s no guarantee that mode transitions satisfy probability normalization
**Analysis:** AALang's execution relies on LLM probabilities, but these probabilities are:

- **Implicit**: not explicitly defined in the language specification
- **Token-level**: probabilities are over tokens, not over state transitions
- **Not formalized**: the mapping from LLM probabilities to Santos' transition probabilities is not defined
**Verdict:** ⚠️ Unclear - LLMs have normalized probabilities, but AALang doesn't explicitly define a probability function \( p \) that satisfies Santos' normalization requirement. The requirement could potentially be satisfied if LLM token probabilities are formalized as transition probabilities, but this mapping is not currently defined.
### Requirement 2: Stay Transition Constraint

**Santos' Requirement:** For every symbol \( \sigma \in \Sigma \), if \( q \neq q' \), then \( p(q, \sigma, T, q') = 0 \).
If the head stays in place (T), the state cannot change to a different state.
**Does AALang Satisfy This?**

**Analysis:**
- In AALang, "staying" could mean remaining in the same mode
- If an actor stays in the same mode, it should remain in state \( q \), not transition to \( q' \neq q \)
- However, AALang doesn't explicitly define what "staying" means in terms of head movements
- The mapping from AALang’s execution model to Santos’ head movements (R, L, T) is not formalized
**Verdict:** ⚠️ Unclear - AALang doesn't explicitly define head movements or the stay transition constraint. The requirement could potentially be satisfied if the execution model is formalized to map to Santos' head movements, but this mapping is not currently defined.
### Requirement 3: Unbounded Memory/Storage

**Santos' Model:** Probabilistic Turing machines require unbounded memory (infinite tape) for Turing completeness.

**AALang Specification:**

- State is stored as natural language text in:
  - Agent shared state (messages, decisions, observable mode states)
  - Mode isolated state (private reasoning context per mode)
  - Actor isolated context (private state per actor)
- Storage is described as “context-window native” - automatically included in LLM context window
- File I/O Capability: Optional file I/O operations (read, write, list, delete, create_directory) when explicitly declared for actors
**Analysis:**

**In-Memory Storage:**
- Theoretical: Natural language text can represent arbitrary data structures and values
- Practical: Limited by LLM context window size (typically 32K-200K tokens)
- Context window provides working memory for active computation
**File-Based Storage:**
- File I/O provides persistent storage beyond context window limitations
- Files can store arbitrary amounts of data (limited only by disk space)
- Data can be read back into context when needed
- Supports structured data formats (JSON, JSON-LD, text, markdown, SQL, binary)
**Does File I/O Resolve the Memory Problem?**

**✅ Advantages:**
- Provides unbounded persistent storage (limited only by disk space, not context window)
- Enables storing large datasets, intermediate results, and computation state
- Allows reading data into context selectively (only what’s needed)
- Supports structured data formats for efficient storage
**⚠️ Limitations:**
- File I/O is optional - not all actors have this capability (must be explicitly declared)
- Path restrictions may limit where files can be written (security constraints)
- Access pattern: Data must be read from files into context window to be used
- No direct file access in computation - files must be loaded into context first
- Disk space is still a practical limit (though typically much larger than context windows)
**Conclusion:**
- In-memory storage: Bounded by context window (32K-200K tokens)
- File-based storage: Provides unbounded persistent storage, but data must be loaded into context to be used
- Combined: File I/O significantly mitigates the memory constraint, allowing unbounded storage with selective loading into context (sketched after the verdict below)
**Verdict:** ✅ Satisfies (with file I/O) - File I/O provides unbounded persistent storage, satisfying the memory requirement for probabilistic Turing machines, though with practical limitations around access patterns and optional availability.
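A minimal sketch of the selective-loading pattern this verdict describes: unbounded state lives in files, and only the slice needed for the current step is read back into the bounded working context. The file layout and function names are illustrative assumptions, not AALang APIs.

```python
import json
from pathlib import Path

STORE = Path("aalang_state")
STORE.mkdir(exist_ok=True)

def persist(key: str, value: dict) -> None:
    """Write a piece of computation state out of the working context."""
    (STORE / f"{key}.json").write_text(json.dumps(value))

def load(key: str) -> dict:
    """Read one piece of state back into the working context on demand."""
    return json.loads((STORE / f"{key}.json").read_text())

# Store many intermediate results; none of them occupy the context window.
for i in range(1000):
    persist(f"step-{i}", {"step": i, "result": i * i})

# Later, load only the single record the next computation step needs.
print(load("step-42"))  # {'step': 42, 'result': 1764}
```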
### Requirement 4: Conditional Branching

**Santos' Model:** Probabilistic Turing machines require conditional branching based on current state and symbol.

**AALang Specification:**
- Mode transitions: Runtime-enforced transitions between modes based on conditions
- Message rejection: Actors can reject messages violating constraints
- Mode transition validation: Personas enforce transition requirements (e.g., `overallConfidence >= 0.8`)
- Conditional logic: Expressed through mode-specific message interpretation rules
**Analysis:**
- Mode transitions provide conditional control flow
- Message interpretation rules enable conditional behavior
- State-based conditions (e.g., confidence scores, satisfaction indicators) drive branching
- LLM reasoning can implement complex conditional logic
- Probabilistic transitions can be implemented through the LLM's probabilistic token selection (see the sketch after the verdict)
**Verdict:** ✅ Satisfies - Conditional branching available through mode transitions and message interpretation, with probabilistic behavior through LLM reasoning.
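A minimal sketch of how such state-based branching with a probabilistic component might look, modeled on the `overallConfidence >= 0.8` rule quoted above. The mode names echo the transition cycle the next requirement mentions; the fallback modes and sampling weights are made-up assumptions.

```python
import random

def next_mode(current_mode: str, overall_confidence: float) -> str:
    if current_mode == "Formalization":
        # Deterministic guard: the runtime-enforced transition requirement.
        if overall_confidence >= 0.8:
            return "Generation"
        # Probabilistic branch: an LLM's token sampling could send the actor
        # back to either of two earlier modes with some probability.
        return random.choices(["Clarification", "Discussion"],
                              weights=[0.3, 0.7])[0]
    return current_mode

print(next_mode("Formalization", 0.9))  # always "Generation"
print(next_mode("Formalization", 0.5))  # "Clarification" or "Discussion"
```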
### Requirement 5: Loops/Iteration

**Santos' Model:** Probabilistic Turing machines require the ability to repeat operations (loops).

**AALang Specification:**
- No explicit loop constructs defined in the language
- Mode transitions can create cycles (e.g., Clarification → Discussion → Formalization → Generation, or back to earlier modes)
- Message passing enables recursive communication patterns
- Persona deliberation can involve multiple message exchanges
**Analysis:**

- Loops can be simulated through:
  - Recursive mode transitions (returning to previous modes)
  - Message-based iteration (actors sending messages back and forth)
- LLM reasoning can implement iterative algorithms through natural language instructions
- Probabilistic loops can be implemented through probabilistic mode transitions (illustrated after the verdict)
**Verdict:** ✅ Satisfies - Loops can be simulated through mode transitions and message passing, with probabilistic behavior possible through LLM reasoning.
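A minimal sketch of iteration through cyclic mode transitions: the actor loops between modes until a state-based guard is met. The confidence-update rule is a made-up stand-in for LLM deliberation.

```python
import random

mode, confidence, rounds = "Discussion", 0.0, 0

while mode != "Generation":
    if mode == "Discussion":
        # Each deliberation round probabilistically raises confidence.
        confidence += random.uniform(0.1, 0.3)
        mode = "Formalization"
    elif mode == "Formalization":
        # Loop back to Discussion until the transition guard is satisfied.
        mode = "Generation" if confidence >= 0.8 else "Discussion"
    rounds += 1

print(f"converged after {rounds} transitions, confidence={confidence:.2f}")
```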
### Requirement 6: Basic Operations (Read, Write, Modify Memory)

**Santos' Model:** Probabilistic Turing machines require the ability to read, write, and modify memory.

**AALang Specification:**
- State reading: Actors read from shared state via context window (semantic filtering)
- State writing: Actors write messages to shared state
- State modification: State updates through state management personas
- File I/O: Optional file I/O capability (read, write, list, delete, create_directory)
**Analysis:**

- State operations are available but operate on natural language text (see the sketch after the verdict)
- No explicit arithmetic operations defined
- Computation happens through LLM reasoning, not language primitives
- File I/O provides external storage capabilities
- Probabilistic operations can be implemented through LLM’s probabilistic reasoning
**Verdict:** ✅ Satisfies - Read/write/modify operations available, though computation is LLM-mediated. Probabilistic behavior is inherent in LLM reasoning.
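A minimal sketch of the read/write/modify pattern described above: actors append natural-language messages to shared state and read filtered views of it. The structure and the filtering rule are illustrative assumptions, not AALang's actual mechanism.

```python
shared_state: list[dict] = []

def write_message(actor: str, text: str) -> None:
    """An actor writes to shared state by appending a message."""
    shared_state.append({"from": actor, "text": text})

def read_messages(for_actor: str) -> list[str]:
    """Semantic-filtering stand-in: read only messages from other actors."""
    return [m["text"] for m in shared_state if m["from"] != for_actor]

write_message("planner", "overallConfidence is 0.85")
write_message("critic", "satisfaction indicator: met")
print(read_messages("planner"))  # ['satisfaction indicator: met']
```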
## Critical Analysis: The Probability Function
The key question is whether AALang defines a probability function \( p: Q \times \Sigma \times V \times Q \to [0, 1] \) that satisfies Santos’ constraints.
### Current State

**What AALang Has:**
- Non-deterministic behavior through LLM reasoning
- Well-defined states (modes, actors, mode transitions)
- Symbols (messages, state values, natural language content)
- Implicit probabilities in LLM token distributions
**What AALang Lacks:**

- An explicit probability function \( p \) defined in the specification
- Formal mapping from LLM token probabilities to state transition probabilities
- Explicit definition of head movements (R, L, T) in terms of AALang operations
- Guarantee that probability normalization is satisfied
- Guarantee that stay transition constraint is satisfied
### Can AALang Be Formalized as a Probabilistic Turing Machine?

**Potential Formalization:**

**States (Q):** Map AALang modes and actor states to Santos' state set
- Each mode could be a state
- Actor active modes could be states
- Mode transitions could be state transitions
**Symbols (Σ):** Map AALang messages and state values to Santos' symbol set
- Messages could be symbols
- State values could be symbols
- Natural language content could be encoded as symbols
**Head Movements (V):** Map AALang operations to Santos' head movements
- Reading from shared state could be “Right” or “Left”
- Writing to shared state could be “Right” or “Left”
- Staying in the same mode could be “Stay” (T)
- File I/O operations could map to head movements
**Probability Function (p):** Map LLM token probabilities to transition probabilities
- LLM token probabilities could be aggregated to state transition probabilities
- Mode transition probabilities could be derived from LLM reasoning probabilities
- This would require formalizing how LLM probabilities map to AALang state transitions (sketched below)
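A minimal sketch of the aggregation idea: if each complete LLM response denotes one mode transition, the transition probability is the total probability mass of the responses denoting it, and normalization is preserved by construction. The response distribution and the decision procedure are made up; defining them rigorously is exactly the unformalized step the Challenges list identifies.

```python
from collections import defaultdict

# Hypothetical LLM distribution over complete responses in mode "Discussion".
response_probs = {
    "move to Formalization": 0.55,
    "proceed to Formalization now": 0.15,
    "stay in Discussion": 0.20,
    "return to Clarification": 0.10,
}

def denoted_transition(response: str) -> str:
    """Toy decision procedure mapping a response string to a target mode."""
    for mode in ("Formalization", "Discussion", "Clarification"):
        if mode in response:
            return mode
    return "Discussion"  # default: no transition

transition_probs = defaultdict(float)
for response, prob in response_probs.items():
    transition_probs[denoted_transition(response)] += prob

print(dict(transition_probs))
# {'Formalization': 0.7, 'Discussion': 0.2, 'Clarification': 0.1}
# Sums to 1 because the underlying response distribution sums to 1.
```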
**Challenges:**
- LLM probabilities are over tokens, not over state transitions
- The mapping from tokens to state transitions is complex and not straightforward
- Mode constraints are qualitative (variance boundaries) rather than probabilistic
- No explicit mechanism to ensure probability normalization
- No explicit mechanism to ensure stay transition constraint
## Conclusion

### Is AALang a Probabilistic Turing Machine According to Santos' Definition?

**Short Answer:** AALang exhibits non-deterministic behavior that could potentially be formalized as a probabilistic Turing machine, but it does not currently meet Santos' formal definition because it lacks an explicit probability function \( p \) that satisfies Santos' constraints (probability normalization and stay transition constraint).

**Detailed Answer:**

**AALang's Non-Deterministic Behavior:**
- AALang is clearly non-deterministic - same input can produce different outputs
- LLM reasoning provides probabilistic behavior
- Bounded non-determinism provides variance boundaries
- This makes AALang suitable for probabilistic computation models
**Santos' Formal Requirements:**
- States (Q): ✅ AALang has well-defined states (modes, actors, mode transitions)
- Symbols (Σ): ✅ AALang has symbols (messages, state values, natural language content)
- Head Movements (V): ⚠️ Not explicitly defined, but could be mapped from AALang operations
- Probability Function (p): ❌ Not explicitly defined in AALang’s specification
- Probability Normalization: ⚠️ LLMs have normalized probabilities, but mapping to state transitions is not formalized
- Stay Transition Constraint: ⚠️ Not explicitly defined or guaranteed
**What Would Be Required:**
- Explicit definition of a probability function \( p: Q \times \Sigma \times V \times Q \to [0, 1] \)
- Formal mapping from LLM token probabilities to state transition probabilities
- Guarantee that probability normalization is satisfied for all states and symbols
- Guarantee that the stay transition constraint is satisfied
- Explicit definition of head movements (R, L, T) in terms of AALang operations
**Practical Reality:**
- AALang exhibits probabilistic behavior in practice through LLM reasoning
- The underlying LLMs have probability distributions that could theoretically be formalized
- However, the formalization is not currently part of AALang’s specification
- AALang is designed for agent orchestration, not as a formal probabilistic Turing machine
### Final Verdict
AALang exhibits non-deterministic behavior that is compatible with probabilistic computation models, but it does not currently meet Santos’ formal definition of a probabilistic Turing machine because:
- It lacks an explicit probability function \( p \) defined in the specification
- The mapping from LLM probabilities to state transition probabilities is not formalized
- There's no guarantee that probability normalization and the stay transition constraint are satisfied
However, AALang could potentially be formalized as a probabilistic Turing machine if:
- LLM token probabilities are formalized as transition probabilities
- The mapping from AALang states/transitions to Santos’ formal model is explicitly defined
- Probability normalization and stay transition constraints are proven to be satisfied
For practical purposes, AALang’s non-deterministic behavior makes it suitable for probabilistic computation, even if it doesn’t strictly meet Santos’ formal definition without additional formalization.
## References
- AALang Design Specification (`aalang-design.jsonld`)
- GAB Runtime Behaviors (`gab-runtime.jsonld`)
- Agent-to-Actor Communication (`agent-to-actor.jsonld`)
- Message Protocol (`message-protocol.jsonld`)
- GAB Specification (`gab.jsonld`)
- Santos, E. S. (1969). "Probabilistic Turing Machines and Computability." *Proceedings of the American Mathematical Society*, 22(3), 704–710. https://doi.org/10.2307/2037463