Network Architecture

Distributed Temporal Infrastructure

Five layers. Hardware at the bottom providing nanosecond precision. P2P gossip spreading time proofs. Consensus ordering transactions by when they were signed. Services exposing temporal primitives. Applications building on proven time.

Network Overview

Layer             Components
─────────────────────────────────────────────────────────────────
Applications      DApps • Smart Contracts • Nexus
Service           TimeRPC • Oracle • Bridge • Gateway
Consensus         Temporal PoS • Block Production • GRANDPA Finality
Network           P2P Gossip • Beacon Propagation • Time Sync
Infrastructure    Validators • OCP TAP Hardware • NVMe Storage

Node Types

Validator Nodes

The backbone. Full state, time cards, staked tokens. 32,000 ROKO minimum stake keeps Sybil attacks expensive. 180-day lock period ensures skin in the game. Hardware requirements enforce temporal precision - no time card, no validation.

validator_node:
  requirements:
    hardware:
      cpu: "16+ cores"
      ram: "32GB+"
      storage: "2TB NVMe"
      network: "1Gbps"
      time_card: "OCP TAP 2.0"
    stake:
      minimum: "32,000 ROKO"
      lock_period: "180 days"
    responsibilities:
      - Block production
      - Transaction validation
      - Consensus participation
      - Time attestation

Archive Nodes

History keepers. Every block, every transaction, every temporal proof since genesis. 10TB minimum. Analytics, audits, chain indexing - archive nodes serve the queries validators won't bother with.

archive_node:
  storage: "10TB+"
  data:
    - All blocks
    - All transactions
    - All state transitions
    - Temporal proofs
  api:
    - Historical queries
    - Data analytics
    - Chain indexing

Light Nodes

Wallets and mobile apps. 50GB storage, header verification only. Submit transactions, check balances, trust the validator set for heavy lifting. Checkpoint sync gets you running in minutes, not days.

light_node:
  storage: "50GB"
  functions:
    - Header verification
    - Transaction submission
    - Basic queries
  sync_mode: "checkpoint"

Network Topology

Geographic Distribution

Three regions. North America, Europe, Asia-Pacific. Each region runs primary validators with dedicated time sources and secondary nodes for redundancy. Cross-region links maintain global consensus. Light speed sets the floor on latency - a roughly 100ms round trip across the Pacific is physics, not engineering.

Region    Primary Nodes       Secondary Nodes
─────────────────────────────────────────────
NA        GPS grandmasters    Archive, relay
EU        GPS grandmasters    Archive, relay
APAC      GPS grandmasters    Archive, relay

Regional validators sync within 10ms. Cross-region gossip within 150ms. Block finality in 2-3 seconds regardless of where the transaction originated.

P2P Network Protocol

Gossip with latency awareness. Peers sorted by round-trip time. Blocks propagate to fastest peers first, ripple outward. Every message carries a NanoMoment - network latency becomes observable, measurable, optimizable.

type P2PNetwork struct {
    LocalNode       *Node
    Peers           map[NodeID]*Peer
    MessageQueue    chan Message
    TimeSyncService *TimeSyncService
}

func (n *P2PNetwork) BroadcastBlock(block *Block) {
    msg := Message{
        Type:      MessageTypeBlock,
        Timestamp: NanoMoment.Now(),
        Payload:   block.Serialize(),
        Signature: n.LocalNode.Sign(block.Hash()),
    }

    // Prioritize peers by latency
    sortedPeers := n.SortPeersByLatency()

    for _, peer := range sortedPeers {
        go n.SendMessage(peer, msg)
    }
}
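SortPeersByLatency is referenced above but not shown. A minimal sketch, assuming each *Peer carries an RTT field kept current by the time-sync service (the field name is an assumption, not part of the spec):

// Sketch: order peers by measured round-trip time, fastest first.
// Assumes Peer.RTT is a time.Duration updated by TimeSyncService.
// Uses the standard "sort" package.
func (n *P2PNetwork) SortPeersByLatency() []*Peer {
    peers := make([]*Peer, 0, len(n.Peers))
    for _, p := range n.Peers {
        peers = append(peers, p)
    }
    sort.Slice(peers, func(i, j int) bool {
        return peers[i].RTT < peers[j].RTT
    })
    return peers
}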

Time Synchronization Layer

PTP Implementation

IEEE 1588 Precision Time Protocol. GPS as time source. Sub-microsecond sync across the network. Domain 0, standard priority, hardware timestamping on every packet. Grandmasters anchor each region to UTC.

struct ptp_config {
    int domain_number;
    int priority1;
    int priority2;
    enum time_source source;
    uint64_t clock_identity;
    int announce_interval;
    int sync_interval;
    int delay_req_interval;
};

int initialize_ptp_grandmaster() {
    struct ptp_config config = {
        .domain_number = 0,
        .priority1 = 128,          /* IEEE 1588 default priority */
        .priority2 = 128,
        .source = TIME_SOURCE_GPS,
        .announce_interval = 1,    /* log2 seconds: announce every 2s */
        .sync_interval = 0,        /* log2 seconds: sync once per second */
        .delay_req_interval = 0
    };

    return ptp_clock_create(&config);
}
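The "hardware timestamping on every packet" piece is standard Linux, not ROKO-specific: timestamps are taken in the NIC rather than the kernel, which is what makes sub-microsecond sync achievable. A minimal sketch using the stock SO_TIMESTAMPING socket option:

#include <sys/socket.h>
#include <linux/net_tstamp.h>

/* Enable NIC-level TX/RX timestamps on a PTP event socket.
 * Returns setsockopt's result; fails if the driver lacks a PHC. */
int enable_hw_timestamping(int sock_fd) {
    int flags = SOF_TIMESTAMPING_TX_HARDWARE |
                SOF_TIMESTAMPING_RX_HARDWARE |
                SOF_TIMESTAMPING_RAW_HARDWARE;
    return setsockopt(sock_fd, SOL_SOCKET, SO_TIMESTAMPING,
                      &flags, sizeof(flags));
}

In practice the interface itself must also have hardware timestamping enabled (via the SIOCSHWTSTAMP ioctl, or inspected with `ethtool -T`); drivers without PTP hardware clock support will reject the option.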

Network Time Protocol

Marzullo's algorithm for consensus on time. Sample multiple peers. Find the intersection of confidence intervals. Adjust local clock to match. Drift detected, drift corrected. Continuous synchronization, not periodic polling.

class NetworkTimeSync:
    def __init__(self):
        self.peers = []
        self.offset = 0
        self.drift = 0
        self.accuracy = float('inf')

    def sync_with_network(self):
        """Synchronize with network time sources"""
        time_samples = []

        for peer in self.peers:
            sample = self.measure_peer_time(peer)
            time_samples.append(sample)

        # Calculate network time using Marzullo's algorithm
        network_time = self.marzullo_algorithm(time_samples)

        # Adjust local clock
        self.adjust_clock(network_time)

        return network_time
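marzullo_algorithm is called above but not defined. A minimal sketch of the interval-intersection step, assuming measure_peer_time returns an (offset, uncertainty) pair per peer:

def marzullo_algorithm(self, samples):
    """Return the midpoint of the region covered by the most peer
    confidence intervals. Each sample is (offset, uncertainty)."""
    edges = []
    for offset, err in samples:
        edges.append((offset - err, -1))  # interval opens
        edges.append((offset + err, +1))  # interval closes
    edges.sort()

    best_count, count = 0, 0
    best_lo = best_hi = 0.0
    for i, (edge, kind) in enumerate(edges):
        if kind == -1:
            count += 1
            if count > best_count:
                best_count = count
                best_lo = edge
                # the maximal-overlap region ends at the next edge event
                best_hi = edges[i + 1][0] if i + 1 < len(edges) else edge
        else:
            count -= 1
    return (best_lo + best_hi) / 2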

Consensus Network

Block Propagation

Three strategies. Flooding for critical announcements - every node gets it immediately. Gossip for normal blocks - random fanout of 8 peers, exponential spread. Structured routing for targeted delivery. Strategy selected per message type.

pub struct BlockPropagation {
    validators: Vec<ValidatorNode>,
    propagation_strategy: PropagationStrategy,
}

impl BlockPropagation {
    pub async fn propagate_block(&self, block: Block) -> Result<()> {
        match self.propagation_strategy {
            PropagationStrategy::Flooding => self.flood_propagate(block).await,
            PropagationStrategy::Gossip => self.gossip_propagate(block).await,
            PropagationStrategy::Structured => self.structured_propagate(block).await,
        }
    }

    async fn gossip_propagate(&self, block: Block) -> Result<()> {
        let fanout = 8;
        let selected_peers = self.select_random_peers(fanout);

        for peer in selected_peers {
            // Clone before moving into the task; otherwise `block` is
            // moved on the first iteration and later iterations fail.
            let block = block.clone();
            tokio::spawn(async move {
                peer.send_block(block).await
            });
        }

        Ok(())
    }
}

Transaction Pool

Not a queue - a temporal index. Transactions sorted by NanoMoment, not gas price. Validity windows reject stale submissions. Priority queue for block assembly. Temporal uniqueness enforced - one transaction per nanosecond slot.

class TransactionPool {
    constructor() {
        this.pending = new Map();      // Unconfirmed transactions
        this.temporal = new Map();     // Time-ordered transactions
        this.priority = new PriorityQueue();
    }

    addTransaction(tx) {
        // Verify temporal validity
        if (!this.verifyTemporalValidity(tx)) {
            throw new Error('Transaction outside validity window');
        }

        // Add to temporal ordering
        const nanoMoment = tx.timestamp;
        if (!this.temporal.has(nanoMoment)) {
            this.temporal.set(nanoMoment, []);
        }
        this.temporal.get(nanoMoment).push(tx);

        // Add to priority queue
        this.priority.insert(tx, this.calculatePriority(tx));

        // Broadcast to network
        this.broadcastTransaction(tx);
    }

    getTransactionsForBlock(maxCount) {
        const transactions = [];
        const usedNanoMoments = new Set();
        const deferred = [];

        while (transactions.length < maxCount && !this.priority.isEmpty()) {
            const tx = this.priority.extractMax();

            // Ensure temporal uniqueness within this block
            if (!usedNanoMoments.has(tx.timestamp)) {
                transactions.push(tx);
                usedNanoMoments.add(tx.timestamp);
            } else {
                // Same-nanosecond collision: keep for a later block
                deferred.push(tx);
            }
        }

        // Re-queue deferred transactions instead of dropping them
        for (const tx of deferred) {
            this.priority.insert(tx, this.calculatePriority(tx));
        }

        return transactions;
    }
}
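verifyTemporalValidity is referenced but not defined above. A minimal sketch; the ±5 second window is illustrative, not a protocol constant, and nowNs is the caller's current NanoMoment:

// Sketch: accept only transactions whose timestamp falls inside a
// window around local time. BigInt keeps full nanosecond precision.
const VALIDITY_WINDOW_NS = 5_000_000_000n;  // 5 seconds in nanoseconds

function verifyTemporalValidity(tx, nowNs) {
    const ts = BigInt(tx.timestamp);
    return ts >= nowNs - VALIDITY_WINDOW_NS &&
           ts <= nowNs + VALIDITY_WINDOW_NS;
}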

Network Security

DDoS Protection

Rate limiting per IP. Suspicious activity tracking. Automatic blacklisting above threshold. Standard defense-in-depth. Nothing novel - proven patterns, reliable execution.

from collections import defaultdict

# Strikes before an IP is blacklisted (value illustrative)
THRESHOLD = 10

class DDoSProtection:
    def __init__(self):
        self.rate_limiter = RateLimiter()
        self.blacklist = set()
        self.suspicious_activity = defaultdict(int)

    def check_connection(self, peer_ip):
        # Check blacklist
        if peer_ip in self.blacklist:
            return False

        # Check rate limits
        if not self.rate_limiter.allow(peer_ip):
            self.suspicious_activity[peer_ip] += 1

            if self.suspicious_activity[peer_ip] > THRESHOLD:
                self.blacklist.add(peer_ip)

            return False

        return True
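RateLimiter is assumed above. A token-bucket sketch; the rate and capacity values are illustrative:

import time
from collections import defaultdict

class RateLimiter:
    """Token bucket per IP: `rate` tokens/second, burst up to `capacity`."""
    def __init__(self, rate=50, capacity=100):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)
        self.last_seen = {}

    def allow(self, peer_ip):
        now = time.monotonic()
        elapsed = now - self.last_seen.get(peer_ip, now)
        self.last_seen[peer_ip] = now
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens[peer_ip] = min(self.capacity,
                                   self.tokens[peer_ip] + elapsed * self.rate)
        if self.tokens[peer_ip] >= 1:
            self.tokens[peer_ip] -= 1
            return True
        return False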

Sybil Attack Prevention

Proof of stake as Sybil resistance. 32,000 ROKO to register a validator. 180-day lock. Want to spin up 1,000 fake validators? You need 32 million tokens locked for six months. Economics makes attacks expensive. Hardware requirements add friction - can't fake a time card.

contract SybilResistance {
    uint256 constant MIN_STAKE = 32000 * 10**18;
    uint256 constant STAKE_LOCK_PERIOD = 180 days;

    mapping(address => uint256) public stakes;
    mapping(address => uint256) public lockUntil;

    // Gate validator actions on an active, sufficiently large stake
    modifier sybilResistant() {
        require(stakes[msg.sender] >= MIN_STAKE, "Insufficient stake");
        require(block.timestamp < lockUntil[msg.sender], "Stake lock expired");
        _;
    }

    function registerValidator() external payable {
        require(msg.value >= MIN_STAKE, "Insufficient stake");
        stakes[msg.sender] += msg.value;
        lockUntil[msg.sender] = block.timestamp + STAKE_LOCK_PERIOD;
        // Validator registration logic
    }
}
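The exit path isn't shown above. One way the 180-day lock might be enforced on withdrawal, sketched under the same contract's storage layout:

// Sketch: stake can only leave after the lock expires.
function withdrawStake() external {
    require(block.timestamp >= lockUntil[msg.sender], "Stake still locked");
    uint256 amount = stakes[msg.sender];
    require(amount > 0, "Nothing staked");
    stakes[msg.sender] = 0;
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok, "Transfer failed");
}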

Network Performance

Bandwidth Optimization

Compress everything. Prioritize by message type. Batch when possible. Block announcements get priority headers. Beacon gossip compresses well - repetitive structure. Bandwidth is money when you're running global infrastructure.

type BandwidthOptimizer struct {
    compression CompressionAlgorithm
    prioritizer MessagePrioritizer
    batchSize   int
}

func (b *BandwidthOptimizer) OptimizeMessage(msg Message) []byte {
    // Compress message
    compressed := b.compression.Compress(msg.Serialize())

    // Add priority header
    prioritized := b.prioritizer.AddHeader(compressed, msg.Priority)

    return prioritized
}

func (b *BandwidthOptimizer) BatchMessages(messages []Message) []byte {
    batch := MessageBatch{
        Count:     len(messages),
        Messages:  messages,
        Timestamp: NanoMoment.Now(),
    }

    return b.OptimizeMessage(batch.ToMessage())
}

Latency Reduction

Dijkstra on latency weights. Cache optimal paths. Measure round-trip times continuously. Route around slow peers. LRU cache prevents recalculation. Milliseconds matter when you're ordering transactions by nanoseconds.

pub struct LatencyOptimizer {
    peer_latencies: HashMap<NodeId, Duration>,
    route_cache: LruCache<(NodeId, NodeId), Vec<NodeId>>,
}

impl LatencyOptimizer {
    // &mut self: LruCache mutates on both get (recency) and put
    pub fn find_optimal_path(&mut self, from: NodeId, to: NodeId) -> Vec<NodeId> {
        // Check cache first
        if let Some(cached_path) = self.route_cache.get(&(from, to)) {
            return cached_path.clone();
        }

        // Use Dijkstra's algorithm with latency weights
        let path = self.dijkstra_shortest_path(from, to);

        // Cache the result
        self.route_cache.put((from, to), path.clone());

        path
    }
}
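dijkstra_shortest_path is referenced but not shown. A minimal sketch over a latency-weighted adjacency map; the `topology` parameter and the bounds NodeId: Copy + Ord + Hash are assumptions for illustration:

use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};
use std::time::Duration;

// Sketch: classic Dijkstra where edge weights are measured link latencies.
// Returns just [to] if no route exists.
fn dijkstra_shortest_path(
    topology: &HashMap<NodeId, Vec<(NodeId, Duration)>>,
    from: NodeId,
    to: NodeId,
) -> Vec<NodeId> {
    let mut dist: HashMap<NodeId, Duration> = HashMap::new();
    let mut prev: HashMap<NodeId, NodeId> = HashMap::new();
    let mut heap = BinaryHeap::new();
    dist.insert(from, Duration::ZERO);
    heap.push(Reverse((Duration::ZERO, from)));

    while let Some(Reverse((d, node))) = heap.pop() {
        if node == to { break; }
        // Skip stale heap entries
        if d > *dist.get(&node).unwrap_or(&Duration::MAX) { continue; }
        for &(next, latency) in topology.get(&node).into_iter().flatten() {
            let candidate = d + latency;
            if candidate < *dist.get(&next).unwrap_or(&Duration::MAX) {
                dist.insert(next, candidate);
                prev.insert(next, node);
                heap.push(Reverse((candidate, next)));
            }
        }
    }

    // Walk predecessors back from `to`
    let mut path = vec![to];
    let mut cur = to;
    while let Some(&p) = prev.get(&cur) {
        path.push(p);
        cur = p;
    }
    path.reverse();
    path
}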

Monitoring & Metrics

Network Health Dashboard

Three categories. Network health: peer count, message throughput, bandwidth, latency percentiles. Consensus health: block time, finality, validator participation, fork frequency. Temporal health: drift from UTC, sync accuracy, attestation rates. All observable. All alertable.

metrics:
  network:
    - peer_count
    - message_rate
    - bandwidth_usage
    - latency_p50
    - latency_p99

  consensus:
    - block_time
    - finality_time
    - validator_participation
    - fork_count

  temporal:
    - time_drift
    - sync_accuracy
    - attestation_rate

Alerting System

Thresholds trigger alerts. Peer count drops below 100? HIGH alert. Time drift exceeds 1 microsecond? CRITICAL. Validator participation below 66%? Network is at risk. Automated monitoring, automated response. Operators get paged for the exceptions.

from dataclasses import dataclass

@dataclass
class Alert:
    severity: str
    message: str
    value: float

class NetworkAlerting:
    def __init__(self):
        self.thresholds = {
            'peer_count_min': 100,
            'latency_max_ms': 500,
            'time_drift_max_ns': 1000,
            'validator_participation_min': 0.66
        }

    def check_network_health(self, metrics):
        alerts = []

        if metrics['peer_count'] < self.thresholds['peer_count_min']:
            alerts.append(Alert(
                severity='HIGH',
                message='Low peer count',
                value=metrics['peer_count']
            ))

        if metrics['time_drift'] > self.thresholds['time_drift_max_ns']:
            alerts.append(Alert(
                severity='CRITICAL',
                message='Excessive time drift',
                value=metrics['time_drift']
            ))

        return alerts

Scalability Solutions

State Channels

Off-chain scaling with on-chain settlement. Open a channel, exchange unlimited signed state updates between two parties, close with the final state. Channel opening records the NanoMoment. Disputes reference temporal ordering. State channels inherit ROKO's time guarantees even off-chain.

contract StateChannel {
    struct Channel {
        address participant1;
        address participant2;
        uint128 openTime;
        uint128 closeTime;
        uint256 deposit1;
        uint256 deposit2;
        bytes32 stateRoot;
    }

    mapping(bytes32 => Channel) public channels;

    function openChannel(address counterparty) external payable {
        // Read the NanoMoment once so the channel id and openTime agree
        uint128 openedAt = Time.nanoNow();
        bytes32 channelId = keccak256(abi.encode(msg.sender, counterparty, openedAt));

        channels[channelId] = Channel({
            participant1: msg.sender,
            participant2: counterparty,
            openTime: openedAt,
            closeTime: 0,
            deposit1: msg.value,
            deposit2: 0,
            stateRoot: bytes32(0)
        });
    }
}
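Closing and disputes aren't shown above. A sketch of how a close might record the temporal ordering that disputes then reference; signature verification and settlement are elided, and Time.nanoNow() is assumed as in openChannel:

// Sketch: closing records the NanoMoment; a later dispute can only
// win with state the counterparty signed at an earlier NanoMoment.
function closeChannel(bytes32 channelId, bytes32 finalStateRoot) external {
    Channel storage ch = channels[channelId];
    require(msg.sender == ch.participant1 || msg.sender == ch.participant2,
            "Not a participant");
    require(ch.closeTime == 0, "Already closed");
    ch.stateRoot = finalStateRoot;
    ch.closeTime = Time.nanoNow();
    // dispute-window and payout logic elided
}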

Future Network Enhancements

Quantum-safe networking before quantum computers break today's public-key cryptography. Satellite nodes for validators in remote locations - Starlink already enables this. 5G integration for mobile light nodes with sub-10ms latency. ML-driven routing optimization - let the network learn its own topology.


See Also