HRAN Chatbot Model Card

HRAN (Haykin Resonant Attention Network) is a ~1.01M-parameter, custom-built sequence-to-sequence model. Rather than relying on standard deep learning frameworks such as PyTorch or TensorFlow, HRAN is implemented entirely in NumPy to explore the mathematical first principles of computation, information theory, and adaptation.

The architecture is strictly derived from concepts in Simon Haykin's Neural Networks and Learning Machines (3rd Ed.), actively challenging modern transformer defaults by replacing dot-product attention and standard activations with biologically and mathematically grounded alternatives.

  • Developer: Soham Pal
  • Model Type: Custom Sequence-to-Sequence Language Model
  • Parameters: ~1.01 Million
  • Framework: Pure NumPy
  • License: MIT

Architectural Innovations

HRAN abandons several standard transformer conventions in favor of experimental mechanics:

  • RBF Attention (Ch.5): Replaces standard dot-product attention with a Gaussian kernel, A_{ij} = \text{softmax}(-\gamma \|q_i - k_j\|^2). This forces attention heads to localize in representation space based on Euclidean distance rather than inner-product maximization.
  • Hebbian Seed Initialization (Ch.2): Pre-seeds embeddings with co-occurrence statistics using Oja's rule before gradient descent, attempting to bridge unsupervised geometry with supervised learning.
  • Infomax Activation (Ch.10): Utilizes f(x) = \tanh(x) + \alpha x (derived from Bell-Sejnowski ICA) to maximize mutual information throughput and strictly avoid information bottlenecks in hidden layers.
  • Lateral Inhibition Gate (Ch.9): Introduces competitive learning where winning activations are amplified and weak ones suppressed, producing sparse, discriminative representations.
  • Wiener-SNR Gradient Scaling (Ch.3): Scales parameter updates by local signal-to-noise ratio, allowing high-signal weights to learn quickly while suppressing noisy weight updates.
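As an illustration of the first bullet, here is a minimal NumPy sketch of Gaussian-kernel (RBF) attention. The function name, gamma value, and shapes are my own assumptions for the example; HRAN's actual implementation lives in hran_chatbot.py.

```python
import numpy as np

def rbf_attention(Q, K, V, gamma=0.5):
    """Gaussian-kernel attention: A_ij = softmax_j(-gamma * ||q_i - k_j||^2).

    Unlike dot-product attention, scores fall off with squared Euclidean
    distance, so each query attends to keys that are *close* to it in
    representation space (illustrative sketch, not HRAN's actual code).
    """
    # Squared pairwise distances via the expansion ||q||^2 - 2 q.k + ||k||^2
    sq_dists = (
        np.sum(Q**2, axis=1, keepdims=True)  # (n_q, 1)
        - 2.0 * Q @ K.T                      # (n_q, n_k)
        + np.sum(K**2, axis=1)               # (n_k,)  broadcast across rows
    )
    scores = -gamma * sq_dists
    # Row-wise softmax with the usual max-subtraction for numerical stability
    scores -= scores.max(axis=1, keepdims=True)
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)
    return A @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out = rbf_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Note that as gamma grows, each attention row concentrates on the single nearest key, which is exactly the localization behavior described above.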

Loss Graph

[Figure: HRAN training loss curve]

Training Data

The model was trained on a fully original, hand-curated dataset of 235 question-answer pairs (augmented to 1,040 samples). The dataset spans topics including neural network architecture, philosophy, physics, mathematics, and Haykin's specific theories.

Performance & Limitations

Disclaimer: This model is for architectural research and educational purposes only. It is not a functional conversational AI.

During training, the model experienced severe mathematical divergence.

  • Final Training Loss: ~5.85
  • Perplexity: ~347.9
  • Output State: The current weights (hran_best.pkl) exhibit severe vocabulary degradation and mode collapse, largely outputting repetitive stop-words (e.g., "is is define is is").
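Assuming the reported loss is mean token-level cross-entropy in nats, the perplexity figure follows directly as its exponential; the small gap to the reported ~347.9 just reflects rounding of the loss value.

```python
import math

final_loss = 5.85                  # final mean cross-entropy (nats/token)
perplexity = math.exp(final_loss)  # perplexity = e^loss
print(round(perplexity, 1))        # ≈ 347.2
```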

This failure state provides a valuable case study in the difficulties of applying continuous-space RBF kernels to discrete language tokens, as well as the instability introduced by custom dynamic gradient scaling (Wiener-SNR) on small datasets.
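To make the Wiener-SNR instability concrete, here is one way such per-weight SNR scaling might look. Everything here (the EMA formulation, the clipping, the epsilon, all names) is my own assumption for illustration, not HRAN's actual code: the update is scaled by mean² / (variance + ε) of the running gradient statistics, and on a small dataset the variance estimate is itself noisy, so the scale factor fluctuates.

```python
import numpy as np

def wiener_snr_step(w, grad, state, lr=0.01, beta=0.9, eps=1e-8):
    """One SGD step with a per-weight SNR scale (illustrative sketch).

    signal = EMA of the gradient (its persistent direction)
    noise  = EMA of the squared deviation from that mean
    scale  = signal^2 / (noise + eps), clipped so outliers can't explode
    """
    state["mean"] = beta * state["mean"] + (1 - beta) * grad
    dev = grad - state["mean"]
    state["var"] = beta * state["var"] + (1 - beta) * dev**2
    snr = state["mean"] ** 2 / (state["var"] + eps)
    scale = np.clip(snr, 0.0, 1.0)  # high-SNR weights learn fastest
    return w - lr * scale * grad

rng = np.random.default_rng(1)
w = np.zeros(5)
state = {"mean": np.zeros(5), "var": np.zeros(5)}
for _ in range(100):
    # First two coordinates carry a consistent signal; the rest are pure noise
    grad = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) + rng.normal(scale=0.5, size=5)
    w = wiener_snr_step(w, grad, state)
print(np.round(w, 3))  # high-signal weights accumulate the largest updates
```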

Future Roadmap

This experimental build serves as a foundational testbed for understanding the deep mechanics of sequence modeling. Future iterations and related projects aim to:

  • Replace the basic word-level tokenizer with the highly optimized Crayon tokenizer to drastically improve subword processing, vocabulary stability, and sequence compression.
  • Integrate these first-principles architectural learnings into the broader RootFlow+ framework, specifically exploring how alternative attention mechanisms might inform the Heart-Head-Hands Transformer (H3T) approach to solving the AI grounding problem.

How to Use

Because HRAN is a custom NumPy architecture, it cannot be loaded via the standard transformers library. You must download both the architecture script and the weights.

from huggingface_hub import hf_hub_download
import os
import sys

# 1. Download the architecture script and the trained weights
script_path = hf_hub_download(repo_id="Phase-Technologies/hran-chatbot", filename="hran_chatbot.py")
weights_path = hf_hub_download(repo_id="Phase-Technologies/hran-chatbot", filename="hran_best.pkl")

# 2. Make the downloaded script importable, then import it
sys.path.append(os.path.dirname(script_path))
import hran_chatbot as hran

# 3. Initialize config and rebuild tokenizer
config = hran.CFG
tokenizer = hran.HRANTokenizer(max_vocab=config.vocab_size)
tokenizer.build(hran.FULL_DATASET)
config.vocab_size = tokenizer.vocab_size

# 4. Load Model
model = hran.HRANModel(config)
model.load(weights_path)

# 5. Generate Text
response = hran.generate_response(model, tokenizer, "What is attention?")
print(response)
