Non-LLMs
Outline
- The OG Parrots: The reading on parrots revisited.
- 2 prompts
- non-LLM AI
Some Contemporary Discussion Items
Most people are beginning to realize that humans are the OG stochastic parrots. https://t.co/LhKkQkY6PP
— Hollis Robbins (@anecdotal) April 12, 2026
Watching
$META TO INSTALL TRACKING SOFTWARE ON U.S. EMPLOYEE COMPUTERS TO CAPTURE WORKFLOW DATA FOR AI TRAINING -INTERNAL MEMO
— *Walter Bloomberg (@DeItaone) April 21, 2026
META TRACKING TOOL TO CAPTURE MOUSE MOVEMENTS, KEYSTROKES AND SNAPSHOTS OF WHAT EMPLOYEES SEE ON THEIR SCREENS -INTERNAL MEMO
As people increasingly rely on AI tools for learning, decision-making, and creativity, could this dependency erode human critical thinking skills? Does convenience justify the risk of losing fundamental cognitive abilities?
As AI systems become more sophisticated at simulating human conversation and emotion, could people form genuine emotional attachments to them? If an AI convincingly mimics empathy and understanding without actually feeling anything, is that ethical? What does this mean for human identity and relationships?
Niantic
The Are and the Aren’t
Define the boundary: Large Language Models (LLMs) are large-scale neural networks trained on massive text corpora with a next-token-prediction objective to understand and generate human language. AI systems built on different modalities, architectures, or optimization goals fall outside this definition. The boundary is admittedly less clean with multimodal models.
While some of these systems integrate LLMs in peripheral features (e.g., customer support chatbots), their primary value and operational architecture remain fundamentally non-LLM. There’s a good reason why…
Jensen explains
1. Palantir Foundry Analytics Platform
Paradigm: Knowledge graphs + traditional ML + deterministic rule engines
Why not an LLM: Palantir explicitly designs Foundry to avoid LLM dependency for core data integration and predictive modeling. It uses symbolic AI, graph traversal, and gradient-boosted trees to ensure auditable, deterministic outcomes for defense, healthcare, and logistics.
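The determinism point is easy to make concrete. A minimal rule-engine sketch (the rule names and the shipment record below are hypothetical illustrations, not Palantir's actual API): the same input always fires the same rules in the same order, which is what makes the outcome auditable.

```python
# Minimal deterministic rule engine: each rule is a named predicate.
# Same record in, same rules fired out -- no sampling, no temperature.

def evaluate(record, rules):
    """Return the names of every rule that fires, in declaration order."""
    fired = []
    for name, predicate in rules:
        if predicate(record):
            fired.append(name)
    return fired

# Hypothetical logistics rules -- illustrative only.
RULES = [
    ("flag_overweight", lambda r: r["weight_kg"] > 1000),
    ("flag_restricted", lambda r: r["destination"] == "embargoed"),
    ("expedite",        lambda r: r["priority"] == "high"),
]

shipment = {"weight_kg": 1200, "destination": "allowed", "priority": "high"}
print(evaluate(shipment, RULES))  # deterministic: same input, same output
```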
2. DeepMind AlphaFold (Protein Structure Prediction)
Paradigm: Evoformer architecture (biological-sequence-specific transformer) + diffusion-based 3D folding
Why not an LLM: Though it uses transformer-like attention, it processes amino acid sequences and physical constraints, not natural language. Its outputs are 3D protein coordinates, not text.
3. Spotify’s Recommendation Engine (Discover Weekly, Release Radar)
Paradigm: Collaborative filtering + matrix factorization + CNNs analyzing audio spectrograms
Why not an LLM: Core ranking and content matching rely on user-item interaction graphs and acoustic feature extraction. No language modeling or text generation is involved.
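Matrix factorization, the core of this paradigm, learns low-dimensional user and item vectors whose dot products approximate observed interactions. A toy sketch with made-up play counts (not Spotify's actual pipeline, which operates at vastly larger scale):

```python
import random

random.seed(0)

# Toy user-item play matrix (0 = unobserved). Invented data, not Spotify's.
R = [[5, 3, 0],
     [4, 0, 1],
     [0, 1, 5]]
n_users, n_items, k = 3, 3, 2

# Random low-rank factors: user vectors P, item vectors Q.
P = [[random.random() for _ in range(k)] for _ in range(n_users)]
Q = [[random.random() for _ in range(k)] for _ in range(n_items)]

lr, reg = 0.02, 0.01
for _ in range(2000):                      # plain SGD over observed cells
    for u in range(n_users):
        for i in range(n_items):
            if R[u][i] == 0:
                continue
            err = R[u][i] - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                P[u][f] += lr * (err * Q[i][f] - reg * P[u][f])
                Q[i][f] += lr * (err * P[u][f] - reg * Q[i][f])

def predict(u, i):
    """Predicted affinity of user u for item i -- including unobserved cells."""
    return sum(P[u][f] * Q[i][f] for f in range(k))
```

The unobserved cells (the zeros in `R`) now get nonzero predictions, which is exactly where recommendations come from.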
4. Netflix Content Recommendation System
Paradigm: Deep learning for metadata tagging + reinforcement learning for ranking + collaborative filtering
Why not an LLM: Optimizes for watch-time and engagement using behavioral signals and multimedia features. Its architecture is built for temporal sequence modeling and content-based filtering, not linguistic prediction.
5. Stripe Fraud Detection AI
Paradigm: Gradient boosting (XGBoost/LightGBM) + graph neural networks + anomaly detection
Why not an LLM: Analyzes transactional metadata, device fingerprints, and network graphs in real time. Operates on numerical/time-series data with strict latency and interpretability requirements unsuited to LLMs.
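The anomaly-detection flavor of this work can be shown with a median-absolute-deviation score over transaction amounts; a toy stand-in for the gradient-boosted and graph models named above, not Stripe's actual model:

```python
import statistics

def anomaly_scores(amounts):
    """Score each transaction by its deviation from the median.

    Uses the median absolute deviation (MAD), a robust spread estimate
    that is not skewed by the very outliers it is trying to find.
    """
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts) or 1e-9
    return [abs(a - med) / mad for a in amounts]

txns = [12.0, 9.5, 11.2, 10.8, 950.0, 10.1]   # one obvious outlier
scores = anomaly_scores(txns)
flagged = [i for i, s in enumerate(scores) if s > 10]
print(flagged)  # → [4]
```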
6. DeepMind AlphaGo & AlphaZero (Game AI)
Paradigm: Deep reinforcement learning + Monte Carlo Tree Search (MCTS) + CNNs for board/state evaluation
Why not an LLM: Learns through self-play in rule-based environments. Uses value/policy networks for spatial-state evaluation, not textual token prediction.
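The self-play idea is easy to demonstrate with flat Monte Carlo rollouts, the precursor to full MCTS: score each legal move by the win rate of many random playouts from the resulting position (toy tic-tac-toe, no neural networks or tree reuse involved):

```python
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, c, d in LINES:
        if b[a] and b[a] == b[c] == b[d]:
            return b[a]
    return None

def rollout(board, player):
    """Play uniformly random moves to the end; return the eventual winner."""
    b, p = board[:], player
    while True:
        w = winner(b)
        if w or all(b):
            return w
        b[random.choice([i for i in range(9) if not b[i]])] = p
        p = "O" if p == "X" else "X"

def best_move(board, player, n=500):
    """Score each legal move by its win rate over n random playouts."""
    def score(i):
        b = board[:]
        b[i] = player
        opponent = "O" if player == "X" else "X"
        return sum(rollout(b, opponent) == player for _ in range(n)) / n
    return max((i for i in range(9) if not board[i]), key=score)

random.seed(1)
# X threatens to complete the top row at index 2; rollouts find the win.
board = ["X", "X", None, None, "O", None, None, "O", None]
print(best_move(board, "X"))
```

AlphaGo replaces the random rollouts with learned value/policy networks and grows a search tree instead of scoring moves independently, but the rollout-based evaluation above is the same underlying idea.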
7. Apple FaceID & Core ML (On-Device AI)
Paradigm: Specialized neural networks for 3D facial mapping, health metrics, and on-device sensor fusion
Why not an LLM: Optimized for low-power, real-time vision and physiological signal processing. Runs on Apple’s Neural Engine with architectures designed for pixel/depth analysis, not language.
8. Waymo Autonomous Driving Perception & Control Stack
Paradigm: Computer vision (CNNs/Vision Transformers) + LiDAR/radar sensor fusion + control theory
Why not an LLM: Core deployment focuses on real-time spatial reasoning, object tracking, and trajectory planning. While some competitors experiment with LLMs for high-level reasoning, Waymo’s commercial stack remains vision-control driven.
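Sensor fusion of this kind classically rests on Kalman filtering. A one-dimensional sketch that fuses noisy position readings into a smoothed track estimate (illustrative only; a production perception stack fuses many sensors over full state vectors):

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Fuse noisy scalar position readings into a smoothed estimate.

    q: process noise variance (how much the true state may drift per step).
    r: measurement noise variance (how much we distrust each reading).
    """
    x, p = measurements[0], 1.0          # initial estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                           # predict: uncertainty grows
        k = p / (p + r)                  # Kalman gain: trust in the reading
        x += k * (z - x)                 # update toward the measurement
        p *= (1 - k)                     # uncertainty shrinks after update
        estimates.append(x)
    return estimates

noisy = [10.0, 10.4, 9.7, 10.2, 9.9, 10.1]   # jittery sensor readings
smoothed = kalman_1d(noisy)
```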
9. DeepMind AlphaStar (StarCraft II AI)
Paradigm: Deep reinforcement learning + CNNs for minimap and unit-state processing
Why not an LLM: Operates in a discrete action space with raw game-state inputs. Uses hierarchical policy networks and attention over game entities, not linguistic corpora.
10. Amazon Rekognition (Enterprise Computer Vision)
Paradigm: CNNs + object detection models + facial embedding networks
Why not an LLM: Processes images/video for face recognition, scene classification, and OCR. Its OCR component uses vision-based text detection (e.g., CRNN architectures), not language modeling.
11. Insilico Medicine / Generative Chemistry Platforms
Paradigm: Diffusion models + Graph Neural Networks (GNNs) for molecular design
Why not an LLM: Works with chemical graphs, bond topology, and physicochemical properties. Generates novel drug candidates by optimizing molecular fitness functions, not linguistic tokens.
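The graph view can be sketched with one round of message passing, the basic GNN operation: each atom's feature is updated with the sum of its neighbors' features over the bond topology (toy ethanol-fragment graph with atomic numbers as features; a real GNN uses learned vectors and many rounds):

```python
# Toy molecular graph: atoms as nodes, bonds as undirected edges.
# Node features here are just atomic numbers -- purely illustrative.
atoms = {"C1": 6, "C2": 6, "O": 8, "H": 1}
bonds = [("C1", "C2"), ("C2", "O"), ("O", "H")]

def message_pass(features, edges):
    """One round: each node adds up its neighbors' features."""
    out = {}
    for node, feat in features.items():
        neighbors = [b for a, b in edges if a == node] + \
                    [a for a, b in edges if b == node]
        out[node] = feat + sum(features[n] for n in neighbors)
    return out

updated = message_pass(atoms, bonds)
print(updated)  # each atom now encodes information about its bonded neighbors
```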
12. Boston Dynamics Spot & Advanced Robotics Control
Paradigm: Reinforcement learning + inverse kinematics + sensor fusion + control theory
Why not an LLM: Core AI handles locomotion, balance, and manipulation through physics-based simulation and real-time feedback loops. Relies on deterministic control algorithms, not probabilistic text generation.
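The feedback-loop idea is well illustrated by a PID controller, the workhorse of classical control. A minimal sketch driving a first-order plant toward a setpoint (hypothetical gains and plant model, not Boston Dynamics' controllers):

```python
def simulate_pid(setpoint=1.0, kp=2.0, ki=0.5, kd=0.1, dt=0.05, steps=400):
    """Drive a simple first-order system toward the setpoint with PID feedback."""
    x = 0.0                              # plant state (e.g. a joint angle)
    integral, prev_err = 0.0, setpoint - x
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt             # I term: accumulates steady error
        derivative = (err - prev_err) / dt   # D term: damps fast changes
        u = kp * err + ki * integral + kd * derivative  # control signal
        x += (u - x) * dt                # first-order plant response
        prev_err = err
    return x

final = simulate_pid()
```

Note the loop is fully deterministic: every control signal is a closed-form function of the error, which is why such controllers can be verified and certified in a way probabilistic generators cannot.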
13. Google Photos Object/Scene Detection
Paradigm: Computer vision models (Inception variants, Vision Transformers) for image indexing
Why not an LLM: Extracts pixel-level features to classify scenes, faces, and objects. Uses embedding spaces for similarity search, fundamentally different from semantic language modeling.
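Similarity search over embeddings reduces to nearest-neighbor lookup by cosine similarity. A toy sketch with made-up 3-d vectors (real systems use high-dimensional learned embeddings and approximate-nearest-neighbor indexes):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical image embeddings -- a real model emits hundreds of dimensions.
index = {
    "beach.jpg":    [0.9, 0.1, 0.0],
    "mountain.jpg": [0.1, 0.9, 0.2],
    "sunset.jpg":   [0.7, 0.3, 0.2],
}

def nearest(query, index):
    """Return the indexed image whose embedding is most similar to the query."""
    return max(index, key=lambda name: cosine(query, index[name]))

query = [0.85, 0.15, 0.05]     # embedding of a new beach-like photo
print(nearest(query, index))   # → beach.jpg
```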
14. DeepMind AlphaGeometry (Mathematical Reasoning)
Paradigm: Hybrid symbolic AI + neural networks + automated theorem provers
Why not an LLM: Explicitly designed to avoid LLMs to ensure formal, verifiable proofs. Combines neural pattern recognition with symbolic logic engines for geometry problem-solving.
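The symbolic side can be sketched as forward chaining: saturate a set of known facts under deduction rules until nothing new can be derived. Here, toy angle equalities closed under symmetry and transitivity (illustrative only, not DeepMind's actual deduction engine):

```python
def close_transitively(facts):
    """Saturate a set of equality facts (a, b) under symmetry and transitivity.

    Every derived fact follows mechanically from the inputs, so the
    resulting 'proof' is verifiable step by step -- unlike sampled text.
    """
    derived = set(facts) | {(b, a) for a, b in facts}
    changed = True
    while changed:
        changed = False
        for a, b in list(derived):
            for c, d in list(derived):
                if b == c and a != d and (a, d) not in derived:
                    derived.add((a, d))
                    derived.add((d, a))
                    changed = True
    return derived

# Hypothetical proof state: angle1 = angle2 and angle2 = angle3 are known.
facts = {("angle1", "angle2"), ("angle2", "angle3")}
closure = close_transitively(facts)
```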
15. Google Cloud Speech-to-Text / Microsoft Azure AI Speech
Paradigm: Acoustic models (Conformers, RNNs) + phonetic mapping + language-independent signal processing
Why not an LLM: Core deployment converts audio waveforms to text using deep acoustic modeling and phoneme-level alignment. While newer versions may add post-processing LLMs, the foundational speech recognition stack relies on signal processing and acoustic neural networks, not language generation.
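The alignment step can be sketched with greedy CTC-style decoding, a standard way acoustic models map per-frame phoneme scores to a label sequence: pick the best label per frame, collapse repeats, drop blanks (toy scores below, not a real model's output):

```python
BLANK = "_"   # CTC blank symbol: "no label emitted this frame"

def ctc_greedy_decode(frame_scores):
    """Pick the best label per frame, collapse repeats, drop blanks."""
    best = [max(frame, key=frame.get) for frame in frame_scores]
    out, prev = [], None
    for label in best:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return out

# Hypothetical per-frame scores for the word "hi": h h _ i i
frames = [
    {"h": 0.9,  "i": 0.05, BLANK: 0.05},
    {"h": 0.8,  "i": 0.1,  BLANK: 0.1},
    {"h": 0.1,  "i": 0.2,  BLANK: 0.7},
    {"h": 0.05, "i": 0.9,  BLANK: 0.05},
    {"h": 0.1,  "i": 0.7,  BLANK: 0.2},
]
print(ctc_greedy_decode(frames))  # → ['h', 'i']
```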
Key Takeaway
The AI landscape is highly modular. While LLMs dominate text-centric applications, prominent real-world AI systems in enterprise analytics, biotech, robotics, finance, autonomous systems, and multimedia rely on computer vision, reinforcement learning, traditional ML, symbolic reasoning, acoustic modeling, or specialized generative architectures (e.g., diffusion/GNNs). These paradigms are chosen for determinism, latency, domain specificity, or hardware efficiency that LLMs do not natively provide.