Digital Metabolism: Decoupling Logic from Facts via Regenerative Unlearning -- Towards a Pure Neural Logic Core
arXiv:2601.10810v1 Announce Type: new
Abstract: Large language models (LLMs) currently suffer from parameter entanglement, where general reasoning capabilities (logic) and specific factual knowledge (facts) exist in a superposition state within shared weights. This coupling leads to the "memory wal...
Towards Reliable ML Feature Engineering via Planning in Constrained-Topology of LLM Agents
arXiv:2601.10820v1 Announce Type: new
Abstract: Recent advances in code generation models have unlocked unprecedented opportunities for automating feature engineering, yet their adoption in real-world ML teams remains constrained by critical challenges: (i) the scarcity of datasets capturing the it...
Japanese AI Agent System on Human Papillomavirus Vaccination: System Design
arXiv:2601.10718v1 Announce Type: new
Abstract: Human papillomavirus (HPV) vaccine hesitancy poses significant public health challenges, particularly in Japan where proactive vaccination recommendations were suspended from 2013 to 2021. The resulting information gap is exacerbated by misinformation...
Do You Trust Me? Cognitive-Affective Signatures of Trustworthiness in Large Language Models
arXiv:2601.10719v1 Announce Type: new
Abstract: Perceived trustworthiness underpins how users navigate online information, yet it remains unclear whether large language models (LLMs), increasingly embedded in search, recommendation, and conversational systems, represent this construct in psychologic...
Building AI Agents to Improve Job Referral Requests to Strangers
arXiv:2601.10726v1 Announce Type: new
Abstract: This paper develops AI agents that help job seekers write effective requests for job referrals in a professional online community. The basic workflow consists of an improver agent that rewrites the referral request and an evaluator agent that measures...
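The improver/evaluator workflow described here can be sketched as a simple loop: rewrite, score, and stop once the score clears a threshold. The functions below are toy stand-ins for the LLM calls (the real agents, scoring criteria, and threshold are not specified in the abstract), so the structure is runnable but entirely illustrative.

```python
# Hedged sketch of an improve-then-evaluate agent loop. `improve` and
# `evaluate` are stand-ins for LLM calls; the scoring rule (word count)
# is a made-up proxy, not the paper's evaluator.
def improve(request):
    # Stand-in rewrite: append the kind of context a referral
    # request typically needs.
    return request.rstrip(".") + ", and here is why I am a fit: ..."

def evaluate(request):
    # Stand-in score in [0, 1]: longer, more specific requests score higher.
    return min(len(request.split()) / 20.0, 1.0)

def referral_pipeline(request, threshold=0.8, max_rounds=3):
    for _ in range(max_rounds):
        if evaluate(request) >= threshold:
            break
        request = improve(request)
    return request, evaluate(request)

req, score = referral_pipeline("Could you refer me for the role")
```

The loop terminates either when the evaluator is satisfied or after a fixed round budget, which keeps cost bounded when the improver stops making progress.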
ORBITFLOW: SLO-Aware Long-Context LLM Serving with Fine-Grained KV Cache Reconfiguration
arXiv:2601.10729v1 Announce Type: new
Abstract: Serving long-context LLMs is challenging because request lengths and batch composition vary during token generation, causing the memory footprint to fluctuate significantly at runtime. Offloading KV caches to host memory limits effective memory usage,...
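The fluctuating memory footprint the abstract describes comes from the KV cache, whose size grows linearly with sequence length and batch size. A standard back-of-envelope estimate (generic transformer accounting, not ORBITFLOW's code; the example model config is an assumption) shows why long contexts dominate GPU memory:

```python
# Per token, each layer stores one key and one value vector per KV head.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch,
                   bytes_per_elem=2):  # 2 bytes = fp16/bf16
    return (2 * n_layers * n_kv_heads * head_dim
            * seq_len * batch * bytes_per_elem)

# A hypothetical 7B-class config (32 layers, 32 KV heads, head_dim 128)
# at a 32k-token context, batch size 1:
gb = kv_cache_bytes(32, 32, 128, 32_768, 1) / 2**30  # = 16.0 GiB
```

Because `seq_len` and `batch` vary per decoding step, this footprint changes at runtime, which is the fluctuation that motivates fine-grained cache reconfiguration.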
CTHA: Constrained Temporal Hierarchical Architecture for Stable Multi-Agent LLM Systems
arXiv:2601.10738v1 Announce Type: new
Abstract: Recently, multi-time-scale agent architectures have extended the ubiquitous single-loop paradigm by introducing temporal hierarchies with distinct cognitive layers. While yielding substantial performance gains, this diversification fundamentally compr...
The breakthrough that makes robot faces feel less creepy
Humans pay enormous attention to lips during conversation, and robots have struggled badly to keep up. A new robot developed at Columbia Engineering learned realistic lip movements by watching its own reflection and studying human videos online. This allowed it to speak and sing with synchronized fa...
Social Determinants of Health Prediction for ICD-9 Code with Reasoning Models
arXiv:2601.09709v1 Announce Type: new
Abstract: Social Determinants of Health correlate with patient outcomes but are rarely captured in structured data. Recent attention has been given to automatically extracting these markers from clinical text to supplement diagnostic systems with knowledge of p...
The Geometry of Thought: Disclosing the Transformer as a Tropical Polynomial Circuit
arXiv:2601.09775v1 Announce Type: new
Abstract: We prove that the Transformer self-attention mechanism in the high-confidence regime ($\beta \to \infty$, where $\beta$ is an inverse temperature) operates in the tropical semiring (max-plus algebra). In particular, we show that taking the tropical li...
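The tropical limit claimed here can be checked numerically: as the inverse temperature $\beta$ grows, softmax attention collapses onto the single max-scoring key, i.e., a max-plus selection rather than a weighted blend. The snippet below is an illustration of that limit with made-up scores and values, not the paper's construction:

```python
import numpy as np

# As beta -> infinity, softmax attention over scores collapses to hard
# selection of the argmax key -- a tropical (max-plus) operation.
def attention(scores, values, beta):
    w = np.exp(beta * scores - np.max(beta * scores))  # stable softmax
    w = w / w.sum()
    return w @ values

scores = np.array([0.1, 0.9, 0.4])
values = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])

soft = attention(scores, values, beta=1.0)    # blended output
hard = attention(scores, values, beta=100.0)  # approx. tropical limit
# In the limit, the output equals the value at the max-scoring key.
assert np.allclose(hard, values[np.argmax(scores)], atol=1e-6)
```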
TimeSAE: Sparse Decoding for Faithful Explanations of Black-Box Time Series Models
arXiv:2601.09776v1 Announce Type: new
Abstract: As black box models and pretrained models gain traction in time series applications, understanding and explaining their predictions becomes increasingly vital, especially in high-stakes domains where interpretability and trust are essential. However, ...
arXiv:2601.09809v1 Announce Type: new
Abstract: Organizations and enterprises across domains such as healthcare, finance, and scientific research are increasingly required to extract collective intelligence from distributed, siloed datasets while adhering to strict privacy, regulatory, and sovereig...
arXiv:2601.09825v1 Announce Type: new
Abstract: We establish a lower bound on the eluder dimension of generalised linear model classes, showing that standard eluder dimension-based analysis cannot lead to first-order regret bounds. To address this, we introduce a localisation method for the eluder ...
AI Survival Stories: a Taxonomic Analysis of AI Existential Risk
arXiv:2601.09765v1 Announce Type: new
Abstract: Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two premise argumen...
GUI-Eyes: Tool-Augmented Perception for Visual Grounding in GUI Agents
arXiv:2601.09770v1 Announce Type: new
Abstract: Recent advances in vision-language models (VLMs) and reinforcement learning (RL) have driven progress in GUI automation. However, most existing methods rely on static, one-shot visual inputs and passive perception, lacking the ability to adaptively de...
PCN-Rec: Agentic Proof-Carrying Negotiation for Reliable Governance-Constrained Recommendation
arXiv:2601.09771v1 Announce Type: new
Abstract: Modern LLM-based recommenders can generate compelling ranked lists, but they struggle to reliably satisfy governance constraints such as minimum long-tail exposure or diversity requirements. We present PCN-Rec, a proof-carrying negotiation pipeline th...
Antisocial behavior towards large language model users: experimental evidence
arXiv:2601.09772v1 Announce Type: new
Abstract: The rapid spread of large language models (LLMs) has raised concerns about the social reactions they provoke. Prior research documents negative attitudes toward AI users, but it remains unclear whether such disapproval translates into costly action. W...
Improving Chain-of-Thought for Logical Reasoning via Attention-Aware Intervention
arXiv:2601.09805v1 Announce Type: new
Abstract: Modern logical reasoning with LLMs primarily relies on employing complex interactive frameworks that decompose the reasoning process into subtasks solved through carefully designed prompts or requiring external resources (e.g., symbolic solvers) to ex...
ParaRNN: Unlocking Parallel Training of Nonlinear RNNs for Large Language Models
Recurrent Neural Networks (RNNs) laid the foundation for sequence modeling, but their intrinsic sequential nature restricts parallel computation, creating a fundamental barrier to scaling. This has led to the dominance of parallelizable architectures like Transformers and, more recently, State Space...
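The parallelism barrier mentioned above can be made concrete: a *linear* recurrence $h_t = a_t h_{t-1} + b_t$ is associative and admits a parallel scan, whereas a nonlinear step (e.g., a `tanh` cell) breaks that associativity — which is the gap ParaRNN targets. The sketch below demonstrates only the well-known linear case (my own illustration, not ParaRNN's algorithm):

```python
import numpy as np

# Sequential evaluation of h_t = a_t * h_{t-1} + b_t, h_0 = 0.
def sequential_linear(a, b):
    h, out = 0.0, []
    for at, bt in zip(a, b):
        h = at * h + bt
        out.append(h)
    return np.array(out)

# Scan-style evaluation via prefix products/sums (valid for a_k != 0):
#   h_t = A_t * sum_{k<=t} b_k / A_k,  where A_t = prod_{k<=t} a_k.
def scan_linear(a, b):
    A = np.cumprod(a)
    return A * np.cumsum(b / A)

a = np.array([0.9, 0.8, 1.1, 0.7])
b = np.array([1.0, -0.5, 0.3, 0.2])
assert np.allclose(sequential_linear(a, b), scan_linear(a, b))
```

Replacing `a * h + b` with `tanh(W @ h + U @ x)` destroys this closed form, illustrating why nonlinear RNN training has resisted the parallelization that linear SSMs enjoy.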
The Data-Quality Illusion: Rethinking Classifier-Based Quality Filtering for LLM Pretraining
Large-scale models are pretrained on massive web-crawled datasets containing documents of mixed quality, making data filtering essential. A popular method is Classifier-based Quality Filtering (CQF), which trains a binary classifier to distinguish between pretraining data and a small, high-quality s...
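The CQF idea — score each document by how much it resembles a small high-quality reference set versus the web crawl, then keep the top-scoring fraction — can be sketched with a token-level log-likelihood-ratio classifier. The toy corpora and smoothing below are my own assumptions; real CQF pipelines train a learned binary classifier on far larger data:

```python
from collections import Counter
import math

def token_counts(docs):
    counts = Counter(t for d in docs for t in d.split())
    return counts, sum(counts.values()), set(counts)

# Per-token log-likelihood ratio of "high-quality" vs "web" unigram
# models, with add-alpha smoothing; averaged over the document.
def quality_score(doc, hq_docs, web_docs, alpha=1.0):
    hq_c, hq_n, v1 = token_counts(hq_docs)
    web_c, web_n, v2 = token_counts(web_docs)
    V = len(v1 | v2 | set(doc.split()))
    toks = doc.split()
    score = sum(
        math.log((hq_c[t] + alpha) / (hq_n + alpha * V))
        - math.log((web_c[t] + alpha) / (web_n + alpha * V))
        for t in toks
    )
    return score / max(len(toks), 1)

hq = ["the theorem follows from the lemma", "we prove the bound"]
web = ["click here to win", "buy now cheap deals"]
assert quality_score("we prove the theorem", hq, web) > \
       quality_score("click to buy now", hq, web)
```

A filtering pipeline would then threshold or rank these scores — precisely the step whose effect on downstream pretraining the paper re-examines.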
OptiMind: A small language model with optimization expertise
OptiMind is a small language model that converts business operations challenges, described in natural language, into mathematical formulations that optimization software can solve. It reduces formulation time and errors, and enables fast, privacy-preserving local use.
Spectral Generative Flow Models: A Physics-Inspired Replacement for Vectorized Large Language Models
arXiv:2601.08893v1 Announce Type: new
Abstract: We introduce Spectral Generative Flow Models (SGFMs), a physics-inspired alternative to transformer-based large language models. Instead of representing text or video as sequences of discrete tokens processed by attention, SGFMs treat generation as th...