Multiplicative Orthogonal Sequential Editing for Language Models
arXiv:2601.07873v1 Announce Type: new
Abstract: Knowledge editing aims to efficiently modify the internal knowledge of large language models (LLMs) without compromising their other capabilities. The prevailing editing paradigm, which appends an update matrix to the original parameter matrix, has be...
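The abstract contrasts the prevailing additive editing paradigm (appending an update matrix to the original weights) with a multiplicative, orthogonality-based alternative. Since the abstract is truncated, the paper's exact update rule is not shown here; the following is a minimal illustrative sketch, assuming a multiplicative edit of the form W' = W @ R with R orthogonal (here a single hypothetical Givens rotation), versus the additive W' = W + ΔW:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))  # original parameter matrix

# Additive paradigm (prevailing): append an update matrix.
delta = 0.01 * rng.standard_normal((8, 8))
W_additive = W + delta

# Multiplicative sketch: W' = W @ R with R orthogonal; here a single
# Givens rotation mixing two hidden dimensions (angle is hypothetical).
theta = 0.05
R = np.eye(8)
R[[2, 2, 5, 5], [2, 5, 2, 5]] = [np.cos(theta), -np.sin(theta),
                                 np.sin(theta), np.cos(theta)]
W_mult = W @ R

# An orthogonal factor preserves the Frobenius norm (and singular values)
# of W exactly, which an additive update does not guarantee.
assert np.allclose(R @ R.T, np.eye(8))
assert np.isclose(np.linalg.norm(W_mult), np.linalg.norm(W))
```

The norm-preservation check hints at why an orthogonal multiplicative factor is attractive for sequential editing: repeated edits compose as rotations rather than accumulating additive drift.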
Bridging the Trust Gap: Clinician-Validated Hybrid Explainable AI for Maternal Health Risk Assessment in Bangladesh
arXiv:2601.07866v1 Announce Type: new
Abstract: While machine learning shows promise for maternal health risk prediction, clinical adoption in resource-constrained settings faces a critical barrier: lack of explainability and trust. This study presents a hybrid explainable AI (XAI) framework combin...
Executable Ontologies in Game Development: From Algorithmic Control to Semantic World Modeling
arXiv:2601.07964v1 Announce Type: new
Abstract: This paper examines the application of Executable Ontologies (EO), implemented through the boldsea framework, to game development. We argue that EO represents a paradigm shift: a transition from algorithmic behavior programming to semantic world model...
When Models Know When They Do Not Know: Calibration, Cascading, and Cleaning
arXiv:2601.07965v1 Announce Type: new
Abstract: When a model knows when it does not know, many possibilities emerge. The first question is how to enable a model to recognize that it does not know. A promising approach is to use confidence, computed from the model's internal signals, to reflect its ...
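The abstract describes using confidence computed from a model's internal signals, with cascading as one downstream use. A minimal sketch of that idea, assuming max-softmax probability as the confidence signal and a hypothetical threshold for deferring to a larger model:

```python
import numpy as np

def confidence(logits):
    """Max softmax probability as a simple internal confidence signal."""
    z = logits - logits.max()          # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

def cascade(logits_small, fallback, threshold=0.8):
    """Answer with the small model when confident, else defer to a
    larger model via `fallback` (threshold is illustrative)."""
    if confidence(logits_small) >= threshold:
        return int(np.argmax(logits_small)), "small"
    return fallback(), "large"

# Peaked logits: the small model is confident and answers itself.
print(cascade(np.array([8.0, 0.1, 0.2]), fallback=lambda: 1))
# Flat logits: low confidence, so the query is deferred.
print(cascade(np.array([1.0, 0.9, 1.1]), fallback=lambda: 1))
```

Real systems often calibrate the confidence score (e.g. temperature scaling) before thresholding, since raw softmax probabilities from LLMs tend to be overconfident.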
Reasoning over Precedents Alongside Statutes: Case-Augmented Deliberative Alignment for LLM Safety
arXiv:2601.08000v1 Announce Type: new
Abstract: Ensuring that Large Language Models (LLMs) adhere to safety principles without refusing benign requests remains a significant challenge. While OpenAI introduces deliberative alignment (DA) to enhance the safety of its o-series models through reasoning...
This AI spots dangerous blood cells doctors often miss
A generative AI system can now analyze blood cells with greater accuracy and confidence than human experts, detecting subtle signs of diseases like leukemia. It not only spots rare abnormalities but also recognizes its own uncertainty, making it a powerful support tool for clinicians.
Tree-Preconditioned Differentiable Optimization and Axioms as Layers
arXiv:2601.06036v1 Announce Type: new
Abstract: This paper introduces a differentiable framework that embeds the axiomatic structure of Random Utility Models (RUM) directly into deep neural networks. Although projecting empirical choice data onto the RUM polytope is NP-hard in general, we uncover a...
CrossTrafficLLM: A Human-Centric Framework for Interpretable Traffic Intelligence via Large Language Model
arXiv:2601.06042v1 Announce Type: new
Abstract: While accurate traffic forecasting is vital for Intelligent Transportation Systems (ITS), effectively communicating predicted conditions via natural language for human-centric decision support remains a challenge and is often handled separately. To ad...
Enabling Long FFT Convolutions on Memory-Constrained FPGAs via Chunking
arXiv:2601.06065v1 Announce Type: new
Abstract: The need for long-context reasoning has led to alternative neural network architectures besides Transformers and self-attention, a popular model being Hyena, which employs causal 1D-convolutions implemented with FFTs. Long convolutions enable efficien...
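The abstract motivates chunking long FFT convolutions so they fit memory-constrained hardware. The standard way to do this is overlap-add: convolve fixed-size chunks with a bounded FFT length and sum the overlapping tails. A sketch of that decomposition (the paper's FPGA-specific scheme may differ):

```python
import numpy as np

def fft_conv_chunked(x, h, chunk=256):
    """1D convolution via overlap-add: process x in chunks so the FFT
    size is bounded by chunk + len(h) - 1 instead of len(x)."""
    n = chunk + len(h) - 1
    H = np.fft.rfft(h, n)                       # filter spectrum, reused
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), chunk):
        seg = x[start:start + chunk]
        Y = np.fft.irfft(np.fft.rfft(seg, n) * H, n)
        y[start:start + len(seg) + len(h) - 1] += Y[:len(seg) + len(h) - 1]
    return y[:len(x)]                            # causal, input-length output

rng = np.random.default_rng(1)
x, h = rng.standard_normal(1000), rng.standard_normal(64)
assert np.allclose(fft_conv_chunked(x, h), np.convolve(x, h)[:1000])
```

Because each chunk's FFT length is fixed, peak memory depends on the chunk size rather than the sequence length, which is the property the paper exploits on FPGAs.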
Filtering Beats Fine Tuning: A Bayesian Kalman View of In Context Learning in LLMs
arXiv:2601.06100v1 Announce Type: new
Abstract: We present a theory-first framework that interprets inference-time adaptation in large language models (LLMs) as online Bayesian state estimation. Rather than modeling rapid adaptation as implicit optimization or meta-learning, we formulate task- and ...
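The abstract frames in-context adaptation as online Bayesian state estimation rather than implicit optimization. A toy illustration of that view, assuming a scalar latent task parameter and treating each in-context demonstration as a noisy observation fed through Kalman updates (all quantities here are hypothetical):

```python
import numpy as np

def kalman_icl(observations, obs_noise=0.5, prior_mean=0.0, prior_var=10.0):
    """Treat each in-context example as a noisy observation of a latent
    task parameter; the posterior is updated online with no gradient
    steps (the 'filtering, not fine-tuning' view)."""
    mean, var = prior_mean, prior_var
    for y in observations:
        gain = var / (var + obs_noise)   # Kalman gain
        mean = mean + gain * (y - mean)  # posterior mean update
        var = (1 - gain) * var           # posterior variance shrinks
    return mean, var

rng = np.random.default_rng(2)
true_task = 3.0
demos = true_task + np.sqrt(0.5) * rng.standard_normal(20)
mean, var = kalman_icl(demos)
# The posterior concentrates near the true task parameter as demos accrue.
```

The appeal of this framing is that adaptation speed falls out of the gain schedule: early demonstrations move the estimate a lot, later ones refine it, with no weight updates anywhere.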
"They parted illusions -- they parted disclaim marinade": Misalignment as structural fidelity in LLMs
arXiv:2601.06047v1 Announce Type: new
Abstract: The prevailing technical literature in AI Safety interprets scheming and sandbagging behaviors in large language models (LLMs) as indicators of deceptive agency or hidden objectives. This transdisciplinary philosophical essay proposes an alternative r...
From RLHF to Direct Alignment: A Theoretical Unification of Preference Learning for Large Language Models
arXiv:2601.06108v1 Announce Type: new
Abstract: Aligning large language models (LLMs) with human preferences has become essential for safe and beneficial AI deployment. While Reinforcement Learning from Human Feedback (RLHF) established the dominant paradigm, a proliferation of alternatives -- Dire...
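Among the direct-alignment alternatives to RLHF that the abstract says it unifies, the best-known is Direct Preference Optimization (DPO). As a concrete anchor for the family, here is a sketch of the per-pair DPO objective, assuming summed token log-probabilities for the chosen (w) and rejected (l) responses (the numeric inputs are made up):

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO objective for one preference pair:
    -log sigmoid(beta * (policy-vs-reference log-ratio margin)).
    logp_* are log-probs under the policy; ref_logp_* under the
    frozen reference model."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))  # -log sigmoid

# If the policy prefers the chosen response more strongly than the
# reference does, the margin is positive and the loss is small.
good = dpo_loss(-10.0, -14.0, -12.0, -12.0)   # policy agrees with label
bad = dpo_loss(-14.0, -10.0, -12.0, -12.0)    # policy disagrees
```

Much of the unification literature varies exactly this margin term (link function, reference model, length normalization), so the one-line `margin` expression is where the family members differ.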
Translating Centralized AI Principles Into Localized Practice
Scholars develop a framework in collaboration with luxury goods multinational LVMH that lays out how large companies can flexibly deploy principles on the responsible use of AI across business units worldwide.
MoEBlaze: Breaking the Memory Wall for Efficient MoE Training on Modern GPUs
arXiv:2601.05296v1 Announce Type: new
Abstract: The pervasive "memory wall" bottleneck is significantly amplified in modern large-scale Mixture-of-Experts (MoE) architectures. MoE's inherent architectural sparsity leads to sparse arithmetic compute and also introduces substantial activation memory ...
TIME: Temporally Intelligent Meta-reasoning Engine for Context Triggered Explicit Reasoning
arXiv:2601.05300v1 Announce Type: new
Abstract: Reasoning-oriented large language models often expose explicit "thinking" as long, turn-global traces at the start of every response, either always on or toggled externally at inference time. While useful for arithmetic, programming, and problem solvi...
Ontology Neural Networks for Topologically Conditioned Constraint Satisfaction
arXiv:2601.05304v1 Announce Type: new
Abstract: Neuro-symbolic reasoning systems face fundamental challenges in maintaining semantic coherence while satisfying physical and logical constraints. Building upon our previous work on Ontology Neural Networks, we present an enhanced framework that integr...
When the Server Steps In: Calibrated Updates for Fair Federated Learning
arXiv:2601.05352v1 Announce Type: new
Abstract: Federated learning (FL) has emerged as a transformative distributed learning paradigm, enabling multiple clients to collaboratively train a global model under the coordination of a central server without sharing their raw training data. While FL offer...
GlyRAG: Context-Aware Retrieval-Augmented Framework for Blood Glucose Forecasting
arXiv:2601.05353v1 Announce Type: new
Abstract: Accurate forecasting of blood glucose from continuous glucose monitoring (CGM) is essential for preventing dysglycemic events, thus enabling proactive diabetes management. However, current forecasting models treat blood glucose readings captured using CGMs as a numerical sequence, e...
Naiad: Novel Agentic Intelligent Autonomous System for Inland Water Monitoring
arXiv:2601.05256v1 Announce Type: new
Abstract: Inland water monitoring is vital for safeguarding public health and ecosystems, enabling timely interventions to mitigate risks. Existing methods often address isolated sub-problems such as cyanobacteria, chlorophyll, or other quality indicators separ...
Mathematical Knowledge Graph-Driven Framework for Equation-Based Predictive and Reliable Additive Manufacturing
arXiv:2601.05298v1 Announce Type: new
Abstract: Additive manufacturing (AM) relies critically on understanding and extrapolating process-property relationships; however, existing data-driven approaches remain limited by fragmented knowledge representations and unreliable extrapolation under sparse ...
Effects of personality steering on cooperative behavior in Large Language Model agents
arXiv:2601.05302v1 Announce Type: new
Abstract: Large language models (LLMs) are increasingly used as autonomous agents in strategic and social interactions. Although recent studies suggest that assigning personality traits to LLMs can influence their behavior, how personality steering affects coop...
Improving Enzyme Prediction with Chemical Reaction Equations by Hypergraph-Enhanced Knowledge Graph Embeddings
arXiv:2601.05330v1 Announce Type: new
Abstract: Predicting enzyme-substrate interactions has long been a fundamental problem in biochemistry and metabolic engineering. While existing methods can leverage databases of expert-curated enzyme-substrate pairs for models to learn from known pair intera...
The Persona Paradox: Medical Personas as Behavioral Priors in Clinical Language Models
arXiv:2601.05376v1 Announce Type: new
Abstract: Persona conditioning can be viewed as a behavioral prior for large language models (LLMs) and is often assumed to confer expertise and improve safety in a monotonic manner. However, its effects on high-stakes clinical decision-making remain poorly cha...
MoEs Are Stronger than You Think: Hyper-Parallel Inference Scaling with RoE
The generation quality of large language models (LLMs) is often improved by utilizing inference-time sequence-level scaling methods (e.g., Chain-of-Thought). We introduce hyper-parallel scaling, a complementary framework that improves prediction quality at the token level. Hyper-parallel scaling com...