Evoke Security Raises $4M Pre-Seed Round to Secure the Agentic Workforce
Crosspoint Capital Partners leads investment to give security teams visibility and control over AI agents operating across the enterprise. Evoke Security, a cybersecurity startup securing the agentic workforce, today announced $4 million in pre-seed funding led by Crosspoint Capital Partners, LP (Cro...
Company unveils new Agentic AI capabilities in Invoice Lifecycle Management Platform, paving the way to autonomous AP. The future of finance is intelligent, and Basware, the global leader in Invoice Lifecycle Management, is delivering it today. The pioneer and leader in accounts payable unveiled new AI a...
Tensor Network Generator-Enhanced Optimization for Traveling Salesman Problem
arXiv:2602.20175v1 Announce Type: new
Abstract: We present an application of the tensor network generator-enhanced optimization (TN-GEO) framework to address the traveling salesman problem (TSP), a fundamental combinatorial optimization challenge. Our approach employs a tensor network Born machine ...
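The abstract is truncated, but the objective being optimized is the standard one: minimize closed-tour length over a set of cities. A minimal brute-force sketch of that objective (not the TN-GEO method itself, which instead samples candidate tours from a tensor network Born machine; the coordinates below are made up for illustration) could look like:

```python
import itertools, math, random

def tour_length(tour, coords):
    """Total length of a closed tour over 2-D city coordinates."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

random.seed(0)
coords = [(random.random(), random.random()) for _ in range(7)]

# Brute force over all permutations fixing city 0 (feasible only for tiny n;
# generator-enhanced methods aim to search this space far more cheaply).
best = min(itertools.permutations(range(1, 7)),
           key=lambda p: tour_length((0,) + p, coords))
print(tour_length((0,) + best, coords))
```

The exponential blow-up of this permutation search is exactly what makes TSP a useful testbed for generative-model-guided optimizers.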
MoBiQuant: Mixture-of-Bits Quantization for Token-Adaptive Elastic LLMs
arXiv:2602.20191v1 Announce Type: new
Abstract: Changing runtime complexity on cloud and edge devices necessitates elastic large language model (LLM) deployment, where an LLM can be inferred with various quantization precisions based on available computational resources. However, it has been observ...
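As a rough illustration of the precision/accuracy trade-off that elastic quantized inference navigates, here is plain uniform symmetric quantization at selectable bit widths in NumPy (a generic sketch, not the MoBiQuant scheme; the weight array is synthetic):

```python
import numpy as np

def quantize(weights, bits):
    """Uniform symmetric quantization of a weight array to a given bit width."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale                           # dequantize for error measurement

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
for bits in (8, 4, 2):
    err = np.mean((w - quantize(w, bits)) ** 2)
    print(f"{bits}-bit MSE: {err:.6f}")
```

Lower bit widths cut memory and compute but raise reconstruction error, which is the tension a token-adaptive scheme tries to manage at runtime.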
FedAvg-Based CTMC Hazard Model for Federated Bridge Deterioration Assessment
arXiv:2602.20194v1 Announce Type: new
Abstract: Bridge periodic inspection records contain sensitive information about public infrastructure, making cross-organizational data sharing impractical under existing data governance constraints. We propose a federated framework for estimating a Continuous...
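The FedAvg aggregation step such a framework builds on can be sketched in a few lines: each organization trains locally and only parameter vectors, weighted by local sample counts, are shared. The agencies, sizes, and parameters below are made up for illustration:

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """FedAvg: average client parameter vectors weighted by local sample counts."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Three hypothetical agencies with different inspection-record counts.
params = [np.array([0.10, 0.20]), np.array([0.30, 0.40]), np.array([0.50, 0.60])]
sizes = [100, 300, 600]
global_params = fedavg(params, sizes)
print(global_params)  # weighted toward the largest agency's estimates
```

Only these aggregated parameters cross organizational boundaries, which is what lets the raw inspection records stay local.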
Controllable Exploration in Hybrid-Policy RLVR for Multi-Modal Reasoning
arXiv:2602.20197v1 Announce Type: new
Abstract: Reinforcement Learning with verifiable rewards (RLVR) has emerged as a primary learning paradigm for enhancing the reasoning capabilities of multi-modal large language models (MLLMs). However, during RL training, the enormous state space of MLLM and s...
IMOVNO+: A Regional Partitioning and Meta-Heuristic Ensemble Framework for Imbalanced Multi-Class Learning
arXiv:2602.20199v1 Announce Type: new
Abstract: Class imbalance, overlap, and noise degrade data quality, reduce model reliability, and limit generalization. Although widely studied in binary classification, these issues remain underexplored in multi-class settings, where complex inter-class relati...
An artificial intelligence framework for end-to-end rare disease phenotyping from clinical notes using large language models
arXiv:2602.20324v1 Announce Type: new
Abstract: Phenotyping is fundamental to rare disease diagnosis, but manual curation of structured phenotypes from clinical notes is labor-intensive and difficult to scale. Existing artificial intelligence approaches typically optimize individual components of p...
DMCD: Semantic-Statistical Framework for Causal Discovery
arXiv:2602.20333v1 Announce Type: new
Abstract: We present DMCD (DataMap Causal Discovery), a two-phase causal discovery framework that integrates LLM-based semantic drafting from variable metadata with statistical validation on observational data. In Phase I, a large language model proposes a spar...
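The statistical-validation phase of such a pipeline can be illustrated with a standard partial-correlation conditional-independence check (a generic sketch, not DMCD's actual test; the chain structure z → x and z → y is synthetic):

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y controlling for z, via OLS residuals."""
    Z = np.column_stack([np.ones(len(z)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Common cause z drives both x and y: they correlate marginally,
# but are (approximately) independent once z is controlled for.
rng = np.random.default_rng(1)
z = rng.normal(size=5000)
x = z + 0.5 * rng.normal(size=5000)
y = z + 0.5 * rng.normal(size=5000)
print(abs(np.corrcoef(x, y)[0, 1]))   # large marginal correlation
print(abs(partial_corr(x, y, z)))     # near zero given z
```

A validator like this can prune an LLM-drafted edge between x and y once conditioning on z explains the dependence away.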
Diffusion Modulation via Environment Mechanism Modeling for Planning
arXiv:2602.20422v1 Announce Type: new
Abstract: Diffusion models have shown promising capabilities in trajectory generation for planning in offline reinforcement learning (RL). However, conventional diffusion-based planning methods often fail to account for the fact that generating trajectories in ...
Implicit Intelligence -- Evaluating Agents on What Users Don't Say
arXiv:2602.20424v1 Announce Type: new
Abstract: Real-world requests to AI agents are fundamentally underspecified. Natural human communication relies on shared context and unstated constraints that speakers expect listeners to infer. Current agentic benchmarks test explicit instruction-following bu...
Meta AI Open Sources GCM for Better GPU Cluster Monitoring to Ensure High Performance AI Training and Hardware Reliability
While the tech world obsesses over the latest Llama checkpoints, a much grittier battle is being fought in the basements of data centers. As AI models scale to trillions of parameters, the clusters required to train them have become some of the most complex and fragile machines on the planet. Meta A...
Closing the Gap Between Text and Speech Understanding in LLMs
Large Language Models (LLMs) can be adapted to extend their text capabilities to speech inputs. However, these speech-adapted LLMs consistently underperform their text-based counterparts—and even cascaded pipelines—on language understanding tasks. We term this shortfall the text-speech understanding...
Our latest threat report examines how malicious actors combine AI models with websites and social platforms—and what it means for detection and defense.
A.R.I.S.: Automated Recycling Identification System for E-Waste Classification Using Deep Learning
Traditional electronic recycling processes suffer from significant resource loss due to inadequate material separation and identification capabilities, limiting material recovery. We present A.R.I.S. (Automated Recycling Identification System), a low-cost, portable sorter for shredded e-waste that a...
Constructive Circuit Amplification: Improving Math Reasoning in LLMs via Targeted Sub-Network Updates
Prior studies investigating the internal workings of LLMs have uncovered sparse subnetworks, often referred to as circuits, that are responsible for performing specific tasks. Additionally, it has been shown that model performance improvement through fine-tuning often results from the strengthening ...
Reusing Pre-Training Data at Test Time is a Compute Multiplier
Large language models learn from their vast pre-training corpora, gaining the ability to solve an ever-increasing variety of tasks; yet although researchers work to improve these datasets, there is little effort to understand how efficient the pre-training apparatus is at extracting ideas and knowle...
A Coding Implementation to Simulate Practical Byzantine Fault Tolerance with Asyncio, Malicious Nodes, and Latency Analysis
In this tutorial, we implement an end-to-end Practical Byzantine Fault Tolerance (PBFT) simulator using asyncio. We model a realistic distributed network with asynchronous message passing, configurable delays, and Byzantine nodes that intentionally deviate from the protocol. By explicitly implementi...
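The tutorial's full code isn't reproduced here, but the core idea, asyncio queues as the network, one Byzantine replica corrupting its prepare messages, and commit on a 2f+1 quorum, can be sketched as follows (names and message format are illustrative, not the tutorial's actual code):

```python
import asyncio, random

N, F = 4, 1                      # 4 replicas tolerate F = 1 Byzantine fault
QUORUM = 2 * F + 1               # matching prepares needed to commit

async def replica(rid, inbox, network, byzantine, decided):
    # PRE-PREPARE: replica 0 acts as primary and broadcasts the request.
    if rid == 0:
        for q in network:
            await q.put(("pre-prepare", rid, "tx42"))
    _, _, value = await inbox.get()
    # PREPARE: a Byzantine replica broadcasts a corrupted value instead.
    sent = "BAD" if byzantine else value
    for q in network:
        await asyncio.sleep(random.uniform(0, 0.01))   # simulated network delay
        await q.put(("prepare", rid, sent))
    # COMMIT: decide once a quorum of matching prepares arrives.
    counts = {}
    while True:
        _, _, v = await inbox.get()
        counts[v] = counts.get(v, 0) + 1
        if counts[v] >= QUORUM:
            decided[rid] = v
            return

async def main():
    network = [asyncio.Queue() for _ in range(N)]   # one inbox per replica
    decided = {}
    await asyncio.gather(*(replica(i, network[i], network, i == N - 1, decided)
                           for i in range(N)))
    return decided

decided = asyncio.run(main())
print(decided)
```

With n = 4 and one faulty node, the three honest prepares always form a quorum, so every replica converges on the honest value despite the corrupted message and randomized delays.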
Uber CEO Dara Khosrowshahi said the company’s employees have gone all in on AI, going so far as to build a chatbot of him that they use to practice their pitches.
Optimizing Token Generation in PyTorch Decoder Models
Hiding host-device synchronization via CUDA stream interleaving
Alibaba Qwen Team Releases Qwen 3.5 Medium Model Series: A Production Powerhouse Proving that Smaller AI Models are Smarter
The development of large language models (LLMs) has been defined by the pursuit of raw scale. While increasing parameter counts into the trillions initially drove performance gains, it also introduced significant infrastructure overhead and diminishing marginal utility. The release of the Qwen 3.5 M...
Vouched Launches Agent Checkpoint to Establish Trust in the Age of AI Agents
Agent Checkpoint brings transparency, security, and control to the new era of AI-to-AI commerce. Vouched, the leader in AI-powered identity verification, today announced Agent Checkpoint, a groundbreaking platform that helps organizations confidently engage in the next frontier of digital transforma...