Explore our curated directory of top-rated AI tools, in-depth courses, and latest research papers.

Learn to build working web applications by describing ideas in words and letting AI transform them into apps, no coding required.

Learn how generative AI works, its real-world uses, and how to apply it effectively in your life and career.

Explore the foundations of deep learning in this introductory lecture from MIT's 6.S191 course, led by Alexander Amini.

Learn how Indian engineers can overcome cultural and communication hurdles to secure high-paying US remote jobs, even with zero experience.

Learn to build a private AI assistant using OpenClaw and Ollama, enabling local, powerful conversational AI solutions.

Explore the cutting edge of AI, from groundbreaking research to real-world applications and future implications.
by anthropics
A comprehensive collection of notebooks and recipes offering copyable code and guides for building fun and effective AI applications with Claude.
by microsoft
Learn AI fundamentals with Microsoft's 12-week, 24-lesson curriculum covering neural networks, deep learning, computer vision, and NLP.
by microsoft
A free Microsoft course offering 12 lessons to help beginners learn to build intelligent AI agents and master agentic design patterns.
by openai
Provides example code and guides for efficiently accomplishing common tasks using the OpenAI API, primarily in Python.

"Automated interpretability systems aim to reduce the need for human labor and scale analysis to increasingly large models and diverse tasks. Recent efforts toward this goal leverage large language models (LLMs) at increasing levels of autonomy, ranging from fixed one-shot workflows to fully autonomous interpretability agents. This shift creates a corresponding need to scale evaluation approaches to keep pace with both the volume and complexity of generated explanations. We investigate this challenge in the context of automated circuit analysis -- explaining the roles of model components when performing specific tasks. To this end, we build an agentic system in which a research agent iteratively designs experiments and refines hypotheses. When evaluated against human expert explanations across six circuit analysis tasks in the literature, the system appears competitive. However, closer examination reveals several pitfalls of replication-based evaluation: human expert explanations can be subjective or incomplete, outcome-based comparisons obscure the research process, and LLM-based systems may reproduce published findings via memorization or informed guessing. To address some of these pitfalls, we propose an unsupervised intrinsic evaluation based on the functional interchangeability of model components. Our work demonstrates fundamental challenges in evaluating complex automated interpretability systems and reveals key limitations of replication-based evaluation."
"Theory of Mind (ToM) reasoning with Large Language Models (LLMs) requires inferring how people's implicit, evolving beliefs shape what they seek and how they act under uncertainty -- especially in high-stakes settings such as disaster response, emergency medicine, and human-in-the-loop autonomy. Prior approaches either prompt LLMs directly or use latent-state models that treat beliefs as static and independent, often producing incoherent mental models over time and weak reasoning in dynamic contexts. We introduce a structured cognitive trajectory model for LLM-based ToM that represents mental state as a dynamic belief graph, jointly inferring latent beliefs, learning their time-varying dependencies, and linking belief evolution to information seeking and decisions. Our model contributes (i) a novel projection from textualized probabilistic statements to consistent probabilistic graphical model updates, (ii) an energy-based factor graph representation of belief interdependencies, and (iii) an ELBO-based objective that captures belief accumulation and delayed decisions. Across multiple real-world disaster evacuation datasets, our model significantly improves action prediction and recovers interpretable belief trajectories consistent with human reasoning, providing a principled module for augmenting LLMs with ToM in high-uncertainty environment. [this https URL](https://anonymous.4open.science/r/ICML_submission-6373/)"