Discoveai

Discover curated AI tools, courses, research papers & tutorials. Your gateway to staying ahead in artificial intelligence.

© 2026 Discoveai. All rights reserved.

Built with ❤️ for the AI community
Your Ultimate AI Resource Hub

Discover the Best AI Tools & Resources

Explore our curated directory of top-rated AI tools, in-depth courses, and the latest research papers.

Featured Courses

Build with Andrew
Andrew Ng · Beginner · 0.5h · Certificate included

Learn to build working web applications by describing ideas in words and letting AI transform them into apps, no coding required.

Generative AI for Everyone
Andrew Ng · Beginner · 3h · Certificate included

Learn how generative AI works, its real-world uses, and how to apply it effectively in your life and career.

Trending Videos

MIT Introduction to Deep Learning | 6.S191
Alexander Amini · 56 min · YouTube

Explore the foundations of deep learning in this introductory lecture from MIT's 6.S191 course, led by Alexander Amini.

How to land US Remote Job with 0 Experience?
Ayush Singh · 15 min · YouTube

Learn how Indian engineers can overcome cultural and communication hurdles to secure high-paying US remote jobs, even with zero experience.

Build Your Own Private Assistant With OpenClaw And Ollama
Krish Naik · 22 min · YouTube

Learn to build a private AI assistant using OpenClaw and Ollama, enabling local, powerful conversational AI solutions.

Popular Podcasts

The Artificial Intelligence Show
Dr. Alex Smarter

Explore the cutting edge of AI, from groundbreaking research to real-world applications and future implications.

Top GitHub Repositories

Claude Cookbooks
by anthropics · Jupyter Notebook · 36,662 stars · 3,980 forks

A comprehensive collection of notebooks and recipes, offering copyable code and guides to build fun and effective AI applications with Claude.

Microsoft AI for Beginners
by microsoft · Jupyter Notebook · 46,317 stars · 9,474 forks

Learn AI fundamentals with Microsoft's 12-week, 24-lesson curriculum covering neural networks, deep learning, computer vision, and NLP.

Microsoft AI Agents for Beginners
by microsoft · Jupyter Notebook · 55,373 stars · 19,124 forks

A free Microsoft course offering 12 lessons to help beginners learn to build intelligent AI agents and master agentic design patterns.

OpenAI Cookbook
by openai · Jupyter Notebook · 72,427 stars · 12,215 forks

Provides example code and guides for efficiently accomplishing common tasks using the OpenAI API, primarily in Python.

Latest AI Tools

Claude
Verified · New · Subscription

Claude is an advanced AI assistant by Anthropic, designed to be a thinking partner for complex problem-solving.

Tags: llm, ai-assistant, text-generation +5

Claude Code by Anthropic
Verified · New · Subscription

An AI coding agent for developers, available across terminal, IDE, and web for building, debugging, and shipping code.

Tags: ai-coding-agent, code-assistant, developer-tools +5

Midjourney
Verified · New · Subscription

Midjourney generates stunning AI-powered images and art, helping amplify the human spirit through imagination and beauty.

Tags: ai-art, image-generation, creative-ai +4

FLUX.2
Verified · New · Freemium

Production-grade AI model for 4MP photorealistic image generation and multi-reference editing by Black Forest Labs.

Tags: image-generation, image-editing, ai-model +5

Research Papers

Pitfalls in Evaluating Interpretability Agents

"Automated interpretability systems aim to reduce the need for human labor and scale analysis to increasingly large models and diverse tasks. Recent efforts toward this goal leverage large language models (LLMs) at increasing levels of autonomy, ranging from fixed one-shot workflows to fully autonomous interpretability agents. This shift creates a corresponding need to scale evaluation approaches to keep pace with both the volume and complexity of generated explanations. We investigate this challenge in the context of automated circuit analysis -- explaining the roles of model components when performing specific tasks. To this end, we build an agentic system in which a research agent iteratively designs experiments and refines hypotheses. When evaluated against human expert explanations across six circuit analysis tasks in the literature, the system appears competitive. However, closer examination reveals several pitfalls of replication-based evaluation: human expert explanations can be subjective or incomplete, outcome-based comparisons obscure the research process, and LLM-based systems may reproduce published findings via memorization or informed guessing. To address some of these pitfalls, we propose an unsupervised intrinsic evaluation based on the functional interchangeability of model components. Our work demonstrates fundamental challenges in evaluating complex automated interpretability systems and reveals key limitations of replication-based evaluation."

Learning Dynamic Belief Graphs for Theory-of-mind Reasoning

"Theory of Mind (ToM) reasoning with Large Language Models (LLMs) requires inferring how people's implicit, evolving beliefs shape what they seek and how they act under uncertainty -- especially in high-stakes settings such as disaster response, emergency medicine, and human-in-the-loop autonomy. Prior approaches either prompt LLMs directly or use latent-state models that treat beliefs as static and independent, often producing incoherent mental models over time and weak reasoning in dynamic contexts. We introduce a structured cognitive trajectory model for LLM-based ToM that represents mental state as a dynamic belief graph, jointly inferring latent beliefs, learning their time-varying dependencies, and linking belief evolution to information seeking and decisions. Our model contributes (i) a novel projection from textualized probabilistic statements to consistent probabilistic graphical model updates, (ii) an energy-based factor graph representation of belief interdependencies, and (iii) an ELBO-based objective that captures belief accumulation and delayed decisions. Across multiple real-world disaster evacuation datasets, our model significantly improves action prediction and recovers interpretable belief trajectories consistent with human reasoning, providing a principled module for augmenting LLMs with ToM in high-uncertainty environments. [this https URL](https://anonymous.4open.science/r/ICML_submission-6373/)"
