GitHub Trending Weekly Digest: March 30 – April 4, 2026

By Tommy Zhang
10 min read
GitHub · Trending · Open Source · AI · Developer Tools

Welcome to this week's GitHub Trending roundup, covering March 30 – April 4, 2026. This digest aggregates the daily trending repos, deduplicates them, and ranks them by persistence: projects that stayed hot for multiple days bubble to the top.
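The dedupe-and-rank step can be sketched in a few lines of Python. This is an illustrative sketch of the idea, not the digest's actual pipeline, and the sample data below is made up for the demo:

```python
from collections import Counter

def rank_by_persistence(daily_trending: list[list[str]]) -> list[tuple[str, int]]:
    """Given one list of repo names per day, count how many days each
    repo appeared and rank repos by that persistence, descending."""
    days_seen: Counter[str] = Counter()
    for day in daily_trending:
        for repo in set(day):          # dedupe within a single day
            days_seen[repo] += 1
    return days_seen.most_common()

week = [
    ["luongnv89/claude-howto", "microsoft/VibeVoice", "axios/axios"],
    ["luongnv89/claude-howto", "microsoft/VibeVoice"],
    ["luongnv89/claude-howto", "google-research/timesfm"],
    ["luongnv89/claude-howto", "microsoft/VibeVoice", "google-research/timesfm"],
]
ranking = rank_by_persistence(week)
# A repo trending all 4 days ranks above one that trended once
```
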

This week brought 15 unique projects spanning AI coding assistants, speech AI, time-series forecasting, multi-agent orchestration, and developer tooling. The standout theme: AI-powered development workflows are maturing from experimental prototypes to production-grade toolchains.

🔥 Persistent Leaders (4+ Days)

These projects dominated the trending charts for at least 4 days this week—clear signals of sustained developer interest.

Claude How-To (4 days)

Visual, example-driven complete guide to Claude Code
🔗 luongnv89/claude-howto

What it does: An interactive learning path that takes you from Claude Code basics to advanced agent orchestration in 11-13 hours. Includes 10 tutorial modules, Mermaid diagrams, self-assessment quizzes, and production-ready templates you can copy-paste directly into your projects.

Why it matters: Official docs describe features but don't show you how to combine them. This guide fills the gap with structured learning, visual workflows, and real-world examples that go beyond Hello World. Perfect for developers who want to master 90% of Claude Code's capabilities fast.

Tech: Markdown + Mermaid charts, Python test suite (uv + pytest), EPUB generation, MIT licensed, covers slash commands, memory, skills, subagents, MCP, hooks, plugins, checkpoints, and CLI mastery


VibeVoice (4 days)

Microsoft's frontier speech AI models for TTS and ASR
🔗 microsoft/VibeVoice

What it does: A family of open-source speech AI models from Microsoft Research that revolutionize how we handle long-form audio. VibeVoice-ASR processes up to 60 minutes in a single pass, generating structured transcripts with speaker identification and timestamps. VibeVoice-TTS synthesizes conversations up to 90 minutes with multiple speakers, and VibeVoice-Realtime delivers ultra-low latency (~300ms) text-to-speech for real-time applications.

Why it matters: Traditional ASR models chop audio into chunks, losing context and speaker consistency. VibeVoice keeps the full picture, understands who said what and when, supports custom terminology, and works across 50+ languages. It's already powering production apps like Vibing voice input.

Tech: Continuous speech tokenizers at 7.5 Hz, next-token diffusion framework, LLM-powered context understanding, native Hugging Face Transformers integration, models ranging from 0.5B to 7B parameters
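That 7.5 Hz tokenizer rate is what makes hour-long context feasible: the back-of-the-envelope arithmetic below assumes one token per frame, which is a simplification of the actual tokenizer:

```python
def speech_tokens(minutes: float, frame_rate_hz: float = 7.5) -> int:
    """Token count for an audio clip at a given tokenizer frame rate."""
    return int(minutes * 60 * frame_rate_hz)

# At 7.5 Hz, even an hour of audio fits in a modest LLM context window:
asr_tokens = speech_tokens(60)   # 60 minutes of ASR input
tts_tokens = speech_tokens(90)   # 90 minutes of TTS output
```

At a more typical 25-50 Hz frame rate, the same hour of audio would need 90k-180k tokens, which is why chunking was previously unavoidable.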


⚡ Multi-Day Momentum (2-3 Days)

Projects that held trending status for multiple days, showing strong community traction.

TimesFM (3 days)

Google's time-series foundation model
🔗 google-research/timesfm

What it does: A pre-trained decoder-only model for zero-shot time series forecasting. Version 2.5 packs 200M parameters (down from 500M), supports up to 16k context length and 1k forecast horizon, with continuous quantile prediction for uncertainty estimation.

Why it matters: Training custom models for each dataset is expensive and slow. TimesFM provides ready-to-use forecasting with no fine-tuning required. Already deployed in BigQuery as an official Google product.

Tech: Python, PyTorch/JAX (Flax), decoder-only Transformer, Hugging Face integration, ICML 2024 paper, supports exogenous regressors (XReg)
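Quantile heads like TimesFM 2.5's are conventionally trained with the pinball (quantile) loss. Here is a minimal sketch of that loss function, unrelated to the library's actual code:

```python
def pinball_loss(y_true: float, y_pred: float, q: float) -> float:
    """Quantile (pinball) loss: penalizes under-prediction with weight q
    and over-prediction with weight (1 - q). Minimizing it over data
    drives y_pred toward the q-th quantile of the target distribution."""
    error = y_true - y_pred
    return q * error if error >= 0 else (q - 1) * error

# A 0.9-quantile forecast is punished 9x harder for coming in too low
# than for coming in too high, so it learns an upper uncertainty band:
low = pinball_loss(10.0, 8.0, q=0.9)    # under-predicts by 2
high = pinball_loss(10.0, 12.0, q=0.9)  # over-predicts by 2
```

Predicting several quantiles at once (say 0.1, 0.5, 0.9) is what turns a point forecast into an uncertainty interval.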


Claude Code Best Practices (3 days)

practice makes claude perfect
🔗 shanraisshan/claude-code-best-practice

What it does: A comprehensive collection of Claude Code wisdom from core team members (Boris Cherny, Thariq, Cat) and the community. Features 50+ tips, workflow orchestration templates, 9-way comparison of top coding workflows, and production-grade configuration examples.

Why it matters: Knowing features exist is different from knowing when and how to use them. This repo distills battle-tested patterns for subagents, commands, skills, hooks, and more. Includes feature comparison tables and real-world workflow breakdowns.

Tech: Markdown documentation, copy-paste configs for commands/skills/hooks/MCP, weather-orchestrator workflow demo, coverage of all Claude Code 2.1+ capabilities


oh-my-claudecode (2 days)

Multi-agent orchestration framework for Claude Code
🔗 Yeachan-Heo/oh-my-claudecode

What it does: A workflow enhancement layer that transforms Claude Code into a team-based coding system. Features standardized pipelines (team-plan → team-prd → team-exec → team-verify → team-fix), 32 specialized agents, intelligent model routing, and real-time HUD monitoring.

Why it matters: Complex tasks need decomposition, parallelization, and smart coordination. OMC automates this with Team mode, supports Claude/Codex/Gemini mixing, routes simple tasks to Haiku (saves 30-50% tokens), and learns reusable skills from experience. Zero learning curve—just use natural language.

Tech: TypeScript/JavaScript npm package, tmux-based CLI workers, autopilot/ultrawork/ralph/pipeline execution modes, OpenClaw integration for cross-session events
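The token-saving idea behind model routing (send cheap tasks to a small model, hard ones to a big model) can be sketched generically. This is a hypothetical heuristic for illustration, not oh-my-claudecode's actual routing logic:

```python
def route_model(task: str) -> str:
    """Toy complexity-based router: short requests without hard-task
    keywords go to a small model, everything else to a large one."""
    hard_markers = ("refactor", "architecture", "design", "debug", "migrate")
    if len(task.split()) < 12 and not any(m in task.lower() for m in hard_markers):
        return "haiku"    # cheap, fast model for simple tasks
    return "sonnet"       # more capable model for complex tasks

simple = route_model("rename this variable")
complex_task = route_model("debug the race condition in the scheduler")
```
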


Claude Code (2 days)

Intelligent coding tool that lives in your terminal
🔗 anthropics/claude-code

What it does: An AI coding assistant from Anthropic that understands your entire codebase, executes natural language commands, explains complex code, and handles git workflows. Available as a CLI, IDE integration, and GitHub bot (@claude).

Why it matters: Moves beyond autocomplete to workflow automation. Handles refactoring, testing, documentation, code review—all through conversational commands. Extensible via plugins and MCP (Model Context Protocol).

Tech: Node.js 18+, cross-platform (macOS/Linux/Windows), installable via Homebrew/npm/WinGet, plugin system, Claude API integration


oh-my-codex (2 days)

Workflow enhancement layer for OpenAI Codex CLI
🔗 Yeachan-Heo/oh-my-codex

What it does: Transforms the raw Codex CLI into a structured development system. Four core workflows: $deep-interview (requirements clarification), $ralplan (plan review), $ralph (persistent completion), and $team (parallel multi-agent execution).

Why it matters: Vanilla Codex lacks workflow scaffolding. OMX provides standardized task flows, specialized roles, state persistence, and team collaboration runtime via tmux. Integrates with OpenClaw for cross-session events.

Tech: Node.js 20+, TypeScript, tmux (team runtime), OpenAI Codex CLI, OpenClaw integration support


Onyx (2 days)

Open-source AI platform with RAG, search, code execution
🔗 onyx-dot-app/onyx

What it does: An enterprise-ready self-hosted AI chat platform. Agentic RAG with hybrid indexing, deep research workflows (ranked #1 on leaderboards), custom agents, web search, code sandbox, voice mode, MCP integration, and 50+ knowledge connectors. Supports all major LLMs (Anthropic, OpenAI, Gemini, Ollama).

Why it matters: Teams need feature-rich AI platforms without vendor lock-in. Onyx provides SSO, RBAC, usage analytics, query auditing, and single-command deployment. Deep research capabilities surpass cloud alternatives.

Tech: Python, Docker/Kubernetes, Redis, MinIO, vector + keyword hybrid search, MCP protocol, multi-provider LLM integration
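Hybrid search means fusing two ranked lists, one from the vector index and one from the keyword index. Reciprocal rank fusion (RRF) is one common recipe for this, shown below as an illustration; it is not necessarily the fusion method Onyx uses:

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: each list contributes 1/(k + rank) per
    document, so documents ranked well in either list float to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]   # semantic-similarity order
keyword_hits = ["doc_b", "doc_d", "doc_a"]  # keyword-match order
fused = rrf([vector_hits, keyword_hits])
# doc_a and doc_b appear in both lists, so they lead the fused ranking
```
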


OpenScreen (2 days)

Free open-source Screen Studio alternative
🔗 siddharthvaddem/openscreen

What it does: A polished screen recorder for product demos and walkthroughs. Features auto/manual zoom, mic + system audio capture, motion blur, annotations, trimming, variable speed, multiple export ratios/resolutions. No watermarks, no subscription, MIT licensed.

Why it matters: Screen Studio costs $29/month. OpenScreen delivers the core workflow for free with commercial use allowed. Perfect for indie devs, educators, and content creators who need professional-looking demos without recurring costs.

Tech: Electron, React, TypeScript, Vite, PixiJS (rendering engine), dnd-timeline (timeline UI)


🌟 Single-Day Standouts

Projects that caught fire for a day—sometimes niche, sometimes ahead of the curve, always worth a look.

axios

Promise-based HTTP client for browser and Node.js
🔗 axios/axios

What it does: The de facto standard for HTTP requests in JavaScript. Handles XMLHttpRequests in browsers, http requests in Node.js, with Promise API, request/response interceptors, automatic JSON transforms, and built-in XSRF protection.

Why it matters: Battle-tested, widely adopted, consistent API across platforms, rich middleware ecosystem. Supports form serialization (multipart/form-data, x-www-form-urlencoded), request cancellation via AbortController, and TypeScript definitions.

Tech: JavaScript/TypeScript, browser (Chrome/Firefox/Safari/Opera/Edge) + Node.js, npm/yarn/pnpm/bun install, CDN support (jsDelivr/unpkg), Fetch API adapter


Hermes Agent

Self-improving AI agent with closed-loop learning
🔗 NousResearch/hermes-agent

What it does: An autonomous agent that runs on $5 VPS, GPU clusters, or serverless platforms. Features self-curated memory, periodic reminders, automatic skill creation from complex tasks, FTS5 conversation search, and Honcho dialectic user modeling. Control it from anywhere via Telegram, Discord, Slack, or Email.

Why it matters: Traditional agents are laptop-bound and don't learn. Hermes runs 24/7, improves skills through use, searches its own history, builds cross-session user models, and supports remote control from any device. One-click migration from OpenClaw.

Tech: Python 3.11+, Node.js (optional Gateway), 6 runtime backends (local/Docker/SSH/Daytona/Singularity/Modal), 40+ tools, multi-model support (Nous Portal, OpenRouter, OpenAI, GLM, Kimi, MiniMax), full TUI with multi-line editing
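FTS5 is SQLite's built-in full-text index, which is why searchable conversation history needs no external search service. A minimal sketch of the pattern follows; the table layout is hypothetical, not Hermes's actual schema:

```python
import sqlite3

# In-memory DB with an FTS5 virtual table acting as conversation history.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE messages USING fts5(role, content)")
db.executemany(
    "INSERT INTO messages VALUES (?, ?)",
    [
        ("user", "deploy the agent to the cheap VPS"),
        ("assistant", "deployment finished, gateway is live"),
        ("user", "remind me to rotate the API key tomorrow"),
    ],
)
# MATCH gives ranked full-text search; the trailing * is a prefix query,
# so "deploy*" hits both "deploy" and "deployment".
rows = db.execute(
    "SELECT role, content FROM messages WHERE messages MATCH ? ORDER BY rank",
    ("deploy*",),
).fetchall()
```
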


Microsoft Agent Framework

Multi-language AI agent framework from Microsoft
🔗 microsoft/agent-framework

What it does: Microsoft's official agent framework for Python and .NET. Graph-based workflow orchestration with streaming, checkpoints, human-in-the-loop, and time travel. DevUI for interactive development, built-in OpenTelemetry observability, middleware system, multi-provider LLM support. Migration guides from Semantic Kernel and AutoGen.

Why it matters: Provides the full spectrum from simple chat agents to complex multi-agent workflows. Enterprise-grade observability, distributed tracing, and production-ready patterns. Official Microsoft support.

Tech: Python, C#/.NET, Azure AI Foundry, OpenAI, OpenTelemetry, distributed tracing, graph-based orchestration


fff.nvim

Blazing fast file finder for AI agents and Neovim
🔗 dmtrKovalenko/fff.nvim

What it does: An intelligent file search tool with frecency (frequency + recency) scoring, git status awareness, file size weighting, and definition matching. MCP protocol integration for Claude Code / Codex / OpenCode. Fuzzy matching, grep, glob, and multi-search capabilities.

Why it matters: AI agents waste tokens reading wrong files. FFF intelligently ranks files based on usage patterns, reducing unnecessary reads and repeat searches. For Neovim users, it's a super-fast, typo-tolerant fuzzy finder.

Tech: Rust (core search engine), Lua (Neovim plugin), C / Node.js (cross-platform bindings), MCP protocol, Smith-Waterman fuzzy matching
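Frecency blends how often and how recently a file was touched. A minimal exponential-decay version is sketched below for illustration; fff.nvim's Rust scorer uses its own weighting:

```python
import time

def frecency(access_times: list[float], now: float,
             half_life: float = 3600.0) -> float:
    """Sum of exponentially decayed accesses: each visit contributes
    0.5 ** (age / half_life), so frequent AND recent files score highest."""
    return sum(0.5 ** ((now - t) / half_life) for t in access_times)

now = time.time()
hot = frecency([now - 60, now - 120, now - 600], now)  # 3 visits, minutes ago
stale = frecency([now - 86400] * 10, now)              # 10 visits, a day ago
# With a 1-hour half-life, the recently touched file far outscores the
# frequently-but-long-ago touched one.
```
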


Deep-Live-Cam

Real-time face swap and one-click video deepfake
🔗 hacksider/Deep-Live-Cam

What it does: A real-time face-swapping tool that works with just one photo. Supports live camera input, video files, multiple faces simultaneously, and includes mouth mask mode (preserves original mouth movements) plus GFPGAN face enhancement.

Why it matters: Traditional deepfake tools require massive training data and time. Deep-Live-Cam delivers instant results for content creation, character animation, and creative projects. Built-in ethical safeguards prevent misuse (nudity/violence detection, consent requirements, deepfake labeling).

Tech: Python, insightface inswapper_128_fp16.onnx model, CUDA/CoreML/DirectML/OpenVINO providers for cross-platform GPU acceleration


MLX-VLM

Run and fine-tune vision-language models on Mac
🔗 Blaizzy/mlx-vlm

What it does: A toolkit for running multimodal LLMs (image/audio/video understanding) natively on Mac, especially Apple Silicon. Supports inference, fine-tuning, multi-image chat, audio input, video analysis. TurboQuant KV cache compression slashes memory usage by 76% for 128k context.

Why it matters: Makes state-of-the-art VLMs accessible on consumer hardware. No cloud, no GPU cluster—just your MacBook. Supports Qwen2-VL, DeepSeek-OCR, Phi-4, Gemma 4, and more.

Tech: Python, MLX (Apple's ML framework), Gradio UI, OpenTelemetry, TurboQuant compression
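A 76% memory saving is in the same ballpark as compressing a 16-bit KV cache to roughly 4 bits per value (which alone saves 75%). The arithmetic below uses hypothetical 7B-class model shapes, not MLX-VLM's actual configuration, and ignores scale/zero-point metadata:

```python
def kv_cache_bytes(layers: int, heads: int, head_dim: int,
                   context: int, bits_per_value: int) -> int:
    """Bytes needed for keys + values across all layers at one precision."""
    values = 2 * layers * heads * head_dim * context   # 2 = keys + values
    return values * bits_per_value // 8

# Hypothetical shapes for a 7B-class model at 128k context:
fp16 = kv_cache_bytes(layers=32, heads=32, head_dim=128,
                      context=128_000, bits_per_value=16)
int4 = kv_cache_bytes(layers=32, heads=32, head_dim=128,
                      context=128_000, bits_per_value=4)
savings = 1 - int4 / fp16   # fraction of KV-cache memory saved
```

Even before compression tricks, this shows why long-context KV caches, not weights, dominate memory at 128k tokens.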


📊 This Week's Themes

🤖 AI Coding Ecosystems Mature

  • Claude Code went mainstream this week, spawning three major companion projects (claude-howto, oh-my-claudecode, claude-code-best-practice)
  • Multi-agent orchestration (OMC, OMX, Microsoft Agent Framework) signals a shift from "AI autocomplete" to "AI team collaboration"
  • Workflow standardization (team-plan → exec → verify → fix) is becoming the new normal

🎙️ Open-Source Speech AI Challenges Closed Models

  • Microsoft's VibeVoice delivers 60-minute long-form ASR and 90-minute multi-speaker TTS, rivaling commercial offerings
  • Ethical guardrails (deepfake labeling, consent requirements) now table stakes in speech synthesis tools

🔍 Vertical Foundation Models Prove Value

  • Google's TimesFM demonstrates that domain-specific pre-trained models (time-series forecasting) outperform generic fine-tuning
  • Expect more vertical models for specific data modalities (code, medical imaging, sensor data)

🛠️ Developer Experience = Product

  • High-quality guides like claude-howto accumulate 16k+ stars—proof that education is infrastructure
  • Tools like OpenScreen thrive by offering "90% of Screen Studio for $0"—the open-source value prop evolves from "technically possible" to "delightful UX"

🧠 Intelligence at the Edge

  • MLX-VLM brings vision-language models to consumer Macs with 76% memory savings
  • Hermes Agent runs on $5 VPS with self-improving skills—the future is decentralized compute

💡 Key Takeaways

  1. AI coding is graduating from tools to workflows. The winners aren't just smart autocomplete—they're orchestrators, planners, verifiers, and learners.

  2. Open source is eating proprietary AI. VibeVoice, TimesFM, MLX-VLM, and Onyx demonstrate that open models + smart engineering can match or exceed cloud services.

  3. Documentation is differentiation. Projects with killer guides (claude-howto) outperform technically superior but poorly explained alternatives.

  4. Multi-agent is the new single-agent. Complex tasks increasingly split across specialized sub-agents with different models, runtimes, and skill sets.

  5. Privacy and cost matter. Tools that run locally (MLX-VLM), cost nothing (OpenScreen), or self-host (Onyx, Hermes) win mindshare in a cloud-fatigued market.


Compiled by Tommy Zhang
