Popular Transformer-Related AI Project Repositories on GitHub
Discover the most popular open-source projects and tools related to Transformers, and keep up with the latest development trends and innovations.
🤗 Transformers: State-of-the-art machine learning for PyTorch, TensorFlow, and JAX.

🧑🏫 60+ implementations/tutorials of deep learning papers with side-by-side notes 📝, including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, Sophia, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠

A high-throughput and memory-efficient inference and serving engine for LLMs

The largest collection of PyTorch image encoders/backbones, including train, eval, inference, and export scripts, plus pretrained weights: ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNetV3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more.

Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch

A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training

🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.

[NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer

"Hung-yi Lee Deep Learning Tutorial" (recommended by Prof. Hung-yi Lee himself 👍, known as the "Apple Book" 🍎). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases

This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".

Natural Language Processing Tutorial for Deep Learning Researchers

pix2tex: Using a ViT to convert images of equations into LaTeX code.

RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNNs and transformers: great performance, linear time, constant space (no KV cache), fast training, infinite ctx_len, and free sentence embedding.
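
The "constant space, no KV cache" claim can be illustrated with a toy recurrence. This is a hedged sketch only, not RWKV's actual WKV kernel (the function name and decay scheme here are invented for illustration): instead of attending over all past tokens, each step folds the new key-weighted value into a fixed-size running state.

```python
import math

def decayed_kv_scan(keys, values, decay=0.9):
    # Toy recurrence (not RWKV's actual WKV kernel): keep a running,
    # exponentially decayed weighted sum of values, so each step needs
    # O(1) state instead of a key/value cache that grows with context.
    num, den = 0.0, 0.0
    outputs = []
    for k, v in zip(keys, values):
        w = math.exp(k)            # per-token weight derived from its "key"
        num = decay * num + w * v  # decayed weighted sum of values
        den = decay * den + w      # decayed sum of weights (normalizer)
        outputs.append(num / den)  # normalized mixture of past values
    return outputs
```

With `decay=1.0` and uniform keys this reduces to a running average, which makes the constant-state behavior easy to check by hand.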

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!

Train transformer language models with reinforcement learning.

A powerful HTTP client for Dart and Flutter, supporting global settings, interceptors, FormData, request aborting and cancellation, file uploading and downloading, request timeouts, custom adapters, and more.

Easy-to-use speech toolkit including self-supervised learning models, SOTA/streaming ASR with punctuation, streaming TTS with a text frontend, a speaker verification system, end-to-end speech translation, and keyword spotting. Winner of the NAACL 2022 Best Demo Award.

Advanced AI explainability for computer vision. Support for CNNs, Vision Transformers, classification, object detection, segmentation, image similarity, and more.

This repository contains demos I made with the Transformers library by HuggingFace.

Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.

MNN is a blazing-fast, lightweight deep learning framework, battle-tested by business-critical use cases at Alibaba. Full multimodal LLM Android app: [MNN-LLM-Android](./apps/Android/MnnLlmChat/README.md)

Large Language Model Text Generation Inference

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

Hackable and optimized Transformers building blocks, supporting a composable construction.

A PyTorch implementation of the Transformer model in "Attention is All You Need".
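
The core operation of the "Attention is All You Need" model is scaled dot-product attention, softmax(QKᵀ/√d)·V. As a minimal sketch (written over plain Python lists for clarity; a real implementation like the repo above uses batched PyTorch tensors):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
    # with queries/keys/values given as lists of row vectors.
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

When all keys are identical, the weights are uniform and the output is the plain average of the values, which is a convenient sanity check.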

Easy-to-use image segmentation library with an awesome pre-trained model zoo, supporting a wide range of practical tasks in semantic segmentation, interactive segmentation, panoptic segmentation, image matting, 3D segmentation, etc.

OpenMMLab Semantic Segmentation Toolbox and Benchmark.

A framework for few-shot evaluation of language models.

An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.

Chinese version of GPT-2 training code, using a BERT tokenizer.

Translate manga/images: one-click translation of text inside all kinds of images. https://cotrans.touhou.ai/

BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)

Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"

An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries

Decorator-based transformation, serialization, and deserialization between objects and classes.

A Jest transformer with source map support that lets you use Jest to test projects written in TypeScript.

An extremely fast CSS parser, transformer, bundler, and minifier written in Rust.

PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO

Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"

🔍 An LLM-based Multi-agent Framework of Web Search Engine (like Perplexity.ai Pro and SearchGPT)

Official Implementation of OCR-free Document Understanding Transformer (Donut) and Synthetic Document Generator (SynthDoG), ECCV 2022

An annotated implementation of the Transformer paper.

Taming Transformers for High-Resolution Image Synthesis

[ICCV 2023] ProPainter: Improving Propagation and Transformer for Video Inpainting

Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
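
The heart of any transformer text-generation loop like the one above is autoregressive decoding: feed the tokens so far, pick the next token, append, repeat. A minimal greedy-decoding sketch (the function names here are illustrative, not the repo's API; `next_logits` stands in for a real transformer forward pass):

```python
def greedy_generate(next_logits, prompt, max_new=5, eos=None):
    # Generic greedy decoding loop: at each step, score the vocabulary
    # and append the argmax token. next_logits(tokens) returns one logit
    # per vocabulary id; eos (if given) stops generation early.
    tokens = list(prompt)
    for _ in range(max_new):
        logits = next_logits(tokens)
        best = max(range(len(logits)), key=logits.__getitem__)
        if best == eos:
            break
        tokens.append(best)
    return tokens
```

Swapping the argmax for sampling from softmax(logits / temperature) turns the same loop into stochastic decoding; everything else is unchanged.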

GoGoCode is an AST-based transformer for JavaScript/TypeScript/HTML that provides a more intuitive API.

The GitHub repository for the paper "Informer", accepted at AAAI 2021.

[CVPR 2025 Oral] VGGT: Visual Geometry Grounded Transformer

Implementation/replication of DALL-E, OpenAI's text-to-image transformer, in PyTorch

State-of-the-art deep learning library for time series and sequences in PyTorch / fastai.

Chinese text classification with TextCNN, TextRNN, FastText, TextRCNN, BiLSTM-Attention, DPCNN, and Transformer. Based on PyTorch; works out of the box.

Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.

A concise but complete full-attention transformer with a set of promising experimental features from various papers

JavaScript syntax tree transformer, nondestructive pretty-printer, and automatic source map generator

A comprehensive paper list on Vision Transformers/attention, including papers, code, and related websites.

An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
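
To make the idea of weight quantization concrete, here is a toy symmetric round-to-nearest sketch. Loud caveat: this is not GPTQ itself (GPTQ quantizes weights column by column while correcting the remaining weights to minimize layer output error); this only shows the basic float-to-int8 mapping that any such scheme builds on, and the function name is invented for illustration.

```python
def quantize_int8(weights):
    # Toy symmetric round-to-nearest quantization of a weight vector.
    # One shared scale maps the largest-magnitude weight to +/-127.
    scale = max(abs(w) for w in weights) / 127 or 1.0  # 1.0 guards all-zero input
    q = [round(w / scale) for w in weights]            # int codes in [-127, 127]
    dequantized = [code * scale for code in q]         # values the layer uses at runtime
    return q, dequantized
```

The gap between `weights` and `dequantized` is the quantization error; GPTQ's contribution is distributing that error so the layer's outputs, not just its weights, stay close to the original.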

SwinIR: Image Restoration Using Swin Transformer (official repository)

【🔞🔞🔞 Contains images unsuitable for minors】 AI explorations and notes grounded in my strengths of programming, drawing, and writing: Stable Diffusion is a powerful image-generation model that can produce new images by evolving an existing one. ChatGPT is a Transformer-based language-generation model that can automatically write a suitable article for a given topic. GitHub Copilot is an intelligent programming assistant that speeds up everyday coding.

:zap: Primus, the creator god of the Transformers, and an abstraction layer for real-time frameworks that prevents module lock-in.

Production First and Production Ready End-to-End Speech Recognition Toolkit

This repository contains hand-curated resources for prompt engineering, with a focus on generative pre-trained transformers (GPT), ChatGPT, PaLM, etc.

Jupyter notebooks for the Natural Language Processing with Transformers book

Transformer Explained Visually: Learn How LLM Transformer Models Work with Interactive Visualization

Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.

Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI

Hunyuan-DiT : A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding

SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer

Source transformer enabling ECMAScript 6 generator functions in JavaScript-of-today.

"Beat AI" (also titled 零生万物, "Everything from Zero") is an AI primer written for software engineers that walks you through building AI by hand. From neural networks to large models, from high-level design to underlying principles, from engineering implementation to algorithms: once you finish, you'll find AI isn't as unattainable or unbeatable as it seems. Just beat it!

Transformer: PyTorch Implementation of "Attention Is All You Need"

[CVPR 2024] Official RT-DETR (RTDETR paddle pytorch), Real-Time DEtection TRansformer, DETRs Beat YOLOs on Real-time Object Detection. 🔥 🔥 🔥

Deformable DETR: Deformable Transformers for End-to-End Object Detection.

A collection of papers on transformers in computer vision: Awesome Transformer with Computer Vision (CV)

Scalable and user-friendly neural :brain: forecasting algorithms.

Towhee is a framework dedicated to making neural data processing pipelines simple and fast.
