Popular Transformer-related GitHub AI project repositories
Discover the most popular open-source projects and tools related to Transformers, and keep up with the latest development trends and innovations.
huggingface: 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
labmlai: 🧑‍🏫 60+ implementations/tutorials of deep learning papers with side-by-side notes 📝, including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
vllm-project: A high-throughput and memory-efficient inference and serving engine for LLMs.
huggingface: The largest collection of PyTorch image encoders/backbones, including train, eval, inference, and export scripts and pretrained weights: ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNetV3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more.
lucidrains: Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch.
karpathy: A minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training.
ml-tooling: 🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.
sczhou: [NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer.
datawhalechina: "Hung-yi Lee's Deep Learning Tutorial" (recommended by Prof. Hung-yi Lee 👍, the "apple book" 🍎). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases
microsoft: The official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
graykode: Natural language processing tutorial for deep learning researchers.
lukas-blecher: pix2tex: Using a ViT to convert images of equations into LaTeX code.
huggingface: Train transformer language models with reinforcement learning.
BlinkDL: RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNNs and transformers: great performance, linear time, constant space (no KV cache), fast training, infinite ctx_len, and free sentence embeddings.
huggingface: State-of-the-art machine learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
cfug: A powerful HTTP client for Dart and Flutter that supports global settings, interceptors, FormData, aborting and canceling requests, file uploading and downloading, request timeouts, custom adapters, and more.
PaddlePaddle: Easy-to-use speech toolkit including a self-supervised learning model, SOTA/streaming ASR with punctuation, streaming TTS with a text frontend, a speaker verification system, end-to-end speech translation, and keyword spotting. Won the NAACL 2022 Best Demo Award.
jacobgil: Advanced AI explainability for computer vision, with support for CNNs, Vision Transformers, classification, object detection, segmentation, image similarity, and more.
alibaba: MNN is a blazing-fast, lightweight deep learning framework, battle-tested in business-critical use cases at Alibaba. Full multimodal LLM Android app: [MNN-LLM-Android](./apps/Android/MnnLlmChat/README.md)
NielsRogge: Demos built with the Transformers library by Hugging Face.
qubvel-org: Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.
huggingface: Large language model text generation inference.
bigscience-workshop: 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading.
facebookresearch: Hackable and optimized Transformers building blocks, supporting composable construction.
jadore801120: A PyTorch implementation of the Transformer model in "Attention Is All You Need".
PaddlePaddle: Easy-to-use image segmentation library with an awesome pretrained model zoo, supporting a wide range of practical tasks in semantic segmentation, interactive segmentation, panoptic segmentation, image matting, 3D segmentation, and more.
EleutherAI: A framework for few-shot evaluation of language models.
open-mmlab: OpenMMLab semantic segmentation toolbox and benchmark.
OptimalScale: An extensible toolkit for finetuning and inference of large foundation models. Large models for all.
zyddnys: Translate manga/images: one-click translation of text inside all kinds of images. https://cotrans.touhou.ai/
Morizeyao: Chinese version of GPT-2 training code, using a BERT tokenizer.
jessevig: BertViz: visualize attention in NLP models (BERT, GPT-2, BART, etc.).
facebookresearch: Official PyTorch implementation of "Scalable Diffusion Models with Transformers".
EleutherAI: An implementation of model-parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries.
typestack: Decorator-based transformation, serialization, and deserialization between objects and classes.
kulshekhar: A Jest transformer with source map support that lets you use Jest to test projects written in TypeScript.
parcel-bundler: An extremely fast CSS parser, transformer, bundler, and minifier written in Rust.
facebookresearch: PyTorch code for training Vision Transformers with the self-supervised learning method DINO.
facebookresearch: [CVPR 2025 Oral] VGGT: Visual Geometry Grounded Transformer.
InternLM: 🔍 An LLM-based multi-agent framework for a web search engine (like Perplexity.ai Pro and SearchGPT).
google-research: Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".
clovaai: Official implementation of the OCR-free Document Understanding Transformer (Donut) and the Synthetic Document Generator (SynthDoG), ECCV 2022.
harvardnlp: An annotated implementation of the Transformer paper.
CompVis: Taming Transformers for High-Resolution Image Synthesis.
sczhou: [ICCV 2023] ProPainter: Improving Propagation and Transformer for Video Inpainting.
pytorch-labs: Simple and efficient PyTorch-native transformer text generation in under 1000 lines of Python.
thx: GoGoCode is a transformer for JavaScript/TypeScript/HTML based on ASTs, providing a more intuitive API.
zhouhaoyi: The GitHub repository for the paper "Informer", accepted by AAAI 2021.
timeseriesAI: State-of-the-art deep learning library for time series and sequences in PyTorch / fastai.
lucidrains: Implementation/replication of DALL-E, OpenAI's text-to-image transformer, in PyTorch.
649453932: Chinese text classification with TextCNN, TextRNN, FastText, TextRCNN, BiLSTM_Attention, DPCNN, and Transformer; based on PyTorch, ready to use out of the box.
bentrevett: Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
lucidrains: A concise but complete full-attention transformer with a set of promising experimental features from various papers.
benjamn: JavaScript syntax tree transformer, nondestructive pretty-printer, and automatic source map generator.
cmhungsteve: A comprehensive paper list on Vision Transformers/attention, including papers, code, and related websites.
JingyunLiang: SwinIR: Image Restoration Using Swin Transformer (official repository).
AutoGPTQ: An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
phodal: 【🔞 Contains images unsuitable for minors】AI explorations and write-ups built on my strengths in programming, painting, and writing: Stable Diffusion is a powerful image generation model that creates new images by evolving an existing one; ChatGPT is a Transformer-based language generation model that automatically writes articles on a given topic; and GitHub Copilot is an intelligent programming assistant that speeds up everyday coding.
wenet-e2e: Production-first and production-ready end-to-end speech recognition toolkit.
primus: ⚡ Primus, the creator god of the transformers and an abstraction layer for real-time to prevent module lock-in.
promptslab: A hand-curated set of resources for prompt engineering, with a focus on Generative Pre-trained Transformer (GPT), ChatGPT, PaLM, etc.
poloclub: Transformer Explained Visually: learn how LLM transformer models work with interactive visualization.
nlp-with-transformers: Jupyter notebooks for the book Natural Language Processing with Transformers.
huawei-noah: Efficient AI backbones including GhostNet, TNT, and MLP, developed by Huawei Noah's Ark Lab.
ThilinaRajapakse: Transformers for information retrieval, text classification, NER, QA, language modeling, language generation, T5, multi-modal, and conversational AI.
NVlabs: SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer.
Tencent: Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding.
ibeatai: <Beat AI>, also titled <Zero Begets All Things>, is an AI primer written for software engineers that walks you through building AI by hand. From neural networks to large models, from high-level design to low-level principles, from engineering to algorithms: after finishing it, you will find AI is not as unattainable or unbeatable as you imagined. Just beat it!
facebook: Source transformer enabling ECMAScript 6 generator functions in the JavaScript of today.
hyunwoongko: Transformer: PyTorch implementation of "Attention Is All You Need".
lyuwenyu: [CVPR 2024] Official RT-DETR (Paddle and PyTorch): Real-Time DEtection TRansformer, "DETRs Beat YOLOs on Real-time Object Detection". 🔥
fundamentalvision: Deformable DETR: Deformable Transformers for End-to-End Object Detection.
Nixtla: Scalable and user-friendly neural 🧠 forecasting algorithms.
dk-liang: A collection of papers on transformers for vision. Awesome Transformer with Computer Vision (CV).
towhee-io: Towhee is a framework dedicated to making neural data processing pipelines simple and fast.
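Several of the decoder-only generation projects listed here (minGPT, gpt-fast, the GPT-style training mode of RWKV) depend on causal masking: each token may attend only to itself and earlier positions. The idea, sketched in plain Python rather than taken from any of those codebases:

```python
import math

def causal_mask(scores):
    """Mask a square attention-score matrix so that position t only sees
    positions <= t. Masked entries become -inf, so their attention weight
    is exactly zero after softmax."""
    T = len(scores)
    return [[scores[t][s] if s <= t else float("-inf") for s in range(T)]
            for t in range(T)]

def softmax(xs):
    # math.exp(float("-inf")) == 0.0, so masked positions get zero weight.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, the first row of a masked score matrix softmaxes to [1.0, 0.0, ...]: the first token can only attend to itself, which is what makes left-to-right autoregressive training and generation consistent.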