- open source task force
- San Jose
- https://github.com/wenhuizhang
Stars
[ECCV2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization
[NeurIPS 2024 Oral][GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-sim…
Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚
Official Repo for the 30DaysOfFLCode Challenge Initiative
Extends KubeVirt's capability to manage confidential VMs (CVMs) as a deployment flavor for cloud-native confidential computing use cases.
A containerd snapshotter with data deduplication and lazy loading in a P2P fashion
Free, ultrafast Copilot alternative for Vim and Neovim
A domain-specific language to express machine learning workloads.
This is a collection of sidecar containers that can be incorporated within confidential container groups on Azure Container Instances.
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
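The core loop of such a harness can be sketched without the Evals API itself: run a completion function over a set of samples and score the outputs against expected answers. The names below (run_eval, exact_match) are illustrative and not part of the openai/evals codebase.

```python
# Minimal sketch of the eval-harness idea (illustrative only; not the
# openai/evals API). A "completion function" maps a prompt to text, and an
# eval scores its outputs against expected answers.
from typing import Callable, Iterable

def exact_match(output: str, expected: str) -> bool:
    return output.strip().lower() == expected.strip().lower()

def run_eval(complete: Callable[[str], str],
             samples: Iterable[tuple[str, str]]) -> float:
    results = [exact_match(complete(prompt), expected)
               for prompt, expected in samples]
    return sum(results) / len(results)

# Example with a trivial stand-in "model".
samples = [("2 + 2 =", "4"), ("Capital of France?", "Paris")]
accuracy = run_eval(lambda p: "4" if "2 + 2" in p else "Paris", samples)
print(f"accuracy: {accuracy:.2f}")
```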
Linux Kernel Programming 2E - published by Packt
A machine learning compiler for GPUs, CPUs, and ML accelerators
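XLA is usually reached through a front end rather than called directly; a minimal sketch via JAX, where jax.jit traces the function and hands it to XLA for compilation to whatever CPU, GPU, or TPU backend is available:

```python
# Minimal JAX example: jax.jit stages this function out and compiles it
# with XLA for the available backend (CPU, GPU, or TPU).
import jax
import jax.numpy as jnp

@jax.jit
def predict(w, x):
    return jnp.tanh(x @ w)

w = jnp.ones((4, 3))
x = jnp.ones((2, 4))
print(predict(w, x))   # first call triggers XLA compilation
print(jax.devices())   # shows which backend XLA compiled for
```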
NVIDIA Linux open GPU kernel module source
Code implementations for Professional CUDA C Programming: covers most of the code from chapters 2 through 8 of the book, plus the author's notes. Everything was implemented by hand by the author, so mistakes are inevitable; please use it with caution, and corrections are very welcome. If it helps, please give it a Star; it means a lot to the author. Thanks!
GLake: optimizing GPU memory management and IO transmission.
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU su…
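A hedged sketch of how this project is commonly used, based on its Hugging Face-style wrapper; the module path, flags, and model name below are assumptions and may differ between releases.

```python
# Hedged sketch (module path, flags, and model name are assumptions):
# ipex-llm wraps Hugging Face transformers and loads weights in low-bit
# form so they can run on an Intel CPU/iGPU/GPU ("xpu" device).
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_id = "Qwen/Qwen2-1.5B-Instruct"  # hypothetical choice of model
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
model = model.to("xpu")                # move to the Intel GPU backend
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is confidential computing?", return_tensors="pt").to("xpu")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```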
A cloud-native vector database, storage for next generation AI applications
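This description matches Milvus; assuming so, a hedged sketch with its pymilvus client follows. The file-backed "lite" database path and the collection settings are assumptions, and method names are taken from recent pymilvus releases.

```python
# Hedged sketch using pymilvus's MilvusClient (recent releases); the
# file-backed "lite" database path and collection settings are assumptions.
import random
from pymilvus import MilvusClient

client = MilvusClient("demo.db")                       # embedded/lite mode
client.create_collection(collection_name="docs", dimension=8)

docs = [{"id": i, "vector": [random.random() for _ in range(8)],
         "text": f"document {i}"} for i in range(10)]
client.insert(collection_name="docs", data=docs)

query = [random.random() for _ in range(8)]
hits = client.search(collection_name="docs", data=[query], limit=3,
                     output_fields=["text"])
print(hits)
```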
Universal LLM Deployment Engine with ML Compilation
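This is MLC LLM's description; assuming so, a hedged sketch of its Python engine with the OpenAI-style chat API found in recent releases. The model identifier below is an assumption.

```python
# Hedged sketch of MLC LLM's Python engine (OpenAI-style chat API in recent
# releases); the model identifier is an assumption.
from mlc_llm import MLCEngine

model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"  # assumed model id
engine = MLCEngine(model)

response = engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is ML compilation?"}],
    model=model,
    stream=False,
)
print(response.choices[0].message.content)
engine.terminate()
```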
A tool for checking the security hardening options of the Linux kernel
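A hedged usage sketch for this checker: it reads a kernel config file and reports hardening options that are missing or weakened. The executable name and the "-c" flag are assumptions (the project has since been renamed kernel-hardening-checker, and flags may differ between versions).

```python
# Hedged sketch: run the checker against the running kernel's config file.
# The executable name and "-c" flag are assumptions and may differ.
import os
import pathlib
import subprocess

config = pathlib.Path(f"/boot/config-{os.uname().release}")
if config.exists():
    subprocess.run(["kconfig-hardened-check", "-c", str(config)], check=False)
else:
    print(f"kernel config not found at {config}")
```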
Serverless computing platform with process-based lightweight function execution and container-based application isolation. Works in Knative and bare metal/VM environments.
SLEdge: a serverless runtime designed for the Edge.