Alpa

Documentation | Slack

Alpa automatically parallelizes tensor computation graphs and runs them on a distributed cluster.

Quick Start

Use Alpa's single-line API @parallelize to scale your single-node training code to distributed clusters, even if your model is much larger than the memory of a single device.

import alpa
import jax.numpy as jnp
from jax import grad

# Parallelize the training step with a single decorator.
@alpa.parallelize
def train_step(model_state, batch):
    def loss_func(params):
        out = model_state.forward(params, batch["x"])
        return jnp.mean((out - batch["y"]) ** 2)

    # Compute gradients of the loss w.r.t. the model parameters.
    grads = grad(loss_func)(model_state.params)
    new_model_state = model_state.apply_gradient(grads)
    return new_model_state

# The training loop now automatically runs on your designated cluster.
model_state = create_train_state()
for batch in data_loader:
    model_state = train_step(model_state, batch)

Check out the Alpa Documentation site for installation instructions, tutorials, examples, and more.
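As a minimal sketch of how the "designated cluster" above is typically set up (not part of this README; it assumes a Ray cluster has already been started on your nodes and reuses the create_train_state, data_loader, and train_step names from the Quick Start), distributed runs are usually bracketed by Alpa's init and shutdown calls:

# A hedged sketch, not the authoritative setup: initialize Alpa against a
# running Ray cluster before calling the parallelized train_step.
import alpa

alpa.init(cluster="ray")                      # connect to the Ray cluster

model_state = create_train_state()            # same helpers as in Quick Start
for batch in data_loader:
    model_state = train_step(model_state, batch)

alpa.shutdown()                               # release the distributed workers

See the installation tutorial on the documentation site for how to launch the underlying Ray cluster and configure GPUs on each node.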

More Information

Contributing

Please read the contributor guide if you are interested in contributing to Alpa. You can connect with other Alpa contributors via the Alpa Slack.

License

Alpa is licensed under the Apache-2.0 license.
