Taichi Blogs

Taichi & PyTorch 03: Accelerate PyTorch with Taichi - Data Preprocessing & High-performance ML Operator Customization
September 15, 2022 | Ailing Zhang, Haidong Lan
Our previous blogs (Taichi & PyTorch 01 and 02) pointed out that Taichi and Torch serve different application scenarios. So, can they complement each other? The answer is an unequivocal yes! In this blog, we will use two simple examples to explain how to use Taichi kernels to implement data preprocessing operators or custom ML operators. With Taichi, you can accelerate your ML model development with ease and get rid of tedious low-level parallel programming (CUDA, for example) for good.
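
The full post builds on exactly this interop. As a taste, here is a minimal sketch (not taken from the post) of the pattern it describes: a Taichi kernel whose argument is annotated with `ti.types.ndarray()` can consume a PyTorch tensor directly, so a preprocessing step runs in parallel without hand-written CUDA. The kernel name and constants below are illustrative assumptions.

```python
# Minimal sketch, not code from the post: a Taichi kernel used as a custom
# preprocessing operator on a PyTorch tensor. Kernel name and parameters
# are illustrative.
import taichi as ti
import torch

ti.init(arch=ti.cpu)  # switch to ti.gpu to run the same kernel on CUDA

@ti.kernel
def scale_and_shift(x: ti.types.ndarray(), scale: ti.f32, shift: ti.f32):
    # Taichi parallelizes this outermost loop over the tensor elements.
    for i in range(x.shape[0]):
        x[i] = x[i] * scale + shift

data = torch.arange(8, dtype=torch.float32)
scale_and_shift(data, 2.0, 1.0)  # the Torch tensor is modified in place
print(data)  # tensor([ 1.,  3.,  5.,  7.,  9., 11., 13., 15.])
```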
Taichi & PyTorch 02: Data containers
August 15, 2022 | Ailing Zhang
In my last blog, I compared the purposes and design philosophies of Taichi Lang and PyTorch. Now, it's time to take a closer look at their data containers - the most essential part of any easy-to-use programming language.
Taichi & PyTorch 01: Underlying difference
August 8, 2022 | Ailing Zhang
"How does Taichi differ from PyTorch? They are both embedded in Python and can run on GPU! And when should I choose Taichi over PyTorch or the other way around?"