
Taichi Blogs

Taichi & PyTorch 03: Accelerate PyTorch with Taichi - Data Preprocessing & High-performance ML Operator Customization
September 15, 2022 | Ailing Zhang, Haidong Lan
Our previous blogs (Taichi & PyTorch 01 and 02) pointed out that Taichi and Torch serve different application scenarios. Can they complement each other? The answer is an unequivocal yes! In this blog, we use two simple examples to explain how to use Taichi kernels to implement data preprocessing operators or custom ML operators. With Taichi, you can accelerate your ML model development with ease and get rid of tedious low-level parallel programming (CUDA, for example) for good. A minimal sketch of this interop pattern appears after this entry.
Learn more
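
Both examples in that post rely on the same interop pattern: a Taichi kernel can read and write PyTorch tensors passed as ndarray arguments, so a hand-written CUDA operator can often be replaced by a few lines of Python. The snippet below is a minimal, illustrative sketch of that pattern rather than code from the post; the kernel name scale_inplace and the element-wise scaling are assumptions made for this example.

```python
# Minimal sketch (illustrative only): a Taichi kernel used as a custom
# operator on a PyTorch tensor. `scale_inplace` is a hypothetical name.
import taichi as ti
import torch

ti.init(arch=ti.cpu)  # switch to ti.cuda to run the same kernel on GPU tensors

@ti.kernel
def scale_inplace(x: ti.types.ndarray(), factor: ti.f32):
    # Taichi automatically parallelizes the outermost struct-for loop.
    for i in x:
        x[i] = x[i] * factor

t = torch.arange(8, dtype=torch.float32)
scale_inplace(t, 2.0)   # the torch tensor is modified in place
print(t)                # tensor([ 0.,  2.,  4.,  6.,  8., 10., 12., 14.])
```

With a GPU backend, the same kernel operates on CUDA-resident torch tensors directly, which is what makes this approach attractive for data preprocessing and custom ML operators.
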
Is Taichi Lang comparable to or even faster than CUDA?
May 6, 2022 | Haidong Lan
Is Taichi Lang able to make better use of the underlying hardware than other native, low-level programming languages? With this question in mind, we kick-started the benchmark project in an attempt to provide a comprehensive and accurate performance evaluation of Taichi Lang.
Learn more