
Taichi Blogs

Improving Gradient Computation for Differentiable Physics Simulation with Contacts
June 21, 2023 | Yaofeng "Desmond" Zhong
Note: If you have any comments or suggestions about this article, please contact the author of the original post.
Read More
Taichi NeRF (Part 1): Develop and Deploy Instant NGP without writing CUDA
March 21, 2023
Imagine this: when you flip through a photo album and see pictures of past family trips, do you wish you could revisit those places and relive those warm moments? When browsing an online museum, do you want to freely adjust your viewpoint, examine the exhibits up close, and interact fully with the artifacts? And could doctors significantly improve diagnostic accuracy and efficiency by synthesizing a 3D view of the affected area from 2D images and estimating lesion size and volume?
Read More
GPU-Accelerated Collision Detection and Taichi DEM Optimization Challenge
December 22, 2022 | Yuanming Hu, Qian Bao
Numerical simulation and computer graphics usually involve collision detection for a massive number of particles (in many cases, millions). Regular operations, such as particle movement and boundary handling, can be completed in O(N) time (N being the number of particles), but the complexity of collision detection easily escalates to O(N^2) without optimization, creating an algorithmic bottleneck. A commonly used technique is grid-based neighborhood search: by confining the search for collision-prone particles to a small area, we can reduce the computational complexity of collision detection back to O(N). This article takes a minimal 2D discrete element method (DEM) solver as an example and presents a highly efficient implementation of neighborhood search using Taichi's data structures.
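As a rough illustration of the idea (a minimal sketch, not the article's DEM solver; the particle count, grid resolution, search radius, and field names are illustrative assumptions):

```python
import taichi as ti

ti.init(arch=ti.cpu)  # switch to ti.gpu if a GPU backend is available

# All sizes below are illustrative, not the article's settings.
N = 1024              # number of particles
GRID = 32             # grid resolution; cell size should be >= the search radius
RADIUS = 1.0 / GRID   # search radius
MAX_PER_CELL = 64     # assumed upper bound on particles per cell

pos = ti.Vector.field(2, dtype=ti.f32, shape=N)
cell_count = ti.field(dtype=ti.i32, shape=(GRID, GRID))
cell_particles = ti.field(dtype=ti.i32, shape=(GRID, GRID, MAX_PER_CELL))
neighbor_count = ti.field(dtype=ti.i32, shape=N)

@ti.kernel
def init():
    for i in pos:
        pos[i] = ti.Vector([ti.random(), ti.random()])

@ti.kernel
def build_grid():
    # Reset the cell counters, then bin every particle into its cell.
    for cx, cy in cell_count:
        cell_count[cx, cy] = 0
    for i in pos:
        cx = int(pos[i][0] * GRID)
        cy = int(pos[i][1] * GRID)
        slot = ti.atomic_add(cell_count[cx, cy], 1)
        if slot < MAX_PER_CELL:
            cell_particles[cx, cy, slot] = i

@ti.kernel
def count_neighbors():
    # Each particle only checks the 3x3 block of cells around it,
    # so the total work stays O(N) instead of O(N^2).
    for i in pos:
        cx = int(pos[i][0] * GRID)
        cy = int(pos[i][1] * GRID)
        cnt = 0
        for dx, dy in ti.ndrange((-1, 2), (-1, 2)):
            nx = cx + dx
            ny = cy + dy
            if 0 <= nx and nx < GRID and 0 <= ny and ny < GRID:
                for k in range(cell_count[nx, ny]):
                    j = cell_particles[nx, ny, k]
                    if j != i and (pos[j] - pos[i]).norm() < RADIUS:
                        cnt += 1
        neighbor_count[i] = cnt

init()
build_grid()
count_neighbors()
```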
Read More
Pythonic Supercomputing: Scaling Taichi Programs with MPI4Py
December 7, 2022 | Haidong Lan
Nvidia unveiled its Tesla V100 GPU accelerator, which has since become a must-have model for deep learning, at GTC (GPU Technology Conference) 2017 in Beijing. It was on the same occasion that Jensen Huang, Nvidia's CEO, solemnly gave us the most sincere advice, which kept resonating in our heads for years to come:
Read More
Taichi's Quantized Data Types: Same Computational Code, Optimized GPU Memory Usage
November 18, 2022 | Yi Xu
Starting from v1.1.0, Taichi provides quantized data types. But why is quantization important, especially in scenarios where Taichi stands out, such as physical simulation? This blog demonstrates how the new feature significantly reduces your GPU memory usage while requiring zero changes to your computational code.
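To give a taste of the interface (a minimal sketch based on the quantized-type API documented for Taichi v1.1, i.e. ti.types.quant.fixed and ti.BitpackedFields; the bit width, value range, and field sizes here are illustrative assumptions):

```python
import taichi as ti

ti.init(arch=ti.cpu)  # quantized types are supported on the CPU and CUDA backends

# A fixed-point type stored in 16 bits instead of a full 32-bit float
# (bit width and value range are illustrative assumptions).
fixed16 = ti.types.quant.fixed(bits=16, max_value=2.0)

x = ti.field(dtype=fixed16)
y = ti.field(dtype=fixed16)

# Pack both quantized fields into a single 32-bit word.
pack = ti.BitpackedFields(max_num_bits=32)
pack.place(x, y)
ti.root.dense(ti.i, 1024).place(pack)

@ti.kernel
def compute():
    # The computational code reads exactly as it would with ordinary f32 fields.
    for i in x:
        y[i] = x[i] * 0.5 + 0.25

compute()
```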
Read More
How Taichi Fuels GPU-accelerated Image Processing: A Beginner to Expert Guide
November 4, 2022 | Yuanming Hu, Liang Zhao
GPU-accelerated image processing tutorial
Read More
How does Taichi Compare to CUB/CuPy/Numba in Numerical Computation?
October 25, 2022 | Qian Bao, Haidong Lan
In the previous blog, we learned that Taichi, a high-performance computing language embedded in Python, is more than a development tool for computer graphics and renderers; it also comes in handy for numerical computation that involves massive operations on 2D and 3D arrays. Computational fluid dynamics (CFD) is a typical scenario where Taichi can play a part.
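For instance, the kind of 2D array operation Taichi parallelizes well looks roughly like this (a generic 5-point stencil sketch, not code from the article; the grid size and field names are illustrative assumptions):

```python
import taichi as ti

ti.init(arch=ti.cpu)  # or ti.gpu

n = 512
u = ti.field(dtype=ti.f32, shape=(n, n))
lap = ti.field(dtype=ti.f32, shape=(n, n))

@ti.kernel
def laplacian():
    # A typical array-heavy pattern: a 5-point stencil over a 2D grid,
    # with the outer loop parallelized across all interior cells.
    for i, j in ti.ndrange((1, n - 1), (1, n - 1)):
        lap[i, j] = u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1] - 4 * u[i, j]

laplacian()
```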
Read More
Can Taichi play a role in CFD?
September 22, 2022 | Qian Bao
Computational fluid dynamics (CFD) is a branch of fluid mechanics that endeavors to precisely reproduce the behavior of liquid/gas flows and their interaction with solid boundaries. It plays a vital role in such sectors as visual effects, virtual reality, and industrial design.
Read More
Taichi & PyTorch 03: Accelerate PyTorch with Taichi - Data Preprocessing & High-performance ML Operator Customization
September 15, 2022 | Ailing Zhang, Haidong Lan
Our previous blogs (Taichi & PyTorch 01 and 02) pointed out that Taichi and PyTorch serve different application scenarios. Can they complement each other? The answer is an unequivocal yes! In this blog, we will use two simple examples to explain how to use Taichi kernels to implement data preprocessing operators or custom ML operators. With Taichi, you can accelerate your ML model development with ease and get rid of tedious low-level parallel programming (CUDA, for example) for good.
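The pattern looks roughly like this (a minimal sketch, not the blog's actual operators; it relies on Taichi's documented support for passing PyTorch tensors to kernels as ti.types.ndarray arguments, and the kernel, tensor shape, and parameters are illustrative assumptions):

```python
import taichi as ti
import torch

ti.init(arch=ti.cpu)  # use ti.cuda to keep data on the GPU alongside PyTorch

@ti.kernel
def scale_and_shift(img: ti.types.ndarray(), scale: ti.f32, shift: ti.f32):
    # A toy preprocessing "operator": element-wise transform of a 2D tensor.
    for i, j in ti.ndrange(img.shape[0], img.shape[1]):
        img[i, j] = img[i, j] * scale + shift

x = torch.rand(256, 256)       # a PyTorch tensor passed directly into the Taichi kernel
scale_and_shift(x, 2.0, -1.0)  # the tensor is updated by the kernel
print(x.mean())
```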
Read More
Accelerate Python code 100x by import taichi as ti
August 23, 2022 | Yuanming Hu
Python has become the most popular language in many rapidly evolving sectors, such as deep learning and data sciences. Yet its easy readability comes at the cost of performance. Of course, we all complain about program performance from time to time, and Python should certainly not take all the blame. Still, it's fair to say that Python's nature as an interpreted language does not help, especially in computation-intensive scenarios (e.g., when there are multiple nested for loops).
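A toy sketch of the pattern the article builds on (counting primes with a Taichi kernel whose outer loop is parallelized; the task and sizes here are illustrative and may differ from the article's benchmark):

```python
import taichi as ti

ti.init(arch=ti.cpu)  # or ti.gpu

@ti.func
def is_prime(n: int):
    # Serial trial division inside a Taichi function.
    result = True
    for k in range(2, int(n ** 0.5) + 1):
        if n % k == 0:
            result = False
            break
    return result

@ti.kernel
def count_primes(n: int) -> int:
    count = 0
    # The top-level for loop is automatically parallelized by Taichi;
    # the += on the accumulator is handled atomically.
    for i in range(2, n):
        if is_prime(i):
            count += 1
    return count

print(count_primes(1000000))
```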
Read More