Version: v1.1.3

Operators

Here we present the supported operators in Taichi for both primitive types and compound types such as matrices.

Supported operators for primitive types

Arithmetic operators

| Operation | Result |
| --- | --- |
| `-a` | `a` negated |
| `+a` | `a` unchanged |
| `a + b` | sum of `a` and `b` |
| `a - b` | difference of `a` and `b` |
| `a * b` | product of `a` and `b` |
| `a / b` | quotient of `a` and `b` |
| `a // b` | floored quotient of `a` and `b` |
| `a % b` | remainder of `a / b` |
| `a ** b` | `a` to the power of `b` |
note

The `%` operator in Taichi follows the Python style instead of the C style, e.g.:

```python
# In Taichi-scope or Python-scope:
print(2 % 3)   # 2
print(-2 % 3)  # 1
```

For C-style mod, use `ti.raw_mod`, which also accepts floating-point arguments. `ti.raw_mod(a, b)` returns `a - b * int(float(a) / b)`:

```python
print(ti.raw_mod(2, 3))     # 2
print(ti.raw_mod(-2, 3))    # -2
print(ti.raw_mod(3.5, 1.5)) # 0.5
```
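The formula above can be checked in plain Python. The helper below is a pure-Python stand-in for `ti.raw_mod`, not the Taichi implementation itself:

```python
def raw_mod(a, b):
    # C-style remainder: the quotient is truncated toward zero,
    # so the result takes the sign of `a`.
    return a - b * int(float(a) / b)

print(raw_mod(2, 3))     # 2
print(raw_mod(-2, 3))    # -2
print(raw_mod(3.5, 1.5)) # 0.5
```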
note

Python 3 distinguishes `/` (true division) and `//` (floor division), e.g., `1.0 / 2.0 = 0.5`, `1 / 2 = 0.5`, `1 // 2 = 0`, `4.2 // 2 = 2`. Taichi follows the same design:

  • True divisions on integral types first cast their operands to the default floating point type.
  • Floor divisions on floating point types first cast their operands to the default integral type.

To avoid such implicit casting, you can manually cast your operands to the desired types using `ti.cast`. See Default precisions for more details on default numerical types.
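The two casting rules above can be emulated in plain Python. The helpers below are hypothetical sketches (with Python's `float` and `int` standing in for Taichi's default floating-point and integral types), not Taichi's implementation:

```python
def taichi_truediv(a, b):
    # True division: integral operands are first cast to the
    # default floating-point type (modeled here as Python float).
    return float(a) / float(b)

def taichi_floordiv(a, b):
    # Floor division: floating-point operands are first cast to the
    # default integral type (modeled here as Python int).
    return int(a) // int(b)

print(taichi_truediv(1, 2))     # 0.5
print(taichi_floordiv(4.2, 2))  # 2  (an integer, unlike Python's 2.0)
```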

Taichi also provides the `ti.raw_div` function, which performs true division if one of the operands is a floating-point type, and floor division if both operands are integral types.

```python
print(ti.raw_div(5, 2))   # 2
print(ti.raw_div(5, 2.0)) # 2.5
```

Comparison operators

| Operation | Result |
| --- | --- |
| `a == b` | if `a` is equal to `b`, then `True`, else `False` |
| `a != b` | if `a` is not equal to `b`, then `True`, else `False` |
| `a > b` | if `a` is strictly greater than `b`, then `True`, else `False` |
| `a < b` | if `a` is strictly less than `b`, then `True`, else `False` |
| `a >= b` | if `a` is greater than or equal to `b`, then `True`, else `False` |
| `a <= b` | if `a` is less than or equal to `b`, then `True`, else `False` |

Logical operators

| Operation | Result |
| --- | --- |
| `not a` | if `a` is `False`, then `True`, else `False` |
| `a or b` | if `a` is `False`, then `b`, else `a` |
| `a and b` | if `a` is `False`, then `a`, else `b` |
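Note that, as in Python, `or` and `and` return one of their operands rather than a strict `True`/`False`. A quick Python-scope illustration:

```python
# `or` returns b when a is falsy, otherwise a;
# `and` returns a when a is falsy, otherwise b.
print(0 or 5)   # 5
print(3 or 5)   # 3
print(0 and 5)  # 0
print(3 and 5)  # 5
```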

Conditional operations

The result of the conditional expression `a if cond else b` is `a` if `cond` is `True`, and `b` otherwise, where `a` and `b` must have the same type.

The conditional expression performs short-circuit evaluation: the branch not chosen is not evaluated.

```python
a = ti.field(ti.i32, shape=(10,))
for i in range(10):
    a[i] = i

@ti.kernel
def cond_expr(ind: ti.i32) -> ti.i32:
    return a[ind] if ind < 10 else 0

cond_expr(3)   # returns 3
cond_expr(10)  # returns 0; a[10] is not evaluated
```

For element-wise conditional operations on Taichi vectors and matrices, Taichi provides `ti.select(cond, a, b)`, which does not perform short-circuit evaluation:

```python
cond = ti.Vector([1, 0])
a = ti.Vector([2, 3])
b = ti.Vector([4, 5])
ti.select(cond, a, b)  # ti.Vector([2, 5])
```

Bitwise operators

| Operation | Result |
| --- | --- |
| `~a` | the bits of `a` inverted |
| `a & b` | bitwise and of `a` and `b` |
| `a ^ b` | bitwise exclusive or of `a` and `b` |
| `a \| b` | bitwise or of `a` and `b` |
| `a << b` | left-shift `a` by `b` bits |
| `a >> b` | right-shift `a` by `b` bits |
note

The `>>` operation denotes the Shift Arithmetic Right (SAR) operation. For the Shift Logical Right (SHR) operation, consider using `ti.bit_shr()`. For left shifts, SAL and SHL are the same.
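The difference between the two right shifts can be seen in plain Python. `shr32` below is a hypothetical pure-Python stand-in for a 32-bit logical shift (the behavior `ti.bit_shr` provides on `i32`), not the Taichi function itself:

```python
def shr32(x, n):
    # Logical right shift: reinterpret x as an unsigned 32-bit
    # value, then shift zeros in from the left.
    return (x & 0xFFFFFFFF) >> n

print(-8 >> 1)       # -4          (arithmetic shift preserves the sign bit)
print(shr32(-8, 1))  # 2147483644  (logical shift fills with zeros)
```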

Trigonometric functions

```python
ti.sin(x)
ti.cos(x)
ti.tan(x)
ti.asin(x)
ti.acos(x)
ti.atan2(x, y)
ti.tanh(x)
```

Other arithmetic functions

```python
ti.sqrt(x)
ti.rsqrt(x)  # A fast version of `1 / ti.sqrt(x)`.
ti.exp(x)
ti.log(x)
ti.round(x, dtype=None)
ti.floor(x, dtype=None)
ti.ceil(x, dtype=None)
ti.sum(x)
ti.max(x, y, ...)
ti.min(x, y, ...)
ti.abs(x)    # Same as `abs(x)`
ti.pow(x, y) # Same as `pow(x, y)` and `x ** y`
```

The `dtype` argument in the `round`, `floor`, and `ceil` functions specifies the data type of the returned value. The default `None` means the returned type is the same as that of the input `x`.
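A pure-Python sketch of the `dtype` behavior described above, with Python's `int`/`float` standing in for Taichi's dtypes (an illustration, not the actual implementation):

```python
import math

def ti_floor(x, dtype=None):
    # dtype=None: keep the input's type; otherwise convert
    # the floored result to the requested type.
    r = math.floor(x)
    return type(x)(r) if dtype is None else dtype(r)

print(ti_floor(2.7))            # 2.0 (same type as the input)
print(ti_floor(2.7, dtype=int)) # 2
```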

Builtin-alike functions

```python
abs(x)    # Same as `ti.abs(x)`
pow(x, y) # Same as `ti.pow(x, y)` and `x ** y`
```

Random number generator

```python
ti.random(dtype=float)
```
note

`ti.random` supports `u32`, `i32`, `u64`, `i64`, and all floating-point types. The range of the returned value is type-specific.

| Type | Range |
| --- | --- |
| i32 | -2,147,483,648 to 2,147,483,647 |
| u32 | 0 to 4,294,967,295 |
| i64 | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 |
| u64 | 0 to 18,446,744,073,709,551,615 |
| floating point | 0.0 to 1.0 |
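The integer ranges are simply the full value range of each type; for instance, an `i32` result amounts to 32 random bits reinterpreted in two's complement. A hedged pure-Python sketch of that idea (not Taichi's generator):

```python
import random

def random_i32():
    # Draw 32 random bits and reinterpret them as a signed
    # two's-complement 32-bit integer.
    bits = random.getrandbits(32)
    return bits - (1 << 32) if bits >= (1 << 31) else bits

v = random_i32()
assert -2_147_483_648 <= v <= 2_147_483_647
```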

Supported atomic operations

In Taichi, augmented assignments (e.g., `x[i] += 1`) are automatically atomic.

caution

When modifying global variables in parallel, make sure you use atomic operations. For example, to sum up all the elements in `x`:

```python
@ti.kernel
def sum():
    for i in x:
        # Approach 1: OK
        total[None] += x[i]

        # Approach 2: OK
        ti.atomic_add(total[None], x[i])

        # Approach 3: Wrong result, because the operation is not atomic.
        total[None] = total[None] + x[i]
```
note

When atomic operations are applied to local values, the Taichi compiler will try to demote these operations into their non-atomic counterparts.

Apart from augmented assignments, explicit atomic operations, such as `ti.atomic_add`, also perform read-modify-write atomically. These operations additionally return the old value of the first argument. For example:

```python
x[i] = 3
y[i] = 4
z[i] = ti.atomic_add(x[i], y[i])
# now x[i] = 7, y[i] = 4, z[i] = 3
```

Below is a list of all explicit atomic operations:

| Operation | Behavior |
| --- | --- |
| `ti.atomic_add(x, y)` | atomically compute `x + y`, store the result in `x`, and return the old value of `x` |
| `ti.atomic_sub(x, y)` | atomically compute `x - y`, store the result in `x`, and return the old value of `x` |
| `ti.atomic_and(x, y)` | atomically compute `x & y`, store the result in `x`, and return the old value of `x` |
| `ti.atomic_or(x, y)` | atomically compute `x \| y`, store the result in `x`, and return the old value of `x` |
| `ti.atomic_xor(x, y)` | atomically compute `x ^ y`, store the result in `x`, and return the old value of `x` |
| `ti.atomic_max(x, y)` | atomically compute `max(x, y)`, store the result in `x`, and return the old value of `x` |
| `ti.atomic_min(x, y)` | atomically compute `min(x, y)`, store the result in `x`, and return the old value of `x` |
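The common contract of these operations (store the combined result, return the old value) can be sketched sequentially in plain Python; real Taichi atomics perform the same read-modify-write indivisibly on the device:

```python
def atomic_max(buf, i, y):
    # Read-modify-write: store max(old, y) into buf[i],
    # then return the old value.
    old = buf[i]
    buf[i] = max(old, y)
    return old

x = [3]
old = atomic_max(x, 0, 5)
print(old, x[0])  # 3 5
```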
note

Supported atomic operations on each backend:

| type | CPU | CUDA | OpenGL | Metal | C source |
| --- | --- | --- | --- | --- | --- |
| i32 | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| f32 | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| i64 | ✔️ | ✔️ | ⭕ |  | ✔️ |
| f64 | ✔️ | ✔️ | ⭕ |  | ✔️ |

(⭕: requires extensions for the backend.)

Supported operators for matrices

The previously mentioned operations on primitive types can also be applied to compound types such as matrices, in which case they are applied element-wise. For example:

```python
B = ti.Matrix([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
C = ti.Matrix([[3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])

A = ti.sin(B)
# is equivalent to
for i in ti.static(range(2)):
    for j in ti.static(range(3)):
        A[i, j] = ti.sin(B[i, j])

A = B ** 2
# is equivalent to
for i in ti.static(range(2)):
    for j in ti.static(range(3)):
        A[i, j] = B[i, j] ** 2

A = B ** C
# is equivalent to
for i in ti.static(range(2)):
    for j in ti.static(range(3)):
        A[i, j] = B[i, j] ** C[i, j]

A += 2
# is equivalent to
for i in ti.static(range(2)):
    for j in ti.static(range(3)):
        A[i, j] += 2

A += B
# is equivalent to
for i in ti.static(range(2)):
    for j in ti.static(range(3)):
        A[i, j] += B[i, j]
```

In addition, the following matrix operations are supported as methods:

```python
a = ti.Matrix([[2, 3], [4, 5]])
a.transpose()   # the transposed matrix of `a`; does not affect the data in `a`
a.trace()       # the trace of matrix `a`; equals `a[0, 0] + a[1, 1] + ...`
a.determinant() # the determinant of matrix `a`
a.inverse()     # (ti.Matrix) the inverse of matrix `a`
a @ a           # `@` denotes matrix multiplication
```
note

For now, `determinant()` and `inverse()` only work in Taichi-scope, and the size of the matrix must be 1x1, 2x2, 3x3, or 4x4.
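For the 2x2 example above, the returned scalar values can be checked by hand. This is plain-Python arithmetic mirroring the definitions, not Taichi code:

```python
a = [[2, 3], [4, 5]]

# trace: sum of the diagonal entries
trace = a[0][0] + a[1][1]                    # 2 + 5 = 7
# 2x2 determinant: ad - bc
det = a[0][0] * a[1][1] - a[0][1] * a[1][0]  # 2*5 - 3*4 = -2

print(trace, det)  # 7 -2
```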

Supported SIMT intrinsics

For the CUDA backend, Taichi supports warp-level and block-level intrinsics needed for writing high-performance SIMT kernels. You can use them in Taichi similarly to how they are used in CUDA kernels. Currently, the following functions are supported:

| Operation | Mapped CUDA intrinsic |
| --- | --- |
| `ti.simt.warp.all_nonzero` | `__all_sync` |
| `ti.simt.warp.any_nonzero` | `__any_sync` |
| `ti.simt.warp.unique` | `__uni_sync` |
| `ti.simt.warp.ballot` | `__ballot_sync` |
| `ti.simt.warp.shfl_sync_i32` | `__shfl_sync` |
| `ti.simt.warp.shfl_sync_f32` | `__shfl_sync` |
| `ti.simt.warp.shfl_up_i32` | `__shfl_up_sync` |
| `ti.simt.warp.shfl_up_f32` | `__shfl_up_sync` |
| `ti.simt.warp.shfl_down_i32` | `__shfl_down_sync` |
| `ti.simt.warp.shfl_down_f32` | `__shfl_down_sync` |
| `ti.simt.warp.shfl_xor_i32` | `__shfl_xor_sync` |
| `ti.simt.warp.match_any` | `__match_any_sync` |
| `ti.simt.warp.match_all` | `__match_all_sync` |
| `ti.simt.warp.active_mask` | `__activemask` |
| `ti.simt.warp.sync` | `__syncwarp` |

Please refer to our API docs for more information on each function.

Here is an example that performs data exchange within a warp in Taichi:

```python
a = ti.field(dtype=ti.i32, shape=32)

@ti.kernel
def foo():
    ti.loop_config(block_dim=32)
    for i in range(32):
        a[i] = ti.simt.warp.shfl_up_i32(ti.u32(0xFFFFFFFF), a[i], 1)

for i in range(32):
    a[i] = i * i

foo()

for i in range(1, 32):
    assert a[i] == (i - 1) * (i - 1)
```
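The lane-shuffle semantics the assertions above rely on can be modeled in plain Python. This is a sequential sketch of one 32-thread warp, not real SIMT execution:

```python
def warp_shfl_up(values, delta):
    # Each lane i receives the value from lane i - delta;
    # lanes with i < delta keep their own value (as in __shfl_up_sync).
    return [values[i - delta] if i >= delta else values[i]
            for i in range(len(values))]

warp = [i * i for i in range(32)]
out = warp_shfl_up(warp, 1)
# out[i] == (i - 1) ** 2 for every i >= 1
```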