taichi.lang.misc#

taichi.lang.misc.arm64#

The ARM CPU backend.

taichi.lang.misc.assume_in_range(val, base, low, high)#

Hints the compiler that a value lies within a specified range, allowing it to perform scratchpad optimization, and returns the value untouched.

The assumed range is [base + low, base + high).

Parameters:
  • val (Number) – The input value.

  • base (Number) – The base point for the range interval.

  • low (Number) – The lower offset relative to base (included).

  • high (Number) – The higher offset relative to base (excluded).

Returns:

The input value, untouched.

Example:

>>> # hint the compiler that x is in range [8, 12).
>>> x = ti.assume_in_range(x, 10, -2, 2)
>>> x
10
taichi.lang.misc.block_local(*args)#

Hints Taichi to cache the fields and to enable the BLS optimization.

Please visit https://docs.taichi-lang.org/docs/performance for details on how BLS is used.

Parameters:

*args (List[Field]) – A list of sparse Taichi fields.
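
Example (a minimal sketch of typical usage; the field names and the pointer-dense layout are illustrative):

import taichi as ti

ti.init(arch=ti.cuda)

a = ti.field(ti.f32)
b = ti.field(ti.f32)
# BLS requires the fields to live under a block structure (e.g. pointer -> dense)
block = ti.root.pointer(ti.ij, 8)
block.dense(ti.ij, 128).place(a)
block.dense(ti.ij, 128).place(b)

@ti.kernel
def stencil():
    # cache `a` in block-local storage for the struct-for below
    ti.block_local(a)
    for i, j in a:
        # neighboring reads of `a` are served from the block-local buffer
        b[i, j] = a[i - 1, j] + a[i, j + 1]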

taichi.lang.misc.cache_read_only(*args)#
taichi.lang.misc.cc#
taichi.lang.misc.cpu#

A list of CPU backends supported on the current system. Currently contains ‘x64’, ‘x86_64’, ‘arm64’, ‘cc’, ‘wasm’.

When this is used, Taichi automatically picks the matching CPU backend.
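
Example:

>>> ti.init(arch=ti.cpu)  # Taichi picks the matching CPU backend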

taichi.lang.misc.cuda#

The CUDA backend.

taichi.lang.misc.dx11#

The DX11 backend.

taichi.lang.misc.dx12#

The DX12 backend.

taichi.lang.misc.extension#

An enumeration of Taichi extensions.

The list of currently available extensions is [‘sparse’, ‘quant’, ‘mesh’, ‘quant_basic’, ‘data64’, ‘adstack’, ‘bls’, ‘assertion’, ‘extfunc’].
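
Example (extensions can be queried per backend with is_extension_supported from this module; the result depends on the backend and build):

>>> ti.is_extension_supported(ti.cuda, ti.extension.sparse)
True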

taichi.lang.misc.get_compute_stream_device_time_elapsed_us() → float#
taichi.lang.misc.gles#

The OpenGL ES backend. OpenGL ES 3.1 required.

taichi.lang.misc.global_thread_idx()#

Returns the global thread id of the running thread. Only available on the cpu and cuda backends.

For cpu backends this equals the cpu thread id; for cuda backends it equals block_id * block_dim + thread_id.

Example:

>>> f = ti.field(ti.f32, shape=(16, 16))
>>> @ti.kernel
>>> def test():
>>>     for i in ti.grouped(f):
>>>         print(ti.global_thread_idx())
>>>
>>> test()
taichi.lang.misc.gpu#

A list of GPU backends supported on the current system. Currently contains ‘cuda’, ‘metal’, ‘opengl’, ‘vulkan’, ‘dx11’, ‘dx12’, ‘gles’.

When this is used, Taichi automatically picks the matching GPU backend. If no GPU is detected, Taichi falls back to the CPU backend.
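
Example:

>>> ti.init(arch=ti.gpu)  # falls back to the CPU backend if no GPU is found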

taichi.lang.misc.i#

Axis 0. For multi-dimensional arrays it’s the direction down the rows. For a 1d array it’s the direction along the array.
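
Example (axis constants are passed to SNode constructors to select layout axes):

>>> x = ti.field(ti.f32)
>>> ti.root.dense(ti.i, 8).place(x)  # lay out x along axis 0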

taichi.lang.misc.ij#

Axes (0, 1).

taichi.lang.misc.ijk#

Axes (0, 1, 2).

taichi.lang.misc.ijkl#

Axes (0, 1, 2, 3).

taichi.lang.misc.ijl#

Axes (0, 1, 3).

taichi.lang.misc.ik#

Axes (0, 2).

taichi.lang.misc.ikl#

Axes (0, 2, 3).

taichi.lang.misc.il#

Axes (0, 3).

taichi.lang.misc.init(arch=None, default_fp=None, default_ip=None, _test_mode=False, enable_fallback=True, require_version=None, **kwargs)#

Initializes the Taichi runtime.

This should always be the entry point of your Taichi program. Most importantly, it sets the backend used throughout the program.

Parameters:
  • arch – Backend to use. This is usually cpu or gpu.

  • default_fp (Optional[type]) – Default floating-point type.

  • default_ip (Optional[type]) – Default integral type.

  • require_version (Optional[string]) – A version string specifying the Taichi version required by the program.

  • **kwargs

    Taichi provides highly customizable compilation through kwargs, which allows for fine-grained control of Taichi compiler behavior. Below we list some of the most frequently used ones. For a complete list, please check out https://github.com/taichi-dev/taichi/blob/master/taichi/program/compile_config.h.

    • cpu_max_num_threads (int): Sets the number of threads used by the CPU thread pool.

    • debug (bool): Enables the debug mode, under which Taichi does a few more things like boundary checks.

    • print_ir (bool): Prints the CHI IR of the Taichi kernels.

    • offline_cache (bool): Enables offline caching of compiled kernels. Defaults to True. When enabled, Taichi caches compiled kernels on your local disk to accelerate future calls.

    • random_seed (int): Sets the seed of the random generator. The default is 0.
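
Example (a typical initialization combining the options above; the values are illustrative):

>>> ti.init(arch=ti.gpu, default_fp=ti.f64,
...         debug=True, random_seed=42)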

taichi.lang.misc.j#

Axis 1. For multi-dimensional arrays it’s the direction across the columns.

taichi.lang.misc.jk#

Axes (1, 2).

taichi.lang.misc.jkl#

Axes (1, 2, 3).

taichi.lang.misc.jl#

Axes (1, 3).

taichi.lang.misc.k#

Axis 2. For arrays of dimension d >= 3, viewing each cell as an array of dimension d-2, this is the first axis of that cell.

taichi.lang.misc.kl#

Axes (2, 3).

taichi.lang.misc.l#

Axis 3. For arrays of dimension d >= 4, viewing each cell as an array of dimension d-2, this is the second axis of that cell.

taichi.lang.misc.loop_config(block_dim=None, serialize=False, parallelize=None, block_dim_adaptive=True, bit_vectorize=False)#

Sets directives for the next loop.

Parameters:
  • block_dim (int) – The number of threads in a block on GPU.

  • serialize (bool) – Whether to let the for loop execute serially; serialize=True is equivalent to parallelize=1.

  • parallelize (int) – The number of threads to use on CPU.

  • block_dim_adaptive (bool) – Whether to allow backends to set block_dim adaptively. Enabled by default.

  • bit_vectorize (bool) – Whether to enable bit vectorization of struct fors on quant_arrays.

Examples:

@ti.kernel
def break_in_serial_for() -> ti.i32:
    a = 0
    ti.loop_config(serialize=True)
    for i in range(100):  # This loop runs serially
        a += i
        if i == 10:
            break
    return a

break_in_serial_for()  # returns 55

n = 128
val = ti.field(ti.i32, shape=n)
@ti.kernel
def fill():
    ti.loop_config(parallelize=8, block_dim=16)
    # If the kernel is run on the CPU backend, 8 threads will be used to run it
    # If the kernel is run on the CUDA backend, each block will have 16 threads.
    for i in range(n):
        val[i] = i

u1 = ti.types.quant.int(bits=1, signed=False)
x = ti.field(dtype=u1)
y = ti.field(dtype=u1)
cell = ti.root.dense(ti.ij, (128, 4))
cell.quant_array(ti.j, 32).place(x)
cell.quant_array(ti.j, 32).place(y)
@ti.kernel
def copy():
    ti.loop_config(bit_vectorize=True)
    # 32 bits, instead of 1 bit, will be copied at a time
    for i, j in x:
        y[i, j] = x[i, j]
taichi.lang.misc.mesh_local(*args)#

Hints the compiler to cache the mesh attributes and to enable the mesh BLS optimization. Only available on backends supporting ti.extension.mesh, and intended for use with mesh-for loops.

Related to https://github.com/taichi-dev/taichi/issues/3608

Parameters:

*args (List[Attribute]) – A list of mesh attributes or fields accessed as attributes.

Examples:

# instantiate model
mesh_builder = ti.Mesh.tri()
mesh_builder.verts.place({
    'x' : ti.f32,
    'y' : ti.f32
})
model = mesh_builder.build(meta)

@ti.kernel
def foo():
    # hint the compiler to cache mesh vertex attribute `x` and `y`.
    ti.mesh_local(model.verts.x, model.verts.y)
    for v0 in model.verts: # mesh-for loop
        for v1 in v0.verts:
            v0.x += v1.y
taichi.lang.misc.mesh_patch_idx()#

Returns the internal mesh patch id of the running thread. Only available on backends supporting ti.extension.mesh, and intended for use inside mesh-for loops.

Related to https://github.com/taichi-dev/taichi/issues/3608
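
Example (a minimal sketch, reusing the `model` built in the mesh_local example above):

@ti.kernel
def show_patches():
    for v in model.verts:  # mesh-for loop
        # prints the id of the mesh patch this thread is processing
        print(ti.mesh_patch_idx())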

taichi.lang.misc.metal#

The Apple Metal backend.

taichi.lang.misc.no_activate(*args)#

Deactivates a SNode pointer.

taichi.lang.misc.opengl#

The OpenGL backend. OpenGL 4.3 required.

taichi.lang.misc.reset()#

Resets Taichi to its initial state. This will destroy all the allocated fields and kernels, and restore the runtime to its default configuration.

Example:

>>> a = ti.field(ti.i32, shape=())
>>> a[None] = 1
>>> print("before reset: ", a)
before reset: 1
>>>
>>> ti.reset()
>>> print("after reset: ", a)
# will raise an error because a is unavailable after reset.
taichi.lang.misc.vulkan#

The Vulkan backend.

taichi.lang.misc.wasm#

The WebAssembly backend.

taichi.lang.misc.x64#

The x64 CPU backend.

taichi.lang.misc.x86_64#

The x64 CPU backend.