This document describes the multi-threading model of the engine.
A thread is an independent execution path through a program. In a single-threaded program, a single execution path is followed at all times. In a multi-threaded program, multiple execution paths through the program can be followed in parallel.
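The following is a minimal, engine-agnostic sketch of this idea using Python's standard `threading` module: two independent execution paths run through the same program and interleave in an order decided by the scheduler.

```python
import threading

results = []
lock = threading.Lock()

def execution_path(name, count):
    # Each thread is an independent execution path through the program.
    for i in range(count):
        with lock:
            results.append((name, i))

# Two execution paths proceed concurrently instead of one after the other.
t1 = threading.Thread(target=execution_path, args=("A", 3))
t2 = threading.Thread(target=execution_path, args=("B", 3))
t1.start()
t2.start()
t1.join()
t2.join()

# All six steps complete; the interleaving order is scheduler-dependent.
print(len(results))
```

Note that the total amount of work done is the same; only the order in which the two paths' steps interleave varies from run to run.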
Without going into too much detail, a multi-threaded program can still execute on a hardware platform with a single hardware thread (HW thread) available. The HW thread will be shared by the program's different execution paths, which execute in turn, one after the other.
It is the responsibility of the platform operating system (OS) to arbitrate between all the program execution paths and determine which one will run next. This process is called scheduling, and on most modern systems preemptive scheduling is the norm (you can look into cooperative scheduling as well for the sake of completeness). Program execution paths, or OS threads, are executed on HW threads.
In the rest of this document, "thread" refers to an OS thread unless specified otherwise.
Finally, on a system with multiple HW threads, the OS is free to schedule multiple OS threads to run in parallel on different HW threads.
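The distinction between OS threads and HW threads can be demonstrated by deliberately starting more OS threads than the processor has HW threads. This is a generic sketch, not engine code: the OS scheduler time-slices the oversubscribed threads so that every one still runs to completion.

```python
import os
import threading

# Number of HW threads the host processor reports (may be None on some platforms).
hw_threads = os.cpu_count() or 1

done = []
lock = threading.Lock()

def work(i):
    with lock:
        done.append(i)

# Start twice as many OS threads as there are HW threads; the scheduler
# shares the HW threads between them, so all of them complete.
threads = [threading.Thread(target=work, args=(i,)) for i in range(hw_threads * 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(done) == hw_threads * 2)
```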
At the lowest level, the engine breaks most of its work into small execution units (tasks) that are submitted to a pool of worker OS threads.
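The engine's task system is internal, but the submit-to-worker-pool model it follows can be illustrated with Python's standard `ThreadPoolExecutor`. The `task` function below is a hypothetical stand-in for a small execution unit.

```python
from concurrent.futures import ThreadPoolExecutor

def task(n):
    # Hypothetical small execution unit; the engine's real tasks are internal.
    return n * n

# Submit the tasks to a pool of worker threads and gather the results.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(task, n) for n in range(8)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Collecting the futures in submission order keeps the results ordered even though the tasks may complete on different workers in any order.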
While the engine is built to take advantage of multi-threading, it can still run on a single thread, and that is the default execution mode. To take advantage of parallel execution, you need to initialize the worker thread pool by calling the CreateWorkers function.
The number of worker threads is usually equal to the number of HW threads the host processor supports, minus one to account for the main program thread.
Warning: You cannot enable worker threads at runtime. Your program should either use the worker threads from the start or run single-threaded.
The Asynchronous operations section describes how to safely call the engine from multiple threads.