# API

## Types

- `ParallelProcessingTools.AbstractThreadLocal`
- `ParallelProcessingTools.ThreadLocal`

## Functions and macros

- `ParallelProcessingTools.@critical`
- `ParallelProcessingTools.@mp_async`
- `ParallelProcessingTools.@mt_async`
- `ParallelProcessingTools.@mt_out_of_order`
- `ParallelProcessingTools.@onprocs`
- `ParallelProcessingTools.@onthreads`
- `ParallelProcessingTools.allthreads`
- `ParallelProcessingTools.getallvalues`
- `ParallelProcessingTools.getlocalvalue`
- `ParallelProcessingTools.workpart`
## Documentation
ParallelProcessingTools.AbstractThreadLocal — Type

```
abstract type AbstractThreadLocal{T} end
```

Abstract type for thread-local values of type `T`.

The value for the current thread is accessed via `getindex(::AbstractThreadLocal)` and `setindex!(::AbstractThreadLocal, x)`.

To access both regular and thread-local values in a unified manner, use the function `getlocalvalue`. To get all values across all threads, use the function `getallvalues`.

The default implementation is `ThreadLocal`.
ParallelProcessingTools.ThreadLocal — Type

```
ThreadLocal{T} <: AbstractThreadLocal{T}
```

Represents a thread-local value. See `AbstractThreadLocal` for the API.

Constructors:

```
ThreadLocal{T}() where {T}
ThreadLocal(value::T) where {T}
ThreadLocal{T}(f::Base.Callable) where {T}
```

Examples:

```
tlvalue = ThreadLocal(0)
@onthreads allthreads() tlvalue[] = Base.Threads.threadid()
getallvalues(tlvalue) == allthreads()
```

```
rand_value_on_each_thread = ThreadLocal{Float64}(rand)
all(x -> 0 < x < 1, getallvalues(rand_value_on_each_thread))
```
ParallelProcessingTools.@critical — Macro

```
@critical expr
```

Mark the code in `expr` as a critical section. Code in critical sections will never be executed in parallel (via multithreading) with any other critical section.

`@critical` is very useful for marking non-threadsafe code.

Example:

```
@onthreads allthreads() begin
    @critical @info Base.Threads.threadid()
end
```

Without `@critical`, the above will typically crash Julia.
ParallelProcessingTools.@mp_async — Macro

```
@mp_async expr
```

Run `expr` asynchronously on a worker process.

Compatible with `@sync`.

Equivalent to `Distributed.@spawn expr` on Julia <= v1.2 and to `Distributed.@spawnat :any expr` on Julia >= v1.3.
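As a minimal sketch (the worker count and workload here are arbitrary placeholders), `@mp_async` returns a `Future` that can be waited on with `fetch`:

```julia
using Distributed
addprocs(2)  # start two worker processes (arbitrary choice)

using ParallelProcessingTools

# Spawn a computation on some worker process:
f = @mp_async sum(rand(1000))

result = fetch(f)  # blocks until the worker is done
@assert 0 < result < 1000
```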
ParallelProcessingTools.@mt_async — Macro

```
@mt_async expr
```

Spawn a Julia task running `expr` asynchronously.

Compatible with `@sync`. Uses a multi-threaded task scheduler if available (on Julia >= v1.3).

Equivalent to `Base.@async` on Julia <= v1.2 and to `Base.Threads.@spawn` on Julia >= v1.3.
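A short sketch of combining `@mt_async` with `@sync` (the squaring workload is an arbitrary placeholder; each task writes to a distinct slot, so no locking is needed):

```julia
using ParallelProcessingTools

results = Vector{Int}(undef, 4)

# @sync waits for all tasks spawned in its scope:
@sync for i in 1:4
    @mt_async begin
        results[i] = i^2  # each task writes only to its own slot
    end
end

@assert results == [1, 4, 9, 16]
```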
ParallelProcessingTools.@mt_out_of_order — Macro

```
@mt_out_of_order begin expr... end
```

Runs all top-level expressions in `begin expr... end` on parallel tasks. On Julia >= v1.3, the tasks will run multi-threaded.

Example:

```
@mt_out_of_order begin
    a = foo()
    bar()
    c = baz()
end
```

This will run `a = foo()`, `bar()` and `c = baz()` in parallel and in arbitrary order; the results of the assignments will appear in the outside scope.
ParallelProcessingTools.@onprocs — Macro

```
@onprocs procsel expr
```

Executes `expr` in parallel on all processes in `procsel`. Waits until all processes are done. Returns all results as a vector (or as a single scalar value, if `procsel` itself is a scalar).

Example:

```
using Distributed
addprocs(2)
workers() == @onprocs workers() myid()
```
ParallelProcessingTools.@onthreads — Macro

```
@onthreads threadsel expr
```

Execute the code in `expr` in parallel on the threads in `threadsel`.

`threadsel` should be a single thread ID or a range (or array) of thread IDs. If `threadsel == Base.Threads.threadid()`, `expr` is run on the current thread with only minimal overhead.

Note: Currently, multiple `@onthreads` sections will not run in parallel to each other, even if they use disjoint sets of threads, due to limitations of the Julia multithreading implementation. This restriction is likely to disappear in future Julia versions.

In contrast to `Base.Threads.@threads`, `@onthreads` does forward exceptions to the caller.

Example 1:

```
tlsum = ThreadLocal(0.0)
data = rand(100)
@onthreads allthreads() begin
    tlsum[] = sum(workpart(data, allthreads(), Base.Threads.threadid()))
end
sum(getallvalues(tlsum)) ≈ sum(data)
```

Example 2:

```
# Assuming 4 threads:
tl = ThreadLocal(42)
threadsel = 2:3
@onthreads threadsel begin
    tl[] = Base.Threads.threadid()
end
getallvalues(tl)[threadsel] == [2, 3]
getallvalues(tl)[[1, 4]] == [42, 42]
```
ParallelProcessingTools.allthreads — Method

```
allthreads()
```

Convenience function, returns `1:Base.Threads.nthreads()`.
ParallelProcessingTools.getallvalues — Function

```
getallvalues(v::AbstractThreadLocal{T})::AbstractVector{T}
```

Access all values (one for each thread) of a thread-local value as a vector. Can only be called in single-threaded code sections.
ParallelProcessingTools.getlocalvalue — Function

```
getlocalvalue(x::Any) = x
getlocalvalue(x::ThreadLocal) = x[]
```

Access plain values and thread-local values in a unified fashion.
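A small sketch of why this is useful: generic code can accept either a plain value or a `ThreadLocal` without branching on the type (the helper name `current` is made up for illustration):

```julia
using ParallelProcessingTools

# Hypothetical helper that works with both plain and thread-local state:
current(x) = getlocalvalue(x)

plain = 42
tl = ThreadLocal(42)

@assert current(plain) == 42  # plain value passed through
@assert current(tl) == 42     # thread-local value for this thread
```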
ParallelProcessingTools.workpart — Function

```
workpart(data::AbstractArray, workersel::AbstractVector{W}, current_worker::W) where {W}
```

Get the part of `data` that the execution unit `current_worker` is responsible for. Implies a partition of `data` across the workers listed in `workersel`.

For generic `data` arrays, `workpart` will return a view. If `data` is a `Range` (e.g. indices to be processed), a sub-range will be returned.

Type `W` will typically be `Int` and `workersel` will usually be a range/array of thread/process IDs.

Note: `workersel` is required to be sorted in ascending order and to contain no duplicate entries.

Examples:

```
using Distributed, Base.Threads
A = rand(100)
# ...
sub_A = workpart(A, workers(), myid())
# ...
idxs = workpart(eachindex(sub_A), allthreads(), threadid())
for i in idxs
    # ...
end
```
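As a quick sketch of the implied partition (the worker IDs `1:3` are hypothetical), the parts returned for all workers together cover `data` without overlap:

```julia
using ParallelProcessingTools

data = 1:10
workersel = 1:3  # hypothetical worker/thread IDs (sorted, no duplicates)

# One sub-range per worker; concatenated in order they reproduce data:
parts = [workpart(data, workersel, w) for w in workersel]
@assert reduce(vcat, parts) == collect(data)
```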