Module: parallel.client.view

Views of remote engines.

Authors:
- Min RK
3 Classes

class IPython.parallel.client.view.View(client=None, socket=None, **flags)
Bases: IPython.utils.traitlets.HasTraits

Base View class for more convenient apply(f, *args, **kwargs) syntax via attributes.

Don't use this class directly; use its subclasses.

Methods:
    spin()                  spin the client, and sync
    wait([jobs, timeout])   wait on one or more jobs, for up to timeout seconds

execution methods: apply
    legacy: execute, run
data movement: push, pull, scatter, gather
query methods: get_result, queue_status, purge_results, result_status
control methods: abort, shutdown
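Views are not constructed directly; they are normally obtained from a connected Client. A minimal sketch of obtaining the two subclasses documented below, assuming an ipcluster with engines is already running (rc, dview, lview are illustrative names):

    from IPython.parallel import Client

    rc = Client()                     # connect to the running cluster
    dview = rc[:]                     # DirectView of all engines, via indexed access
    lview = rc.load_balanced_view()   # LoadBalancedView using the task scheduler

    print(dview.targets)              # engine ids addressed by this view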
__init__(client=None, socket=None, **flags)
abort(jobs=None, targets=None, block=None)
    Abort jobs on my engines.

    Parameters:
    jobs : None, str, or list of strs, optional
        If None, abort all jobs; otherwise abort the specified msg_id(s).
apply(f, *args, **kwargs)
    Calls f(*args, **kwargs) on remote engines, returning the result.

    This method sets all apply flags via this View's attributes.

    Returns an AsyncResult instance if self.block is False, otherwise the
    return value of f(*args, **kwargs).
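For illustration, a sketch of the three apply variants (assuming rc is a connected Client and dview = rc[:]; f is just a made-up example function):

    def f(x, y):
        import os
        return (os.getpid(), x + y)

    # blocking call: returns the results directly
    results = dview.apply_sync(f, 1, 2)

    # non-blocking call: returns an AsyncResult immediately
    ar = dview.apply_async(f, 1, 2)
    results = ar.get()           # wait for and fetch the results

    # apply() itself follows the view's block flag
    dview.block = True
    results = dview.apply(f, 1, 2)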
apply_async(f, *args, **kwargs)
    Calls f(*args, **kwargs) on remote engines in a non-blocking manner.

    Returns an AsyncResult instance.
apply_sync(f, *args, **kwargs)
    Calls f(*args, **kwargs) on remote engines in a blocking manner,
    returning the result.
get_result(indices_or_msg_ids=None)
    Return one or more results, specified by history index or msg_id.

    See IPython.parallel.client.client.Client.get_result() for details.
imap(f, *sequences, **kwargs)
    Parallel version of itertools.imap().

    See self.map for details.
map(f, *sequences, **kwargs)
    Override in subclasses.
map_async(f, *sequences, **kwargs)
    Parallel version of builtin map(), using this view's engines.

    This is equivalent to map(..., block=False).

    See self.map for details.
map_sync(f, *sequences, **kwargs)
    Parallel version of builtin map(), using this view's engines.

    This is equivalent to map(..., block=True).

    See self.map for details.
parallel(dist='b', block=None, **flags)
    Decorator for making a ParallelFunction.
purge_results(jobs=[], targets=[])
    Instruct the controller to forget specific results.
queue_status(targets=None, verbose=False)
    Fetch the queue status of my engines.
remote(block=None, **flags)
    Decorator for making a RemoteFunction.
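A sketch of both decorators, parallel (above) and remote, assuming dview is a DirectView; getpid and double are made-up example functions:

    # remote(): calling the decorated function runs it once on each engine
    @dview.remote(block=True)
    def getpid():
        import os
        return os.getpid()

    pids = getpid()                  # one pid per engine in the view

    # parallel(): the decorated function becomes a ParallelFunction whose
    # .map() distributes a sequence element-wise across the engines
    @dview.parallel(block=True)
    def double(x):
        return 2 * x

    doubled = double.map(range(10))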
set_flags(**kwargs)
    Set my attribute flags by keyword.

    Views determine behavior with a few attributes (block, track, etc.).
    These attributes can be set all at once by name with this method.

    Parameters:
    block : bool
        Whether to wait for results.
    track : bool
        Whether to create a MessageTracker so the user can safely edit
        arrays and buffers after non-copying sends.
shutdown(targets=None, restart=False, hub=False, block=None)
    Terminates one or more engine processes, optionally including the hub.
spin()
    Spin the client, and sync.
temp_flags(*args, **kwds)
    Temporarily set flags, for use in with statements.

    See set_flags for permanent setting of flags.

    Examples:

    >>> view.track = False
    >>> with view.temp_flags(track=True):
    ...     ar = view.apply(dostuff, my_big_array)
    ...     ar.tracker.wait()  # wait for send to finish
    >>> view.track
    False
wait(jobs=None, timeout=-1)
    Waits on one or more jobs, for up to timeout seconds.

    Parameters:
    jobs : int, str, list of ints and/or strs, or one or more AsyncResult objects
        Ints are indices to self.history; strs are msg_ids.
        Default: wait on all outstanding messages.
    timeout : float
        A time in seconds, after which to give up. Default is -1,
        which means no timeout.

    Returns:
    True : when all msg_ids are done
    False : timeout reached, some msg_ids still outstanding
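A small sketch of the submit-then-wait pattern (assuming dview is a view; slow_task is a made-up example function):

    def slow_task(t):
        import time
        time.sleep(t)
        return t

    ar = dview.apply_async(slow_task, 2)

    # wait() accepts AsyncResults, msg_ids, or history indices
    finished = dview.wait([ar], timeout=5)   # True once everything completed
    if finished:
        print(ar.get())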
class IPython.parallel.client.view.DirectView(client=None, socket=None, targets=None)
Bases: IPython.parallel.client.view.View

Direct multiplexer view of one or more engines.

These are created via indexed access to a client:

    >>> dv_1 = client[1]
    >>> dv_all = client[:]
    >>> dv_even = client[::2]
    >>> dv_some = client[1:3]

This object provides dictionary access to engine namespaces:

    >>> dv['a'] = 5      # push a=5
    >>> dv['foo']        # pull 'foo'
__init__(client=None, socket=None, targets=None)
activate(suffix='')
    Activate IPython magics associated with this View.

    Defines the magics %px, %autopx, %pxresult, %%px, %pxconfig.

    Parameters:
    suffix : str [default: '']
        The suffix, if any, for the magics. This allows you to have
        multiple views associated with parallel magics at the same time.

        For example, rc[::2].activate(suffix='_even') will give you the
        magics %px_even, %pxresult_even, etc. for running magics on the
        even engines.
clear(targets=None, block=None)
    Clear the remote namespaces on my engines.
execute(code, silent=True, targets=None, block=None)
    Executes code on targets in a blocking or nonblocking manner.

    execute is always bound (affects the engine namespace).

    Parameters:
    code : str
        The code string to be executed.
    block : bool
        Whether or not to wait until done to return. Default: self.block.
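A sketch of execute (assuming dview = rc[:]); the code string runs in each engine's namespace:

    # run a statement on every engine in this view
    ar = dview.execute('a = [i ** 2 for i in range(4)]', block=False)
    ar.wait()                    # block until all engines are done

    # names created by the code live in the engine namespaces
    print(dview['a'])            # one value per engine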
gather(key, dist='b', targets=None, block=None)
    Gather a partitioned sequence on a set of engines as a single local sequence.
get(key_s)
    Get object(s) by key_s from the remote namespace.

    See pull for details.
importer
    sync_imports(local=True) as a property.

    See sync_imports for details.
map(f, *sequences, **kwargs)
    view.map(f, *sequences, block=self.block) => list | AsyncMapResult

    Parallel version of builtin map, using this View's targets.

    There will be one task per target, so work will be chunked if the
    sequences are longer than the number of targets.

    Results can be iterated as they are ready, but will become available
    in chunks.

    Parameters:
    f : callable
        Function to be mapped.
    *sequences : one or more sequences of matching length
        The sequences to be distributed and passed to f.
    block : bool
        Whether to wait for the result or not [default: self.block].

    Returns:
    If block=False
        An AsyncMapResult instance. An object like AsyncResult, but which
        reassembles the sequence of results into a single list.
        AsyncMapResults can be iterated through before all results are
        complete.
    Else
        A list, the result of map(f, *sequences).
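For illustration, a sketch of blocking and non-blocking map on a DirectView (square is just an example function):

    def square(x):
        return x ** 2

    # blocking: behaves like the builtin map(), returning a plain list
    squares = dview.map_sync(square, range(32))

    # non-blocking: returns an AsyncMapResult that can be iterated
    amr = dview.map_async(square, range(32))
    for value in amr:            # results become available chunk by chunk
        print(value)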
pull(names, targets=None, block=None)
    Get object(s) by name from the remote namespace.

    Will return one object if it is a key. Can also take a list of keys,
    in which case it will return a list of objects.
push(ns, targets=None, block=None, track=None)
    Update the remote namespace with dict ns.

    Parameters:
    ns : dict
        Dict of keys with which to update engine namespace(s).
    block : bool [default: self.block]
        Whether to wait to be notified of engine receipt.
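A sketch of push and pull (assuming dview = rc[:]); the dictionary-style access shown in the class description is equivalent sugar:

    # push: update every engine's namespace with these names
    dview.push(dict(a=5, b='hello'), block=True)

    # pull: fetch names back; a multi-engine view returns one value per engine
    a_values = dview.pull('a', block=True)

    # dictionary-style access does the same thing
    dview['c'] = [1, 2, 3]
    print(dview['c'])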
run(filename, targets=None, block=None)
    Execute the contents of filename on my engine(s).

    This simply reads the contents of the file and calls execute.

    Parameters:
    filename : str
        The path to the file.
    targets : int/str/list of ints/strs
        The engines on which to execute. Default: all.
    block : bool
        Whether or not to wait until done. Default: self.block.
scatter(key, seq, dist='b', flatten=False, targets=None, block=None, track=None)
    Partition a Python sequence and send the partitions to a set of engines.
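A sketch of the usual scatter/execute/gather pattern for partitioning data across engines (assuming dview = rc[:]):

    dview.block = True

    # partition the sequence: each engine receives one slice, bound to 'x'
    dview.scatter('x', list(range(16)))

    # work on the local slice in each engine's namespace
    dview.execute('y = [i ** 2 for i in x]')

    # reassemble the per-engine pieces into a single local list
    y = dview.gather('y')
    print(y)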
sync_imports(*args, **kwds)
    Context manager for performing simultaneous local and remote imports.

    'import x as y' will not work. The 'as y' part will simply be ignored.

    If local=True, then the package will also be imported locally.

    If quiet=True, no output will be produced when attempting remote imports.

    Note that remote-only (local=False) imports have not been implemented.

    >>> with view.sync_imports():
    ...     from numpy import recarray
    importing recarray from numpy on engine(s)
update(ns)
    Update the remote namespace with dict ns.

    See push for details.
use_dill()
    Expand serialization support with dill.

    Adds support for closures, etc.

    This calls IPython.utils.pickleutil.use_dill() here and on each engine.
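A sketch of use_dill (requires the dill package on the client and on every engine); it broadens what can be shipped to engines, for example the closure below, which plain pickling may not handle:

    dview.use_dill()             # switch client and engines to dill-based pickling

    def make_adder(n):
        def add(x):
            return x + n         # closure over n
        return add

    add5 = make_adder(5)
    print(dview.apply_sync(add5, 10))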
class IPython.parallel.client.view.LoadBalancedView(client=None, socket=None, **flags)
Bases: IPython.parallel.client.view.View

A load-balancing View that only executes via the Task scheduler.

Load-balanced views can be created with the client's view method:

    >>> v = client.load_balanced_view()

or targets can be specified, to restrict the potential destinations:

    >>> v = client.load_balanced_view([1, 3])

which would restrict load-balancing to engines 1 and 3.
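A sketch of submitting independent tasks through a load-balanced view (assuming client is a connected Client; work is a made-up function):

    lview = client.load_balanced_view()

    def work(x):
        return x * x

    # each submission goes to whichever engine the task scheduler picks
    async_results = [lview.apply_async(work, i) for i in range(10)]
    results = [ar.get() for ar in async_results]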
__init__(client=None, socket=None, **flags)
map(f, *sequences, **kwargs)
    view.map(f, *sequences, block=self.block, chunksize=1, ordered=True) => list | AsyncMapResult

    Parallel version of builtin map, load-balanced by this View.

    block and chunksize can be specified by keyword only.

    Each chunksize elements will be a separate task, and will be
    load-balanced. This lets individual elements be available for
    iteration as soon as they arrive.

    Parameters:
    f : callable
        Function to be mapped.
    *sequences : one or more sequences of matching length
        The sequences to be distributed and passed to f.
    block : bool [default: self.block]
        Whether to wait for the result or not.
    track : bool
        Whether to create a MessageTracker so the user can safely edit
        arrays and buffers after non-copying sends.
    chunksize : int [default: 1]
        How many elements should be in each task.
    ordered : bool [default: True]
        Whether the results should be gathered as they arrive, or enforce
        the order of submission.

        Only applies when iterating through AsyncMapResult as results
        arrive. Has no effect when block=True.

    Returns:
    If block=False
        An AsyncMapResult instance. An object like AsyncResult, but which
        reassembles the sequence of results into a single list.
        AsyncMapResults can be iterated through before all results are
        complete.
    Else
        A list, the result of map(f, *sequences).
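A sketch of a chunked, out-of-order load-balanced map (assuming lview is a LoadBalancedView; slow_square is a made-up example function):

    def slow_square(x):
        import time, random
        time.sleep(random.random())
        return x ** 2

    # four elements per task; yield results as tasks finish
    amr = lview.map(slow_square, range(20), block=False, chunksize=4, ordered=False)

    for value in amr:        # iteration follows completion order, not submission order
        print(value)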
set_flags(**kwargs)
    Set my attribute flags by keyword.

    A View is a wrapper for the Client's apply method, with attributes
    that supply keyword arguments to that call; those attributes can be
    set by keyword argument with this method.

    Parameters:
    block : bool
        Whether to wait for results.
    track : bool
        Whether to create a MessageTracker so the user can safely edit
        arrays and buffers after non-copying sends.
    after : Dependency or collection of msg_ids
        Only for load-balanced execution (targets=None).
        Specify a list of msg_ids as a time-based dependency. This job
        will only be run after the dependencies have been met.
    follow : Dependency or collection of msg_ids
        Only for load-balanced execution (targets=None).
        Specify a list of msg_ids as a location-based dependency. This job
        will only be run on an engine where this dependency is met.
    timeout : float/int or None
        Only for load-balanced execution (targets=None).
        Specify an amount of time (in seconds) for the scheduler to wait
        for dependencies to be met before failing with a DependencyTimeout.
    retries : int
        Number of times a task will be retried on failure.
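A sketch of the dependency flags in use, set temporarily via temp_flags so they apply to a single submission (setup and analysis are made-up example functions):

    def setup():
        return 'setup done'

    def analysis():
        return 'analysis done'

    ar_setup = lview.apply_async(setup)

    # run the analysis only after the setup task has completed; retry once on
    # failure, and let the scheduler raise DependencyTimeout if the dependency
    # is not met within 60 seconds
    with lview.temp_flags(after=ar_setup, retries=1, timeout=60):
        ar_analysis = lview.apply_async(analysis)

    print(ar_analysis.get())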
3 Functions

IPython.parallel.client.view.save_ids(f)
    Keep our history and outstanding attributes up to date after a method call.

IPython.parallel.client.view.sync_results(f)
    Sync relevant results from self.client to our results attribute.

IPython.parallel.client.view.spin_after(f)
    Call spin after the method.