IPython Documentation

Module: parallel.client.asyncresult

Inheritance diagram for IPython.parallel.client.asyncresult:

AsyncResult objects for the client

Authors:

  • MinRK

Classes

AsyncHubResult

class IPython.parallel.client.asyncresult.AsyncHubResult(client, msg_ids, fname='unknown', targets=None, tracker=None)

Bases: IPython.parallel.client.asyncresult.AsyncResult

Class to wrap pending results that must be requested from the Hub.

Note that waiting/polling on these objects requires polling the Hub over the network, so use AsyncHubResult.wait() sparingly.
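
For illustration only (not part of the original docstring), here is a minimal sketch of how an AsyncHubResult typically arises, assuming a running ipcluster and that hub_history() lists the msg_ids of earlier tasks recorded by the Hub; the exact Client API may differ between versions:

    from IPython.parallel import Client

    rc = Client()
    msg_ids = rc.hub_history()[-4:]   # assumed helper: ids of some earlier tasks known to the Hub

    # get_result returns an AsyncHubResult when the results are not held locally
    ar = rc.get_result(msg_ids)
    ar.wait()                         # one blocking round-trip to the Hub; use sparingly
    print(ar.get())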

__init__(client, msg_ids, fname='unknown', targets=None, tracker=None)
abort()

abort my tasks.

display_outputs(groupby='type')

republish the outputs of the computation

Parameters :

groupby : str [default: type]

if ‘type’:

Group outputs by type (show all stdout, then all stderr, etc.):

[stdout:1] foo
[stdout:2] foo
[stderr:1] bar
[stderr:2] bar

if ‘engine’:

Display outputs for each engine before moving on to the next:

[stdout:1] foo
[stderr:1] bar
[stdout:2] foo
[stderr:2] bar

if ‘order’:

Like ‘type’, but further collate individual displaypub outputs. This is meant for cases of each command producing several plots, and you would like to see all of the first plots together, then all of the second plots, and so on.
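
A hedged usage sketch (not from the original docstring), assuming a running ipcluster with at least two engines:

    from IPython.parallel import Client

    rc = Client()
    view = rc[:]

    ar = view.execute("import os; print(os.getpid())", block=False)
    ar.wait()

    ar.display_outputs()                  # default: group by output type
    ar.display_outputs(groupby='engine')  # all output from each engine in turn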

elapsed

elapsed time since initial submission

get(timeout=-1)

Return the result when it arrives.

If timeout is not None and the result does not arrive within timeout seconds then TimeoutError is raised. If the remote call raised an exception then that exception will be reraised by get() inside a RemoteError.

get_dict(timeout=-1)

Get the results as a dict, keyed by engine_id.

timeout behavior is described in get().
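
As an illustrative sketch of the timeout and error behavior (assuming a running ipcluster; the error classes are taken from IPython.parallel.error):

    from IPython.parallel import Client
    from IPython.parallel.error import TimeoutError, RemoteError

    rc = Client()
    view = rc[:]

    ar = view.apply_async(lambda x: 1.0 / x, 0)   # raises ZeroDivisionError on the engines

    try:
        print(ar.get(timeout=5))
    except TimeoutError:
        print("no result within 5 seconds")
    except RemoteError as e:
        # the engine-side exception is re-raised here, wrapped in a RemoteError
        # (a CompositeError when several engines failed)
        print("engine raised:", e)

    # get_dict() returns the same results keyed by engine id, e.g. {0: ..., 1: ...}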

metadata

property for accessing execution metadata.

msg_ids = None
progress

the number of tasks which have been completed at this point.

Fractional progress would be given by 1.0 * ar.progress / len(ar)

r

result property wrapper for get(timeout=0).

ready()

Return whether the call has completed.

result

result property wrapper for get(timeout=0).

result_dict

result property as a dict.

sent

check whether my messages have been sent.

serial_time

serial computation time of a parallel calculation

Computed as the sum of (completed-started) of each task

successful()

Return whether the call completed without raising an exception.

Will raise AssertionError if the result is not ready.

timedelta(start, end, start_key=min, end_key=max)

compute the difference between two sets of timestamps

The default behavior is to use the earliest timestamp in start and the latest in end, but this can be changed by passing a different start_key or end_key function.

Parameters :

start : one or more datetime objects (e.g. ar.submitted)

end : one or more datetime objects (e.g. ar.received)

start_key : callable

Function to call on start to extract the relevant entry [default: min]

end_key : callable

Function to call on end to extract the relevant entry [default: max]

Returns :

dt : float

The time elapsed (in seconds) between the two selected timestamps.
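
For example (illustrative only, assuming ar is a finished AsyncResult such as the one in the display_outputs sketch above; submitted, started, completed and received are the standard timestamp entries in ar.metadata):

    queue_delay = ar.timedelta(ar.submitted, ar.started)    # earliest submission -> latest start
    compute     = ar.timedelta(ar.started, ar.completed)    # earliest start -> latest completion
    total       = ar.timedelta(ar.submitted, ar.received)   # essentially ar.wall_time
    print(queue_delay, compute, total)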

wait(timeout=-1)

wait for result to complete.

wait_for_send(timeout=-1)

wait for pyzmq send to complete.

This is necessary when sending arrays that you intend to edit in-place. timeout is in seconds, and will raise TimeoutError if it is reached before the send completes.

wait_interactive(interval=1.0, timeout=None)

interactive wait, printing progress at regular intervals

wall_time

actual computation time of a parallel calculation

Computed as the time between the latest received stamp and the earliest submitted.

Only reliable if the Client was spinning or waiting when the task finished, because the received timestamp is created when a result is pulled off the zmq queue, which happens during client.spin().

For similar comparison of other timestamp pairs, check out AsyncResult.timedelta.
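
As a rough usage sketch (not from the original docs), assuming ar is a completed multi-task AsyncResult and the client was waiting when the tasks finished, the two timing properties give a crude speedup estimate:

    speedup = ar.serial_time / ar.wall_time
    print("serial: %.3fs  wall: %.3fs  speedup: %.1fx"
          % (ar.serial_time, ar.wall_time, speedup))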

AsyncMapResult

class IPython.parallel.client.asyncresult.AsyncMapResult(client, msg_ids, mapObject, fname='', ordered=True)

Bases: IPython.parallel.client.asyncresult.AsyncResult

Class for representing results of non-blocking gathers.

This will properly reconstruct the gather.

This class is iterable at any time, and will wait on results as they come.

If ordered=False, then the first results to arrive will come first, otherwise results will be yielded in the order they were submitted.
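
A minimal iteration sketch (illustrative only, assuming a running ipcluster and that map_async on a load-balanced view accepts the ordered flag):

    from IPython.parallel import Client

    rc = Client()
    view = rc.load_balanced_view()

    amr = view.map_async(lambda x: x * x, range(16), ordered=False)
    for result in amr:        # blocks only until the next finished result is available
        print(result)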

__init__(client, msg_ids, mapObject, fname='', ordered=True)
abort()

abort my tasks.

display_outputs(groupby='type')

republish the outputs of the computation

Parameters :

groupby : str [default: type]

if ‘type’:

Group outputs by type (show all stdout, then all stderr, etc.):

[stdout:1] foo
[stdout:2] foo
[stderr:1] bar
[stderr:2] bar

if ‘engine’:

Display outputs for each engine before moving on to the next:

[stdout:1] foo
[stderr:1] bar
[stdout:2] foo
[stderr:2] bar

if ‘order’:

Like ‘type’, but further collate individual displaypub outputs. This is meant for cases of each command producing several plots, and you would like to see all of the first plots together, then all of the second plots, and so on.

elapsed

elapsed time since initial submission

get(timeout=-1)

Return the result when it arrives.

If timeout is not None and the result does not arrive within timeout seconds then TimeoutError is raised. If the remote call raised an exception then that exception will be reraised by get() inside a RemoteError.

get_dict(timeout=-1)

Get the results as a dict, keyed by engine_id.

timeout behavior is described in get().

metadata

property for accessing execution metadata.

msg_ids = None
progress

the number of tasks which have been completed at this point.

Fractional progress would be given by 1.0 * ar.progress / len(ar)

r

result property wrapper for get(timeout=0).

ready()

Return whether the call has completed.

result

result property wrapper for get(timeout=0).

result_dict

result property as a dict.

sent

check whether my messages have been sent.

serial_time

serial computation time of a parallel calculation

Computed as the sum of (completed-started) of each task

successful()

Return whether the call completed without raising an exception.

Will raise AssertionError if the result is not ready.

timedelta(start, end, start_key=min, end_key=max)

compute the difference between two sets of timestamps

The default behavior is to use the earliest timestamp in start and the latest in end, but this can be changed by passing a different start_key or end_key function.

Parameters :

start : one or more datetime objects (e.g. ar.submitted)

end : one or more datetime objects (e.g. ar.received)

start_key : callable

Function to call on start to extract the relevant entry [default: min]

end_key : callable

Function to call on end to extract the relevant entry [default: max]

Returns :

dt : float

The time elapsed (in seconds) between the two selected timestamps.

wait(timeout=-1)

Wait until the result is available or until timeout seconds pass.

This method always returns None.

wait_for_send(timeout=-1)

wait for pyzmq send to complete.

This is necessary when sending arrays that you intend to edit in-place. timeout is in seconds, and will raise TimeoutError if it is reached before the send completes.

wait_interactive(interval=1.0, timeout=None)

interactive wait, printing progress at regular intervals
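
For example, a simple hand-rolled monitor (illustrative only, assuming amr is a long-running AsyncMapResult such as the one above) built from progress, elapsed and len():

    import time

    while not amr.ready():
        print("%i/%i tasks done after %.1fs" % (amr.progress, len(amr), amr.elapsed))
        time.sleep(1)

    # or simply let the built-in helper print the same information:
    # amr.wait_interactive(interval=1.0)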

wall_time

actual computation time of a parallel calculation

Computed as the time between the latest received stamp and the earliest submitted.

Only reliable if the Client was spinning or waiting when the task finished, because the received timestamp is created when a result is pulled off the zmq queue, which happens during client.spin().

For similar comparison of other timestamp pairs, check out AsyncResult.timedelta.

AsyncResult

class IPython.parallel.client.asyncresult.AsyncResult(client, msg_ids, fname='unknown', targets=None, tracker=None)

Bases: object

Class for representing results of non-blocking calls.

Provides the same interface as multiprocessing.pool.AsyncResult.
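
A minimal sketch of that interface (illustrative only, assuming a running ipcluster):

    from IPython.parallel import Client

    rc = Client()
    view = rc[:]                       # DirectView on all engines

    ar = view.apply_async(pow, 2, 10)  # non-blocking: returns an AsyncResult immediately

    print(ar.ready())                  # likely False right after submission
    print(ar.get(timeout=10))          # blocks up to 10s; one result per engine, e.g. [1024, 1024]
    print(ar.successful())             # True once finished without an engine-side exception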

__init__(client, msg_ids, fname='unknown', targets=None, tracker=None)
abort()

abort my tasks.

display_outputs(groupby='type')

republish the outputs of the computation

Parameters :

groupby : str [default: type]

if ‘type’:

Group outputs by type (show all stdout, then all stderr, etc.):

[stdout:1] foo
[stdout:2] foo
[stderr:1] bar
[stderr:2] bar

if ‘engine’:

Display outputs for each engine before moving on to the next:

[stdout:1] foo
[stderr:1] bar
[stdout:2] foo
[stderr:2] bar

if ‘order’:

Like ‘type’, but further collate individual displaypub outputs. This is meant for cases of each command producing several plots, and you would like to see all of the first plots together, then all of the second plots, and so on.

elapsed

elapsed time since initial submission

get(timeout=-1)

Return the result when it arrives.

If timeout is not None and the result does not arrive within timeout seconds then TimeoutError is raised. If the remote call raised an exception then that exception will be reraised by get() inside a RemoteError.

get_dict(timeout=-1)

Get the results as a dict, keyed by engine_id.

timeout behavior is described in get().

metadata

property for accessing execution metadata.

msg_ids = None
progress

the number of tasks which have been completed at this point.

Fractional progress would be given by 1.0 * ar.progress / len(ar)

r

result property wrapper for get(timeout=0).

ready()

Return whether the call has completed.

result

result property wrapper for get(timeout=0).

result_dict

result property as a dict.

sent

check whether my messages have been sent.

serial_time

serial computation time of a parallel calculation

Computed as the sum of (completed-started) of each task

successful()

Return whether the call completed without raising an exception.

Will raise AssertionError if the result is not ready.

timedelta(start, end, start_key=min, end_key=max)

compute the difference between two sets of timestamps

The default behavior is to use the earliest timestamp in start and the latest in end, but this can be changed by passing a different start_key or end_key function.

Parameters :

start : one or more datetime objects (e.g. ar.submitted)

end : one or more datetime objects (e.g. ar.received)

start_key : callable

Function to call on start to extract the relevant entry [default: min]

end_key : callable

Function to call on end to extract the relevant entry [default: max]

Returns :

dt : float

The time elapsed (in seconds) between the two selected timestamps.

wait(timeout=-1)

Wait until the result is available or until timeout seconds pass.

This method always returns None.

wait_for_send(timeout=-1)

wait for pyzmq send to complete.

This is necessary when sending arrays that you intend to edit in-place. timeout is in seconds, and will raise TimeoutError if it is reached before the send completes.
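
An illustrative sketch (not from the original docs), assuming a running ipcluster and that the view's track flag requests pyzmq message tracking so the AsyncResult carries a tracker:

    import numpy as np
    from IPython.parallel import Client

    rc = Client()
    view = rc[:]
    view.track = True            # assumption: enables a MessageTracker for the send

    a = np.arange(1e6)
    ar = view.apply_async(np.sum, a)

    ar.wait_for_send()           # make sure zmq is done with the buffer...
    a[:] = 0                     # ...before editing it in place

    print(ar.get())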

wait_interactive(interval=1.0, timeout=None)

interactive wait, printing progress at regular intervals

wall_time

actual computation time of a parallel calculation

Computed as the time between the latest received stamp and the earliest submitted.

Only reliable if the Client was spinning or waiting when the task finished, because the received timestamp is created when a result is pulled off the zmq queue, which happens during client.spin().

For similar comparison of other timestamp pairs, check out AsyncResult.timedelta.

Function

IPython.parallel.client.asyncresult.check_ready(f)

Call spin() to sync state prior to calling the method.
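
For orientation, a hedged sketch of what a decorator with this behavior might look like (the real implementation in IPython.parallel may differ; the zero-timeout wait here is assumed to trigger client.spin()):

    from functools import wraps

    def check_ready(f):
        """Call spin() to sync state prior to calling the method."""
        @wraps(f)
        def wrapped(self, *args, **kwargs):
            self.wait(0)    # sync state with the Hub/engines before proceeding
            return f(self, *args, **kwargs)
        return wrapped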