send_to_queue, which feeds a backend response coroutine or async iterator
into an asyncio.Queue (including sentinel and error forwarding); wait_for_all_mots,
which gathers multiple [ModelOutputThunk](../core/base#class-modeloutputthunk) computations in a single asyncio.gather
call; and get_current_event_loop, a safe wrapper that returns None instead of
raising when no event loop is running. These utilities are used internally by backends
that operate in async contexts.
Functions
FUNC send_to_queue
co: A coroutine or async iterator producing the backend response.
aqueue: The async queue to send results to. A sentinel `None` is appended on completion; an exception instance is appended on error.
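The documented behaviour (items forwarded to the queue, `None` as a completion sentinel, exceptions forwarded as values) can be sketched as follows. This is an illustrative reimplementation, not the library's actual code:

```python
import asyncio

async def send_to_queue(co, aqueue):
    # Hypothetical sketch for illustration; the real send_to_queue
    # may differ in details.
    try:
        if hasattr(co, "__aiter__"):
            async for item in co:       # streaming backend response
                await aqueue.put(item)
        else:
            await aqueue.put(await co)  # single-result coroutine
    except Exception as exc:
        await aqueue.put(exc)           # forward the error to the consumer
        return
    await aqueue.put(None)              # sentinel: stream finished

async def main():
    q = asyncio.Queue()

    async def tokens():
        for t in ("hel", "lo"):
            yield t

    await send_to_queue(tokens(), q)
    out = []
    while not q.empty():
        out.append(q.get_nowait())
    return out

items = asyncio.run(main())
print(items)  # ['hel', 'lo', None]
```

A consumer loops on `aqueue.get()` until it sees the `None` sentinel, re-raising any exception instance it receives.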
FUNC wait_for_all_mots
mots: List of [ModelOutputThunk](../core/base#class-modeloutputthunk) objects to await concurrently.
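Since `wait_for_all_mots` wraps a single `asyncio.gather` call, its shape can be sketched with plain awaitables standing in for `ModelOutputThunk` objects (whose awaitable interface is an assumption here):

```python
import asyncio

async def wait_for_all(awaitables):
    # Stand-in for wait_for_all_mots: one gather call awaits every
    # computation concurrently and propagates the first error, if any.
    return await asyncio.gather(*awaitables)

async def main():
    async def compute(x):
        await asyncio.sleep(0)  # yield control, simulating backend work
        return x * x

    return await wait_for_all([compute(n) for n in range(4)])

results = asyncio.run(main())
print(results)  # [0, 1, 4, 9]
```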
FUNC get_current_event_loop
- The running event loop, or `None` if no loop is running.
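The documented contract maps directly onto `asyncio.get_running_loop()`, which raises `RuntimeError` outside a loop; a minimal sketch:

```python
import asyncio
from typing import Optional

def get_current_event_loop() -> Optional[asyncio.AbstractEventLoop]:
    # Sketch matching the documented behaviour: catch the RuntimeError
    # that asyncio raises when no loop is running and return None.
    try:
        return asyncio.get_running_loop()
    except RuntimeError:
        return None

# Outside any event loop: None.
no_loop = get_current_event_loop()

async def main():
    return get_current_event_loop()

# Inside a running loop: the loop itself.
loop = asyncio.run(main())
print(no_loop is None, loop is not None)  # True True
```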
Classes
CLASS ClientCache
A simple LRU cache.
Used to keep track of clients for backends where the client is tied to a specific event loop.
Args:
capacity: Maximum number of entries to hold before evicting the least recently used.
cache: Ordered dictionary storing cached key-value pairs in LRU order; always initialised empty at construction.
FUNC current_size
- Number of entries currently in the cache.
FUNC get
key: Integer cache key.
- The cached value, or `None` if the key is not present.
FUNC put
key: Integer cache key.
value: Value to store.
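The interface above (an `OrderedDict` in LRU order, `get` returning `None` on a miss, `put` evicting the least recently used entry past `capacity`) can be sketched like this; the library's actual `ClientCache` may differ internally:

```python
from collections import OrderedDict

class ClientCache:
    # Illustrative LRU cache matching the documented interface.
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache: OrderedDict = OrderedDict()  # always starts empty

    def current_size(self) -> int:
        return len(self.cache)

    def get(self, key: int):
        if key not in self.cache:
            return None
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key: int, value) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

cache = ClientCache(capacity=2)
cache.put(1, "loop-a-client")
cache.put(2, "loop-b-client")
cache.get(1)                   # touch key 1 so key 2 becomes LRU
cache.put(3, "loop-c-client")  # evicts key 2
print(cache.get(2))  # None
print(cache.get(1))  # loop-a-client
print(cache.current_size())  # 2
```

Keying the cache by event loop (e.g. `id(loop)`) is what lets a backend reuse a client only on the loop it was created for.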