autoray.compiler
================

.. py:module:: autoray.compiler


Attributes
----------

.. autoapisummary::

   autoray.compiler._backend_lookup
   autoray.compiler._compiler_lookup


Classes
-------

.. autoapisummary::

   autoray.compiler.CompilePython
   autoray.compiler.CompileJax
   autoray.compiler.CompileTensorFlow
   autoray.compiler.CompileTorch
   autoray.compiler.AutoCompiled


Functions
---------

.. autoapisummary::

   autoray.compiler.autojit


Module Contents
---------------

.. py:class:: CompilePython(fn, fold_constants=True, share_intermediates=True)

   A simple compiler that unravels all autoray calls, optionally sharing
   intermediates and folding constants, converts the resulting computation
   into a code object using ``compile``, then executes it using ``exec``.

   :param fn: Function to compile - it should have signature
              ``fn(*args, **kwargs) -> array``, where ``args`` and ``kwargs``
              can be any nested combination of ``tuple``, ``list`` and ``dict``
              objects containing arrays (or other constant arguments), and it
              should perform array operations on these using ``autoray.do``.
   :type fn: callable
   :param fold_constants: Whether to fold all constant array operations into the graph, which
                          might increase memory usage.
   :type fold_constants: bool, optional
   :param share_intermediates: Whether to cache all computational nodes during the trace, so that any
                               shared intermediate results can be identified.
   :type share_intermediates: bool, optional


   .. py:method:: setup(args, kwargs)

      Convert the example arrays to lazy variables and trace them through
      the function.



   .. py:method:: __call__(*args, array_backend=None, **kwargs)

      If necessary, build, then call the compiled function.



.. py:class:: CompileJax(fn, enable_x64=None, platform_name=None, **kwargs)

   .. py:method:: setup()


   .. py:method:: __call__(*args, array_backend=None, **kwargs)


.. py:class:: CompileTensorFlow(fn, **kwargs)

   .. py:method:: setup()


   .. py:method:: __call__(*args, array_backend=None, **kwargs)


.. py:class:: CompileTorch(fn, **kwargs)

   .. py:method:: setup(*args, **kwargs)


   .. py:method:: __call__(*args, array_backend=None, **kwargs)


.. py:data:: _backend_lookup

.. py:data:: _compiler_lookup

.. py:class:: AutoCompiled(fn, backend=None, compiler_opts=None)

   Just-in-time compile an ``autoray.do``-using function. See the main
   wrapper ``autojit``.


   .. py:method:: __call__(*args, backend=None, **kwargs)


.. py:function:: autojit(fn=None, *, backend=None, compiler_opts=None)

   Just-in-time compile an ``autoray`` function, automatically choosing
   the backend based on the input arrays, or via keyword argument.

   The backend used to do the compilation can be set in three ways:

       1. Automatically based on the arrays the function is called with,
          i.e. ``cfn(*torch_arrays)`` will use ``torch.jit.trace``.
       2. In this wrapper, ``@autojit(backend='jax')``, to provide a
          specific default instead.
       3. When you call the function ``cfn(*arrays, backend='torch')`` to
          override on a per-call basis.

   If the arrays supplied are of a different backend type from the
   compiler's, the returned array will be converted back, i.e.
   ``cfn(*numpy_arrays, backend='tensorflow')`` will return a ``numpy`` array.

   The ``'python'`` backend simply extracts and unravels all the ``do`` calls
   into a code object using ``compile``, which is then run with ``exec``.
   This makes use of shared intermediates and constant folding, strips
   away any python scaffolding, and is compatible with any library, but the
   resulting function is not 'low-level' in the same way as the other
   backends.

   :param fn: The autoray function to compile.
   :type fn: callable
   :param backend: If set, use this as the default backend.
   :type backend: {None, 'python', 'jax', 'torch', 'tensorflow'}, optional
   :param compiler_opts: Dict of dicts in which you can supply options for each
                         compiler backend separately, e.g.:
                         ``@autojit(compiler_opts={'tensorflow': {'jit_compile': True}})``.
   :type compiler_opts: dict[dict], optional

   :returns: **cfn** -- The function with auto compilation.
   :rtype: callable


