- Does Acme support my environment? All agents in Acme are designed to work with environments that implement the dm_env environment interface. This interface was designed to match general concepts widely in use across the RL research community, so it should be quite straightforward to write a wrapper that makes other environments conform to it. See e.g. the `acme.wrappers.gym_wrapper` module, which can be used to interact with OpenAI Gym environments. Similarly, learners in Acme are designed to consume dataset iterators (generally `tf.data.Dataset` instances) containing either transition tuples or sequences of state, action, reward, etc. tuples. If your data does not match these formats it should be relatively straightforward to write an adaptor! See individual agents for more information on their expected input. A minimal wrapper sketch is included below.
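  As a rough sketch (assuming the `GymWrapper` and `SinglePrecisionWrapper` classes exported from `acme.wrappers`, and OpenAI Gym's CartPole used purely as a stand-in environment), wrapping a Gym environment might look like this:

  ```python
  import gym
  from acme import wrappers

  # A standard OpenAI Gym environment; CartPole is only an example.
  gym_env = gym.make('CartPole-v1')

  # GymWrapper adapts the Gym API to the dm_env interface that Acme agents
  # expect; SinglePrecisionWrapper casts observations and rewards to float32.
  env = wrappers.GymWrapper(gym_env)
  env = wrappers.SinglePrecisionWrapper(env)

  # The wrapped environment now follows the dm_env API.
  timestep = env.reset()   # a dm_env.TimeStep with .observation, .reward, etc.
  timestep = env.step(0)   # valid actions are described by env.action_spec()
  ```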
- How do I debug my TF2 learner? Debugging TensorFlow code has never been easier! All our learners’ `_step()` functions are decorated with `@tf.function`, which can easily be commented out to run them in eager mode. In this mode one can run through the code line by line (say, via `pdb`) and examine outputs. Most of the time, if your code works in eager mode it will work in graph mode (with the `@tf.function` decorator), but there are rare exceptions when using exotic ops with unsupported dtypes. Finally, don’t forget to add the decorator back in or you’ll find your learner to be a little sluggish! One way to toggle eager execution is sketched below.
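  As a sketch (the `MyLearner` class and its `_counter` variable are hypothetical and only illustrate the decoration pattern), you can either comment out the decorator or force eager execution globally:

  ```python
  import tensorflow as tf

  class MyLearner:
    """A hypothetical learner; only the `@tf.function` usage matters here."""

    def __init__(self):
      self._counter = tf.Variable(0, dtype=tf.int32)

    @tf.function  # Comment this out to step through _step() with pdb.
    def _step(self):
      # While running eagerly, a `breakpoint()` here lets you inspect
      # intermediate tensors as concrete values.
      self._counter.assign_add(1)
      return self._counter

  # Alternatively, keep the decorator and temporarily run all tf.functions
  # eagerly while debugging:
  tf.config.run_functions_eagerly(True)   # debug mode
  learner = MyLearner()
  learner._step()
  tf.config.run_functions_eagerly(False)  # restore graph-mode performance
  ```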
- How should I spell Acme? Acme is a proper noun, not an acronym, and hence should be spelled "Acme", not "ACME".
- Do you plan to release the distributed agents? We've only open-sourced our single-process agents. Internally, our distributed agents run the same code as these open-sourced agents but are tied to Launchpad and other DeepMind infrastructure. We don’t currently have a timetable for releasing these components.