OpenAI Gym is an open-source Python toolkit for developing and comparing reinforcement learning algorithms. This Julia package is a wrapper for the OpenAI Gym HTTP API and provides access to an ever-growing variety of environments.
This package is registered in METADATA.jl and can be installed as usual:

```julia
Pkg.add("OpenAIGymAPI")
using OpenAIGymAPI
```
If you encounter a clear bug, please file a minimal reproducible example on GitHub.
## Setting up the server
To download the code and install the requirements, run the following shell commands:

```shell
git clone https://github.com/openai/gym-http-api
cd gym-http-api
pip install -r requirements.txt
```
This code is intended to be run locally by a single user. The server is written in Python.
To start the server from the command line, run this:
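The cloned repository contains a server script, `gym_http_server.py`; assuming the requirements above are installed, the server can be started with:

```shell
# From inside the gym-http-api directory:
# start the HTTP server, which listens on http://127.0.0.1:5000 by default
python gym_http_server.py
```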
For more details, see https://github.com/openai/gym-http-api.
For example, the following runs a random agent on CartPole-v0 for 100 episodes, recording results with the monitor:

```julia
using OpenAIGymAPI

# Connect to the locally running gym-http-api server
remote_base = "http://127.0.0.1:5000"
client = GymClient(remote_base)
print(client)

# Create environment
env_id = "CartPole-v0"
instance_id = env_create(client, env_id)
print(instance_id)

# List all environments
all_envs = env_list_all(client)
print(all_envs)

# Set up agent
action_space_info = env_action_space_info(client, instance_id)
print(action_space_info)

# Run experiment, with monitor
outdir = "/tmp/random-agent-results"
env_monitor_start(client, instance_id, outdir, force = true, resume = false)

episode_count = 100
max_steps = 200
for i in 1:episode_count
    ob = env_reset(client, instance_id)
    done = false
    j = 1
    while j <= max_steps && !done
        # Sample a random action and step the environment
        action = env_action_space_sample(client, instance_id)
        results = env_step(client, instance_id, action, render = true)
        done = results["done"]
        j = j + 1
    end
end

# Dump result info to disk
env_monitor_close(client, instance_id)
```
This code is free to use under the terms of the MIT license. The original author of OpenAIGymAPI is Paul Hendricks.