Simple Tag

(Animation: mpe_simple_tag.gif)

Warning

The environment pettingzoo.mpe.simple_tag_v3 has been moved to the new MPE2 package and will be removed in a future PettingZoo release. Please update your imports to mpe2.simple_tag_v3.
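
Assuming the mpe2 package exposes the module under the same name, the updated import would look like this:

# assumed: the module name is unchanged in the new MPE2 package
from mpe2 import simple_tag_v3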

This environment is part of the MPE environments. Please read that page first for general information.

Import

from pettingzoo.mpe import simple_tag_v3

Actions: Discrete/Continuous
Parallel API: Yes
Manual Control: No
Agents: agents= [adversary_0, adversary_1, adversary_2, agent_0]
Agents: 4
Action Shape: (5)
Action Values: Discrete(5)/Box(0.0, 1.0, (5,))
Observation Shape: (14),(16)
Observation Values: (-inf,inf)
State Shape: (62,)
State Values: (-inf,inf)
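
The spaces listed above can be checked directly from a constructed environment; the short sketch below (using the default arguments) simply prints each agent's observation and action space.

from pettingzoo.mpe import simple_tag_v3

env = simple_tag_v3.env()
env.reset(seed=0)
for agent in env.agents:
    # adversaries report (16,)-dim observations, the good agent (14,)
    print(agent, env.observation_space(agent), env.action_space(agent))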

This is a predator-prey environment. Good agents (green) are faster and receive a negative reward for being hit by adversaries (-10 for each collision). Adversaries (red) are slower and are rewarded for hitting good agents (+10 for each collision). Obstacles (large black circles) block the way. By default, there is 1 good agent, 3 adversaries and 2 obstacles.

To keep good agents from running off to infinity, they are penalized for exiting the area, according to the following function:

import numpy as np

def bound(x):
    # no penalty well inside the arena, a linear ramp near the edge,
    # and a capped exponential penalty once outside
    if x < 0.9:
        return 0
    if x < 1.0:
        return (x - 0.9) * 10
    return min(np.exp(2 * x - 2), 10)
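
As an illustration only (the actual reward computation happens inside the environment), such a penalty could be applied per coordinate of a good agent's position, using the bound function above:

# hypothetical helper, not part of the environment API
def boundary_penalty(position):
    # sum the per-coordinate penalty for a 2-D position (x, y)
    return sum(bound(abs(coord)) for coord in position)

boundary_penalty([0.95, 1.2])  # 0.5 + min(exp(0.4), 10) ≈ 1.99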

Agent and adversary observations: [self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities]
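
A rough sketch of how an adversary's flat observation could be split into these components under the default configuration (1 good agent, 3 adversaries, 2 obstacles); the component sizes are inferred from the shapes in the table above, not quoted from the environment code:

def split_adversary_obs(obs):
    # obs is the (16,)-dim adversary observation; slice boundaries are assumptions
    self_vel = obs[0:2]
    self_pos = obs[2:4]
    landmark_rel_positions = obs[4:8]        # 2 obstacles x 2 coordinates
    other_agent_rel_positions = obs[8:14]    # 3 other agents x 2 coordinates
    other_agent_velocities = obs[14:16]      # assumed: the good agent's velocity
    return self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, other_agent_velocities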

Agent and adversary action space: [no_action, move_left, move_right, move_down, move_up]
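
Assuming the discrete action indices follow the order listed above, a small lookup table for readability might be:

# assumed index order: matches the list above
ACTION_NAMES = {
    0: "no_action",
    1: "move_left",
    2: "move_right",
    3: "move_down",
    4: "move_up",
}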

Arguments

simple_tag_v3.env(num_good=1, num_adversaries=3, num_obstacles=2, max_cycles=25, continuous_actions=False, dynamic_rescaling=False)

num_good: number of good agents

num_adversaries: number of adversaries

num_obstacles: number of obstacles

max_cycles: number of frames (a step for each agent) until the game terminates

continuous_actions: whether agent action spaces are discrete (default) or continuous

dynamic_rescaling: whether to rescale the size of agents and landmarks based on the screen size
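
For example, a larger instance could be configured like this (the values here are arbitrary and chosen purely for illustration):

from pettingzoo.mpe import simple_tag_v3

env = simple_tag_v3.env(
    num_good=2,
    num_adversaries=4,
    num_obstacles=3,
    max_cycles=50,
    continuous_actions=True,
)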

Usage

AEC

from pettingzoo.mpe import simple_tag_v3

env = simple_tag_v3.env(render_mode="human")
env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        # this is where you would insert your policy
        action = env.action_space(agent).sample()

    env.step(action)
env.close()

Parallel

from pettingzoo.mpe import simple_tag_v3

env = simple_tag_v3.parallel_env(render_mode="human")
observations, infos = env.reset()

while env.agents:
    # this is where you would insert your policy
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}

    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()

API

class pettingzoo.mpe.simple_tag.simple_tag.raw_env(num_good=1, num_adversaries=3, num_obstacles=2, max_cycles=25, continuous_actions=False, render_mode=None, dynamic_rescaling=False)