swarmrl.trainers.trainer Module API Reference¶
Module for the Trainer parent.
Trainer¶
Parent class for the RL Trainer.
Attributes¶
rl_protocols : list(protocol)
    A list of RL protocols to use in the simulation.
loss : Loss
    An optimization method to compute the loss and update the model.
Source code in swarmrl/trainers/trainer.py
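To illustrate the parent-class pattern described above, here is a minimal sketch of a Trainer-style base class and a subclass. The names `Trainer`, `agents`, and `perform_rl_training` follow the documented API; the subclass name `EpisodicTrainer` and all method bodies are illustrative assumptions, not the SwarmRL implementation.

```python
# Hypothetical sketch of the Trainer parent-class pattern.
# Bodies are illustrative stand-ins, not the SwarmRL implementation.

class Trainer:
    """Parent class holding a list of RL agents."""

    def __init__(self, agents):
        self.agents = agents

    def perform_rl_training(self, **kwargs):
        # Concrete trainers implement the actual training loop.
        raise NotImplementedError("Subclasses implement the training loop.")


class EpisodicTrainer(Trainer):  # hypothetical subclass name
    def perform_rl_training(self, n_episodes=10, **kwargs):
        rewards = []
        for _ in range(n_episodes):
            # A real trainer would run the simulation and update the
            # models here; we record a placeholder reward instead.
            rewards.append(0.0)
        return rewards


trainer = EpisodicTrainer(agents=["agent-1"])
print(len(trainer.perform_rl_training(n_episodes=3)))  # → 3
```

Concrete trainers in the package override `perform_rl_training` while inheriting the shared bookkeeping (model export, restore, initialization) from the parent.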
engine property writable¶
Runner engine property.
__init__(agents)¶
Constructor for the Trainer.
Parameters¶
agents : list
    A list of RL agents.
Source code in swarmrl/trainers/trainer.py
export_models(directory='Models')¶
Export the models to the specified directory.
Parameters¶
directory : str (default='Models')
    Directory in which to save the models.
Returns¶
Saves the actor and the critic to the specified directory.
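A minimal sketch of what an `export_models`-style method does: create the target directory if needed and write each model's parameters into it. This mirrors the documented behaviour under stated assumptions; the file format and the free function shown here are hypothetical, not the SwarmRL implementation.

```python
import os
import tempfile

# Hypothetical stand-in for Trainer.export_models: writes one file per
# model (e.g. actor and critic) into `directory`, creating it if needed.
def export_models(models: dict, directory: str = "Models") -> None:
    os.makedirs(directory, exist_ok=True)
    for name, params in models.items():
        # The plain-text format is an illustrative assumption.
        with open(os.path.join(directory, f"{name}.txt"), "w") as fh:
            fh.write(repr(params))


# Usage: export a toy actor and critic into a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    export_models({"actor": [0.1], "critic": [0.2]}, directory=tmp)
    print(sorted(os.listdir(tmp)))  # → ['actor.txt', 'critic.txt']
```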
Source code in swarmrl/trainers/trainer.py
initialize_models()¶
Initialize all of the models in the gym.
Source code in swarmrl/trainers/trainer.py
initialize_training()¶
Return an initialized interaction model.
Returns¶
interaction_model : ForceFunction
    Interaction model to start the simulation with.
Source code in swarmrl/trainers/trainer.py
perform_rl_training(**kwargs)¶
Perform the RL training.
Parameters¶
**kwargs
    All arguments related to the specific trainer.
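Because the parent class takes only `**kwargs` here, each concrete trainer defines its own training arguments. The following sketch shows that forwarding pattern; the parameter names `n_episodes` and `episode_length` are illustrative assumptions, not documented SwarmRL arguments.

```python
# Hypothetical sketch: a trainer-specific method pulling its own
# arguments out of **kwargs, with defaults when they are omitted.
def perform_rl_training(**kwargs):
    n_episodes = kwargs.get("n_episodes", 100)      # assumed argument name
    episode_length = kwargs.get("episode_length", 20)  # assumed argument name
    # For illustration, return the total number of simulation steps.
    return n_episodes * episode_length


print(perform_rl_training(n_episodes=5, episode_length=10))  # → 50
```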
Source code in swarmrl/trainers/trainer.py
restore_models(directory='Models')¶
Restore the models from the specified directory.
Parameters¶
directory : str (default='Models')
    Directory from which to load the objects.
Returns¶
Loads the actor and critic from the specified directory.
Source code in swarmrl/trainers/trainer.py
update_rl()¶
Update the RL algorithm.
Returns¶
interaction_model : MLModel
    Interaction model to use in the next episode.
reward : np.ndarray
    Current mean episode reward. This is returned for nice progress bars.
killed : bool
    Whether or not the task has ended the training.
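The three return values imply an episode loop: call `update_rl` each episode, track the reward, and stop when the task signals `killed`. The sketch below shows that loop with a dummy trainer; the class and its internals are stand-ins, not SwarmRL code.

```python
import numpy as np

# Hypothetical stand-in for a Trainer exposing update_rl's documented
# return triple: (interaction model, mean episode reward, kill flag).
class DummyTrainer:
    def __init__(self):
        self.episode = 0

    def update_rl(self):
        self.episode += 1
        model = f"model-episode-{self.episode}"   # placeholder interaction model
        reward = np.asarray(float(self.episode))  # mean episode reward
        killed = self.episode >= 3                # task ends training after 3 episodes
        return model, reward, killed


# Episode loop implied by the return values above.
trainer = DummyTrainer()
rewards = []
while True:
    model, reward, killed = trainer.update_rl()
    rewards.append(float(reward))  # e.g. feed a progress bar
    if killed:
        break

print(rewards)  # → [1.0, 2.0, 3.0]
```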
Source code in swarmrl/trainers/trainer.py