SymGroundMultiTask

Summary

This project extends the LTL2Action framework to train Reinforcement Learning (RL) agents that follow multiple temporally extended tasks expressed in Linear Temporal Logic (LTL), without requiring access to the environment's labelling function. It does so with Neural Reward Machines, which provide an indirect supervision signal to a neural-network grounder module by comparing the ground-truth reward signal with the reward signal expected under the symbols the grounder predicts.
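To make the idea concrete, below is a minimal, self-contained sketch of this kind of indirect supervision. It is illustrative only: the module names, sizes, tables, and loss are assumptions, not this repository's API. A differentiable grounder predicts a distribution over symbols from raw observations, a soft reward machine propagates a belief over its states under those predictions, and the mismatch between the machine's expected reward and the environment's ground-truth reward trains the grounder:

    import torch
    import torch.nn as nn

    NUM_SYMBOLS = 4  # propositions the grounder must learn to detect (illustrative)
    NUM_STATES = 3   # states of the task's reward machine (illustrative)

    class Grounder(nn.Module):
        """Maps raw observations to a distribution over symbols."""
        def __init__(self, obs_dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, NUM_SYMBOLS))

        def forward(self, obs):                   # obs: (T, obs_dim)
            return self.net(obs).softmax(dim=-1)  # (T, NUM_SYMBOLS)

    # Soft reward machine, assumed known from the LTL task's automaton:
    # transition[s, a] is a distribution over next states, reward[s, a] a scalar.
    transition = torch.rand(NUM_STATES, NUM_SYMBOLS, NUM_STATES).softmax(dim=-1)
    reward = torch.rand(NUM_STATES, NUM_SYMBOLS)

    def expected_rewards(symbol_probs):
        """Propagate a belief over machine states and accumulate the reward
        expected under the grounder's predicted symbols."""
        belief = torch.zeros(NUM_STATES)
        belief[0] = 1.0                             # start in the initial state
        out = []
        for p in symbol_probs:                      # p: (NUM_SYMBOLS,)
            out.append(belief @ reward @ p)         # E[r | belief, p]
            belief = torch.einsum("s,sat,a->t", belief, transition, p)
        return torch.stack(out)

    # Indirect supervision: match expected rewards to the ground-truth signal.
    grounder = Grounder(obs_dim=8)
    optim = torch.optim.Adam(grounder.parameters(), lr=1e-3)
    obs, true_r = torch.randn(20, 8), torch.rand(20)  # one dummy trajectory
    loss = nn.functional.mse_loss(expected_rewards(grounder(obs)), true_r)
    optim.zero_grad(); loss.backward(); optim.step()

Because the reward machine is differentiable, the reward-matching loss back-propagates through the predicted symbols, so the grounder learns a labelling function without ever observing it directly.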

Installation

  1. Clone the repository:

    git clone https://github.com/KRLGroup/SymGroundMultiTask
  2. Create a new conda environment with Python 3.7.16 and the dependencies specified in environment.yml and requirements.txt:

    cd ./SymGroundMultiTask
    conda env create -f environment.yml
    conda activate symgroundmultitask
  3. (optional) Install MONA if you need to create new automata:

    sudo apt install -y mona
  4. (optional) Replace LTLf2DFA with its parallelizable version to create automata more efficiently (see the sanity check after this list):

    pip uninstall ltlf2dfa
    git clone https://github.com/matteopannacci/multi-LTLf2DFA.git
    pip install ./multi-LTLf2DFA
  5. (optional) Install the Safety-Gym environment (requires MuJoCo 2.1.0):

    pip install -e envs/safety/safety-gym/
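If you installed MONA and either version of LTLf2DFA, a quick way to check that formula-to-automaton conversion works is the snippet below. It uses the upstream ltlf2dfa API; the parallel fork may expose additional entry points:

    from ltlf2dfa.parser.ltlf import LTLfParser

    parser = LTLfParser()
    formula = parser("F(a & F(b))")  # "eventually a, then eventually b"
    print(formula.to_dfa())          # DOT source of the DFA (requires MONA)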

Dataset Creation

Create the datasets of formulas and automata needed for training the grounder:

python -m datasets.create_datasets --name <dataset> --workers <num_workers>
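For example, to build a dataset with 8 parallel workers (the dataset name below is hypothetical; use one of the names defined in datasets/create_datasets.py):

    python -m datasets.create_datasets --name my_dataset --workers 8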

Training

  1. (optional) Pretrain the GNN using the configuration in ltl_bootcamp_config.py (on the <device> argument, see the note after this list):

    python -m lab.run_ltl_bootcamp --device <device>
  2. (optional) Pretrain the grounder using the configuration in train_grounder_config.py:

    python -m lab.run_train_grounder --device <device>
  3. Train the agent using the configuration in train_agent_config.py:

    python -m lab.run_train_agent --device <device>
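In the steps above, <device> is presumably a standard PyTorch device string, for example:

    python -m lab.run_train_agent --device cuda:0   # or --device cpu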

Evaluation

  1. Evaluate the grounder:

    python test_grounder.py --model_dir <model_name> --device <device>
  2. Evaluate the agent:

    python test_agent.py --model_dir <model_name> --device <device>
  3. Visualize the agent playing in the environment:

    python visualize_agent.py --model_dir <model_name> --device <device>
