Metadata-Version: 2.4
Name: pytest-optestrunner
Version: 2.1.0
Summary: Pytest OpTestRunner plugin for running simulation configurations and evaluating the results
Project-URL: Homepage, https://gitlab.eclipse.org/eclipse/openpass/optestrunner
Project-URL: Documentation, https://gitlab.eclipse.org/eclipse/openpass/optestrunner/-/blob/develop/plugin/optestrunner/README.md
Project-URL: Source, https://gitlab.eclipse.org/eclipse/openpass/optestrunner
Project-URL: Tracker, https://gitlab.eclipse.org/eclipse/openpass/optestrunner/-/issues
Description-Content-Type: text/markdown
Requires-Dist: junitparser==3.1.0
Requires-Dist: jsonschema==4.23.0
Requires-Dist: lxml==4.9.3
Requires-Dist: pandas==2.2.2
Requires-Dist: pytest>=7.4.2
Requires-Dist: psutil>=5.9.5
Requires-Dist: pytest-xdist[psutil]==3.3.1
Requires-Dist: filelock==3.12.3

This pip package acts as a configurable executor for complete sets of configs for the gtgen simulators.

# Installation

Currently, `pytest-optestrunner` is only delivered as a binary.
Please refer to the [project homepage](https://gitlab.eclipse.org/eclipse/openpass/optestrunner) for building instructions.

To build and install the `pytest-optestrunner`, execute the following steps

```bash
python -m build
pip install pytest-optestrunner.whl
```

# Execution

As `pytest-optestrunner` is a pytest plugin, it is automatically executed when the correct arguments are passed to pytest.

```bash
pytest
  --simulation=SIMULATION_EXE                # path to simulation executable, e.g. /opt/bin/gtgen_cli
  --mutual=MUTUAL_RESOURCES_PATH             # path to mutual config files for all runs, e.g. /gtgen/bin/examples/common
  --resources=RESOURCES_PATH                 # path from where configs are retrieved - override common files if necessary
  --plugins-path=PLUGINS_PATH                # path from where plugin modules are retrieved
  --report-path=REPORT_PATH                  # path to where the report shall be stored
  TEST_FILE                                  # path to file under test, named `test_*.json`
```

💡 **Note:** You can use additional pytest arguments, such as `-v` for verbose output, `--collect-only` for listing the available tests and so on (see <https://docs.pytest.org>).

In addition, `pytest-optestrunner` supports the following optional arguments:

```bash
--configs-path=INPUT             # path for providing configs during testing; default config folder is located at the simulation path
--results-path=OUTPUT            # path for collecting test results during testing; default results folder is located at the simulation path
--artifacts-path=ARTIFACTS       # path for collecting test artifacts during testing; default artifacts folder is located at the simulation path
```

For each specified `test_*.json` a corresponding `test_*.html` will be
generated.

Please note that this limit cannot be disabled completely. The resulting error messages are often misleading (e.g. `File not found` although the file actually exists, or `shutil.py 2`, etc.).

# Parallel Execution

If `pytest-xdist` is installed, pytest can be invoked with the
additional parameter `-n auto` (or similar - see <https://pypi.org/project/pytest-xdist/>). In this case, `pytest-optestrunner` will
execute the given tests on `n` parallel workers.

💡 **Note:** Running tests in parallel will result in the report displaying results in an arbitrary order, with only the executed tests listed (disabled tests will not be shown).

# Test Configuration

Test configuration is done via the test-json file, individually for each
test. Depending on the user's choice, one of three different test runners is
executed:

1.  Determinism: Check executability of configs + Determinism test  (*1  x n* vs *n x 1* tests).
2.  Parameterized: Check executability of configs using different
    parameters.
3.  Query: Execute config and check for specific results in the output
    of the simulator, given one or more queries.

In general, the test-json splits into two sections:

1.  Definition of `Configuration Sets`
2.  Definition of `Tests` using the `Configuration Sets` or a single
    `Config` directly

💡 **Note:** Whenever possible, `pytest-optestrunner` re-uses the results to speed up result
analysis.

```js
{
    "config_sets": {
        "Config_Set_1": [ // user defined name
            "Config_Folder_1",
            "Config_Folder_2"
        ],
        "Config_Set_2": [
            "Config_Folder_2",
            "Config_Folder_3"
        ],
        "Config_Set_3": [
            "Config_Folder_4"
        ]
    },
    "tests": {
        "Execution and Determinism": {
            "config_sets": [
                "Config_Set_1",
                "Config_Set_2",
                "Config_Set_3"
            ],
            "determinism": true, // ACTIVATES DETERMINISM
            "duration": 30, // how long shall be simulated
            "invocations": 3 // compare 1x3 run with 3x1 runs
        },
        "Parameterization": {
            "config_sets": [
                "Config_Set_2"
            ],
            "parameterization": { // ACTIVATES PARAMETERIZATION
                "file": "systemConfigFmu.xml", // Name of config, which shall be parameterized
                "xpath": "//value[../id='FmuPath']", // XPath, where values needs to be replaced
                "values": [ // Values, which shall be set
                    "resources/FMU1_StaticFMU.fmu",
                    "resources/FMU2_StaticFMU.fmu"
                ],
            },
            "duration": 10,
            "invocations": 100
        },
        "Querying": {
            "config": "Config_Folder_2", // single config specification
            "queries": [ // ACTIVATES QUERYING
                "count(AgentId | AgentId == 0 and Timestep == 10000 and VelocityEgo >= 30) == 1",
                "mean(VelocityEgo | AgentId != 0) > 30"
            ],
            "success_rate": 0.8, // 80% of 60 invocations need to pass
            "duration": 10,
            "invocations": 60,
            "ram_limit": 512.0, // optional RAM limit in MB, measured for each invocation
            "description": "Optional description"
        }
    }
}
```

-   If the `success_rate` is specified, its value must be
    between 0 and 1.

-   It is also possible to define a range of success (e.g. for excluding
    100%) by using the following syntax:

    ```js
    "success_rate": [0.8, 0.99] // 80% to 99% need to pass
    ```

-   If the `ram_limit` is specified, its value is
    measured in MB.
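How the two `success_rate` forms might be evaluated can be sketched in Python (the helper function below is hypothetical and not part of the plugin API; it only illustrates the scalar vs. range semantics described above):

```python
# Hypothetical sketch: a scalar means "at least this fraction must pass";
# a two-element list means the pass rate must fall within the closed range.
def success_rate_satisfied(spec, passed, invocations):
    """Return True if `passed` out of `invocations` invocations meets `spec`."""
    rate = passed / invocations
    if isinstance(spec, (list, tuple)):  # e.g. [0.8, 0.99]
        low, high = spec
        return low <= rate <= high
    return rate >= spec                  # e.g. 0.8

print(success_rate_satisfied(0.8, 50, 60))          # True  (rate ~0.833)
print(success_rate_satisfied([0.8, 0.99], 60, 60))  # False (100% excluded)
```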

# Querying Results

Internally, `pytest-optestrunner` uses DataFrames to aggregate data. This data is then accessed using a custom query language described below. Before the query is executed, `pytest-optestrunner` gathers data from the relevant simulation
output folder.

Typically, the following files are expected:

-   `simulationOutput.xml`: This file is the source for events (see
    below for more details).
-   `Cyclics_Run<run_id>.csv`: Here, `<run_id>` is a placeholder for the number of the corresponding invocation. This file contains cyclic data, such as x-position, y-position, or velocity.

gtgen also allows for independent output of controllers in
subfolders, where these subfolders follow the pattern
`run<run_id>/entity<entity_id>/<controller>`. If `pytest-optestrunner` discovers such subfolders, it will look recursively for CSV files within them. For every CSV file, `pytest-optestrunner` checks whether the following conditions are satisfied. If they are, the file is merged with the corresponding `Cyclics_Run<run_id>.csv` file.

-   The file must contain a column named `Timestep`.

-   Every other column must start with the corresponding entity id,
    matching the entity id in the subfolder name. For example,
    `00:DetectedObjects`.

    💡 **Note:** If a column name follows the pattern `<id>:<Prefix>.<ColumnName>` it will be shortened to `<ColumnName>`.

    ⚠️ **Warning:** `pytest-optestrunner` does not take care of columns with duplicate names. If such columns are found, duplicate names will be suffixed (see [here](https://pandas.pydata.org/pandas-docs/version/2.2.2/user_guide/merging.html#database-style-dataframe-or-named-series-joining-merging)
    for details).

When merging succeeds, columns from the additional controllers can be
queried like every other column in the queries described below.
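The merge described above can be pictured with pandas roughly as follows. This is an illustrative sketch only: the DataFrames, the entity id `00`, and the `Controller.` prefix are made up for the example, and the plugin's actual merge code is not shown.

```python
import pandas as pd

# Cyclic data as found in Cyclics_Run<run_id>.csv (simplified, made up).
cyclics = pd.DataFrame({
    "Timestep": [0, 100, 200],
    "00:VelocityEgo": [10.0, 12.0, 13.5],
})

# Controller output from run<run_id>/entity00/<controller> (made up):
# every non-Timestep column starts with the entity id from the folder name.
controller = pd.DataFrame({
    "Timestep": [0, 100, 200],
    "00:Controller.DetectedObjects": [1, 2, 2],
})

# Columns following <id>:<Prefix>.<ColumnName> are shortened to <ColumnName>.
controller = controller.rename(
    columns=lambda c: c.split(".")[-1] if ":" in c and "." in c else c
)

# Merge on the mandatory Timestep column.
merged = cyclics.merge(controller, on="Timestep")
print(list(merged.columns))  # ['Timestep', '00:VelocityEgo', 'DetectedObjects']
```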

## Basic Syntax

    [aggregate]([column] | [filter]) [operator] [value]

-   Aggregate: Everything pandas supports on dataframes, such as
    [pandas.DataFrame.count](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.count.html?highlight=count#pandas.DataFrame.count),
    min, max, mean

-   Column: A column on which the aggregate should operate.

    Columns are generally given by the simulation output's cyclic
    columns, such as `PositionRoute`. In addition, the following columns
    are available:

    -   `AgentId`
    -   From the tag `Agents` (see `simulationOutput.xml`):
        -   `AgentTypeGroupName`
        -   `AgentTypeName`
        -   `VehicleModelType`
        -   `DriverProfileName`
        -   `AgentType`

-   Filter: A filter based on
    [pandas.DataFrame.filter](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.filter.html?highlight=filter#pandas.DataFrame.filter)
    syntax using the available columns.

-   Operator: A comparison operator from the following list: `==`,
    `<=`, `>=`, `<`, `>`, `!=`, `~=` (approximate). The approximate operator
    allows a deviation of at most `1e-6 * value` from the value.

-   Value: A number

💡 **Note:** In rare cases, the filter can be omitted, e.g. when ensuring that no agent has been spawned: `count(AgentId) == 0`.
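The semantics of the approximate operator `~=` can be sketched as follows (the helper is hypothetical, not part of the plugin; it only mirrors the tolerance described above):

```python
# Hypothetical sketch of the '~=' comparison: lhs matches value if it
# deviates by at most 1e-6 * value.
def approx_equal(lhs, value, rel=1e-6):
    return abs(lhs - value) <= rel * abs(value)

print(approx_equal(30.000001, 30.0))  # True  (deviation ~1e-6 <= 3e-5)
print(approx_equal(30.1, 30.0))       # False
```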

### Example

    count(AgentId | PositionRoute >= 800 and Lane != -3) == 0
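Conceptually, this example query maps to pandas operations like the following. This is a sketch with made-up data; the plugin's actual query parser is not shown.

```python
import pandas as pd

# Toy cyclic data with the columns used in the example query (made up).
df = pd.DataFrame({
    "AgentId":       [0, 1, 2],
    "PositionRoute": [790.0, 850.0, 900.0],
    "Lane":          [-3, -3, -2],
})

# count(AgentId | PositionRoute >= 800 and Lane != -3) == 0
filtered = df.query("PositionRoute >= 800 and Lane != -3")
result = filtered["AgentId"].count() == 0
print(result)  # False: agent 2 matches the filter
```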

## Using Events in Filter

In order to query for a specific event, use `#(EVENT)` within the filter
syntax.

### Example

    count(AgentId | PositionRoute >= 800 and #(Collision) == True) == 0

### Event Payload

Each event is associated with a set of triggering entity ids, affected
entity ids, and arbitrary key/value pairs (please refer to the gtgen
documentation for details). This information is transformed into a "per
agent" scope.

In the following, the `Collision` event is used as an example.

### TriggeringEntity

All agents flagged as triggering become `IsTriggering`.

Query: `#(Collision):IsTriggering == True`

### AffectedEntity

All agents flagged as affected become `IsAffected`.

Query: `#(Collision):IsAffected == True`

### Key/Value Pairs

If an event publishes additional payload with the key `XYZ`, it can
be queried by `#(EVENT):XYZ`.

Query: `#(Collision):WithAgent`

⚠️ **Warning:** Keys carrying the event name as prefix, such as in
`#(Collision):CollisionWithAgent`, will be stripped to `Collision:WithAgent`

### Query Example

*No agent should collide with agent 0:*

`count(AgentId | AgentId == 0 and #(Collision):WithAgent == 1) == 0`

## Using OpenSCENARIO Events

OpenSCENARIO events are processed in the same manner as regular events
(see above).

This allows querying for occurrences of OpenSCENARIO events whose name
is specified within the following XPath:
`OpenSCENARIO/Story/Act/Sequence/Maneuver/Event/@name`

### OpenSCENARIO Event Definition

```xml
<Story name="TheStory">
  <Act name="TheAct">
    <Sequence name="TheSequence" numberOfExecutions="1">
      ...
      <Maneuver name="TheManeuver">
        ...
        <!-- example name "ttc_event"-->
        <Event name="ttc_event" priority="overwrite">
        ...
          <StartConditions>
            <ConditionGroup>
              <Condition name="Conditional">
                <ByEntity>
                  ...
                  <EntityCondition>
                     <TimeToCollision>
                       ...
                     </TimeToCollision>
                  </EntityCondition>
                </ByEntity>
              </Condition>
            </ConditionGroup>
          </StartConditions>
        </Event>
        ...
      </Maneuver>
    </Sequence>
  </Act>
</Story>
```

### Example gtgen_cli Output

```xml
<Event Time="0" Source="OpenSCENARIO" Name="TheStory/TheAct/TheSequence/TheManeuver/ttc_event">
    <TriggeringEntities/>
    <AffectedEntities>
        <Entity Id="1"/>
    </AffectedEntities>
    <Parameters/>
</Event>
```

### Query

`count(AgentId | #(TheStory/TheAct/TheSequence/TheManeuver/ttc_event) == True ) > 0`

## Querying Transitions

Sometimes it is necessary to check whether a transition happened, such
as counting agents passing a certain position.

This can be achieved by shifting individual columns by `N` time steps.

### Time Shift Syntax

`Column-Shift` => `PositionRoute-1` means `PositionRoute` at one time
step earlier.

### Example Use Case

Counting agents passing `PositionRoute == 350` on `LaneId == -1`

### Query

`count(AgentId | LaneId == -1 and PositionRoute-1 < 350 and PositionRoute >= 350 ) > 0`

⚠️ **Warning:** In rare cases, a result column happens to have a name like `Name-N`, where `N` is an integer. Querying this column would automatically apply time shifting (the default behavior), leading to a parsing error. In such cases, escape the column name with single quotes (e.g. `'Name-1'`).
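The shift semantics used in the transition query above can be sketched with `pandas.DataFrame.shift` (an illustrative sketch: the data and the helper column name are made up, and shifting per agent is an assumption about the intended behavior):

```python
import pandas as pd

# One agent's trace (made up); PositionRoute-1 refers to the previous step.
df = pd.DataFrame({
    "AgentId":       [7, 7, 7],
    "LaneId":        [-1, -1, -1],
    "Timestep":      [0, 100, 200],
    "PositionRoute": [340.0, 349.0, 352.0],
})

# PositionRoute shifted by one time step, per agent.
df["PositionRoute_prev"] = df.groupby("AgentId")["PositionRoute"].shift(1)

# count(AgentId | LaneId == -1 and PositionRoute-1 < 350 and PositionRoute >= 350) > 0
crossing = df[
    (df["LaneId"] == -1)
    & (df["PositionRoute_prev"] < 350)
    & (df["PositionRoute"] >= 350)
]
print(crossing["AgentId"].count() > 0)  # True: crossed between t=100 and t=200
```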

## Querying Spawning Time

Queries can be restricted to the spawning time:

### Query

`count(AgentId | Timestep == {first} and Velocity < 30) == 0`

⚠️ **Warning:** `Timestep == {first}` must be the first condition in the filter and can only be succeeded by `and`.
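The substitution of `{first}` can be pictured as each agent's earliest timestep, i.e. its spawning time. The following pandas sketch uses made-up data; the assumption that `{first}` is resolved per agent is an illustration, not the plugin's actual implementation.

```python
import pandas as pd

# Made-up cyclic data: agent 0 spawns at t=0, agent 1 at t=100.
df = pd.DataFrame({
    "AgentId":  [0, 0, 1, 1],
    "Timestep": [0, 100, 100, 200],
    "Velocity": [25.0, 31.0, 35.0, 36.0],
})

# {first}: each agent's earliest timestep (its spawning time).
first = df.groupby("AgentId")["Timestep"].transform("min")

# count(AgentId | Timestep == {first} and Velocity < 30) == 0
hits = df[(df["Timestep"] == first) & (df["Velocity"] < 30)]
print(hits["AgentId"].count() == 0)  # False: agent 0 spawns with 25.0
```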

## Explicit Datatypes

`pytest-optestrunner` uses pandas DataFrames internally. Pandas tries to detect
the datatype of the individual cyclic columns automatically. This won't
always match the user's intention, such as when a column holds a
semicolon-separated list of integers but every list contains just one
element. In such cases it is impossible to distinguish between integers
and strings based on the data alone.

For this reason, datatypes can be specified explicitly along with a
query:

```js
"queries": [ ... ],
"datatypes": {
    "Sensor0_DetectedAgents": "str" // string with "missing value" support
}
```
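The effect of such a `datatypes` entry can be illustrated with pandas directly (a sketch: the column name is taken from the example above, the CSV content is made up, and the plugin's actual CSV reading path is not shown):

```python
import io
import pandas as pd

# Made-up cyclic data: the column holds semicolon-separated lists,
# but the first row's list has only one element.
csv = "Timestep,Sensor0_DetectedAgents\n0,1\n100,2;3\n"

# Without a dtype hint, a single-element "list" like "1" could be
# inferred as an integer; forcing str keeps all rows comparable.
df = pd.read_csv(io.StringIO(csv), dtype={"Sensor0_DetectedAgents": str})
print(df["Sensor0_DetectedAgents"].tolist())  # ['1', '2;3']
```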

# Dev Notes

If you want to execute/debug `pytest-optestrunner` in VS-Code, you can add a
configuration, similar to the one shown below, to the `launch.json`
after opening `pytest-optestrunner` as VS-Code project:

```js
"configurations": [
{
    "name": "pytest-optestrunner",
    "type": "python",
    "module": "pytest",
    "args": [
        "--simulation=/gtgen_cli/bin/gtgen_cli",
        "--mutual=/path/to/Common/",
        "--resources=/path/to/Configurations/",
        "--report-path=/gtgen_cli/reports",
        "test_end_to_end.json",
        "-v"],
    "request": "launch",
    "console": "integratedTerminal"
}]
```
