Benchmark to evaluate the performance of ECS code for my bachelor thesis

ECS Benchmark

ECS Benchmark allows the execution of several test scenarios to compare the performance of different implementations designed with OOP and DOD principles respectively.

Three different sets of test scenarios are implemented:

  • Iteration Tests: A list of GameObjects or entities is created and iterated, and a velocity is added to the position of each object. To demonstrate the capabilities of the Burst compiler, two more ECS test cases are included that execute a math-heavy operation with and without Burst-compiled code.
  • Animation Tests: A number of animated objects is created and kept in the viewport of the camera, and the frame times are recorded. Since there is no ECS implementation for animations, the animation system of the Latios Framework is used for the ECS tests.
  • Physics Tests: A number of primitive objects is created, arranged in a configurable shape and placed a configurable distance above a ground plane. The objects are accelerated by the gravity of the physics system and collide with each other and eventually the ground. The frame timings are recorded each frame.
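The contrast between the two iteration styles can be sketched in plain C#. This is illustrative only; the names Ball and State are hypothetical, and the real test cases operate on Unity GameObjects and ECS entities rather than plain arrays:

```csharp
using System.Collections.Generic;

// OOP style: heap-allocated objects, each updated through a reference.
class Ball
{
    public float PositionX, PositionY;
    public float VelocityX, VelocityY;
}

// DOD style: positions and velocities in tightly packed arrays,
// iterated linearly for cache-friendly access.
struct State
{
    public float[] PositionX, PositionY;
    public float[] VelocityX, VelocityY;
}

static class Iteration
{
    public static void UpdateOop(List<Ball> balls, float dt)
    {
        foreach (var ball in balls)
        {
            ball.PositionX += ball.VelocityX * dt;
            ball.PositionY += ball.VelocityY * dt;
        }
    }

    public static void UpdateDod(ref State s, float dt)
    {
        for (int i = 0; i < s.PositionX.Length; i++)
        {
            s.PositionX[i] += s.VelocityX[i] * dt;
            s.PositionY[i] += s.VelocityY[i] * dt;
        }
    }
}
```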

Features

  • Main menu with test case selection, test run table and functionality to save/load predefined test run configurations.
  • 12 different test cases that explore the performance of ECS and OOP code in different configurations and evaluate the performance gain of parallelized execution.
  • AMDuProf integration to evaluate CPU cache statistics. This requires an AMD CPU.
  • Test infrastructure that allows adding new test cases by creating a single test case class; the class is discovered automatically via reflection, and its test scene is loaded automatically on execution.
  • Test coordinator that enables the integration of test cases into the test execution flow.
  • The application runs in a 1920x1080 window to keep the impact of the GPU as low as possible, since the performance of CPU code is profiled.

External Dependencies

Frameworks & Libraries:

  • VContainer, UniTask, R3 (modern OOP patterns)
  • Unity DOTS (Entities, Physics, Rendering)

Engine: Unity 6 (6000.0.41f1) with URP 17.0.4

Licensing: The MIT license of this repository applies only to the original source code and benchmark framework. All external dependencies and redistributed assets (3D models, code, etc.) retain their original licenses as specified in CREDITS.md. Downloading this repository does not grant additional redistribution rights beyond what the original licenses permit.

See CREDITS.md for full license information on redistributed external dependencies.

Many thanks to Shokubutsu for approving the redistribution of their 3D model used in the animation test.

Build and install instructions

  • Create a checkout of this repository.
  • Download and install Unity Hub and Unity 6 (6000.0.41f1).
  • Import the project on Unity Hub and open the Unity Editor.
  • Choose File->Build Profiles, click the Build button and choose a target folder. The default configuration expects the application to be stored in "C:\EcsBenchmark".

For the collection of AMDuProf measurements:
  • Install AMDuProf.
  • Open the solution that can be found in "UprofWrapper" in this repository, build it and copy the build artifacts to the location specified in AppConfig.json (default is "C:\EcsBenchmark\UprofWrapper\UprofWrapper.exe").
  • Disable vSync for the EcsBenchmark.exe in the graphics driver. Failing to do so will lock the maximum FPS to the refresh rate of the monitor and therefore invalidate all measurements.

Note: It might be possible to open and build the project with other Unity 6 versions. The automatic project conversion might work out of the box or fail miserably; caution advised. Unity versions older than Unity 6 are incompatible.

Note: This application has only been tested on Windows 11. With some changes it might be possible to run it on other operating systems, but the AMDuProf integration in particular is tailored to the Windows environment and will require major adaptations to work on a different OS.

The pre-built release version 1.0.0 for Windows 11 is available on the Releases page. The UprofWrapper is included. The configuration assumes that the archive is extracted to "C:\EcsBenchmark" and that AMDuProf is installed in "C:\Program Files\AMD\AMDuProf\bin\AMDuProfCLI.exe".

Configuration

The build folder contains an AppConfig.json file. Before launching the application, ensure that all paths point to valid locations with write permissions; otherwise the application will crash. To extract the AMDuProf results, the path to the AMDuProfCLI.exe binary needs to point to the AMDuProf installation.

The UprofWrapper build folder contains a config.json file. The path to the AMDuProfCLI.exe binary needs to point to the AMDuProf installation.
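An AppConfig.json along these lines would match the defaults described above. The exact key names are hypothetical (check the shipped file); only the default paths are taken from this document:

```json
{
  "ResultPath": "C:\\EcsBenchmark\\Results",
  "TestRunConfigPath": "C:\\EcsBenchmark\\TestRuns",
  "UprofWrapperPath": "C:\\EcsBenchmark\\UprofWrapper\\UprofWrapper.exe",
  "AmdUprofCliPath": "C:\\Program Files\\AMD\\AMDuProf\\bin\\AMDuProfCLI.exe"
}
```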

Usage

Upon start of the application, the main menu is presented. On the left, a test case can be selected from the list and added to the test runs by means of the "Add" button. The test run table allows configuring the parameters that are applicable to the selected test case. The AMDuProf toggle enables the collection of CPU cache metrics and adds them to the result file. The DualRun toggle executes each test run twice, once with CPU cache metrics collection and once without. All test results are written to the location specified in the AppConfig.json file. The test execution can be started with the "Start" button.

The current test run configuration can be saved to a JSON file with the "Save File" button and a previously saved configuration can be loaded with the "Load File" button. The default folder for the test run configurations can be configured in the AppConfig.json file. When a test run file is loaded, the contents will be appended to the list of test runs.
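A saved test run file could look roughly like this. The structure and field names are hypothetical (inspect a file produced by "Save File" for the real schema); the document only states that parameters are stored as key-value pairs of int, float, bool and Enum values:

```json
[
  {
    "TestCase": "IterationEcsTestCase",
    "Parameters": {
      "ObjectCount": 100000,
      "DurationSeconds": 10.0,
      "UprofEnabled": false
    }
  }
]
```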

To use the DualRun functionality, the result directory needs to be empty or the application will crash.

Troubleshooting

If the application crashes or behaves unexpectedly, check the Unity player.log file for error details:

  • Default location: C:\Users\<username>\AppData\LocalLow\SnailhogInc\EcsBenchmark\Player.log
  • Common issues:
    • DualRun requires an empty result directory
    • Invalid paths in AppConfig.json
    • Missing AMDuProf binaries when profiling is enabled

Project Structure

Assets/Scenes           # All scene files, each test case gets its own scene file
Assets/Scripts/
├── AnimationTest/      # Code for the animation test cases
├── Core/               # Code for the test infrastructure and main menu
├── IterationTest/      # Code for the iteration test cases
└── PhysicsTest/        # Code for the physics test cases
Settings/               # All project settings, like the URP settings, AppConfig.json etc.
UI/                     # UI Toolkit uxml files

Note: This only includes the most important folders.

Architecture overview

Quick start: The main entry point into the application logic can be found in MainMenuScope, RootScope contains all registrations for the test infrastructure, and TestCase gives an overview of how tests are integrated into test discovery and coordination.

VContainer is utilised to maintain all dependencies and manage inter-scene communication. All test logic is implemented as IAsyncStartable, which makes it possible to yield the test execution to the player loop. The main entry point into the application is the MainMenu scene and the corresponding MainMenuScope. There is also a RootScope that contains the registrations for the application settings and the test infrastructure. To get an overview of the MainMenu implementation, a good starting point is the MainMenuController.

Each test scene contains a TestScope that registers the TestLogic entry point, UI GameObjects and all test specific setup, like references to the prefabs that are being used as part of the test.

The test scenes are loaded based on the type name of the TestCase class. These are discovered with reflection, using the ITestRun interface as a marker. To create a new test, create a non-abstract empty class that inherits from TestCase and a corresponding scene named exactly like the TestCase class. The TestCase class also stores the test parameters. To allow flexibility with these parameters, they are stored as key-value pairs; int, float, bool and Enum parameters are supported.
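Following that convention, a new test case class can be this small. MyNewTestCase is a hypothetical name; TestCase is the project's base class:

```csharp
// Discovered automatically via reflection (the TestCase base carries the
// ITestRun marker). A scene named "MyNewTestCase" must exist and is loaded
// automatically when this test case is executed.
public class MyNewTestCase : TestCase
{
}
```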

To allow execution of the tests in the editor, create a MonoBehaviour that implements ITestRunProvider so that the parameters can be set directly in the scene. A reference to an instance of this MonoBehaviour needs to be registered in the TestScope, and the DefaultParametersController entry point needs to be provided. The registration for the entry point needs to be wrapped in #if UNITY_EDITOR, or else the application will not build.

The execution of the tests is coordinated with the ITestManager and the ITestContext. The ITestManager stores the list of ITestRun and allows triggering the execution of this list; the ITestContext is used for inter-scene communication, to aggregate the test results, and to provide a method to signal the completion of a test run. The test results are stored as key-value pairs. Supported types are double (AddKeyResult) and List (AddSeries).
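The shape of these two interfaces, as described above, is roughly the following. This is a sketch; the exact member names and signatures may differ in the source, and the List element type is assumed to be double here:

```csharp
using System.Collections.Generic;
using Cysharp.Threading.Tasks; // UniTask

public interface ITestManager
{
    IList<ITestRun> TestRuns { get; }                 // test runs queued in the main menu
    UniTask Run();                                    // executes the whole list
}

public interface ITestContext
{
    void AddKeyResult(string key, double value);      // single scalar result
    void AddSeries(string key, List<double> values);  // e.g. per-frame timings
    void CompleteTest();                              // writes the result file, resumes the manager
}
```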

The execution flow of a test case is therefore:

  • Test run gets added to test run list in the main menu. This list is stored in the ITestManager.
  • Test execution gets started by the means of the Run method in ITestManager.
  • ITestManager injects the current test run into the ITestContext and calls the Run method, which is awaitable and just loads the test scene, then awaits the completion of a UniTaskCompletionSource.
  • The test scene executes the TestScope, which contains the registration for the TestLogic entry point.
  • The TestLogic executes the test and stores the results in ITestContext. The TestLogic is IAsyncStartable, therefore it is possible to call async methods during the test execution and yield.
  • On completion of the test, the TestLogic will call the CompleteTest method on ITestContext. This triggers the generation of the result file and completes the task of the UniTaskCompletionSource.
  • ITestManager proceeds with the next test run in the list, or returns to the main menu when all tests are completed.
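The steps above can be sketched as a coordinator loop. This is illustrative only; TestRunSketch, ITestContextSketch and Begin are hypothetical stand-ins for the real ITestManager/ITestContext implementation:

```csharp
using System.Collections.Generic;
using Cysharp.Threading.Tasks;
using UnityEngine.SceneManagement;

public record TestRunSketch(string SceneName);

public interface ITestContextSketch
{
    // Injects the current run and the completion source that
    // CompleteTest() will resolve when the test is done.
    void Begin(TestRunSketch run, UniTaskCompletionSource completion);
}

public class TestManagerSketch
{
    readonly ITestContextSketch _context;

    public TestManagerSketch(ITestContextSketch context) => _context = context;

    public async UniTask Run(IReadOnlyList<TestRunSketch> testRuns)
    {
        foreach (var testRun in testRuns)
        {
            var completion = new UniTaskCompletionSource();
            _context.Begin(testRun, completion);              // inject current run
            await SceneManager.LoadSceneAsync(testRun.SceneName);
            await completion.Task;                            // resumed by CompleteTest in TestLogic
        }
        await SceneManager.LoadSceneAsync("MainMenu");        // all tests completed
    }
}
```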

The measurements are collected using:

  • The .NET Stopwatch class for time measurements
  • The IMeasurementService for frame time measurements
  • The IUprofWrapper for CPU cache statistics

The IMeasurementService and IUprofWrapper are injected into the test logic and activated as required by the test. The IUprofWrapper will only activate the collection of measurements if the corresponding boolean "UprofEnabled" is set to true in the test parameters.

To allow the usage of the AMDuProfCLI.exe process as part of the test execution, the additional application UprofWrapper.exe is required. This is due to limitations in controlling process execution from code on Windows: it is very difficult to terminate a process gracefully, and the measurement results are only written on graceful termination. The lack of inter-process communication signals makes it necessary to wrap the external process in a virtual shell that can send the CTRL+C signal. Since child processes are bundled into a process group with their parent, this functionality cannot be included in the Unity process itself; sending CTRL+C would terminate the Unity process as well. Therefore, the UprofWrapper starts the AMDuProfCLI.exe process and allows graceful termination through communication over its input stream. The implementation of this helper can be found in the "UprofWrapper" subfolder of this repository.
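The core of such a wrapper can be sketched as follows. This is a minimal sketch, not the actual UprofWrapper implementation: the argument pass-through and the hard-coded CLI path are assumptions, while GenerateConsoleCtrlEvent and Console.CancelKeyPress are real Win32/.NET APIs:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

// Start AMDuProfCLI.exe, then wait for any line on our own stdin
// (sent by the Unity process) as the signal to stop collection gracefully.
class UprofWrapperSketch
{
    [DllImport("kernel32.dll")]
    static extern bool GenerateConsoleCtrlEvent(uint ctrlEvent, uint processGroupId);
    const uint CTRL_C_EVENT = 0;

    static void Main(string[] args)
    {
        // The wrapper must survive its own CTRL+C to wait for the child.
        Console.CancelKeyPress += (_, e) => e.Cancel = true;

        var uprof = Process.Start(new ProcessStartInfo
        {
            FileName = @"C:\Program Files\AMD\AMDuProf\bin\AMDuProfCLI.exe",
            Arguments = string.Join(" ", args),  // collect options, passed through (assumed)
            UseShellExecute = false,
        });

        Console.ReadLine();                      // Unity signals "stop" over the input stream

        // CTRL+C goes to every process sharing this console, i.e. the wrapper
        // and AMDuProfCLI.exe, but not the Unity process that launched us.
        GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0);
        uprof.WaitForExit();                     // results are written on graceful termination
    }
}
```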
