Commit 6a64736d authored by Vladimir Malenovsky's avatar Vladimir Malenovsky

Merge branch 'main' into basop-2095-remove-unused-isar-tables

parents f89c8f30 a09f52b8
+1 −1
@@ -38,7 +38,7 @@
#include <stdlib.h>
#include <string.h>

-#if defined( __i386__ ) || defined( _M_IX86 ) || defined( __x86_64__ ) || defined( _M_X64 ) || defined( __arm__ ) || defined( __aarch64__ ) || ( defined( __BYTE_ORDER__ ) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ )
+#if defined( __i386__ ) || defined( _M_IX86 ) || defined( __x86_64__ ) || defined( _M_X64 ) || defined( __arm__ ) || defined( __aarch64__ ) || defined( _M_ARM ) || defined( _M_ARM64 ) || ( defined( __BYTE_ORDER__ ) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ )
#define __TWI_LE /* _T_iny _W_ave _I_n _L_ittle _E_ndian */
#endif
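(Not part of this change, but for illustration of what the `__TWI_LE` gate decides at compile time: the same little-endian test can be sketched at runtime in Python. The helper name `is_little_endian` is an assumption for this sketch, not from the patch.)

```python
import struct
import sys

# Pack a 16-bit integer in native byte order and inspect its first byte:
# on a little-endian machine the low-order byte (1) comes first.
def is_little_endian() -> bool:
    return struct.pack("=H", 1)[0] == 1

# The stdlib reports the same information directly via sys.byteorder.
print(sys.byteorder)
```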

+2 −2
@@ -37,7 +37,7 @@
#include <stdio.h>
#include <stdlib.h>

-#if defined( __i386__ ) || defined( _M_IX86 ) || defined( _M_X64 ) || defined( __x86_64__ ) || defined( __arm__ ) || defined( __aarch64__ ) || ( defined( __BYTE_ORDER__ ) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ )
+#if defined( __i386__ ) || defined( _M_IX86 ) || defined( _M_X64 ) || defined( __x86_64__ ) || defined( __arm__ ) || defined( __aarch64__ ) || defined( _M_ARM ) || defined( _M_ARM64 ) || ( defined( __BYTE_ORDER__ ) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ )
#define __TWO_LE /* _T_iny _W_ave _O_ut _L_ittle _E_ndian */
#endif

@@ -47,7 +47,7 @@

#if !defined( __TWO_LE ) && !defined( __TWO_BE )
#pragma message( "unknown processor - assuming Little Endian" )
-#define __TWI_LE
+#define __TWO_LE
#endif

#define __TWO_SUCCESS ( 0 )
+7 −11
@@ -42,7 +42,6 @@ It might be required to set Clang-18 as the default clang on the machine
  sudo apt install python3.13 python3.13-venv
  ```


## Run CUT tests on Target platform

Running the conformance tests requires around 30 GB of disk space and around 6 GB of RAM.
@@ -97,7 +96,7 @@ CUT_OUTPUTS
  +- failedCmds.txt   : Log of all the shell commands that failed execution
  +- dec/             : Folder containing all decoder tests CUT outputs
  +- enc/             : Folder containing all encoder tests CUT outputs
-  +- renderer_short/  : Folder containing all renderer tests CUT outputs
+  +- renderer/        : Folder containing all renderer tests CUT outputs
  +- split_rendering/ : Folder containing all split rendering enc/dec tests
```

@@ -105,7 +104,6 @@ CUT_OUTPUTS

If CUT test execution is done on a different platform, scripts/CUT_OUTPUTS must be copied to the reference platform's scripts/CUT_OUTPUTS. Then the BE analysis or non-BE analysis procedure below should be followed. It is recommended to perform the analysis with BE comparison first, and then the analysis with non-BE comparison if any non-BE outputs were found. Note that non-BE conformance currently applies only if the metadata file output is BE and non-BE results occur only in the wave-file output.


### Perform the BE comparison on the CUT outputs on reference platform

The BE comparison is performed on the CUT outputs using the command below. Encoded outputs are first decoded with the reference decoder executables as part of the process; the BE comparison is then performed between the CUT and reference decoded outputs. This covers `.wav` files as well as `.csv` and `.met` metadata files. If any non-BE results are observed, this is reported on the command line and a link to an analysis `.csv` file is given; the analysis file shows exactly which files were non-BE. An example passing output is shown below. If all test sets print `PASSED BE TEST`, the CUT outputs are BE-conformant.
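The BE (bit-exact) check ultimately reduces to a byte-for-byte file comparison; the analysis script in this repository uses `filecmp.cmp(..., shallow=False)` for this purpose. A minimal sketch (the helper name `is_bit_exact` is illustrative, not the script's actual interface):

```python
import filecmp
import os

def is_bit_exact(ref_path: str, cut_path: str) -> bool:
    """Byte-for-byte comparison of a reference output and a CUT output."""
    if not (os.path.isfile(ref_path) and os.path.isfile(cut_path)):
        raise FileNotFoundError(f"missing file: ref={ref_path}, cut={cut_path}")
    # shallow=False forces a full content comparison instead of the
    # default os.stat() signature comparison.
    return filecmp.cmp(ref_path, cut_path, shallow=False)
```

Any file pair for which this returns `False` would be reported as non-BE and passed on to the MLD-based analysis below.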
@@ -142,7 +140,6 @@ Analysing tests for ISAR (1252 tests)
</code></pre>
</details>


### Perform the MLD based non-BE analysis on the CUT outputs on reference platform (Ubuntu 24.04)

The non-BE analysis below compares CUT and reference outputs by running MLD on audio (`.wav`) files and, when MASA metadata are generated, on the matching reference/DUT `.met` files. For encoder tests, encoded CUT bitstreams are first decoded with the reference decoder before analysis. Per-frame MLD and MASA metadata values are written to `scripts/CUT_OUTPUTS` and checked against corridor references in `testvec/testv/mld_ref` (`mld_ref_<TAG>.csv` and `masa_ref_<TAG>.csv`).
@@ -245,7 +242,6 @@ MLD Corridor passed for ISAR with max MLD diff of 0.0
</code></pre>
</details>


## Executing specific tests only

All CUT tests can be run specifically for IVAS Encoder, IVAS Decoder, IVAS Renderer, ISAR Encoder and ISAR Decoder only. The command line accepts ```-test-mode=<PARAM>``` for this functionality, examples:
+6 −6
@@ -8,13 +8,13 @@ cp IVAS_cod IVAS_cod_ref
cp IVAS_dec IVAS_dec_ref
cp IVAS_rend IVAS_rend_ref
cp ISAR_post_rend ISAR_post_rend_ref
-python3 -m pytest -q tests/codec_be_on_mr_nonselection tests/renderer_short/test_renderer.py tests/split_rendering/test_split_rendering.py -v -n auto --update_ref 1 --create_ref --keep_files --html=report_cmd.html --self-contained-html
+python3 -m pytest -q tests/codec_be_on_mr_nonselection tests/renderer/test_renderer_short.py tests/split_rendering/test_split_rendering.py -v -n auto --update_ref 1 --create_ref --keep_files --html=report_cmd.html --self-contained-html
python3 scripts/parse_commands.py report_cmd.html Readme_IVAS.txt
rm -rf testvec
mkdir testvec
mkdir testvec/binauralRenderer_interface
mkdir testvec/testv
-mkdir testvec/testv/renderer_short
+mkdir testvec/testv/renderer
mkdir testvec/testv/split_rendering
mkdir testvec/bin
cp -r scripts/testv/* testvec/testv
@@ -24,7 +24,7 @@ cp -r scripts/switchPaths testvec
cp -r scripts/trajectories testvec
cp -r scripts/binauralRenderer_interface/binaural_renderers_hrtf_data testvec/binauralRenderer_interface
cp -r tests/ref testvec/testv/ref
-cp -r tests/renderer_short/ref testvec/testv/renderer_short/ref
+cp -r tests/renderer/ref testvec/testv/renderer/ref
cp -r tests/split_rendering/ref testvec/testv/split_rendering/ref
cp -r tests/split_rendering/renderer_configs testvec/testv/split_rendering/renderer_configs
cp -r tests/split_rendering/error_patterns testvec/testv/split_rendering/error_patterns
+125 −41
@@ -213,15 +213,16 @@ def validate_build_binaries(parser, build_path: str, build_label: str) -> None:
    for tag, binary in IVAS_Bins.items():
        candidate = os.path.join(abs_build_path, binary)
        candidate_exe = f"{candidate}.exe"
-        exists = os.path.isfile(candidate) or (is_windows and os.path.isfile(candidate_exe))
+        exists = os.path.isfile(candidate) or (
+            is_windows and os.path.isfile(candidate_exe)
+        )
        if not exists:
            shown = candidate_exe if is_windows else candidate
            missing.append(f"{tag}: {shown}")

    if missing:
-        parser.error(
-            f"Missing {build_label} binaries:\n  - " + "\n  - ".join(missing)
-        )
+        parser.error(f"Missing {build_label} binaries:\n  - " + "\n  - ".join(missing))


ReferenceMldFiles = {
    "ENC": "mld_ref_ENC.csv",
@@ -263,7 +264,9 @@ class MLDConformance:
        with open(self.failedCmdsFile, "r") as f:
            return sum(1 for line in f if line.strip())

-    def appendRunlog(self, command: str = "", output: str = "", context: str = "") -> None:
+    def appendRunlog(
+        self, command: str = "", output: str = "", context: str = ""
+    ) -> None:
        if not getattr(self, "logFile", None):
            return
        with open(self.logFile, "a") as fd:
@@ -276,7 +279,9 @@ class MLDConformance:
                if not output.endswith("\n"):
                    fd.write("\n")

-    def appendFailed(self, command: str = "", output: str = "", context: str = "") -> None:
+    def appendFailed(
+        self, command: str = "", output: str = "", context: str = ""
+    ) -> None:
        if not getattr(self, "failedCmdsFile", None):
            return
        with open(self.failedCmdsFile, "a") as fd:
@@ -307,11 +312,19 @@ class MLDConformance:
        if self.args.clean_output_dir and os.path.exists(self.outputDir):
            shutil.rmtree(self.outputDir, ignore_errors=False)
        os.makedirs(self.outputDir, exist_ok=True)
-        subdirs = ["enc", "dec", "renderer_short", "split_rendering"]
+        subdirs = ["enc", "dec", "renderer", "split_rendering"]
        for odir in subdirs:
            os.makedirs(os.path.join(self.testvDir, odir), exist_ok=True)
            os.makedirs(os.path.join(self.outputDir, odir), exist_ok=True)

+        nested_subdirs = [
+            os.path.join("renderer", "ref"),
+            os.path.join("split_rendering", "cut"),
+            os.path.join("split_rendering", "ref"),
+        ]
+        for odir in nested_subdirs:
+            os.makedirs(os.path.join(self.outputDir, odir), exist_ok=True)

        self.logFile = os.path.join(self.outputDir, "runlog.txt")
        self.failedCmdsFile = os.path.join(self.outputDir, "failedCmds.txt")
        self.errorBlocksDir = os.path.join(self.outputDir, "error_blocks")
@@ -663,14 +676,16 @@ class MLDConformance:
            )
            return (non_be, None, None, None)
        else:
-            if not os.path.exists(testDesc.refOutput) or not os.path.exists(testDesc.dutOutput):
-                msg = (
-                    f"Missing file for compare: ref={testDesc.refOutput}, dut={testDesc.dutOutput}"
-                )
+            if not os.path.exists(testDesc.refOutput) or not os.path.exists(
+                testDesc.dutOutput
+            ):
+                msg = f"Missing file for compare: ref={testDesc.refOutput}, dut={testDesc.dutOutput}"
                self.appendFailed(context=f"[{tag}:{dutPytestTag}] {msg}")
                return (None, None, (msg, ""), None)

-            validate_err = self.validateAudioPairHeader(testDesc.refOutput, testDesc.dutOutput)
+            validate_err = self.validateAudioPairHeader(
+                testDesc.refOutput, testDesc.dutOutput
+            )
            if validate_err:
                self.appendFailed(context=f"[{tag}:{dutPytestTag}] {validate_err}")
                return (None, None, (validate_err, ""), None)
@@ -757,17 +772,23 @@ class MLDConformance:
            )
            if rc != 0:
                return (None, None, (dutDecCmd, err_output), dutDecCmd)
-            if not os.path.exists(refDecOutputFile) or not os.path.exists(dutDecOutputFile):
+            if not os.path.exists(refDecOutputFile) or not os.path.exists(
+                dutDecOutputFile
+            ):
                msg = f"Missing file for compare: ref={refDecOutputFile}, dut={dutDecOutputFile}"
                self.appendFailed(context=f"[{tag}:{encPytestTag}] {msg}")
                return (None, None, (msg, ""), dutDecCmd)

-            validate_err = self.validateAudioPairHeader(refDecOutputFile, dutDecOutputFile)
+            validate_err = self.validateAudioPairHeader(
+                refDecOutputFile, dutDecOutputFile
+            )
            if validate_err:
                self.appendFailed(context=f"[{tag}:{encPytestTag}] {validate_err}")
                return (None, None, (validate_err, ""), dutDecCmd)

-            non_be = int(not filecmp.cmp(refDecOutputFile, dutDecOutputFile, shallow=False))
+            non_be = int(
+                not filecmp.cmp(refDecOutputFile, dutDecOutputFile, shallow=False)
+            )
            max_mld, mld_error = self.mld(
                tag, encPytestTag, refFile=refDecOutputFile, dutFile=dutDecOutputFile
            )
@@ -813,7 +834,9 @@ class MLDConformance:
            return (non_be, None, None, None)
        else:
            refDecOutputFile = testDesc.refOutput.replace(".splt.bit", ".wav")
-            dutDecOutputFile = testDesc.dutOutput.replace(".splt.bit", "_CUT_REFDECODED.wav")
+            dutDecOutputFile = testDesc.dutOutput.replace(
+                ".splt.bit", "_CUT_REFDECODED.wav"
+            )
            # Decode the encoded output with Reference ISAR decoder
            dutDecCmd = testDesc.refDecCmdline.split()
            for idx, cmd in enumerate(dutDecCmd):
@@ -831,17 +854,23 @@ class MLDConformance:
            )
            if rc != 0:
                return (None, None, (dutDecCmd, err_output), dutDecCmd)
-            if not os.path.exists(refDecOutputFile) or not os.path.exists(dutDecOutputFile):
+            if not os.path.exists(refDecOutputFile) or not os.path.exists(
+                dutDecOutputFile
+            ):
                msg = f"Missing file for compare: ref={refDecOutputFile}, dut={dutDecOutputFile}"
                self.appendFailed(context=f"[{tag}:{pytestTag}] {msg}")
                return (None, None, (msg, ""), dutDecCmd)

-            validate_err = self.validateAudioPairHeader(refDecOutputFile, dutDecOutputFile)
+            validate_err = self.validateAudioPairHeader(
+                refDecOutputFile, dutDecOutputFile
+            )
            if validate_err:
                self.appendFailed(context=f"[{tag}:{pytestTag}] {validate_err}")
                return (None, None, (validate_err, ""), dutDecCmd)

-            non_be = int(not filecmp.cmp(refDecOutputFile, dutDecOutputFile, shallow=False))
+            non_be = int(
+                not filecmp.cmp(refDecOutputFile, dutDecOutputFile, shallow=False)
+            )
            max_mld, mld_error = self.mld(
                tag, pytestTag, refFile=refDecOutputFile, dutFile=dutDecOutputFile
            )
@@ -884,7 +913,7 @@ class MLDConformance:
                "$CUT_PATH/ref/param_file/", f"{self.testvDir}/ref/param_file/"
            )
            command = command.replace(
-                "$CUT_PATH/renderer_short/ref/", f"{self.testvDir}/renderer_short/ref/"
+                "$CUT_PATH/renderer/ref/", f"{self.testvDir}/renderer/ref/"
            )
            command = command.replace(
                "$CUT_PATH/split_rendering/ref",
@@ -901,7 +930,7 @@ class MLDConformance:
                "$CUT_PATH/ref/param_file/dec/", f"{self.outputDir}/dec/"
            )
            command = command.replace(
-                "$CUT_PATH/renderer_short/ref/", f"{self.outputDir}/renderer_short/ref/"
+                "$CUT_PATH/renderer/ref/", f"{self.outputDir}/renderer/ref/"
            )
            command = command.replace(
                "$CUT_PATH/split_rendering/cut/",
@@ -934,7 +963,9 @@ class MLDConformance:
    ):
        # Run CUT Cmdline
        testPrefix = f"[{tag} {testIndex}/{totalTests}]"
-        self.appendRunlog(context=self.formatTestHeader(testPrefix, "Running test", pyTestsTag))
+        self.appendRunlog(
+            context=self.formatTestHeader(testPrefix, "Running test", pyTestsTag)
+        )
        testDesc = self.TestDesc[tag][pyTestsTag]
        rc, err_output = self.process(
            command=testDesc.dutCmdline,
@@ -961,28 +992,49 @@ class MLDConformance:
        errorDetails = None
        executedCommand = None
        if tag == "ENC":
-            non_be, max_mld, errorDetails, executedCommand = self.analyseOneEncoderTest(tag, pyTestsTag)
+            non_be, max_mld, errorDetails, executedCommand = self.analyseOneEncoderTest(
+                tag, pyTestsTag
+            )
        elif tag == "DEC":
-            non_be, max_mld, errorDetails, executedCommand = self.analyseWavOutputTest(tag, pyTestsTag)
+            non_be, max_mld, errorDetails, executedCommand = self.analyseWavOutputTest(
+                tag, pyTestsTag
+            )
        elif tag == "REND":
-            non_be, max_mld, errorDetails, executedCommand = self.analyseWavOutputTest(tag, pyTestsTag)
+            non_be, max_mld, errorDetails, executedCommand = self.analyseWavOutputTest(
+                tag, pyTestsTag
+            )
        elif tag == "ISAR_ENC":
-            non_be, max_mld, errorDetails, executedCommand = self.analyseOneIsarEncoderTest(tag, pyTestsTag)
+            non_be, max_mld, errorDetails, executedCommand = (
+                self.analyseOneIsarEncoderTest(tag, pyTestsTag)
+            )
        elif tag == "ISAR":
-            non_be, max_mld, errorDetails, executedCommand = self.analyseWavOutputTest(tag, pyTestsTag)
+            non_be, max_mld, errorDetails, executedCommand = self.analyseWavOutputTest(
+                tag, pyTestsTag
+            )
        else:
            assert False, f"Un-implemented Tag {tag}"

        if errorDetails is not None:
            if errorDetails:
                cmd, err_output = errorDetails
-                self.appendFailed(context=header, command=cmd, output=(err_output or ""))
+                self.appendFailed(
+                    context=header, command=cmd, output=(err_output or "")
+                )
            elif executedCommand:
                self.appendFailed(context=header, command=executedCommand)
            else:
                self.appendFailed(context=header)
            self.stats()
-            return (testPrefix, pyTestsTag, "ERROR", None, errorDetails, executedCommand, None, None)
+            return (
+                testPrefix,
+                pyTestsTag,
+                "ERROR",
+                None,
+                errorDetails,
+                executedCommand,
+                None,
+                None,
+            )

        if self.args.be_test:
            verdict = "NON-BE" if non_be else "BE"
@@ -995,7 +1047,16 @@ class MLDConformance:
                result_text = f"{verdict}, MLD_MAX={max_mld}"

        self.stats()
-        return (testPrefix, pyTestsTag, "OK", result_text, None, executedCommand, verdict, max_mld)
+        return (
+            testPrefix,
+            pyTestsTag,
+            "OK",
+            result_text,
+            None,
+            executedCommand,
+            verdict,
+            max_mld,
+        )

    def analyseOneCommandFromTuple(self, args):
        return self.analyseOneCommand(*args)
@@ -1154,9 +1215,13 @@ class MLDConformance:
                    (tag, pyTestsTag, idx, self.totalTests)
                    for idx, pyTestsTag in enumerate(selectedTests, start=1)
                ]
-                for testPrefix, pyTestsTag, rc, command, err_output in pool.imap_unordered(
-                    self.runOneCommandFromTuple, args
-                ):
+                for (
+                    testPrefix,
+                    pyTestsTag,
+                    rc,
+                    command,
+                    err_output,
+                ) in pool.imap_unordered(self.runOneCommandFromTuple, args):
                    status = "OK" if rc == 0 else "ERROR"
                    print(
                        f"{testPrefix} Running test: {pyTestsTag} ... {status}",
@@ -1248,7 +1313,12 @@ class MLDConformance:
            verdict,
            test_max_mld,
        ):
-            nonlocal command_fail_count, be_count, non_be_count, failure_count, worst_failure
+            nonlocal \
+                command_fail_count, \
+                be_count, \
+                non_be_count, \
+                failure_count, \
+                worst_failure

            if runStatus != "OK":
                command_fail_count += 1
@@ -1258,7 +1328,9 @@ class MLDConformance:
                non_be_count += 1
                if test_max_mld is not None and test_max_mld > corridor_threshold:
                    failure_count += 1
-                    fail_header = self.formatTestHeader(testPrefix, "Analyzing test", pyTestsTag)
+                    fail_header = self.formatTestHeader(
+                        testPrefix, "Analyzing test", pyTestsTag
+                    )
                    self.appendFailed(
                        context=(
                            fail_header
@@ -1310,7 +1382,9 @@ class MLDConformance:
                    (tag, pyTestsTag, idx, self.totalTests)
                    for idx, pyTestsTag in enumerate(selectedTests, start=1)
                ]
-                for result in pool.imap_unordered(self.analyseOneCommandFromTuple, args):
+                for result in pool.imap_unordered(
+                    self.analyseOneCommandFromTuple, args
+                ):
                    handle_test_result(*result)
        else:
            for idx, pyTestsTag in enumerate(selectedTests, start=1):
@@ -1413,7 +1487,11 @@ class MLDConformance:
            if emitConsole:
                print(f"{prefix}Failed command: {command}", flush=True)
                if c.stdout:
-                    print(c.stdout, end="" if c.stdout.endswith("\n") else "\n", flush=True)
+                    print(
+                        c.stdout,
+                        end="" if c.stdout.endswith("\n") else "\n",
+                        flush=True,
+                    )

        if returnOutput:
            return c.returncode, (c.stdout or "")
@@ -1653,7 +1731,10 @@ class MLDConformance:
                wavdiff_log_lines = []
                wavdiff_rows_omitted = 0
                for line in wavdiff_output.splitlines():
-                    if re.match(r"^\s*[-+]?\d+(?:\.\d+)?;[-+]?\d+(?:\.\d+)?;[-+]?\d+(?:\.\d+)?\s*$", line):
+                    if re.match(
+                        r"^\s*[-+]?\d+(?:\.\d+)?;[-+]?\d+(?:\.\d+)?;[-+]?\d+(?:\.\d+)?\s*$",
+                        line,
+                    ):
                        wavdiff_rows_omitted += 1
                    else:
                        wavdiff_log_lines.append(line)
@@ -1705,7 +1786,9 @@ class MLDConformance:
            mldWithTags = np.column_stack(
                (
                    mldThisFile,
-                    np.array([f"{pytestTag}-FRM{x:05d}" for x in range(mldThisFile.size)]),
+                    np.array(
+                        [f"{pytestTag}-FRM{x:05d}" for x in range(mldThisFile.size)]
+                    ),
                )
            )
            with open(self.mldcsv[tag], "ab") as f:
@@ -1941,7 +2024,9 @@ class MLDConformance:
                        all_ok = all_ok and corridor_ok
                        corridor_fail_count += int(not corridor_ok)
                    else:
-                        missing_msg = f"Missing reference MLD file for {tag} : {refMldFile}"
+                        missing_msg = (
+                            f"Missing reference MLD file for {tag} : {refMldFile}"
+                        )
                        print(f"\033[91m{missing_msg} \033[00m")
                        self.appendRunlog(context=missing_msg)
                        self.appendFailed(context=missing_msg)
@@ -2176,4 +2261,3 @@ if __name__ == "__main__":
        for tag in testTags:
            tag_status = "OK" if tag_results.get(tag, False) else "FAILED"
            print(f"[{tag}] {tag_status}")