August 22nd, 2024

GPU Transcription Inference Container

10.7.0 - GPU Transcription Inference Container

GPU Inference Containers are released in sync with the Real-time/Batch containers they support. An Inference Container is only guaranteed to work with a Real-time/Batch container of the same version number.

For full details and a guide to implementation see GPU Inference Container.

  • Compatible with version 10.7.0 of the Batch and Real-time Containers

  • Security fixes

  • High Severity: CVE-2022-29500, CVE-2022-29501, CVE-2024-6387

Known Limitations

Issue ID: DEL-18942

Summary: Triton signal 11 failure

Description and Workarounds: Occasionally the inference server receives a signal 11, emits a series of error logs, and begins to shut down. This issue does not occur when transcribing with a custom dictionary, so a workaround is to supply any custom dictionary, e.g. "additional_vocab": [{ "content": "Speechmatics" }].
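The workaround above can be sketched as a minimal job config. This is an illustrative example only: the field layout follows the Speechmatics transcription config format, and the language code and vocab entry are placeholders you should adapt to your deployment.

```python
import json

# Minimal job config applying the DEL-18942 workaround: including any
# custom dictionary entry avoids the signal 11 failure. The "language"
# value and the vocab entry here are placeholders.
config = {
    "type": "transcription",
    "transcription_config": {
        "language": "en",
        "additional_vocab": [{"content": "Speechmatics"}],
    },
}

print(json.dumps(config, indent=2))
```

Any entry in "additional_vocab" is sufficient; it does not need to relate to the audio being transcribed.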