How ExoPlayer Achieves Audio-Video Synchronization

Before answering that question, let's first look at the three audio output modes in ExoPlayer.

The first is PCM mode (normal playback). This is the most basic mode: audio is handled as PCM (pulse-code modulation) data and can be run through various audio processors. In DefaultAudioSink this mode is defined as OUTPUT_MODE_PCM (DefaultAudioSink.java:474).

The second is passthrough mode. For encoded audio formats (such as Dolby Digital or DTS), the encoded bitstream is passed directly to compatible audio hardware without decoding. This preserves the highest audio quality on surround-sound systems. In DefaultAudioSink it is defined as OUTPUT_MODE_PASSTHROUGH (DefaultAudioSink.java:475).

The third is offload mode. Audio processing is offloaded to dedicated hardware, reducing CPU usage and power consumption, which makes it well suited to background playback. In DefaultAudioSink it is defined as OUTPUT_MODE_OFFLOAD (DefaultAudioSink.java:476).
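
To make the three modes concrete, here is a minimal, purely illustrative sketch of the kind of decision a sink makes. The capability-check helpers are hypothetical stand-ins for the AudioCapabilities/AudioManager queries ExoPlayer actually performs; this is not DefaultAudioSink's real logic:

```java
import androidx.media3.common.Format;
import androidx.media3.common.MimeTypes;

final class OutputModePicker {
  // Local mirrors of the three modes described above (values illustrative).
  static final int OUTPUT_MODE_PCM = 0;
  static final int OUTPUT_MODE_OFFLOAD = 1;
  static final int OUTPUT_MODE_PASSTHROUGH = 2;

  int pickOutputMode(Format inputFormat) {
    if (MimeTypes.AUDIO_RAW.equals(inputFormat.sampleMimeType)) {
      return OUTPUT_MODE_PCM; // raw PCM input: play (and optionally process) as PCM
    }
    if (deviceSupportsOffload(inputFormat)) {
      return OUTPUT_MODE_OFFLOAD; // dedicated hardware decodes and plays: low power
    }
    if (deviceSupportsPassthrough(inputFormat)) {
      return OUTPUT_MODE_PASSTHROUGH; // send the encoded bitstream straight to the HW
    }
    return OUTPUT_MODE_PCM; // otherwise decode to PCM in software
  }

  // Hypothetical capability checks; not real ExoPlayer methods.
  private boolean deviceSupportsOffload(Format format) { return false; }
  private boolean deviceSupportsPassthrough(Format format) { return false; }
}
```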

There is also tunnel mode (tunneling), a special playback path that uses AV sync headers to keep audio and video aligned. It is not an independent output mode but a feature that can be enabled on top of the modes above. In DefaultAudioSink we can see the implementation that writes AV sync headers in tunnel mode:

```java
private int writeNonBlockingWithAvSync(
    AudioTrack audioTrack, ByteBuffer buffer, int size, long presentationTimeUs) {
  if (Util.SDK_INT >= 26) {
    // The underlying platform AudioTrack writes AV sync headers directly.
    return audioTrack.write(
        buffer, size, AudioTrack.WRITE_NON_BLOCKING, presentationTimeUs * 1000);
  }
  if (avSyncHeader == null) {
    avSyncHeader = ByteBuffer.allocate(16);
    avSyncHeader.order(ByteOrder.BIG_ENDIAN);
    avSyncHeader.putInt(0x55550001);
  }
  if (bytesUntilNextAvSync == 0) {
    avSyncHeader.putInt(4, size);
    avSyncHeader.putLong(8, presentationTimeUs * 1000);
    avSyncHeader.position(0);
    bytesUntilNextAvSync = size;
  }
  // ...
}
```

DefaultAudioSink.java:1890-1907
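
The pre-API-26 branch above also documents the header layout itself: 16 big-endian bytes holding a magic marker, the payload size, and the presentation time converted from microseconds to nanoseconds. As a self-contained illustration (class and method names here are mine, not ExoPlayer's):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

final class AvSyncHeaders {
  /** Builds the 16-byte AV sync header that precedes each audio chunk on API < 26. */
  static ByteBuffer buildAvSyncHeader(int chunkSizeBytes, long presentationTimeUs) {
    ByteBuffer header = ByteBuffer.allocate(16).order(ByteOrder.BIG_ENDIAN);
    header.putInt(0x55550001);                 // magic marker recognized by the platform
    header.putInt(chunkSizeBytes);             // size of the audio payload that follows
    header.putLong(presentationTimeUs * 1000); // microseconds -> nanoseconds
    header.flip();                             // ready to be written before the payload
    return header;
  }
}
```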

The system chooses an appropriate mode based on the input audio format, the device's hardware capabilities, the application's configuration, and performance requirements. Together these modes form ExoPlayer's flexible audio processing architecture, able to adapt to a wide range of playback scenarios and device capabilities. There are also features such as silence skipping and speed adjustment, but those belong to audio processing rather than being output modes.
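
As an example of the application-configuration factor: an app opts in to tunneling through track selection parameters, and ExoPlayer falls back to regular playback if the device or format cannot tunnel. A sketch assuming Media3's DefaultTrackSelector API:

```java
import android.content.Context;
import androidx.media3.exoplayer.ExoPlayer;
import androidx.media3.exoplayer.trackselection.DefaultTrackSelector;

final class PlayerFactory {
  /** Builds a player with tunneling requested (ignored where unsupported). */
  static ExoPlayer buildTunnelingPlayer(Context context) {
    DefaultTrackSelector trackSelector = new DefaultTrackSelector(context);
    trackSelector.setParameters(
        trackSelector.buildUponParameters().setTunnelingEnabled(true).build());
    return new ExoPlayer.Builder(context).setTrackSelector(trackSelector).build();
  }
}
```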

Now, on to audio-video synchronization.

ExoPlayer uses the audio track as the master clock. MediaCodecAudioRenderer implements the MediaClock interface and is responsible for providing an accurate playback position:

```java
@Override
public long getPositionUs() {
  if (getState() == STATE_STARTED) {
    updateCurrentPosition();
  }
  return currentPositionUs;
}
```

MediaCodecAudioRenderer.java:767-773
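
Conceptually, every other renderer measures itself against whatever position this clock reports. A minimal sketch of the idea (not ExoPlayer's actual playback loop; names are illustrative):

```java
// The audio renderer reports the position that has actually been played out;
// video measures how early or late each frame is relative to that position.
interface SimpleMediaClock {
  long getPositionUs();
}

final class VideoFollower {
  private final SimpleMediaClock audioClock;

  VideoFollower(SimpleMediaClock audioClock) {
    this.audioClock = audioClock;
  }

  /** How early (positive) or late (negative) a frame is, in microseconds. */
  long earlyUs(long framePresentationTimeUs) {
    return framePresentationTimeUs - audioClock.getPositionUs();
  }
}
```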

The playback position is obtained through AudioSink's getCurrentPositionUs method; this is the foundation of the synchronization mechanism:

```java
private void updateCurrentPosition() {
  long newCurrentPositionUs = audioSink.getCurrentPositionUs(isEnded());
  if (newCurrentPositionUs != AudioSink.CURRENT_POSITION_NOT_SET) {
    currentPositionUs =
        allowPositionDiscontinuity
            ? newCurrentPositionUs
            : max(currentPositionUs, newCurrentPositionUs);
    allowPositionDiscontinuity = false;
  }
}
```

MediaCodecAudioRenderer.java:1062-1071
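
Note the max() clamp: during normal playback the reported position never moves backwards even if successive audio sink readings jitter slightly, while allowPositionDiscontinuity, set when the renderer's position is reset (for example after a seek), permits one genuine jump.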

AudioTrackPositionTracker is responsible for precisely tracking the audio playback position via AudioTrack's getPlaybackHeadPosition() and getTimestamp() methods:

```java
/**
 * Wraps an {@link AudioTrack}, exposing a position based on {@link
 * AudioTrack#getPlaybackHeadPosition()} and {@link AudioTrack#getTimestamp(AudioTimestamp)}.
 */
/* package */ final class AudioTrackPositionTracker {
```

AudioTrackPositionTracker.java:41-51
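
Roughly, the tracker prefers the hardware timestamp and extrapolates from it, falling back to the playback head position otherwise. An illustrative, simplified combination of the two sources (the real tracker adds smoothing, drift checks, and pause/wraparound handling on top of this):

```java
import android.media.AudioTimestamp;
import android.media.AudioTrack;

final class SimplePositionEstimator {
  private final AudioTrack audioTrack;
  private final int sampleRate;
  private final AudioTimestamp audioTimestamp = new AudioTimestamp();

  SimplePositionEstimator(AudioTrack audioTrack, int sampleRate) {
    this.audioTrack = audioTrack;
    this.sampleRate = sampleRate;
  }

  long getPositionUs() {
    if (audioTrack.getTimestamp(audioTimestamp)) {
      // Extrapolate from the last hardware timestamp: its frame position plus
      // the real time elapsed since the timestamp was taken.
      long timestampPositionUs = (audioTimestamp.framePosition * 1_000_000L) / sampleRate;
      long elapsedSinceTimestampUs = (System.nanoTime() - audioTimestamp.nanoTime) / 1000;
      return timestampPositionUs + elapsedSinceTimestampUs;
    }
    // Fallback: frames handed to the audio hardware so far (unsigned 32-bit counter).
    long playbackHeadFrames = 0xFFFFFFFFL & audioTrack.getPlaybackHeadPosition();
    return (playbackHeadFrames * 1_000_000L) / sampleRate;
  }
}
```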

VideoFrameReleaseControl compares each video frame's PTS (presentation timestamp) against the current audio position to decide when to release the frame:

```java
@VideoFrameReleaseControl.FrameReleaseAction
int frameReleaseAction =
    videoFrameReleaseControl.getFrameReleaseAction(
        bufferPresentationTimeUs,
        positionUs,
        elapsedRealtimeUs,
        getOutputStreamStartPositionUs(),
        isDecodeOnlyBuffer,
        isLastBuffer,
        videoFrameReleaseInfo);
```

MediaCodecVideoRenderer.java:721-729

Based on the returned frameReleaseAction, the video renderer decides whether to render the frame immediately, render it on schedule, drop it, or skip it:

```java
switch (frameReleaseAction) {
  case VideoFrameReleaseControl.FRAME_RELEASE_IMMEDIATELY:
    long releaseTimeNs = getClock().nanoTime();
    notifyFrameMetadataListener(presentationTimeUs, releaseTimeNs, format);
    renderOutputBuffer(codec, bufferIndex, presentationTimeUs, releaseTimeNs);
    updateVideoFrameProcessingOffsetCounters(videoFrameReleaseInfo.getEarlyUs());
    return true;
  case VideoFrameReleaseControl.FRAME_RELEASE_SKIP:
    skipOutputBuffer(codec, bufferIndex, presentationTimeUs);
    updateVideoFrameProcessingOffsetCounters(videoFrameReleaseInfo.getEarlyUs());
    return true;
  case VideoFrameReleaseControl.FRAME_RELEASE_DROP:
    dropOutputBuffer(codec, bufferIndex, presentationTimeUs);
    updateVideoFrameProcessingOffsetCounters(videoFrameReleaseInfo.getEarlyUs());
    return true;
  // ...
```

MediaCodecVideoRenderer.java:730-744
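
Skipping and dropping both release the buffer back to the codec without rendering it, but they are reported differently: a dropped frame is one discarded because rendering fell behind, and it shows up in the dropped-frame counters exposed to analytics, whereas a skipped frame is one that was never meant to be displayed in the first place.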

In tunneled playback, DefaultAudioSink uses AV sync headers to keep audio and video in sync, as shown in the writeNonBlockingWithAvSync excerpt earlier (DefaultAudioSink.java:1890-1907).

The system determines how to handle each video frame by computing its "early time": the difference between when the frame should be played and the current playback position:

```java
private long calculateEarlyTimeUs(
    long positionUs, long elapsedRealtimeUs, long framePresentationTimeUs) {
  // Calculate how early we are. In other words, the realtime duration that needs to elapse whilst
  // the renderer is started before the frame should be rendered. A negative value means that
  // we're already late.
  // Note: Use of double rather than float is intentional for accuracy in the calculations below.
  long earlyUs = (long) ((framePresentationTimeUs - positionUs) / (double) playbackSpeed);
  if (started) {
    // Account for the elapsed time since the start of this iteration of the rendering loop.
    earlyUs -= Util.msToUs(clock.elapsedRealtime()) - elapsedRealtimeUs;
  }

  return earlyUs;
}
```

VideoFrameReleaseControl.java:458-471
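
The sign and magnitude of earlyUs then drive the release decision. The thresholds below are illustrative only (ExoPlayer's actual values live in VideoFrameReleaseControl and the renderer's should-drop/should-force-render logic), but they show the shape of the mapping:

```java
final class ReleaseDecision {
  // Local mirrors of the release actions used in the switch statement earlier.
  static final int FRAME_RELEASE_IMMEDIATELY = 0;
  static final int FRAME_RELEASE_TRY_AGAIN_LATER = 1;
  static final int FRAME_RELEASE_DROP = 2;

  static int decide(long earlyUs, boolean isLastBuffer) {
    if (earlyUs < -30_000 && !isLastBuffer) {
      return FRAME_RELEASE_DROP; // more than ~30 ms late: drop to catch up
    }
    if (earlyUs < 50_000) {
      return FRAME_RELEASE_IMMEDIATELY; // within ~50 ms: render now
    }
    return FRAME_RELEASE_TRY_AGAIN_LATER; // too early: re-evaluate on a later loop pass
  }
}
```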

ExoPlayer's audio-video synchronization uses audio as the master clock and decides how to handle each video frame by precisely computing the time difference between the frame and the audio position. The system picks an appropriate synchronization strategy for the playback scenario (normal playback, tunnel mode, and so on) and handles special cases such as playback speed changes and seeks. The overall mechanism balances accuracy against performance, delivering smooth audio-video sync across a wide range of devices and playback conditions.