public abstract class DecoderAudioRenderer<T extends Decoder> extends BaseRenderer implements MediaClock

java.lang.Object
   ↳ androidx.media3.exoplayer.BaseRenderer
      ↳ androidx.media3.exoplayer.audio.DecoderAudioRenderer<T>

Gradle dependencies

implementation group: 'androidx.media3', name: 'media3-exoplayer', version: '1.0.0-alpha03'

  • groupId: androidx.media3
  • artifactId: media3-exoplayer
  • version: 1.0.0-alpha03

The artifact androidx.media3:media3-exoplayer:1.0.0-alpha03 is located in the Google Maven repository (https://maven.google.com/).

Overview

Decodes and renders audio using a Decoder.

This renderer accepts the following messages sent via ExoPlayer.createMessage(Target) on the playback thread (a usage sketch follows the list):

  • MSG_SET_VOLUME to set the volume. The message payload should be a Float with 0 being silence and 1 being unity gain.
  • MSG_SET_AUDIO_ATTRIBUTES to set the audio attributes. The message payload should be an AudioAttributes instance that will configure the underlying audio track.
  • MSG_SET_AUX_EFFECT_INFO to set the auxiliary effect. The message payload should be an AuxEffectInfo instance that will configure the underlying audio track.
  • MSG_SET_SKIP_SILENCE_ENABLED to enable or disable skipping silences. The message payload should be a Boolean.
  • MSG_SET_AUDIO_SESSION_ID to set the audio session ID. The message payload should be a session ID Integer that will be attached to the underlying audio track.
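
As an illustration of the message plumbing above, the following is a hedged sketch of setting the renderer's volume from application code. It assumes an ExoPlayer instance and the index of the audio renderer are already known; the message-type constant is Renderer.MSG_SET_VOLUME, and the helper class and method names are illustrative.

import androidx.media3.exoplayer.ExoPlayer;
import androidx.media3.exoplayer.Renderer;

public final class RendererMessageExample {

  /** Sends MSG_SET_VOLUME to the renderer at {@code audioRendererIndex}. */
  public static void setRendererVolume(ExoPlayer player, int audioRendererIndex, float volume) {
    Renderer audioRenderer = player.getRenderer(audioRendererIndex);
    player
        .createMessage(audioRenderer)
        .setType(Renderer.MSG_SET_VOLUME)
        .setPayload(volume) // Float payload: 0 is silence, 1 is unity gain.
        .send();
  }
}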

Summary

Constructors
public DecoderAudioRenderer()

public DecoderAudioRenderer(Handler eventHandler, AudioRendererEventListener eventListener, AudioCapabilities audioCapabilities, AudioProcessor... audioProcessors)

public DecoderAudioRenderer(Handler eventHandler, AudioRendererEventListener eventListener, AudioProcessor... audioProcessors)

public DecoderAudioRenderer(Handler eventHandler, AudioRendererEventListener eventListener, AudioSink audioSink)

Methods
protected DecoderReuseEvaluation canReuseDecoder(java.lang.String decoderName, Format oldFormat, Format newFormat)

Evaluates whether the existing decoder can be reused for a new Format.

protected abstract T createDecoder(Format format, CryptoConfig cryptoConfig)

Creates a decoder for the given format.

public void experimentalSetEnableKeepAudioTrackOnSeek(boolean enableKeepAudioTrackOnSeek)

Sets whether to enable the experimental feature that keeps and flushes the AudioTrack when a seek occurs, as opposed to releasing and reinitialising it.

public MediaClock getMediaClock()

protected abstract Format getOutputFormat(T decoder)

Returns the format of audio buffers output by the decoder.

public PlaybackParameters getPlaybackParameters()

public long getPositionUs()

protected final int getSinkFormatSupport(Format format)

Returns the level of support that the renderer's AudioSink provides for a given Format.

public void handleMessage(int messageType, java.lang.Object message)

public boolean isEnded()

public boolean isReady()

protected void onDisabled()

Called when the renderer is disabled.

protected void onEnabled(boolean joining, boolean mayRenderStartOfStream)

Called when the renderer is enabled.

protected void onPositionDiscontinuity()

See AudioSink.Listener.onPositionDiscontinuity().

protected void onPositionReset(long positionUs, boolean joining)

Called when the position is reset.

protected void onQueueInputBuffer(DecoderInputBuffer buffer)

protected void onStarted()

Called when the renderer is started.

protected void onStopped()

Called when the renderer is stopped.

public void render(long positionUs, long elapsedRealtimeUs)

public void setPlaybackParameters(PlaybackParameters playbackParameters)

protected final boolean sinkSupportsFormat(Format format)

Returns whether the renderer's AudioSink supports a given Format.

public final int supportsFormat(Format format)

protected abstract int supportsFormatInternal(Format format)

Returns the C.FormatSupport for the given Format.

from BaseRenderer: createRendererException, createRendererException, disable, enable, getCapabilities, getConfiguration, getFormatHolder, getIndex, getLastResetPositionUs, getPlayerId, getReadingPositionUs, getState, getStream, getStreamFormats, getTrackType, hasReadStreamToEnd, init, isCurrentStreamFinal, isSourceReady, maybeThrowStreamError, onReset, onStreamChanged, readSource, replaceStream, reset, resetPosition, setCurrentStreamFinal, skipSource, start, stop, supportsMixedMimeTypeAdaptation
from java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Constructors

public DecoderAudioRenderer()

public DecoderAudioRenderer(Handler eventHandler, AudioRendererEventListener eventListener, AudioProcessor... audioProcessors)

Parameters:

eventHandler: A handler to use when delivering events to eventListener. May be null if delivery of events is not required.
eventListener: A listener of events. May be null if delivery of events is not required.
audioProcessors: Optional AudioProcessors that will process audio before output.

public DecoderAudioRenderer(Handler eventHandler, AudioRendererEventListener eventListener, AudioCapabilities audioCapabilities, AudioProcessor... audioProcessors)

Parameters:

eventHandler: A handler to use when delivering events to eventListener. May be null if delivery of events is not required.
eventListener: A listener of events. May be null if delivery of events is not required.
audioCapabilities: The audio capabilities for playback on this device. Use AudioCapabilities.DEFAULT_AUDIO_CAPABILITIES if default capabilities (no encoded audio passthrough support) should be assumed.
audioProcessors: Optional AudioProcessors that will process audio before output.

public DecoderAudioRenderer(Handler eventHandler, AudioRendererEventListener eventListener, AudioSink audioSink)

Parameters:

eventHandler: A handler to use when delivering events to eventListener. May be null if delivery of events is not required.
eventListener: A listener of events. May be null if delivery of events is not required.
audioSink: The sink to which audio will be output.
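
DecoderAudioRenderer is abstract, so these constructors are only reached via super(...) calls from a subclass. The following is a minimal, hedged sketch of such a subclass. MyAudioDecoder is a hypothetical Decoder<DecoderInputBuffer, SimpleDecoderOutputBuffer, DecoderException> implementation, not part of media3, and the subclass is declared abstract so the snippet stands alone; a concrete subclass must also implement supportsFormatInternal, createDecoder and getOutputFormat, which are sketched under their method entries below.

import android.os.Handler;
import androidx.annotation.Nullable;
import androidx.media3.exoplayer.audio.AudioRendererEventListener;
import androidx.media3.exoplayer.audio.AudioSink;
import androidx.media3.exoplayer.audio.DecoderAudioRenderer;
import androidx.media3.exoplayer.audio.DefaultAudioSink;

public abstract class MyAudioRenderer extends DecoderAudioRenderer<MyAudioDecoder> {

  /** Mirrors the (eventHandler, eventListener, audioSink) constructor above. */
  protected MyAudioRenderer(
      @Nullable Handler eventHandler,
      @Nullable AudioRendererEventListener eventListener,
      AudioSink audioSink) {
    super(eventHandler, eventListener, audioSink);
  }

  /** Convenience constructor that falls back to a DefaultAudioSink. */
  protected MyAudioRenderer(
      @Nullable Handler eventHandler,
      @Nullable AudioRendererEventListener eventListener) {
    this(eventHandler, eventListener, new DefaultAudioSink.Builder().build());
  }
}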

Methods

public void experimentalSetEnableKeepAudioTrackOnSeek(boolean enableKeepAudioTrackOnSeek)

Sets whether to enable the experimental feature that keeps and flushes the AudioTrack when a seek occurs, as opposed to releasing and reinitialising it. Off by default.

This method is experimental, and will be renamed or removed in a future release.

Parameters:

enableKeepAudioTrackOnSeek: Whether to keep the AudioTrack on seek.

public MediaClock getMediaClock()

public final int supportsFormat(Format format)

protected abstract int supportsFormatInternal(Format format)

Returns the C.FormatSupport for the given Format.

Parameters:

format: The format, which has an audio Format.sampleMimeType.

Returns:

The C.FormatSupport for this Format.
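
For example, continuing the hypothetical MyAudioRenderer subclass sketched under Constructors, an implementation might reject non-matching MIME types, check that the sink accepts the decoded PCM via sinkSupportsFormat, and decline DRM-protected content. This is a hedged sketch: the Opus MIME type and 16-bit output are assumptions about the hypothetical decoder, while C, Format, MimeTypes and Util come from androidx.media3.common(.util).

@Override
protected @C.FormatSupport int supportsFormatInternal(Format format) {
  if (!MimeTypes.AUDIO_OPUS.equals(format.sampleMimeType)) {
    // Not a format the hypothetical decoder understands at all.
    return C.FORMAT_UNSUPPORTED_TYPE;
  }
  if (!sinkSupportsFormat(
      Util.getPcmFormat(C.ENCODING_PCM_16BIT, format.channelCount, format.sampleRate))) {
    // The decoder could decode it, but the AudioSink cannot play the decoded PCM.
    return C.FORMAT_UNSUPPORTED_SUBTYPE;
  }
  if (format.cryptoType != C.CRYPTO_TYPE_NONE) {
    // This hypothetical decoder does not handle encrypted content.
    return C.FORMAT_UNSUPPORTED_DRM;
  }
  return C.FORMAT_HANDLED;
}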

protected final boolean sinkSupportsFormat(Format format)

Returns whether the renderer's AudioSink supports a given Format.

See also: AudioSink.supportsFormat(Format)

protected final int getSinkFormatSupport(Format format)

Returns the level of support that the renderer's AudioSink provides for a given Format.

See also: AudioSink.getFormatSupport(Format)
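
A hedged sketch of one way a subclass might use this, again inside the hypothetical MyAudioRenderer: if the sink supports float PCM directly, ask the decoder for float output, otherwise fall back to 16-bit. The helper method itself is illustrative; Util.getPcmFormat and the AudioSink.SinkFormatSupport constants come from media3.

/** Chooses the PCM encoding the hypothetical decoder should output. */
private @C.PcmEncoding int chooseOutputEncoding(Format inputFormat) {
  Format floatPcmFormat =
      Util.getPcmFormat(C.ENCODING_PCM_FLOAT, inputFormat.channelCount, inputFormat.sampleRate);
  return getSinkFormatSupport(floatPcmFormat) == AudioSink.SINK_FORMAT_SUPPORTED_DIRECTLY
      ? C.ENCODING_PCM_FLOAT
      : C.ENCODING_PCM_16BIT;
}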

public void render(long positionUs, long elapsedRealtimeUs)

protected void onPositionDiscontinuity()

See AudioSink.Listener.onPositionDiscontinuity().

protected abstract T createDecoder(Format format, CryptoConfig cryptoConfig)

Creates a decoder for the given format.

Parameters:

format: The format for which a decoder is required.
cryptoConfig: The CryptoConfig object required for decoding encrypted content. May be null and can be ignored if the decoder does not handle encrypted content.

Returns:

The decoder.
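
Continuing the hypothetical MyAudioRenderer, a hedged sketch of an override. MyAudioDecoder and its constructor arguments are placeholders rather than media3 APIs, and the buffer counts are arbitrary illustrative values.

@Override
protected MyAudioDecoder createDecoder(Format format, @Nullable CryptoConfig cryptoConfig)
    throws DecoderException {
  // cryptoConfig is ignored because this hypothetical decoder does not support encrypted
  // content (supportsFormatInternal already reports FORMAT_UNSUPPORTED_DRM for it).
  return new MyAudioDecoder(
      /* numInputBuffers= */ 16,
      /* numOutputBuffers= */ 16,
      /* initializationData= */ format.initializationData);
}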

protected abstract Format getOutputFormat(T decoder)

Returns the format of audio buffers output by the decoder. Will not be called until the first output buffer has been dequeued, so the decoder may use input data to determine the format.

Parameters:

decoder: The decoder.
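
A hedged sketch for the hypothetical MyAudioDecoder, which is assumed to decode to 16-bit PCM; getChannelCount() and getSampleRate() are assumed accessors on that hypothetical decoder, populated once the first buffer has been decoded. Format.Builder, MimeTypes.AUDIO_RAW and C.ENCODING_PCM_16BIT are media3 APIs.

@Override
protected Format getOutputFormat(MyAudioDecoder decoder) {
  return new Format.Builder()
      .setSampleMimeType(MimeTypes.AUDIO_RAW)
      .setChannelCount(decoder.getChannelCount()) // Hypothetical accessor.
      .setSampleRate(decoder.getSampleRate()) // Hypothetical accessor.
      .setPcmEncoding(C.ENCODING_PCM_16BIT)
      .build();
}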

protected DecoderReuseEvaluation canReuseDecoder(java.lang.String decoderName, Format oldFormat, Format newFormat)

Evaluates whether the existing decoder can be reused for a new Format.

The default implementation does not allow decoder reuse.

Parameters:

decoderName: The name of the decoder.
oldFormat: The previous format.
newFormat: The new format.

Returns:

The result of the evaluation.
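
A hedged sketch of an override that keeps the hypothetical MyAudioDecoder across a format change when the codec initialization data is unchanged, falling back to the default "no reuse" evaluation otherwise. Format.initializationDataEquals and the DecoderReuseEvaluation constants come from media3; the reuse condition itself is an assumption about the hypothetical decoder.

@Override
protected DecoderReuseEvaluation canReuseDecoder(
    String decoderName, Format oldFormat, Format newFormat) {
  if (oldFormat.initializationDataEquals(newFormat)) {
    // Same codec configuration: keep the existing decoder as-is.
    return new DecoderReuseEvaluation(
        decoderName,
        oldFormat,
        newFormat,
        DecoderReuseEvaluation.REUSE_RESULT_YES_WITHOUT_RECONFIGURATION,
        /* discardReasons= */ 0);
  }
  // Different configuration: defer to the default evaluation, which disallows reuse.
  return super.canReuseDecoder(decoderName, oldFormat, newFormat);
}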

public boolean isEnded()

public boolean isReady()

public long getPositionUs()

public void setPlaybackParameters(PlaybackParameters playbackParameters)

public PlaybackParameters getPlaybackParameters()

protected void onEnabled(boolean joining, boolean mayRenderStartOfStream)

Called when the renderer is enabled.

The default implementation is a no-op.

Parameters:

joining: Whether this renderer is being enabled to join an ongoing playback.
mayRenderStartOfStream: Whether this renderer is allowed to render the start of the stream even if the state is not Renderer.STATE_STARTED yet.

protected void onPositionReset(long positionUs, boolean joining)

Called when the position is reset. This occurs when the renderer is enabled after BaseRenderer.onStreamChanged(Format[], long, long) has been called, and also when a position discontinuity is encountered.

After a position reset, the renderer's SampleStream is guaranteed to provide samples starting from a key frame.

The default implementation is a no-op.

Parameters:

positionUs: The new playback position in microseconds.
joining: Whether this renderer is being enabled to join an ongoing playback.

protected void onStarted()

Called when the renderer is started.

The default implementation is a no-op.

protected void onStopped()

Called when the renderer is stopped.

The default implementation is a no-op.

protected void onDisabled()

Called when the renderer is disabled.

The default implementation is a no-op.

public void handleMessage(int messageType, java.lang.Object message)

protected void onQueueInputBuffer(DecoderInputBuffer buffer)

Source

/*
 * Copyright (C) 2016 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package androidx.media3.exoplayer.audio;

import static androidx.media3.exoplayer.DecoderReuseEvaluation.DISCARD_REASON_DRM_SESSION_CHANGED;
import static androidx.media3.exoplayer.DecoderReuseEvaluation.DISCARD_REASON_REUSE_NOT_IMPLEMENTED;
import static androidx.media3.exoplayer.DecoderReuseEvaluation.REUSE_RESULT_NO;
import static androidx.media3.exoplayer.source.SampleStream.FLAG_REQUIRE_FORMAT;
import static com.google.common.base.MoreObjects.firstNonNull;
import static java.lang.Math.max;
import static java.lang.annotation.ElementType.TYPE_USE;

import android.os.Handler;
import android.os.SystemClock;
import androidx.annotation.CallSuper;
import androidx.annotation.IntDef;
import androidx.annotation.Nullable;
import androidx.media3.common.AudioAttributes;
import androidx.media3.common.AuxEffectInfo;
import androidx.media3.common.C;
import androidx.media3.common.Format;
import androidx.media3.common.MimeTypes;
import androidx.media3.common.PlaybackException;
import androidx.media3.common.PlaybackParameters;
import androidx.media3.common.util.Assertions;
import androidx.media3.common.util.Log;
import androidx.media3.common.util.TraceUtil;
import androidx.media3.common.util.UnstableApi;
import androidx.media3.common.util.Util;
import androidx.media3.decoder.CryptoConfig;
import androidx.media3.decoder.Decoder;
import androidx.media3.decoder.DecoderException;
import androidx.media3.decoder.DecoderInputBuffer;
import androidx.media3.decoder.SimpleDecoderOutputBuffer;
import androidx.media3.exoplayer.BaseRenderer;
import androidx.media3.exoplayer.DecoderCounters;
import androidx.media3.exoplayer.DecoderReuseEvaluation;
import androidx.media3.exoplayer.ExoPlaybackException;
import androidx.media3.exoplayer.ExoPlayer;
import androidx.media3.exoplayer.FormatHolder;
import androidx.media3.exoplayer.MediaClock;
import androidx.media3.exoplayer.PlayerMessage.Target;
import androidx.media3.exoplayer.RendererCapabilities;
import androidx.media3.exoplayer.audio.AudioRendererEventListener.EventDispatcher;
import androidx.media3.exoplayer.audio.AudioSink.SinkFormatSupport;
import androidx.media3.exoplayer.drm.DrmSession;
import androidx.media3.exoplayer.drm.DrmSession.DrmSessionException;
import androidx.media3.exoplayer.source.SampleStream.ReadDataResult;
import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

/**
 * Decodes and renders audio using a {@link Decoder}.
 *
 * <p>This renderer accepts the following messages sent via {@link ExoPlayer#createMessage(Target)}
 * on the playback thread:
 *
 * <ul>
 *   <li>Message with type {@link #MSG_SET_VOLUME} to set the volume. The message payload should be
 *       a {@link Float} with 0 being silence and 1 being unity gain.
 *   <li>Message with type {@link #MSG_SET_AUDIO_ATTRIBUTES} to set the audio attributes. The
 *       message payload should be an {@link AudioAttributes} instance that will configure the
 *       underlying audio track.
 *   <li>Message with type {@link #MSG_SET_AUX_EFFECT_INFO} to set the auxiliary effect. The message
 *       payload should be an {@link AuxEffectInfo} instance that will configure the underlying
 *       audio track.
 *   <li>Message with type {@link #MSG_SET_SKIP_SILENCE_ENABLED} to enable or disable skipping
 *       silences. The message payload should be a {@link Boolean}.
 *   <li>Message with type {@link #MSG_SET_AUDIO_SESSION_ID} to set the audio session ID. The
 *       message payload should be a session ID {@link Integer} that will be attached to the
 *       underlying audio track.
 * </ul>
 */
@UnstableApi
public abstract class DecoderAudioRenderer<
        T extends
            Decoder<
                    DecoderInputBuffer,
                    ? extends SimpleDecoderOutputBuffer,
                    ? extends DecoderException>>
    extends BaseRenderer implements MediaClock {

  private static final String TAG = "DecoderAudioRenderer";

  @Documented
  @Retention(RetentionPolicy.SOURCE)
  @java.lang.annotation.Target(TYPE_USE)
  @IntDef({
    REINITIALIZATION_STATE_NONE,
    REINITIALIZATION_STATE_SIGNAL_END_OF_STREAM,
    REINITIALIZATION_STATE_WAIT_END_OF_STREAM
  })
  private @interface ReinitializationState {}
  /** The decoder does not need to be re-initialized. */
  private static final int REINITIALIZATION_STATE_NONE = 0;
  /**
   * The input format has changed in a way that requires the decoder to be re-initialized, but we
   * haven't yet signaled an end of stream to the existing decoder. We need to do so in order to
   * ensure that it outputs any remaining buffers before we release it.
   */
  private static final int REINITIALIZATION_STATE_SIGNAL_END_OF_STREAM = 1;
  /**
   * The input format has changed in a way that requires the decoder to be re-initialized, and we've
   * signaled an end of stream to the existing decoder. We're waiting for the decoder to output an
   * end of stream signal to indicate that it has output any remaining buffers before we release it.
   */
  private static final int REINITIALIZATION_STATE_WAIT_END_OF_STREAM = 2;

  private final EventDispatcher eventDispatcher;
  private final AudioSink audioSink;
  private final DecoderInputBuffer flagsOnlyBuffer;

  private DecoderCounters decoderCounters;
  private Format inputFormat;
  private int encoderDelay;
  private int encoderPadding;

  private boolean experimentalKeepAudioTrackOnSeek;

  @Nullable private T decoder;

  @Nullable private DecoderInputBuffer inputBuffer;
  @Nullable private SimpleDecoderOutputBuffer outputBuffer;
  @Nullable private DrmSession decoderDrmSession;
  @Nullable private DrmSession sourceDrmSession;

  private @ReinitializationState int decoderReinitializationState;
  private boolean decoderReceivedBuffers;
  private boolean audioTrackNeedsConfigure;

  private long currentPositionUs;
  private boolean allowFirstBufferPositionDiscontinuity;
  private boolean allowPositionDiscontinuity;
  private boolean inputStreamEnded;
  private boolean outputStreamEnded;

  public DecoderAudioRenderer() {
    this(/* eventHandler= */ null, /* eventListener= */ null);
  }

  /**
   * @param eventHandler A handler to use when delivering events to {@code eventListener}. May be
   *     null if delivery of events is not required.
   * @param eventListener A listener of events. May be null if delivery of events is not required.
   * @param audioProcessors Optional {@link AudioProcessor}s that will process audio before output.
   */
  public DecoderAudioRenderer(
      @Nullable Handler eventHandler,
      @Nullable AudioRendererEventListener eventListener,
      AudioProcessor... audioProcessors) {
    this(eventHandler, eventListener, /* audioCapabilities= */ null, audioProcessors);
  }

  /**
   * @param eventHandler A handler to use when delivering events to {@code eventListener}. May be
   *     null if delivery of events is not required.
   * @param eventListener A listener of events. May be null if delivery of events is not required.
   * @param audioCapabilities The audio capabilities for playback on this device. Use {@link
   *     AudioCapabilities#DEFAULT_AUDIO_CAPABILITIES} if default capabilities (no encoded audio
   *     passthrough support) should be assumed.
   * @param audioProcessors Optional {@link AudioProcessor}s that will process audio before output.
   */
  public DecoderAudioRenderer(
      @Nullable Handler eventHandler,
      @Nullable AudioRendererEventListener eventListener,
      AudioCapabilities audioCapabilities,
      AudioProcessor... audioProcessors) {
    this(
        eventHandler,
        eventListener,
        new DefaultAudioSink.Builder()
            .setAudioCapabilities( // For backward compatibility, null == default.
                firstNonNull(audioCapabilities, AudioCapabilities.DEFAULT_AUDIO_CAPABILITIES))
            .setAudioProcessors(audioProcessors)
            .build());
  }

  /**
   * @param eventHandler A handler to use when delivering events to {@code eventListener}. May be
   *     null if delivery of events is not required.
   * @param eventListener A listener of events. May be null if delivery of events is not required.
   * @param audioSink The sink to which audio will be output.
   */
  public DecoderAudioRenderer(
      @Nullable Handler eventHandler,
      @Nullable AudioRendererEventListener eventListener,
      AudioSink audioSink) {
    super(C.TRACK_TYPE_AUDIO);
    eventDispatcher = new EventDispatcher(eventHandler, eventListener);
    this.audioSink = audioSink;
    audioSink.setListener(new AudioSinkListener());
    flagsOnlyBuffer = DecoderInputBuffer.newNoDataInstance();
    decoderReinitializationState = REINITIALIZATION_STATE_NONE;
    audioTrackNeedsConfigure = true;
  }

  /**
   * Sets whether to enable the experimental feature that keeps and flushes the {@link
   * android.media.AudioTrack} when a seek occurs, as opposed to releasing and reinitialising. Off
   * by default.
   *
   * <p>This method is experimental, and will be renamed or removed in a future release.
   *
   * @param enableKeepAudioTrackOnSeek Whether to keep the {@link android.media.AudioTrack} on seek.
   */
  public void experimentalSetEnableKeepAudioTrackOnSeek(boolean enableKeepAudioTrackOnSeek) {
    this.experimentalKeepAudioTrackOnSeek = enableKeepAudioTrackOnSeek;
  }

  @Override
  @Nullable
  public MediaClock getMediaClock() {
    return this;
  }

  @Override
  public final @Capabilities int supportsFormat(Format format) {
    if (!MimeTypes.isAudio(format.sampleMimeType)) {
      return RendererCapabilities.create(C.FORMAT_UNSUPPORTED_TYPE);
    }
    @C.FormatSupport int formatSupport = supportsFormatInternal(format);
    if (formatSupport <= C.FORMAT_UNSUPPORTED_DRM) {
      return RendererCapabilities.create(formatSupport);
    }
    @TunnelingSupport
    int tunnelingSupport = Util.SDK_INT >= 21 ? TUNNELING_SUPPORTED : TUNNELING_NOT_SUPPORTED;
    return RendererCapabilities.create(formatSupport, ADAPTIVE_NOT_SEAMLESS, tunnelingSupport);
  }

  /**
   * Returns the {@link C.FormatSupport} for the given {@link Format}.
   *
   * @param format The format, which has an audio {@link Format#sampleMimeType}.
   * @return The {@link C.FormatSupport} for this {@link Format}.
   */
  protected abstract @C.FormatSupport int supportsFormatInternal(Format format);

  /**
   * Returns whether the renderer's {@link AudioSink} supports a given {@link Format}.
   *
   * @see AudioSink#supportsFormat(Format)
   */
  protected final boolean sinkSupportsFormat(Format format) {
    return audioSink.supportsFormat(format);
  }

  /**
   * Returns the level of support that the renderer's {@link AudioSink} provides for a given {@link
   * Format}.
   *
   * @see AudioSink#getFormatSupport(Format)
   */
  protected final @SinkFormatSupport int getSinkFormatSupport(Format format) {
    return audioSink.getFormatSupport(format);
  }

  @Override
  public void render(long positionUs, long elapsedRealtimeUs) throws ExoPlaybackException {
    if (outputStreamEnded) {
      try {
        audioSink.playToEndOfStream();
      } catch (AudioSink.WriteException e) {
        throw createRendererException(
            e, e.format, e.isRecoverable, PlaybackException.ERROR_CODE_AUDIO_TRACK_WRITE_FAILED);
      }
      return;
    }

    // Try and read a format if we don't have one already.
    if (inputFormat == null) {
      // We don't have a format yet, so try and read one.
      FormatHolder formatHolder = getFormatHolder();
      flagsOnlyBuffer.clear();
      @ReadDataResult int result = readSource(formatHolder, flagsOnlyBuffer, FLAG_REQUIRE_FORMAT);
      if (result == C.RESULT_FORMAT_READ) {
        onInputFormatChanged(formatHolder);
      } else if (result == C.RESULT_BUFFER_READ) {
        // End of stream read having not read a format.
        Assertions.checkState(flagsOnlyBuffer.isEndOfStream());
        inputStreamEnded = true;
        try {
          processEndOfStream();
        } catch (AudioSink.WriteException e) {
          throw createRendererException(
              e, /* format= */ null, PlaybackException.ERROR_CODE_AUDIO_TRACK_WRITE_FAILED);
        }
        return;
      } else {
        // We still don't have a format and can't make progress without one.
        return;
      }
    }

    // If we don't have a decoder yet, we need to instantiate one.
    maybeInitDecoder();

    if (decoder != null) {
      try {
        // Rendering loop.
        TraceUtil.beginSection("drainAndFeed");
        while (drainOutputBuffer()) {}
        while (feedInputBuffer()) {}
        TraceUtil.endSection();
      } catch (DecoderException e) {
        // Can happen with dequeueOutputBuffer, dequeueInputBuffer, queueInputBuffer
        Log.e(TAG, "Audio codec error", e);
        eventDispatcher.audioCodecError(e);
        throw createRendererException(e, inputFormat, PlaybackException.ERROR_CODE_DECODING_FAILED);
      } catch (AudioSink.ConfigurationException e) {
        throw createRendererException(
            e, e.format, PlaybackException.ERROR_CODE_AUDIO_TRACK_INIT_FAILED);
      } catch (AudioSink.InitializationException e) {
        throw createRendererException(
            e, e.format, e.isRecoverable, PlaybackException.ERROR_CODE_AUDIO_TRACK_INIT_FAILED);
      } catch (AudioSink.WriteException e) {
        throw createRendererException(
            e, e.format, e.isRecoverable, PlaybackException.ERROR_CODE_AUDIO_TRACK_WRITE_FAILED);
      }
      decoderCounters.ensureUpdated();
    }
  }

  /** See {@link AudioSink.Listener#onPositionDiscontinuity()}. */
  @CallSuper
  protected void onPositionDiscontinuity() {
    // We are out of sync so allow currentPositionUs to jump backwards.
    allowPositionDiscontinuity = true;
  }

  /**
   * Creates a decoder for the given format.
   *
   * @param format The format for which a decoder is required.
   * @param cryptoConfig The {@link CryptoConfig} object required for decoding encrypted content.
   *     May be null and can be ignored if decoder does not handle encrypted content.
   * @return The decoder.
   * @throws DecoderException If an error occurred creating a suitable decoder.
   */
  protected abstract T createDecoder(Format format, @Nullable CryptoConfig cryptoConfig)
      throws DecoderException;

  /**
   * Returns the format of audio buffers output by the decoder. Will not be called until the first
   * output buffer has been dequeued, so the decoder may use input data to determine the format.
   *
   * @param decoder The decoder.
   */
  protected abstract Format getOutputFormat(T decoder);

  /**
   * Evaluates whether the existing decoder can be reused for a new {@link Format}.
   *
   * <p>The default implementation does not allow decoder reuse.
   *
   * @param decoderName The name of the decoder.
   * @param oldFormat The previous format.
   * @param newFormat The new format.
   * @return The result of the evaluation.
   */
  protected DecoderReuseEvaluation canReuseDecoder(
      String decoderName, Format oldFormat, Format newFormat) {
    return new DecoderReuseEvaluation(
        decoderName, oldFormat, newFormat, REUSE_RESULT_NO, DISCARD_REASON_REUSE_NOT_IMPLEMENTED);
  }

  private boolean drainOutputBuffer()
      throws ExoPlaybackException, DecoderException, AudioSink.ConfigurationException,
          AudioSink.InitializationException, AudioSink.WriteException {
    if (outputBuffer == null) {
      outputBuffer = decoder.dequeueOutputBuffer();
      if (outputBuffer == null) {
        return false;
      }
      if (outputBuffer.skippedOutputBufferCount > 0) {
        decoderCounters.skippedOutputBufferCount += outputBuffer.skippedOutputBufferCount;
        audioSink.handleDiscontinuity();
      }
    }

    if (outputBuffer.isEndOfStream()) {
      if (decoderReinitializationState == REINITIALIZATION_STATE_WAIT_END_OF_STREAM) {
        // We're waiting to re-initialize the decoder, and have now processed all final buffers.
        releaseDecoder();
        maybeInitDecoder();
        // The audio track may need to be recreated once the new output format is known.
        audioTrackNeedsConfigure = true;
      } else {
        outputBuffer.release();
        outputBuffer = null;
        try {
          processEndOfStream();
        } catch (AudioSink.WriteException e) {
          throw createRendererException(
              e, e.format, e.isRecoverable, PlaybackException.ERROR_CODE_AUDIO_TRACK_WRITE_FAILED);
        }
      }
      return false;
    }

    if (audioTrackNeedsConfigure) {
      Format outputFormat =
          getOutputFormat(decoder)
              .buildUpon()
              .setEncoderDelay(encoderDelay)
              .setEncoderPadding(encoderPadding)
              .build();
      audioSink.configure(outputFormat, /* specifiedBufferSize= */ 0, /* outputChannels= */ null);
      audioTrackNeedsConfigure = false;
    }

    if (audioSink.handleBuffer(
        outputBuffer.data, outputBuffer.timeUs, /* encodedAccessUnitCount= */ 1)) {
      decoderCounters.renderedOutputBufferCount++;
      outputBuffer.release();
      outputBuffer = null;
      return true;
    }

    return false;
  }

  private boolean feedInputBuffer() throws DecoderException, ExoPlaybackException {
    if (decoder == null
        || decoderReinitializationState == REINITIALIZATION_STATE_WAIT_END_OF_STREAM
        || inputStreamEnded) {
      // We need to reinitialize the decoder or the input stream has ended.
      return false;
    }

    if (inputBuffer == null) {
      inputBuffer = decoder.dequeueInputBuffer();
      if (inputBuffer == null) {
        return false;
      }
    }

    if (decoderReinitializationState == REINITIALIZATION_STATE_SIGNAL_END_OF_STREAM) {
      inputBuffer.setFlags(C.BUFFER_FLAG_END_OF_STREAM);
      decoder.queueInputBuffer(inputBuffer);
      inputBuffer = null;
      decoderReinitializationState = REINITIALIZATION_STATE_WAIT_END_OF_STREAM;
      return false;
    }

    FormatHolder formatHolder = getFormatHolder();
    switch (readSource(formatHolder, inputBuffer, /* readFlags= */ 0)) {
      case C.RESULT_NOTHING_READ:
        return false;
      case C.RESULT_FORMAT_READ:
        onInputFormatChanged(formatHolder);
        return true;
      case C.RESULT_BUFFER_READ:
        if (inputBuffer.isEndOfStream()) {
          inputStreamEnded = true;
          decoder.queueInputBuffer(inputBuffer);
          inputBuffer = null;
          return false;
        }
        inputBuffer.flip();
        inputBuffer.format = inputFormat;
        onQueueInputBuffer(inputBuffer);
        decoder.queueInputBuffer(inputBuffer);
        decoderReceivedBuffers = true;
        decoderCounters.queuedInputBufferCount++;
        inputBuffer = null;
        return true;
      default:
        throw new IllegalStateException();
    }
  }

  private void processEndOfStream() throws AudioSink.WriteException {
    outputStreamEnded = true;
    audioSink.playToEndOfStream();
  }

  private void flushDecoder() throws ExoPlaybackException {
    if (decoderReinitializationState != REINITIALIZATION_STATE_NONE) {
      releaseDecoder();
      maybeInitDecoder();
    } else {
      inputBuffer = null;
      if (outputBuffer != null) {
        outputBuffer.release();
        outputBuffer = null;
      }
      decoder.flush();
      decoderReceivedBuffers = false;
    }
  }

  @Override
  public boolean isEnded() {
    return outputStreamEnded && audioSink.isEnded();
  }

  @Override
  public boolean isReady() {
    return audioSink.hasPendingData()
        || (inputFormat != null && (isSourceReady() || outputBuffer != null));
  }

  @Override
  public long getPositionUs() {
    if (getState() == STATE_STARTED) {
      updateCurrentPosition();
    }
    return currentPositionUs;
  }

  @Override
  public void setPlaybackParameters(PlaybackParameters playbackParameters) {
    audioSink.setPlaybackParameters(playbackParameters);
  }

  @Override
  public PlaybackParameters getPlaybackParameters() {
    return audioSink.getPlaybackParameters();
  }

  @Override
  protected void onEnabled(boolean joining, boolean mayRenderStartOfStream)
      throws ExoPlaybackException {
    decoderCounters = new DecoderCounters();
    eventDispatcher.enabled(decoderCounters);
    if (getConfiguration().tunneling) {
      audioSink.enableTunnelingV21();
    } else {
      audioSink.disableTunneling();
    }
    audioSink.setPlayerId(getPlayerId());
  }

  @Override
  protected void onPositionReset(long positionUs, boolean joining) throws ExoPlaybackException {
    if (experimentalKeepAudioTrackOnSeek) {
      audioSink.experimentalFlushWithoutAudioTrackRelease();
    } else {
      audioSink.flush();
    }

    currentPositionUs = positionUs;
    allowFirstBufferPositionDiscontinuity = true;
    allowPositionDiscontinuity = true;
    inputStreamEnded = false;
    outputStreamEnded = false;
    if (decoder != null) {
      flushDecoder();
    }
  }

  @Override
  protected void onStarted() {
    audioSink.play();
  }

  @Override
  protected void onStopped() {
    updateCurrentPosition();
    audioSink.pause();
  }

  @Override
  protected void onDisabled() {
    inputFormat = null;
    audioTrackNeedsConfigure = true;
    try {
      setSourceDrmSession(null);
      releaseDecoder();
      audioSink.reset();
    } finally {
      eventDispatcher.disabled(decoderCounters);
    }
  }

  @Override
  public void handleMessage(@MessageType int messageType, @Nullable Object message)
      throws ExoPlaybackException {
    switch (messageType) {
      case MSG_SET_VOLUME:
        audioSink.setVolume((Float) message);
        break;
      case MSG_SET_AUDIO_ATTRIBUTES:
        AudioAttributes audioAttributes = (AudioAttributes) message;
        audioSink.setAudioAttributes(audioAttributes);
        break;
      case MSG_SET_AUX_EFFECT_INFO:
        AuxEffectInfo auxEffectInfo = (AuxEffectInfo) message;
        audioSink.setAuxEffectInfo(auxEffectInfo);
        break;
      case MSG_SET_SKIP_SILENCE_ENABLED:
        audioSink.setSkipSilenceEnabled((Boolean) message);
        break;
      case MSG_SET_AUDIO_SESSION_ID:
        audioSink.setAudioSessionId((Integer) message);
        break;
      case MSG_SET_CAMERA_MOTION_LISTENER:
      case MSG_SET_CHANGE_FRAME_RATE_STRATEGY:
      case MSG_SET_SCALING_MODE:
      case MSG_SET_VIDEO_FRAME_METADATA_LISTENER:
      case MSG_SET_VIDEO_OUTPUT:
      case MSG_SET_WAKEUP_LISTENER:
      default:
        super.handleMessage(messageType, message);
        break;
    }
  }

  private void maybeInitDecoder() throws ExoPlaybackException {
    if (decoder != null) {
      return;
    }

    setDecoderDrmSession(sourceDrmSession);

    CryptoConfig cryptoConfig = null;
    if (decoderDrmSession != null) {
      cryptoConfig = decoderDrmSession.getCryptoConfig();
      if (cryptoConfig == null) {
        DrmSessionException drmError = decoderDrmSession.getError();
        if (drmError != null) {
          // Continue for now. We may be able to avoid failure if a new input format causes the
          // session to be replaced without it having been used.
        } else {
          // The drm session isn't open yet.
          return;
        }
      }
    }

    try {
      long codecInitializingTimestamp = SystemClock.elapsedRealtime();
      TraceUtil.beginSection("createAudioDecoder");
      decoder = createDecoder(inputFormat, cryptoConfig);
      TraceUtil.endSection();
      long codecInitializedTimestamp = SystemClock.elapsedRealtime();
      eventDispatcher.decoderInitialized(
          decoder.getName(),
          codecInitializedTimestamp,
          codecInitializedTimestamp - codecInitializingTimestamp);
      decoderCounters.decoderInitCount++;
    } catch (DecoderException e) {
      Log.e(TAG, "Audio codec error", e);
      eventDispatcher.audioCodecError(e);
      throw createRendererException(
          e, inputFormat, PlaybackException.ERROR_CODE_DECODER_INIT_FAILED);
    } catch (OutOfMemoryError e) {
      throw createRendererException(
          e, inputFormat, PlaybackException.ERROR_CODE_DECODER_INIT_FAILED);
    }
  }

  private void releaseDecoder() {
    inputBuffer = null;
    outputBuffer = null;
    decoderReinitializationState = REINITIALIZATION_STATE_NONE;
    decoderReceivedBuffers = false;
    if (decoder != null) {
      decoderCounters.decoderReleaseCount++;
      decoder.release();
      eventDispatcher.decoderReleased(decoder.getName());
      decoder = null;
    }
    setDecoderDrmSession(null);
  }

  private void setSourceDrmSession(@Nullable DrmSession session) {
    DrmSession.replaceSession(sourceDrmSession, session);
    sourceDrmSession = session;
  }

  private void setDecoderDrmSession(@Nullable DrmSession session) {
    DrmSession.replaceSession(decoderDrmSession, session);
    decoderDrmSession = session;
  }

  private void onInputFormatChanged(FormatHolder formatHolder) throws ExoPlaybackException {
    Format newFormat = Assertions.checkNotNull(formatHolder.format);
    setSourceDrmSession(formatHolder.drmSession);
    Format oldFormat = inputFormat;
    inputFormat = newFormat;
    encoderDelay = newFormat.encoderDelay;
    encoderPadding = newFormat.encoderPadding;

    if (decoder == null) {
      maybeInitDecoder();
      eventDispatcher.inputFormatChanged(inputFormat, /* decoderReuseEvaluation= */ null);
      return;
    }

    DecoderReuseEvaluation evaluation;
    if (sourceDrmSession != decoderDrmSession) {
      evaluation =
          new DecoderReuseEvaluation(
              decoder.getName(),
              oldFormat,
              newFormat,
              REUSE_RESULT_NO,
              DISCARD_REASON_DRM_SESSION_CHANGED);
    } else {
      evaluation = canReuseDecoder(decoder.getName(), oldFormat, newFormat);
    }

    if (evaluation.result == REUSE_RESULT_NO) {
      if (decoderReceivedBuffers) {
        // Signal end of stream and wait for any final output buffers before re-initialization.
        decoderReinitializationState = REINITIALIZATION_STATE_SIGNAL_END_OF_STREAM;
      } else {
        // There aren't any final output buffers, so release the decoder immediately.
        releaseDecoder();
        maybeInitDecoder();
        audioTrackNeedsConfigure = true;
      }
    }
    eventDispatcher.inputFormatChanged(inputFormat, evaluation);
  }

  protected void onQueueInputBuffer(DecoderInputBuffer buffer) {
    if (allowFirstBufferPositionDiscontinuity && !buffer.isDecodeOnly()) {
      // TODO: Remove this hack once we have a proper fix for [Internal: b/71876314].
      // Allow the position to jump if the first presentable input buffer has a timestamp that
      // differs significantly from what was expected.
      if (Math.abs(buffer.timeUs - currentPositionUs) > 500000) {
        currentPositionUs = buffer.timeUs;
      }
      allowFirstBufferPositionDiscontinuity = false;
    }
  }

  private void updateCurrentPosition() {
    long newCurrentPositionUs = audioSink.getCurrentPositionUs(isEnded());
    if (newCurrentPositionUs != AudioSink.CURRENT_POSITION_NOT_SET) {
      currentPositionUs =
          allowPositionDiscontinuity
              ? newCurrentPositionUs
              : max(currentPositionUs, newCurrentPositionUs);
      allowPositionDiscontinuity = false;
    }
  }

  private final class AudioSinkListener implements AudioSink.Listener {

    @Override
    public void onPositionDiscontinuity() {
      DecoderAudioRenderer.this.onPositionDiscontinuity();
    }

    @Override
    public void onPositionAdvancing(long playoutStartSystemTimeMs) {
      eventDispatcher.positionAdvancing(playoutStartSystemTimeMs);
    }

    @Override
    public void onUnderrun(int bufferSize, long bufferSizeMs, long elapsedSinceLastFeedMs) {
      eventDispatcher.underrun(bufferSize, bufferSizeMs, elapsedSinceLastFeedMs);
    }

    @Override
    public void onSkipSilenceEnabledChanged(boolean skipSilenceEnabled) {
      eventDispatcher.skipSilenceEnabledChanged(skipSilenceEnabled);
    }

    @Override
    public void onAudioSinkError(Exception audioSinkError) {
      Log.e(TAG, "Audio sink error", audioSinkError);
      eventDispatcher.audioSinkError(audioSinkError);
    }
  }
}