Initialization Strategies With Testcontainers For Integration Tests


Published: February 23, 2021 | Last Updated on March 9, 2021

Testcontainers offers several initialization strategies for our Docker containers when writing integration tests. Depending on the Docker image we use for our tests, we might have to perform additional setup steps. This includes adjusting the container configuration or populating data. With Testcontainers, we can tweak the container configuration either during runtime (executing commands inside the container) or before starting it. With this blog post, we'll look at several of these strategies to configure the Docker container for our integration tests.

Execute Commands Inside The Container With Testcontainers

The first strategy we'll take a look at is executing shell commands inside the container. As soon as our Docker container is up and running, we can initialize it from the inside with Testcontainers.

This .execInContainer() functionality is part of every container that implements the ContainerState interface. As the GenericContainer class implements this interface and acts as the base class for all custom container definitions, we can be certain that every container class has this method.
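To illustrate, even a plain GenericContainer exposes this method. A minimal sketch (the Redis image is an arbitrary example; the snippet assumes it runs inside a method that declares IOException and InterruptedException):

GenericContainer<?> redis = new GenericContainer<>(DockerImageName.parse("redis:6-alpine"))
  .withExposedPorts(6379);
redis.start();

// available on any container because GenericContainer implements ContainerState
Container.ExecResult result = redis.execInContainer("redis-cli", "ping");
System.out.println(result.getStdout()); // prints "PONG"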

Let's take a look at this setup strategy in action. We'll use the LocalStack module of Testcontainers as an example. In short, with LocalStack, we can spin up local AWS services that make testing Java applications that connect to, e.g., SQS or S3, a breeze.

When we start the LocalStack container, our local AWS cloud is empty. If our application under test, e.g., connects to an SQS queue or expects an S3 bucket to be present, our test will fail because the AWS resources are not available.

To prevent this early test failure, we can execute shell commands inside the container and ensure it's properly initialized before running the first test. As part of the LocalStack container, we have access to the awslocal executable (a wrapper around the AWS CLI) that we can use to create resources:

container.execInContainer("awslocal", "s3api", "create-bucket", "--bucket", "testcontainers");

With Testcontainers, we can even get information about the outcome of our command, as .execInContainer() returns an ExecResult object. This gives us access to the exit code, stdout, and stderr:

@Testcontainers
public class LocalStackExampleTest {

  @Container
  static LocalStackContainer container = new LocalStackContainer(DockerImageName.parse("localstack/localstack:0.12.6"))
    .withServices(Service.S3, Service.SQS);

  @BeforeAll
  static void initContainer() throws IOException, InterruptedException {
    container.execInContainer("awslocal", "s3api", "create-bucket", "--bucket", "testcontainers");

    ExecResult createQueue = container
      .execInContainer("awslocal", "sqs", "create-queue", "--queue-name", "testcontainers");

    System.out.println(createQueue.getExitCode());
    System.out.println(createQueue.getStdout());
  }

  // tests
}

The test above uses the JUnit Jupiter lifecycle method @BeforeAll to initialize the container before running any test. As we're also using the Testcontainers JUnit Jupiter extension for this test setup, this lifecycle method will only run once our container is ready to accept traffic.
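If we want the setup to fail fast, we can also verify the ExecResult before any test runs. A minimal sketch that could replace the lifecycle method above (the exception type and message are illustrative, not part of the original setup):

@BeforeAll
static void initContainer() throws IOException, InterruptedException {
  ExecResult createBucket = container
    .execInContainer("awslocal", "s3api", "create-bucket", "--bucket", "testcontainers");

  // abort the whole test class early if the initialization command failed
  if (createBucket.getExitCode() != 0) {
    throw new IllegalStateException("Bucket creation failed: " + createBucket.getStderr());
  }
}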

Mount Files Into Our Container For The Integration Test

With the next initialization strategy, we'll mount files or directories to a path inside our container. We can reference any file or directory from the classpath or the filesystem of our host.

As an example, let's see how we can initialize a Keycloak instance with pre-configured realm settings and a user base:

@Testcontainers
public class KeycloakExampleTest {

  @Container
  static GenericContainer<?> keycloak =
    new GenericContainer<>(DockerImageName.parse("jboss/keycloak:11.0.0"))
      .waitingFor(Wait.forHttp("/auth").forStatusCode(200))
      .withExposedPorts(8080)
      .withClasspathResourceMapping("/keycloak/dump.json", "/tmp/dump.json", BindMode.READ_ONLY)
      .withEnv(Map.of(
        "KEYCLOAK_USER", "testcontainers",
        "KEYCLOAK_PASSWORD", "testcontainers",
        "JAVA_OPTS", "-D ... -Dkeycloak.migration.file=/tmp/dump.json",
        "DB_VENDOR", "h2"
      ));

  // tests
}

In case we want to copy a file from our host system, we can use the following method:

.withCopyFileToContainer(MountableFile.forHostPath("/tmp/dump.json"), "/tmp/dump.json")

Some Docker images also expect initialization scripts in a pre-defined folder; these scripts run during the container's startup.

LocalStack, for example, will execute any script that is part of /docker-entrypoint-initaws.d during its bootstrap phase. With this approach, we can even simplify the initialization of our LocalStack instance by providing an init script (or multiple):

@Container
static LocalStackContainer container = new LocalStackContainer(DockerImageName.parse("localstack/localstack:0.12.6"))
  .withClasspathResourceMapping("/localstack", "/docker-entrypoint-initaws.d", BindMode.READ_ONLY)
  .withServices(Service.S3, Service.SQS)
  .waitingFor(Wait.forLogMessage(".*Initialized\\.\n", 1));

The container setup above maps the whole localstack folder to the relevant folder inside the LocalStack container. The default wait strategy of the LocalStack Testcontainers module waits for the Ready. log output. Unfortunately, this output might appear before all our scripts have finished their initialization. Hence, we override the default wait strategy and wait for a custom log message that our init scripts produce.

For this example, there's only one script (init.sh) with the following content:

#!/bin/sh
awslocal sqs create-queue --queue-name testcontainers
awslocal s3api create-bucket --bucket testcontainers
echo "Initialized."

Use An InitScript To Initialize Our Container

When writing integration tests that involve a database, we need a solution to initialize our database container. With Testcontainers, we can define an init script that is executed as part of the container initialization: .withInitScript().

This method is part of the JdbcDatabaseContainer. Any database container class that extends this class will have this functionality. The PostgreSQLContainer (part of the org.testcontainers:postgresql module), for example, is such a container class.

As part of the container setup, we can pass the location of a .sql script that we want to execute on container startup:

@Container
static PostgreSQLContainer<?> database = new PostgreSQLContainer<>("postgres:12")
  .withUsername("testcontainers")
  .withPassword("testcontainers")
  .withInitScript("database/INIT.sql") // inside src/test/resources
  .withDatabaseName("tescontainers");

Testcontainers will now run our INIT.sql script and execute its statements right after the database container is ready to accept traffic:

CREATE TABLE messages (
    ID      BIGSERIAL PRIMARY KEY,
    CONTENT VARCHAR(255) NOT NULL
);

INSERT INTO messages (content) VALUES ('Hello World From Init Script!');

We can use this strategy to apply the schema to our database or execute any other preparation task (e.g., add additional database users).
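As the init script is plain SQL, such a preparation task could look like the following sketch (the read-only user is a hypothetical example, not part of the original script):

-- hypothetical preparation task inside database/INIT.sql
CREATE USER readonly_user WITH PASSWORD 'testcontainers';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;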

Using Spring Boot and Spring Data JPA with Flyway or Liquibase, the database schema migration tool can create the database schema for us. With Spring's @Sql annotation, we can also execute specific .sql scripts for each test. This integration test setup using @DataJpaTest is described in detail as part of another article.
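As a rough sketch of that @Sql approach (the script path, repository, and test names are hypothetical, and the snippet assumes a Testcontainers-backed datasource):

@DataJpaTest
// keep the Testcontainers datasource instead of an in-memory replacement
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
class MessageRepositoryTest {

  @Autowired
  private MessageRepository messageRepository; // hypothetical Spring Data JPA repository

  @Test
  @Sql("/scripts/insert-messages.sql") // hypothetical script inside src/test/resources/scripts
  void shouldReadMessagesInsertedByScript() {
    assertFalse(messageRepository.findAll().isEmpty());
  }
}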

Use A Prepopulated Container With Testcontainers

Finally, let's take a look at the simplest initialization strategy for writing integration tests with Testcontainers. This time we're using a custom container that is already initialized and populated with data. As Testcontainers can manage any Docker container's lifecycle (the only requirement is access to the Docker image), we're not restricted to official container images from Docker Hub. We can also manage the lifecycle of a container built from a custom Docker image.

Let's revisit the database example from the previous section. Any additional initialization step after starting the container takes extra time. Depending on the size of the database schema and the number of initialization steps, this can greatly impact our build time. Besides, most integration tests use an (almost) empty database and rarely simulate the actual production load.

For load testing scenarios or when trying to reproduce a critical bug, it's beneficial to use a production-like setup. We can either take a subset of anonymized data from production or create it ourselves.

What's left is to initialize the database when creating the Docker image by, e.g., extending an official database image. For our PostgreSQL example, we can then use this custom image for our integration tests:

@Container
static PostgreSQLContainer<?> database = new PostgreSQLContainer<>("custom-postgres:1.0.3");

This can be the fastest way to work with an initialized container for our integration tests. However, it involves preparation and effort whenever we have to adapt the Docker image because of schema changes.

You can find further Testcontainers-related tips & tricks in other articles on this blog.

As part of the Testing Spring Boot Applications Masterclass, you'll learn how to use Testcontainers when writing integration and end-to-end tests for real-world applications. The course application uses the following tech stack: Java 14, Spring Boot 2.4, React, TypeScript, AWS, etc.

The source code for the different initialization strategies with Testcontainers for writing efficient integration tests is available on GitHub.

Have fun initializing your container with Testcontainers,

Philip

