
Running DADI API in a Container

In this article, the next service we’ll containerize is DADI API, our high-performance RESTful API layer.

🔗Part II: Running DADI API in a container

Previously, in the Running DADI Web in a container article, we looked at how to use Docker containers to run DADI Web.

As with Web, we'll first create a standard API project and then containerize it. Before we get started with API, though, there are a few things to set up: a data store to hold our data, and a Docker network so that the services can talk to each other.

🔗Docker

Docker is an open container platform that gives developers and sysadmins a lightweight method of process and resource isolation, plus the ability to package an application and all its dependencies into a single image, allowing greater automation and portability.

The DADI suite runs on Node.js and works perfectly fine without containers, but containers give us greater isolation, easier scaling, and more scope for automation.

If you haven’t used Docker before, then I’d recommend reading through the Docker getting started guide.

🔗Requirements

For this article I’ll be working with Docker 17.06.2-ce-mac27 and macOS Sierra. If you don’t have Docker installed, you can get it from the Docker Community Edition website.

I’ll also be installing DADI applications with the DADI command line interface; to install it, run npm install @dadi/cli -g.

🔗Getting started

We’re going to be setting up two containers today: our DADI API instance and a data store. DADI API currently supports three data connectors: MongoDB, CouchDB, and FileStore. For this article, we’ll be using MongoDB.

🔗User Defined Network

Previously, in the Running DADI Web in a container article, we used the default docker0 bridge network. But for our services to talk to each other, we’re going to set up a user defined network. A user defined network lets us use Docker’s embedded DNS server to resolve containers by their names. Read more about Docker container networking.

Let’s create a new bridge network called dadi-service-network.

$ docker network create --driver bridge dadi-service-network

35cd5f842594fe553b0369757d70b8356397eb9ad7a40255211d3b55eaaab800

Now when we create containers, we can attach them to that network using the --network flag.

🔗Creating a MongoDB container

Next, we’re going to set up the MongoDB server that will hold our data. The first thing we need to create is a Docker volume. Volumes are the mechanism for persisting data generated and used by Docker containers. As a general rule, containers should be transient and should hold no data that isn’t dispensable. Read more about Docker volumes.

Let’s create a volume for our MongoDB data with the name api-project-mongo-data.

$ docker volume create --name=api-project-mongo-data

api-project-mongo-data

Now we can create a MongoDB container and mount our volume to Mongo’s default data directory, that way all data that is written to the directory is persisted. We aren’t going to publish a port here, we’ll connect to it via the user defined network instead.

$ docker run \
  --name mongo-server \
  --volume api-project-mongo-data:/data/db \
  --network dadi-service-network \
  --detach \
  mongo:3.2

28d98b0d98fb2743a2c90d29d238064ea97e9efc58c2b0140491cdf4e07f4406

Great. Let’s check that that’s running okay. I’ve trimmed the logs for brevity.

$ docker logs mongo-server

15:39:37.615+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=ede6ddeecafa
**SNIP**
15:39:37.877+0000 I NETWORK [initandlisten] waiting for connections on port 27017

Now that we have our MongoDB container running, we’re ready to set up DADI API. Notice that we didn’t create a Dockerfile for the MongoDB container? That’s because we’re using the prebuilt mongo image with the 3.2 version tag; it’s already built, so we just run it.

🔗Setting up DADI API

Now it’s time to create a DADI API project using the DADI CLI. Previously when setting up DADI Web, we created the directory ourselves. This time we’ll use the shorthand dadi <product> new <project-name>. This will create the directory for us and take us through the interactive wizard.

$ dadi api new api-project
✔ Checking the available versions of DADI API
✔ Pulling the list of available database connectors from NPM
? Which database engine would you like to install? @dadi/api-mongodb
✔ Downloading boilerplate (100%)
✔ Installing DADI API (3.x)
✔ Installing the '@dadi/api-mongodb' database connector

  ▓▓▓▓▓  ▓▓▓▓▓▓▓
              ▓▓▓▓
     ▓▓▓▓▓▓▓    ▓▓▓▓
              ▓▓▓▓
          ▓▓▓▓▓▓▓


    DADI API setup

Let's start by configuring the web server that API runs. (0% complete)

? What is the name of this DADI API instance? api-project
? What is the IP address the application should run on? 0.0.0.0
? What is the port number? 80
? What protocol would you like to use? HTTP (insecure)

We'll now define how your API instance can be accessed from the outside world. (16% complete)

? What is the hostname or domain where your API can be accessed at? api-project
? What is the port? 80

Looking great! Time to configure your databases. (22% complete)

? What is the name of the database? dadiapi
? What is the database username? 
? What is the database password? 
? What is the database server host? mongo-server
? And what is the database server port? 27017

You'll need an oAuth2 client to interact with API. It consists of an ID + secret pair, which you'll send to API in exchange for a bearer token. This token is then sent alongside each request in order to authenticate you with the system. (44% complete)

? Would you like to create a client? No

Let's now look at caching, which is crucial to ensure that API delivers data in a performant way. (56% complete)

? Would you like to cache items on the local filesystem? Yes
? What is the path to the cache directory? ./cache/web
? Would you like to cache items on a Redis server? No

Almost there! Time to define how API handles media uploads (e.g. images). (75% complete)

? Where should API store uploaded assets? Nowhere, I don't want API to handle media

You made it! We're wrapping up. (94% complete)

? Which environment does this config apply to? Development

✔ API configuration file written to /projects/api-project/config/config.development.json.
✔ Database configuration file written to /projects/api-project/config/mongodb.development.json.

All done!

If you remember, we named our MongoDB container mongo-server and attached it to our user defined network. This means we can use the container name like a hostname, and Docker’s embedded DNS server will resolve it for us.
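To make the relationship between the wizard answers and the database connection concrete, here is a small sketch of how values like those in mongodb.development.json could be assembled into a standard MongoDB connection string. The real @dadi/api-mongodb connector handles this internally, so the `buildMongoUri` helper below is purely illustrative.

```javascript
// Illustrative only: how config values map to a mongodb:// connection string.
function buildMongoUri({ username, password, host, port, database }) {
  // Include credentials only when a username was supplied in the wizard
  const auth = username
    ? `${encodeURIComponent(username)}:${encodeURIComponent(password)}@`
    : ''
  return `mongodb://${auth}${host}:${port}/${database}`
}

// The values we entered in the CLI wizard above
const uri = buildMongoUri({
  username: '',
  password: '',
  host: 'mongo-server', // resolved by Docker's embedded DNS on dadi-service-network
  port: 27017,
  database: 'dadiapi'
})

console.log(uri) // mongodb://mongo-server:27017/dadiapi
```

The key point is that `mongo-server` is not an IP address or a real hostname; it only resolves because both containers sit on the same user defined network.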

Next, let’s create a collection to test later. Create a new directory within workspace/collections/1.0 called test and inside that, a new file called collection.messages.json.

$ cd api-project
$ mkdir workspace/collections/1.0/test
$ vi workspace/collections/1.0/test/collection.messages.json
{
  "fields": {
    "name": {
      "type": "String",
      "label": "name",
      "example": "Joe Blogs",
      "comments": "This is the name of the author",
      "required": true
    },
    "message": {
      "type": "String",
      "label": "message",
      "example": "Hello world!",
      "comments": "This is the author's message",
      "required": true
    }
  },
  "settings": {
    "cache": true,
    "authenticate": true,
    "count": 10,
    "sort": "name",
    "sortOrder": 1
  }
}

We’ll use this simple collection later to test that everything is working. Read more about API Collections.
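As a rough illustration of what the `required` flags in that schema buy us, here is a simplified sketch of the kind of required-field check API performs before inserting a document. The `missingRequiredFields` helper is hypothetical; real validation in API also covers types, lengths, and so on.

```javascript
// The fields block from collection.messages.json, trimmed to what we need here
const messagesSchema = {
  fields: {
    name: { type: 'String', required: true },
    message: { type: 'String', required: true }
  }
}

// Return the names of required fields the document is missing
function missingRequiredFields (schema, document) {
  return Object.keys(schema.fields).filter(field =>
    schema.fields[field].required && typeof document[field] === 'undefined'
  )
}

console.log(missingRequiredFields(messagesSchema, { name: 'Joe Blogs' }))
// [ 'message' ]
console.log(missingRequiredFields(messagesSchema, { name: 'Joe Blogs', message: 'Hello world!' }))
// []
```

A document missing a required field would be rejected by API with a validation error rather than inserted.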

DADI API should now be ready to run in a container.

🔗Creating a Dockerfile

Dockerfiles are like recipes for container images. Traditionally they are kept in the root of the project. Create a file named Dockerfile and open it up in your text editor.

$ vi Dockerfile

FROM node:6.11

RUN npm install @dadi/cli -qg

RUN mkdir /var/api
ADD . /var/api
WORKDIR /var/api

RUN npm install -q

CMD ["npm", "start"]


First we specify a parent container image that we want to build on top of, in this case, `node:6.11`, the official Node.js image. Next we install the DADI CLI globally. We'll need that later. Then we create a directory for the service, add in our project files, and set that to the working directory. Next we run `npm install` and then, finally, specify the command to run on execution – `npm start`.
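One common refinement, not part of the original article, is to copy `package.json` into the image before the rest of the project. Because Docker caches each layer, this lets rebuilds after source-only changes reuse the `npm install` layer instead of re-downloading dependencies. A sketch of that variant (using `COPY`, which is generally preferred over `ADD` for plain files):

```dockerfile
# Variant of the Dockerfile above, optimized for layer caching
FROM node:6.11

RUN npm install @dadi/cli -qg

WORKDIR /var/api

# Copy the dependency manifest first so this layer is cached between builds
COPY package.json /var/api/
RUN npm install -q

# Then add the rest of the project; changes here don't invalidate npm install
COPY . /var/api

CMD ["npm", "start"]
```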

🔗Building a Docker image

Now that we have our `Dockerfile` setup, we need to build our container image. Docker has images, which are binaries generally built from Dockerfiles, and containers, which are running instances of those images.

Before we do this, let's create a `.dockerignore` file. This works a lot like a `.gitignore` file, and lets us exclude files and directories from being sent to the docker daemon. This will make our build process quicker and more efficient.

$ vi .dockerignore

node_modules
log


Now let's build our image. With Docker installed, we simply tell the Docker CLI to build the image from the current directory. We'll also give it a tag with the `-t` flag so we can refer to it more easily later on.

$ docker build -t api-project .


Once the build has finished, we should be able to see the image:

$ docker images | grep api-project

api-project latest 925df4ec046d About a minute ago 737MB


Great. Now we're ready to run the container.

🔗Running the Docker container

In the [Running DADI Web in a container](https://forum.dadi.tech/topic/55/running-dadi-web-in-a-container) article, we ran the container in the foreground to check everything worked. This time, we'll run it in the background to begin with, and use the `docker logs` command to peer inside.

Below we give the container the name `api-project`, attach to the `dadi-service-network` network, map the host port `8001` to the container's port `80`, and specify that we wish to run detached.

$ docker run \
  --name api-project \
  --network dadi-service-network \
  --publish 8001:80 \
  --detach \
  api-project

152b664e9483f82f0a019e6e2155a11d3bbba7323993d65d69d621f85564090c


Now let's take a look at the logs. We'll use the `-f` flag to "follow" the logs. I've trimmed these a bit.

$ docker logs -f api-project

@dadi/api-boilerplate@ start /var/api
node server.js


DADI API

Started 'DADI API'

Server: 0.0.0.0:80
Version: 2.2.5
Node.JS: 6.11

Environment: development


Let's exit out of these logs with `ctrl-c` and see if we can query the API. First, we'll need to add a client to the API so that we can authenticate. For that we'll use the `docker exec` command to attach to the `api-project` container and then use the DADI CLI to add an authentication client to the database. Read more about [DADI API Authentication](https://docs.dadi.cloud/api#authentication).

$ docker exec -ti api-project /bin/bash
root@152b664e9483:/var/api# dadi api clients:add
? Enter the client ID someUser
? Enter a strong secret (press Enter if you want us to generate one for you)
? What type of access does the user require? Regular user
✔ Created client with ID someUser and type user. The secret we generated for you is 00nvrno0fv – store it somewhere safe!


Now let's check the Mongo database to see our user. Again we'll use the `docker exec` command.

$ docker exec mongo-server \
  mongo dadi-api --eval 'db["clientStore"].find({})'

MongoDB shell version: 3.2.7
connecting to: dadi-api
{ "_id" : ObjectId("59ce87f21aa0ec006417830e"), "clientId" : "someUser", "secret" : "00nvrno0fv", "type" : "user" }


Nice. We can use this to obtain ourselves a bearer token. I've formatted the JSON result here and below for readability, but if you want this in the terminal you can use the `jq` utility (just pipe the output into `jq` like this: `curl ... | jq` etc.)

$ curl \
  --header "Content-Type: application/json" \
  --data '{"clientId":"someUser","secret":"00nvrno0fv"}' \
  http://localhost:8001/token

{
  "accessToken": "2055ac17-5423-47f1-a31c-a085e302291f",
  "tokenType": "Bearer",
  "expiresIn": 1800
}


Now we have our access token, let's query the collection we created earlier.

$ curl \
  --header "Authorization: Bearer 2055ac17-5423-47f1-a31c-a085e302291f" \
  --header "Accept: application/json" \
  http://localhost:8001/1.0/test/messages

{
  "results": [],
  "metadata": {
    "limit": 10,
    "page": 1,
    "fields": {},
    "sort": {
      "name": 1
    },
    "offset": 0,
    "totalCount": 0,
    "totalPages": 0
  }
}


Darn. Empty!

Let's add some messages. If you remember, we need a `name` and a `message`. Let's add two by posting an array of objects to the API.

$ curl \
  --header "Authorization: Bearer 2055ac17-5423-47f1-a31c-a085e302291f" \
  --header "Accept: application/json" \
  --header "Content-Type: application/json" \
  --data '[{"name": "Joe Blogs", "message": "Hello world!"},{"name": "Steve Jones", "message": "This is an example message."}]' \
  http://localhost:8001/1.0/test/messages

{
  "results": [
    {
      "name": "Joe Blogs",
      "message": "Hello world!",
      "apiVersion": "1.0",
      "createdAt": 1506765822151,
      "createdBy": "someUser",
      "history": [],
      "v": 1,
      "_id": "59cf6bfefb09920011d9fbf1"
    },
    {
      "name": "Steve Jones",
      "message": "This is an example message.",
      "apiVersion": "1.0",
      "createdAt": 1506765822151,
      "createdBy": "someUser",
      "history": [],
      "v": 1,
      "_id": "59cf6bfefb09920011d9fbf2"
    }
  ]
}


Great, now let's check again.

$ curl \
  --header "Authorization: Bearer 2055ac17-5423-47f1-a31c-a085e302291f" \
  --header "Accept: application/json" \
  http://localhost:8001/1.0/test/messages

{
  "results": [
    {
      "_id": "59cf6bfefb09920011d9fbf1",
      "name": "Joe Blogs",
      "message": "Hello world!",
      "apiVersion": "1.0",
      "createdAt": 1506765822151,
      "createdBy": "someUser",
      "history": [],
      "v": 1
    },
    {
      "_id": "59cf6bfefb09920011d9fbf2",
      "name": "Steve Jones",
      "message": "This is an example message.",
      "apiVersion": "1.0",
      "createdAt": 1506765822151,
      "createdBy": "someUser",
      "history": [],
      "v": 1
    }
  ],
  "metadata": {
    "limit": 10,
    "page": 1,
    "fields": {},
    "sort": {
      "name": 1
    },
    "offset": 0,
    "totalCount": 2,
    "totalPages": 1
  }
}

Fantastic. There we go, a fully working, containerized API with data persistence and resource isolation. Let’s have a quick recap.

  1. We set up a Docker network for our containers to communicate.
  2. Then we created a volume to hold our MongoDB data.
  3. Next we created a MongoDB server and attached the volume & network.
  4. We then created a new DADI API project using the DADI CLI.
  5. Once that was done, we configured API to bind to 0.0.0.0:80 and connect to mongo-server, and added a collection to test later on.
  6. Then we wrote our Dockerfile, created a .dockerignore file to optimize the build, and built our Docker image.
  7. After that, we ran our container, attached it to our network, and published the API port to our host on port 8001.
  8. We added a client to our API and checked MongoDB to see that it’d been saved using the docker exec command.
  9. Finally, we queried our empty collection, added some documents, and queried it again to show our data.

In the next article (coming soon), we’ll look at containerizing DADI CDN, our just-in-time asset manipulation and delivery layer, where we’ll be looking at resource persistence & isolation.
