TDD #4: Path Finding

graph-theory, screencast, tdd, tdd-screencasts, test-driven-development

Recommended to watch in full screen and at 720p or higher quality.

List of all TDD Screencasts can be found here.

Script

Hello. I am Oleksii Fedorov, and I am wearing the hat of That TDD Fellow. That means you are watching TDD Screencast episode #4.

As mentioned in the previous episode, today we are going to implement a path-finding algorithm.

Specifically, our first problem in the path-finding theme will be the following:

  • We are given a directed graph that consists of nodes and edges between them.
  • We are given two nodes: start and finish.
  • We need to answer these questions:
    • Is there a path from start to finish?
    • If there is, what is this path?

A quick remark: we clearly don’t have to return the shortest path - just any path that we can find. We will tackle the shortest path problem in later episodes.

Now, I think we can start.

./watch.sh
vim         # coding session
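
For reference, here is a minimal sketch (in Go, not the exact code from the session) of the kind of path finder that comes out of such a session, assuming the graph is an adjacency map from a node to its neighbours:

// FindPath returns any path from start to finish in a directed graph,
// or nil if no such path exists.
func FindPath(graph map[string][]string, start, finish string) []string {
	return dfs(graph, start, finish, map[string]bool{})
}

func dfs(graph map[string][]string, node, finish string, visited map[string]bool) []string {
	if node == finish {
		return []string{node}
	}
	visited[node] = true
	for _, next := range graph[node] {
		if visited[next] {
			continue
		}
		if path := dfs(graph, next, finish, visited); path != nil {
			return append([]string{node}, path...)
		}
	}
	return nil
}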

It was pretty easy to make a DFS algorithm emerge on its own by driving it with the specification. Let’s see if it is possible to do the same with the shortest path problem... Next time, on the TDD Screencast. Have a nice day.

Why Are You Slow?

cleancode, craftsmanship, preaching, professionalism, software-industry

We are slow because we have the worst codebase!

So why don’t you clean it?

Because we have to go fast!

I will let you figure out the logical inconsistency there.

Ask yourself a question: how many times in your career have you been slowed down by horrible, dirty, untested code? It doesn’t matter who wrote it; usually it is already there, a fact you have to deal with.

If your answer was “Once... or twice...” (and your career is sufficiently long), you can freely stop reading this post and simply be a happy developer.

I have been slowed down by bad code a horrible number of times. I have come to be sick of it. I don’t want to be slow when I am trying to go fast. I want my tools to be clean, I want my creations to be manageable, and I don’t want to fear the code I have created, or that anyone else has created. And I want to go fast while enjoying it!

Trust me, I have met lots of developers during my career. Not as many as the heroes of our industry have met, but enough to tell you that every single one of them with a career of any reasonable length has had the same problems.

And that is totally not normal. Do you understand that, because of this, the outside world perceives us, as an industry, as very non-professional? Business even came to the conclusion that they can’t trust us to deliver working software - and that is how the QA role was born.

I will ask again.

So why don’t you clean it?

My company pays me for features, not for <insert your statement here>

That is a very common response. And, in essence, it is the truth.

OK, now let’s think. As time passes, once a codebase that is not clean crosses a certain threshold, adding the next feature costs more time, even if the features are of roughly the same size. After one year of such development, features can literally become 2-4 times more expensive. A couple of years in, it practically becomes impossible to develop anything except “change the color of this button” (or provide your own example).

What does that mean?

One would say: “Features become more expensive over time”. Business usually sees it as: “Arrgh, our programmers get slower and slower with each month”. So, from the business perspective, after the perceived effectiveness of developers drops 2-4 times, why would the business be willing to pay these developers the same salary? You should be afraid of this question.

Now imagine land of unicorns!

Now imagine that you started a bit slower in the beginning (say, 8% slower), but you have kept your effectiveness at roughly the same level over time, or even increased it. How hard do you think it would be to ask for a raise after 2 years of loyal work?

Not hard at all. And the business will probably be doing just great (if the business idea itself was sustainable to begin with, of course) and will be capable of satisfying the request.

Why does such a thing happen?

Because at the moment we are perceived neither as professionals nor as experts in our field. We are perceived as just some coding monkeys, who always need to be asked, in an intimidating tone: “Can this be delivered on Friday?”. And we gladly reply through our teeth: “I will try”, meaning “Just go already”; ending up on Friday evening with: “I tried” - “You have not tried hard enough!”.

Clean your code already!

Believe me, investing 15-45 minutes every single day into increasing test coverage and small refactorings will not make you any slower (you are probably already very slow, so it will not make any perceivable difference). Rather, over time, you (and your fellow programmers) will actually become faster bit by bit, as your application (or applications) gets cleaner and cleaner.

It goes without saying that you should be using proper XP techniques (pair programming and TDD) while writing any new piece of code (read: new class, module, library, package, etc.), because then it is extremely easy to unit-test it with near-100% coverage. Believe me, that is easy and fun.

Refactoring and covering old and messy parts of an application is not fun, though. You have to face that truth. Consider it a chore, like the regular cleaning you do in your apartment. And as we all know, the longer you wait to clean an apartment, the harder it becomes. And the function is not linear...

If you have a really big legacy application, you are probably in for 2-3 years of doing that sort of thing before you can proudly call the application clean again.

There is one important trick, though: prioritize cleaning the parts of the application that change often. If some part changes once every half a year, you should probably clean it once every half a year too.

You are hired for your expertise!

Believe me, you do have all the knowledge required to make it happen.

The only thing that stops you is your inability to say “No” when “No” is the correct answer from the point of view of your experience and expertise.

Parallel example

You know that a surgeon, before any surgery, washes his hands. You don’t just know it - it is your expectation! In some countries, if he doesn’t, he can easily end up in jail.

Do you know how a surgeon washes his hands?

He rubs each finger from 4 different sides 10 times. That is stated in the doctors’ code of ethics. And they have to follow it, since it has the power of law.

Why 10 times? Wouldn’t 7 be enough, or should 14 be the minimum? It doesn’t matter. It is a discipline that is to be followed to the letter, and there are no exceptions.

Well, he can probably rub 11 times without ending up in jail...

Nobody will ever ask a surgeon why he does it. Everyone expects him to do it.

Back to our universe

You are surgeons!

You are surgeons who operate on the heart of the business.

With one wrong move, one mistake, the whole business can go under overnight.

With one wrong move, one mistake, thousands of people can die (if you are in certain domains).

So why increase the chance of your own mistakes (and those of everybody else on the team) by not cleaning the code?

Even if you are not in such a critical domain, you are part of the industry, and you should be a great professional, if only to be an example for others who might end up working in such a crucial business domain in the future.

And I know you can pull it off. And you know it yourself.

So now go and be professional - be professional for everyone around you.

Thanks!

This article might be a bit too rough, but I believe it is the truth we face now as an industry. And let us be the ones fixing it!

I recommend reading the Software Engineering Code of Ethics by the ACM (originally created in 1999 - why are we all still not using it?!).

TDD #3: Kind Sort

screencast, tdd, tdd-screencasts, test-driven-development

Recommended to watch in full screen and at 720p or higher quality.

Script

Hello, hello! I am Oleksii Fedorov, and this is the third episode of TDD Screencast.

In the last episode we implemented a sorting algorithm using TDD, without thinking about the algorithm beforehand. As a result, bubble sort emerged.

We noticed a small weird thing about that implementation: it has unspecified behavior - mutation of the original array.

So we asked which algorithm would emerge if we were to ban such side effects from our algorithm. Let’s find out!

./watch.sh
vim           # implement sorting algorithm
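
For reference, here is a minimal sketch (in Go, not the exact code from the session) of a side-effect-free sort of this shape, assuming integer slices:

// Sort returns a new sorted slice and never mutates its argument.
func Sort(a []int) []int {
	if len(a) <= 1 {
		return a
	}
	pivot, rest := a[0], a[1:]
	var less, greater []int
	for _, x := range rest {
		if x < pivot {
			less = append(less, x)
		} else {
			greater = append(greater, x)
		}
	}
	result := append(Sort(less), pivot)
	return append(result, Sort(greater)...)
}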

I think we are done here. If you look closely, it is a quicksort. It is not the most memory-efficient implementation, but that is simple to optimize (instead of passing arrays around recursively, pass the original array and indexes). That optimization involves actual in-place mutation of the array, so if we want to stay true to our specification /show test for no side effects/, we will have to copy the array once, using some sort of wrapper function.

Applying this optimization I leave as an exercise for you, my viewers.

In the next episode we will look into the path-finding problem, and we will see how these techniques apply there. See you next time! Have a nice day.

Test-Driven-Development Screencast #2

screencast, tdd, tdd-screencasts, test-driven-development

Recommended to watch in full screen and at 720p or higher quality.

List of all TDD Screencasts can be found here.

Script

Hello, hello! I am Oleksii Fedorov, and this is the second episode of Test-Driven-Development Screencast.

Now that I think about it, the first episode was more of an audio podcast (with two and a half visual slides) than a screencast. Don’t worry - this episode will be full of code and actual action on the screen.

Today we are going to implement a sorting algorithm. We will not come up with the algorithm beforehand; we will simply let it emerge by itself while we are doing TDD.

Let’s jump in!

./watch.sh  # So I have here a small script that will watch for changes
            # in my code and run all my tests. Additionally, it will show
            # me the result in the notification bar (you will see shortly).

vim         # 1) Create test file
            # 2) Follow TDD rules to the letter
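
For reference, here is a minimal sketch (in Go, not the exact code from the session) of the kind of sort that comes out, assuming integer slices:

// Sort sorts the slice and returns it.
func Sort(a []int) []int {
	for i := 0; i < len(a); i++ {
		for j := 0; j < len(a)-1-i; j++ {
			if a[j] > a[j+1] {
				a[j], a[j+1] = a[j+1], a[j]
			}
		}
	}
	return a
}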

I think we are done here. And notice that this is a bubble sort algorithm. Now let’s ask why the worst possible algorithm emerged while we were using TDD. That is an interesting question.

But first, let’s ask ourselves a question: are we kind to the user of our function? The answer is: NO. We are mutating the argument that the user passed to us. And this mutation might be unexpected; at the very least, our test suite doesn’t even mention such behavior.

The root of the problem is this little swap operation that we have here:

# show swap operation on the screen
a[0], a[1] = a[1], a[0]

I wonder what would happen if we were to ban the swap operation (and any kind of mutation of the function’s argument) and implement the sorting algorithm again.

Find out next time, in the next episode of the Test-Driven-Development Screencast! Have a good day.

Test-Driven-Development Screencast #1

screencast, tdd, tdd-screencasts, test-driven-development

Recommended to watch in full screen and at 720p or higher quality.

List of all TDD Screencasts can be found here.

Script

Hello! I am Oleksii Fedorov, and this is the first episode of Test-Driven-Development Screencast.

Today, I am going to briefly address the following questions about Test-Driven-Development: - What is TDD? - What are the main benefits of doing TDD?

At the end, I am going to demonstrate a small example of how to implement a simple sorting algorithm using TDD.

Let me open my slides. Don’t worry: there are only three small slides.

vim ./slides/*

TDD is a software development discipline. Therefore the basic rules defined by TDD are arbitrary and weird, and they are to be followed to the letter if you want TDD to be useful. There is a reason for that - but we will talk about it shortly. Let me read these basic rules for you:

  1. You are not allowed to write a line of production code unless a test fails.

    Which means that I will have to write the test even before I have something to test. It may sound stupid. Maybe it is stupid. Maybe it is not.

    But the next rule is even weirder than the first one:

  2. You are not allowed to write more of a test than is sufficient to fail.

    It is important to clarify what “fail” means in this context. It means a test expectation failure, but also a compilation/parsing/interpretation failure (depending on whether your programming language is compiled or interpreted).

    Which means that you will have to interrupt yourself while writing a test, because you have mentioned a class or package that does not exist yet, or a method or function that does not exist yet.

    Now, that may sound really stupid to you. Bear with me, and let’s see how weird the last rule is:

  3. You are not allowed to write more production code than is sufficient to make the failing test pass.

    Which means that once you have defined the class that was mentioned, you have just fixed a failing test, and you have to go back and write more of the test, or add a new test.

    Which means that once you have defined the method that was mentioned, you have just fixed a failing test, and you have to go back and write more of the test again.

    Which means that once you have changed your production code only slightly in the direction of the correct implementation, you have to go back and write more tests.

    This is interesting. Now you have got yourself into a very tight lock - a very tight feedback loop. Write a line of test, write a line of code, write a line of test, write a line of code, and so on. One turn of this loop takes maybe 5, 10, 30 seconds. If you have a test suite that needs half an hour to run, you will not do TDD.
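
As a hypothetical illustration (in Go; the names are made up for this post, not taken from the screencast), one turn of that loop might look like this:

import (
	"reflect"
	"testing"
)

// Step 1: this test fails before it even runs - Sort does not exist yet,
// so compilation fails, and by rule 2 we stop writing the test right here.
func TestSortOrdersTwoElements(t *testing.T) {
	got := Sort([]int{2, 1})
	want := []int{1, 2}
	if !reflect.DeepEqual(got, want) {
		t.Errorf("Sort([2 1]) = %v, want %v", got, want)
	}
}

// Step 2: by rule 3, the least production code sufficient to pass - and
// no more. The next test will force something less fake.
// (Test and production code would live in separate files; shown together here.)
func Sort(a []int) []int {
	return []int{1, 2}
}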

What happens if you do not follow this discipline to the letter? If you slip here and there: write a bit more production code than you had to? Write a bit more of a test than you had to? Or even write a test after writing the whole class under test?

Well, you have just lost the main benefit of TDD: you can no longer trust the test suite. For sure, there will be some untested code if you work this way.

Why do we need 100% test coverage, you ask? 70% sounds like an awesome achievement! Or does it?..

What can you tell from the fact that only 70% of your code is covered by the test suite? Only that 30% is not covered, and therefore there is no quick and easy way to verify that it works.

Let’s imagine the following scenario:

  • You open a file on your screen.
  • You spot some nasty duplication, and you know you want to fix it.
  • You even see an obvious way to fix it.
  • You touch your keyboard.
  • And now, the fear overwhelms you: this class is not tested.
  • And your reaction? - “I won’t touch it!”

That is where code starts to rot: nobody cleans it up, because the test suite cannot be trusted, and the whole codebase slowly slides down into a big pile of mess.

Now, let’s imagine that you have 100% code coverage (well, maybe 98%, because 100% is a goal that is never quite achievable). And the same scenario:

  • You open a file on your screen.
  • You spot some nasty duplication.
  • You fix the duplication.
  • You run tests - and they are green.
  • You check the cleaner code into your version control system.

Or, let’s say, the problem is not trivial:

  • You spot a long method.
  • You split it into 3 methods - tests are still green.
  • You proceed and extract these methods into a new class.
  • And tests fail.
  • Undo-Undo-Undo. And you are back to the green state.
  • Now you think for a moment about what happened there.
  • And you already have this “Gotcha!” moment.
  • You successfully extract the class again - and the test suite is green.
  • You check the cleaner code into your version control system.

The Undo button becomes your best friend. Once you stop knowing what is going on, or what you are doing, or you are simply confused, you can always go back to the green state - which happens to be only about 25 seconds (or 2-3 undos) away, because of the tight feedback loop you got yourself into.

Now, there is a hidden rule of TDD that feels more like an implementation detail of TDD:

:next

As tests become more specific, production code should become more generic.

And it is very true; otherwise, you would end up adding a bunch of if statements every time you add a failing test.

What that means, I will point out during the example.
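
As a toy illustration (again made up for this post, not from the screencast): with each new failing test you could cheat by adding one more branch, but the hidden rule pushes you to generalize instead:

// Cheating: one if statement per test case - specific tests, specific code.
func MaxCheating(a, b int) int {
	if a == 2 && b == 1 {
		return 2
	}
	if a == 3 && b == 7 {
		return 7
	}
	return 0
}

// Generalizing: specific tests, generic code.
func Max(a, b int) int {
	if a > b {
		return a
	}
	return b
}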

And let’s sum up now:

100% Code Coverage => Lack of Fear => Consistent Random Kindness to the Code => Clean Code.


60% Code Coverage => Fear to Break It => “I won’t touch it!” => Mess.

Now we can finally move on to the example. Next time, in the next episode of the Test-Driven-Development Screencast! Have a good day.

Why Do You Need to Be Careful With Loop Variable in Go

computers, concurrency, golang, pitfalls, programming

TL;DR

This post describes two different issues:

Race conditions when taking a reference to the loop variable and passing it to another goroutine:

// WRONG: pass message by reference
for message := range inbox {
        outbox <- EnhancedMessage{
                // .. more fields here ..
                Original: &message,
        }
}

// CORRECT: pass message by value
for message := range inbox {
        outbox <- EnhancedMessage{
                // .. more fields here ..
                // Pass message by value here
                Original: message,
        }
}

See explanation here.

Race conditions when using the loop variable inside a goroutine inside the loop:

// WRONG: use loop variable directly from goroutine
for message := range inbox {
        go func() {
                // .. do something important with message ..
        }()
}

// CORRECT: pass loop variable by value as an argument for goroutine's function
for message := range inbox {
        go func(message Message) {
                // .. do something important with message ..
        }(message)
}

See explanation here.

Taking a reference to the loop variable

Let’s start off with a simple code example:

for message := range inbox {
        outbox <- EnhancedMessage{
                // .. more fields here ..
                Original: &message,
        }
}

Looks quite legit. In practice, though, this often causes race conditions and very confusing bugs: the message variable is defined once and then mutated on each loop iteration, while a pointer to this single variable is passed to concurrent collaborators.

The above code can be rewritten as:

// local scope begins here
        var (
                message Message
                ok bool
        )
        for {
                message, ok = <-inbox
                if !ok {
                        break
                }

                outbox <- EnhancedMessage{
                        // .. more fields here ..
                        Original: &message,
                }
        }
// local scope ends here

Looking at this code, it is quite obvious why it has race conditions.

One correct way is to define a new variable manually on each iteration and copy message’s value into it:

for message := range inbox {
        m := message
        outbox <- EnhancedMessage{
                // ...
                Original: &m,
        }
}

Another way is to take control of how the loop variable works yourself:

for {
        message, ok := <-inbox
        if !ok {
                break
        }

        outbox <- EnhancedMessage{
                // ...
                Original: &message,
        }
}

Note that the variables created during each iteration (m and message in the examples above) stay in memory until the EnhancedMessage is processed by the concurrent collaborator and garbage collected. Therefore it is possible to just use pass-by-value instead of pass-by-reference to achieve the same result. It is simpler, too:

for message := range inbox {
        outbox <- EnhancedMessage{
                // ...

                // Given the fact that `EnhancedMessage.Original` definition
                // changed to be of value type `Message`
                Original: message,
        }
}

Personally, I prefer the latter. If you know of any drawbacks of this approach compared to the other two, or if you know of an entirely better way of doing this, please let me know.

Running a goroutine that uses the loop variable

Example code:

for message := range inbox {
        go func() {
                // .. do something important with message ..
        }()
}

This code might look legit too. You might think it will process the whole inbox concurrently, but most probably it will process only a couple of the last elements - multiple times.

If you rewrite the loop in the same fashion as in the previous section (see the sketch below), you will notice that message is mutated while these goroutines are still processing it. This causes confusing race conditions.
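
Here is a sketch of that desugared form (doSomethingImportantWith is just a stand-in for whatever work the goroutine does):

// One shared `message` variable for all iterations - and all goroutines.
var (
	message Message
	ok      bool
)
for {
	message, ok = <-inbox
	if !ok {
		break
	}

	go func() {
		// Reads the shared variable, which the next loop iteration
		// overwrites - a data race.
		doSomethingImportantWith(message)
	}()
}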

The correct way is:

for message := range inbox {
        go func(message Message) {
                // .. do something important with message ..
        }(message)
}

In this case, it is basically the same as copying the value into a newly defined variable at each iteration of the loop. It just looks nicer.
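
An equivalent spelling, for comparison, shadows the loop variable with a fresh copy on each iteration (doSomethingImportantWith is, again, just a stand-in):

for message := range inbox {
	message := message // a fresh copy for this iteration
	go func() {
		doSomethingImportantWith(message)
	}()
}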

Thanks!

If you have any questions, suggestions or just want to chat about the topic, you can ping me on twitter @waterlink000 or drop a comment on hackernews.

Especially if you think I am wrong somewhere in this article, please tell me - I will only be happy to learn and to iterate on this article to improve it.

Happy coding!


Intention-revealing Code

computers, design, golang, pragmatic, programming, rant

Let’s start off with some very simple code:

func toJSON(post Post) string {
        return fmt.Sprintf(
                `{"post_title": "%s", "post_content": "%s"}`,
                post.Title,
                post.Content,
        )
}

This is very simple code, and it is easy to understand both what it is doing and what it is doing wrong:

  • It tries to marshal post struct to custom JSON representation.
  • It fails when there are special characters in these strings.
  • It does not use standard MarshalJSON interface.

It can be fixed in a pretty simple way:

func (post Post) MarshalJSON() ([]byte, error) {
        return json.Marshal(map[string]interface{}{
                "post_title":   post.Title,
                "post_content": post.Content,
        })
}

And at the usage site, you can now just use the standard encoding/json package capabilities:

rawPost, err := json.Marshal(post)
if err != nil {
  return err
}

// do stuff with rawPost

And now you notice that the tests do not pass - and the place that fails is totally unrelated. Long story short: the name of the original method was not revealing its real intent. It was actually a specific JSON representation for use with an external API, while plain json.Marshal is used by this same application to provide responses to its own HTTP clients.

Were the name a bit more intention-revealing, nobody would have wasted their time finding this out by trial and error:

// should probably even sit in a different package
func marshalToExternalAPIFormat(post Post) ([]byte, error) {
        // ...
}

And this is only the tip of the iceberg of how non-intention-revealing code can trip you up.

Using contracts.ruby With RSpec. Part 2

contracts.ruby, design-by-contract, rspec, ruby

Remember Using contracts.ruby With RSpec?

RSpec mocks violate all :class contracts, because is_a?(ClassName) returns false for a mock. That post describes two possible solutions:

  • stub :is_a?: allow(my_double).to receive(:is_a?).with(MyClass).and_return(true), or
  • use the contracts-rspec gem, which patches the instance_double RSpec helper.

Custom validators

Since custom validators have finally landed in egonShiele/contracts.ruby#159, you can now just override the :class validator to accept all RSpec mocks:

# Make contracts accept all RSpec doubles
Contract.override_validator(:class) do |contract|
  lambda do |arg|
    arg.is_a?(RSpec::Mocks::Double) ||
      arg.is_a?(contract)
  end
end

Now RSpec mocks will no longer violate :class contracts.

More information can be found here: Providing your own custom validators.

Additionally, this refactoring enabled a valuable speed optimization for complex contracts: their validators are evaluated only once and memoized.


Thanks!

If you have any questions or suggestions, or just want to chat about how awesome contracts.ruby is, you can ping me on twitter @waterlink000. If you have any issues using contracts.ruby, you can create an issue on the corresponding github project. Pull requests are welcome!

Comments on hackernews.

Happy coding! @waterlink000 on twitter.

Docker Machine Guide (VirtualBox on Mac OS X)

docker, docker-machine, guide, macosx, virtualbox


This guide is a combination of official docs and usage experience.

Installation

First, install VirtualBox: virtualbox downloads.

Look at this page: https://github.com/docker/machine/releases/ and pick the latest release. At the time of writing, this is v0.4.1.

Now assign the version number to an environment variable, together with your architecture:

DOCKER_MACHINE_VERSION=v0.4.1
DOCKER_MACHINE_ARCH=darwin-amd64

Now download the docker-machine binary and put it on your PATH (~/bin/ is recommended):

mkdir -p ~/bin
URL=https://github.com/docker/machine/releases/download/${DOCKER_MACHINE_VERSION}/docker-machine_${DOCKER_MACHINE_ARCH}
OUTPUT=~/bin/docker-machine
curl -L ${URL} > ${OUTPUT}
chmod +x ${OUTPUT}

If you haven’t yet, put ~/bin/ on your PATH by adding export PATH=$PATH:$HOME/bin to your .bashrc or .zshrc (or whatever shell you use).

Installing docker client

URL=https://get.docker.com/builds/Darwin/x86_64/docker-latest
OUTPUT=~/bin/docker
curl -L ${URL} > ${OUTPUT}
chmod +x ${OUTPUT}

Creating your first docker machine

docker-machine create -d virtualbox dev

This is how you create a docker machine named dev with virtualbox as a backend.

But after some time you will run into the problem of running out of memory, so my recommended command for creating your primary development docker machine is this:

docker-machine create -d virtualbox dev --virtualbox-memory "5120"

This will create a VirtualBox VM with enough memory to run low-to-moderate-size clusters with docker-compose, which should be enough for development.

This should be your primary docker machine, always activated and used. There is no need to destroy and re-create this dev machine unless you are testing some edge cases - and it is better to use an additional docker machine with a different name for that.

Connecting to your dev docker machine

eval $(docker-machine env dev)

After this command, you will have everything you need to run docker in the same terminal:

# Using alpine image here because it is only ~5 MB
docker run -it --rm alpine echo 'hello world'

You should see:

# .. pulling alpine image here .. and:
hello world

It might be annoying to run eval $(docker-machine env dev) each time you open a new terminal, so feel free to put this line into your .bashrc or .zshrc (or whatever shell you use):

# in .bashrc:
eval $(docker-machine env dev)

If you have just powered on your Mac (or just stopped your docker machine), you will experience this error:

Error: Host does not exist: dev

In that case, just start it with:

docker-machine start dev

And re-open your terminal.

Dealing with docker machine’s IP address

Fact: your docker machine’s IP address stays the same, usually 192.168.99.100, unless:

  • you destroy your docker machine dev, create another VirtualBox VM, and then create docker machine dev again, or
  • you have custom VirtualBox configuration.

Given that the docker machine’s IP address stays the same or changes very rarely, you can simply put it in your /etc/hosts.

First, figure out the current docker machine’s IP address:

docker-machine ip dev

And put it in /etc/hosts:

# in /etc/hosts

# Put the IP address the previous command returned here:
192.168.99.100 docker-dev

To test that it works correctly, try running:

docker run -it --rm -p 80:80 nginx

And now open http://docker-dev in your browser - you should see the default Nginx page.

If you want to refer to the docker machine dev in your scripts, it is better to use $(docker-machine ip dev) for that. For example, curl-ing the page we just saw in the browser:

curl $(docker-machine ip dev)

For teams, it would make sense to agree on the same name for the primary development docker machine. dev works just great!

NOTE: personally, I use both docker-dev and just dev as hostnames, to type less, but the latter might clash with something else, so docker-dev it is.

Upgrading

To upgrade the docker-machine or docker binaries, just follow the Installation instructions again.

To upgrade the docker server inside an already-running docker machine, use:

docker-machine upgrade dev

This will update to the latest version of the docker server and the boot2docker image.

Re-creating a fresh dev docker machine

docker-machine rm dev
docker-machine create -d virtualbox dev --virtualbox-memory "5120"

Further reading

Comments on hackernews.

Happy hacking! @waterlink000 on twitter.

Running Kitchen-docker Tests With Upstart

chef, docker, kitchen-docker, ruby, test-kitchen, ubuntu, upstart

TL;DR:

.kitchen.yml
# .kitchen.yml
---
driver:
  name: docker
  use_sudo: false              # depends on whether you need `sudo` to run the `docker` command
  disable_upstart: false
  image: ubuntu-upstart:14.04
  run_command: /sbin/init

platforms:
  - name: ubuntu-14.04

This is possible because there is an official base image specifically for upstart: https://registry.hub.docker.com/_/ubuntu-upstart/.

After making your .kitchen.yml look like this, just use kitchen as you normally would.

Happy coding! @waterlink000 on twitter.