Most expensive code, NASA standards and bold developers

Rover tracks on Mars, starting from nowhere

I have always been curious how people write enormous, successful applications without bugs – for instance, the Mars rover Curiosity, which flew to Mars, covered 350 million miles in 8 months, landed within a 6-mile radius, roams unknown terrain and sends data back to Earth.

 

How the rover was sent to Mars

There is a rare technical talk about the Mars rover on the internet. Gerard Holzmann, a lead scientist at NASA's Jet Propulsion Laboratory (JPL), describes how they managed the software development process and tried to minimize problems.

I will briefly mention a few facts from the video:

  • Approximately 4 million lines of code, 100+ modules, 120 threads on one processor (+1 backup). Five years and 40 developers. And all of this was for one client and one-time use.
  • The project contained more code than all previous Mars missions combined. At this scale, human code reviews were no longer effective.
  • They created a standard to reduce risks and constantly checked the code against it using automatic tools. Since people tend to ignore documents with hundreds of rules, they ran a poll and selected the ten most important ones: Power of Ten
    E.g. don’t use recursion, goto or other complex flow constructs; variable scope must be minimal; all loops must have fixed bounds; pointer usage must be limited to a single dereference, etc. (see the sketch after this list)
  • Every night the code was built, checked with static analysis and run through various automatic tests. This happened at night because the analysis took 15 hours.
  • Whoever broke the build would receive a penalty: they had to put up a Britney Spears poster in their cubicle. And whoever left many warnings in their code would end up on the “Wall of Shame”. It seems that people need motivation, even at NASA 😀
  • Nobody was allowed to write code before passing specific training and certification.
  • They required 100% code coverage. If you have ever done this, you know how heavy a task it is, as you have to test impossible cases, too.
  • Code review was not a long group meeting. They met only to discuss disagreements. Notes and directions were exchanged via a special application, which also showed the code status after the previous night’s checks.
  • Compiler and static analysis warnings had to be zero. This turned out to be a difficult task and took a lot of time. The correlation between this requirement and the project’s success is unknown, but it was the cleanest code compared to their previous missions.
  • In critical parts they followed the MISRA programming standard, which is used in engines and other life-critical equipment.
  • They performed logical verification of critical subsystems – mathematically proving the correctness of the algorithms.
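
Just to illustrate the spirit of two of those rules – no recursion and fixed loop bounds – here is a minimal JavaScript sketch (mine, not NASA’s; their flight code is in C):

// Risky: recursion depth depends on the data
function countNodesRecursive(node) {
    if (!node) return 0;
    return 1 + countNodesRecursive(node.next);
}

// Safer: an explicit loop with a hard upper bound
const MAX_NODES = 10000;
function countNodes(head) {
    let count = 0;
    for (let node = head; node; node = node.next) {
        if (++count > MAX_NODES) {
            throw new Error('List longer than expected, possible corruption');
        }
    }
    return count;
}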

If you are interested in this mission, there is one more talk (although I liked the first one more): CppCon 2014: Mark Maimone “C++ on Mars: Incorporating C++ into Mars Rover Flight Software”

 

About violating the standard

Of all the cases known on the internet, the most expensive code was written for the Space Shuttle – about $1,000 per line. But in 2013 Toyota lost in court, and if we calculate from the compensation amount, the cost of one line comes out to about $1,200. Toyota’s unintended acceleration made the news several times due to car accidents and complaints. They recalled floor mats, then accelerator pedals, but it was not enough. Then a NASA team checked the software of a Toyota car against their standard and, even though they found 243 violations, could not confirm that the software caused the problems. The court invited an external expert, who criticized Toyota’s software for its recursion, stack overflow risks and much more.

The whole case is described here: A Case Study of Toyota Unintended Acceleration and Software Safety

 

We, mere mortal developers

Writing a constructor incorrectly in JavaScript

It turns out that we, software developers, risk too much. 🙂 We trust the OS and external libraries, and we don’t check the validity of values returned from functions. Although we filter user input, do we do the same while communicating with various internal services? In my opinion, this is natural: checking everything is very time-consuming and, consequently, expensive. You can look at some Defensive programming strategies.
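
For example, a minimal defensive check might look like this (getUserProfile and its module are hypothetical, just for illustration):

const { getUserProfile } = require('./profileService'); // hypothetical internal service

async function loadProfile(userId) {
    const profile = await getUserProfile(userId);
    // Don't trust the response blindly – validate it before using it
    if (!profile || typeof profile.email !== 'string') {
        throw new Error('Unexpected response from the profile service');
    }
    return profile;
}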

There are applications that almost never fail, but when they do, the loss is huge. Similarly, there are applications that have more bugs, but fixing them is simple and cheap. It’s probably the same as with cars: it’s expensive to repair a BMW, which rarely breaks down. And during the war, the US had Willys MB jeeps, which could be repaired very quickly – soldiers would simply disassemble the car to relocate it. There is even a video where Canadian soldiers disassemble and reassemble a Jeep in under 4 minutes.

I think most of our applications are in the latter category. The important thing is to be able to make changes quickly and with minimal expense.

My talk at DevFest 2017: Continuous Integration-Delivery-Deployment

The talks from the Developers’ Festival are being published ^_^
I’m sharing my talk here to keep it on my blog. I love this festival. Instead of a few days, it took me a whole month to prepare the presentation because of my little baby, but I really wanted to participate :)))

This is the demo URL on GitHub:
https://github.com/elatsoshvili/DevFestDemo2017

Integration tests with databases (Node.js + Mocha)

Automated tests are divided into several categories. In short, unit tests are used to test small fragments of code. For example, say there is a function for formatting a phone number: we might have several unit tests covering various scenarios. But if we want to check how a user registers with this number and then passes authorization, our test needs to cover the interaction of several components – this is an integration test (or maybe even an acceptance test).
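
For instance, a unit test for such a phone-formatting function could look like this in Mocha (formatPhone, its module path and the expected format are made up for illustration):

// test/formatPhone.test.js
const assert = require('assert');
const formatPhone = require('../lib/formatPhone'); // hypothetical module

describe('formatPhone', function () {
    it('adds the country code to a local number', function () {
        assert.strictEqual(formatPhone('599123456'), '+995 599 123 456');
    });
});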

Generally, we are facing an integration test if it uses:

  • A database
  • A network
  • Any external system (e.g. a mail server)
  • I/O operations

The hard part is that, unlike unit tests, we cannot run test operations directly on external systems – e.g. we cannot send thousands of test emails to randomly generated addresses. There are several ways to solve this kind of problem, depending on what we want to test. Let’s look at the options:

 

Service imitation (Stubs, Mocks)

Let’s assume we’re writing a client application which invokes services on various servers, i.e. our priority is testing the client and there is no need to actually execute production operations. In this case we can create a service stub with exactly the same functions and parameters as the real one; only instead of executing the real logic, it will return fixed responses.

function sendMail(email, content) {
    // Stub: no real email is sent, we just log and report success
    console.log('Email sent to: ' + email);
    return true;
}

When we run our app in test mode, we should make it use the fake service object instead of the real one (let’s dive into the details in future articles).
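
One simple way to do the swap – a sketch assuming hypothetical realMailer/stubMailer modules and that the test runner sets NODE_ENV to 'test':

// mailer.js – picks the implementation once, at startup
const realMailer = require('./realMailer'); // hypothetical real implementation
const stubMailer = require('./stubMailer'); // hypothetical stub

module.exports = process.env.NODE_ENV === 'test' ? stubMailer : realMailer;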

 

Using the database

Let’s say we are writing a service which heavily uses a database, and we need integration tests to check it. Clearly, we can substitute the database layer with a stub and let select, insert, etc. operations return some predefined fixed values. However, in most cases this is not practical and doesn’t really test the relations among various processes. For instance, I would like a user to register, activate their account and perform authorization. This flow uses several tables, and I would prefer to execute it on the database.

There are several solutions here, too. I prefer to have a separate empty database – neither in-memory nor a lighter alternative, but exactly the same version of the database, just dedicated to testing. When my app runs in test mode, it fetches the test database path from the corresponding configuration and uses it for test operations. First it clears the tables to avoid a broken state.

I will use Node and Mocha for this example.

In my previous post I described the configuration of various environments. I don’t think of Mocha tests as a different environment, because we might have dev, test and even build servers, and tests would be running on all of them. However, I will follow a similar method – I’ll use environment variables for configuring the testing runtime, too, and I’ll create a .env.mocha file.

I’d like to note that the dotenv documentation clearly states that it’s not recommended to have multiple env files like .env, .env.test, .env.prod, etc.; instead, we should have one .env file with different content on different servers. In my opinion, .env.mocha serves a completely different purpose and is not covered by this rule.

The next step is to use the .env.mocha file instead of the real one while the app runs in test mode. Currently there is no working cross-platform recipe on the internet, and I like using the Windows OS, so I’m offering my solution – one that also doesn’t require loading the configuration in every test file:

  • Create a .env.mocha file in the project directory and configure it with test values (see the example after this list).
  • Create setup.js file under test directory and put this line into it:
    require('dotenv').config({path:__dirname + '/../.env.mocha'});
  • Create one more file under the test directory – mocha.opts – and put this line there:
    --require test/setup.js
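
For reference, .env.mocha is a regular dotenv file; the keys below are just hypothetical test values:

# .env.mocha – test values only, never production credentials
DB_HOST=localhost
DB_NAME=myapp_test
DB_USER=test_user
DB_PASSWORD=test_password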

That’s it.
When you run ‘npm test’ on the project, the .env.mocha configuration will be used in every test automatically.

For insurance, and to make sure that I’m not loading the production configuration (and dropping all the databases), I’ll add one more property to the .env.mocha file, and the execution of setup.js will continue only if it is found (e.g. MOCHA_CONFIG_LOADED=yes).
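
A sketch of setup.js with this guard:

// test/setup.js
require('dotenv').config({path: __dirname + '/../.env.mocha'});

// Abort immediately if the Mocha config was not the one loaded
if (process.env.MOCHA_CONFIG_LOADED !== 'yes') {
    throw new Error('.env.mocha was not loaded – aborting tests');
}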

I would also like to have empty tables before running the tests. Mocha has various hooks, among them before(), which is invoked before executing a test suite if we put it inside a describe block. If we declare it globally, it is executed only once, before all tests. That’s exactly what I need. It would be better if I could put this code in setup.js, but if you try, you’ll find that Mocha is not yet loaded at that stage and the ‘before’ function won’t be defined. So I added a hooks.js file under the test directory and described my global hooks there.
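
My hooks.js looks roughly like this (clearTables and the db module are hypothetical helpers; since the file lives under test/, Mocha picks it up and the hooks become global):

// test/hooks.js
const db = require('../db'); // hypothetical database module

before(function () {
    // Runs once, before all tests: start from a clean state
    return db.clearTables();
});

after(function () {
    // Runs once, after all tests: release the connection
    return db.close();
});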

If the integration tests take too long to execute, it’s possible to configure scripts in package.json and create separate commands for running unit and integration tests (separated at the directory level).
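
For example, with tests split into test/unit and test/integration directories (the names are just an illustration), the scripts section of package.json could look like this; mocha.opts is still picked up from the test directory in both cases:

"scripts": {
    "test": "mocha test/unit",
    "test:integration": "mocha test/integration"
}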