Let's start the talk I promised about continuous integration and continuous delivery.
Why are we doing this?
The simple answer is best practices. Why is it a best practice though? If you're going to fail, fail fast. We want to put our code in the hands of (trusted) users as soon as we can to spot bugs and gather feedback. I'm a fan of the lean startup, and it applies to coding practices as well. You can check out the lean startup methodology at http://theleanstartup.com/
There's a simple underlying principle: close the feedback loop as quickly as possible.
We start with an idea, build the minimal version we can, and release it. We then collect data and feedback, form new ideas, and start the cycle over again. Applied to devops, measurements can come from user feedback or from failing tests. It's with these tests that we're going to start.
All the cubits, non-trivial services, and repositories come with unit tests. Running them ensures each module works in isolation. When I sit down to code for a post, I run the tests manually once I think I'm finished. But I do forget from time to time. Automating this is going to be key.
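Before the CI server takes over, one stopgap for "I forget to run the tests" is a local git pre-commit hook. Here's a minimal sketch; it assumes a Flutter project so the hook simply calls `flutter test`, and you'd adjust the command for your own toolchain:

```shell
# Sketch: install a git pre-commit hook that runs the test suite
# before every commit. Hypothetical setup; adapt to your project.
mkdir -p .git/hooks    # normally created for you by `git init`

cat > .git/hooks/pre-commit <<'EOF'
#!/usr/bin/env bash
set -e           # abort the commit if any command (i.e. the tests) fails
flutter test     # run the unit tests
EOF

chmod +x .git/hooks/pre-commit
```

The catch, of course, is that hooks live on each developer's machine and can be skipped with `git commit --no-verify`, which is exactly why we still want the server-side automation below.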
I'll be using https://www.bitrise.io/ to automate this task. If you've viewed the source code in the past few weeks, I've already finished this integration. You'll see the Bitrise status badges at the top. These indicate whether the tests are passing or failing. Every time I make a commit, the tests are run, and a report is emailed to me. This allows me to fix mistakes before they go anywhere. Let's see my (only for now) workflow.
These are the steps, and the workflow is fairly basic. When it's triggered, it runs these steps every time. The only one really worth talking about is the "do anything" script.
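For reference, a workflow like this lives in a `bitrise.yml` file. The fragment below is an illustrative sketch, not my exact config; the step IDs and versions (`git-clone@8`, `flutter-installer@0`, and so on) are placeholders you'd confirm against the Bitrise step library:

```yaml
format_version: "11"
default_step_lib_source: https://github.com/bitrise-io/bitrise-steplib.git

workflows:
  primary:
    steps:
    - git-clone@8: {}             # check out the commit that triggered the build
    - flutter-installer@0: {}     # install the Flutter SDK on the build machine
    - script@1:                   # the "do anything" script
        inputs:
        - content: |-
            #!/usr/bin/env bash
            set -ex
            flutter pub get
            flutter pub run build_runner build --delete-conflicting-outputs
    - flutter-analyze@0: {}       # static analysis
    - flutter-test@0: {}          # run the unit tests
```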
```bash
#!/usr/bin/env bash

# fail if any command fails
set -e

# debug log
set -x

flutter pub get
flutter pub run build_runner build --delete-conflicting-outputs
```
This will run pub get, then run build_runner to produce all the generated code. Remember, we don't have those files checked in. Checking them in can lead to merge conflicts on a team, and to stale generated code being carried along. Instead, it's generated fresh every time. From there the tests are run.
One note: we'll be looking at the analyze step at a later time. Right now the basic options are used, but I'll write a separate post on what I like to use for analysis options.
The next thing we need to do is make sure these tests run on every commit. That's actually Bitrise's default behavior. So let's use it!
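The "run on every commit" behavior is controlled by the trigger map in `bitrise.yml`. A hedged sketch of what that looks like (the workflow name `primary` and the wildcard patterns are illustrative):

```yaml
trigger_map:
- push_branch: "*"                    # any pushed commit runs the workflow
  workflow: primary
- pull_request_source_branch: "*"     # so do pull requests
  workflow: primary
```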
As we get closer to a first working version, we'll look at widget and integration testing. These tests will make sure our app runs on different devices and works properly end to end. Right now I don't see much value in them: things are changing so rapidly that we'd spend more time managing these tests than actually developing. In those posts, I'll detail how I change the CI/CD pipeline.
Like what you see?
If you like what I've done so far, and want to follow along, make sure you subscribe! The free tier will keep you up to date on the progress of this app. Want early access? Want to be a beta tester of the app? Subscribe on the paid tier!