Here we are at the most exciting and also
the most expensive part of the development process:
we're going to build software.
Here we're going to ask this question: does it break?
We're going to answer it
so that we can deliver software frequently and learn quickly,
so that we can do a little bit, learn,
and then figure out where we should invest our resources next,
and so that we can develop code and refactor it with confidence.
Now, at this point,
we've narrowed our options,
because it is very expensive to build software, and the worst thing you can do
is just throw in the kitchen sink.
We want to be specifically right, or
specifically and testably wrong, about what we're doing,
so we don't end up with some feature
that only one person uses and that we have to go to the expense of maintaining.
Our cost has escalated throughout this process.
This is the most expensive part:
building and ultimately maintaining software.
Now, how do we answer this question of "does it break?" successfully?
Basically, the first step is to do
everything we've already talked about. Surprise, surprise.
We go through and make sure we know: who is our user? What's on their A-list?
Let's not solve problems that they either don't
have or where the alternatives are really great.
We find an alternative that is not so great.
Let's formulate a testable idea about what
we might do that's better than that alternative,
and let's actually test that before we go off and over-invest in software.
Then we have a nice, strong input into our usability process,
which is that we understand this value proposition,
both whether or not the user is sufficiently motivated
and how it will be delivered to them in their natural environment,
so that we're able to ask relevant questions and form
good, empathetic, testable hypotheses about usability.
We do parallel prototyping,
we A/B test our prototypes frequently and often,
and we don't do big design up front,
but we do drive to specifics amongst our team, as a way to arrive at
something that is both economical and clear
enough, or at least unambiguous enough at the get-go, to develop software against.
Once we're building, that is where so much of the money is wasted,
and so much effort can go to complete waste on the software side,
if we don't do at least a reasonable job of answering these other questions.
Now, our hypothesis in this area might look something like:
if we make a certain code change,
the system won't fail. We want to know whether or not the system will fail.
So, ideally, we have a lot of automation around building and testing,
because that allows us to test often and test with confidence,
and avoid the drudgery and the time trap of investing in a lot of manual testing;
then we just look at whether it fails.
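For example, here's a minimal sketch of what one of those automated checks might look like. The function and its behavior are hypothetical, purely to illustrate the idea; a test runner such as pytest would execute this on every commit and report pass or fail:

    # A minimal sketch of an automated test. The apply_discount function
    # is hypothetical; the point is that a test runner (pytest, for
    # example) runs this on every commit and reports pass/fail.
    def apply_discount(price, percent):
        # Hypothetical function under test.
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount_does_not_break():
        # "If we make a certain code change, the system won't fail":
        # the build goes red the moment these expectations stop holding.
        assert apply_discount(100.0, 20) == 80.0
        assert apply_discount(0.0, 50) == 0.0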
This is an exciting time for the junction of delivery, testing, and
operations (or sysadmin), which are some
of the functions most pivotal to the movement of DevOps.
DevOps is a cultural movement that takes
the siloed practices of development, testing, and operations
and pushes instead toward interdisciplinary teams
that work together on their delivery pipeline.
It's an exciting time, I think,
because companies like Facebook are delivering code to production multiple times a day,
which I just think is incredible.
Those teams must benefit so much from learning so quickly and
being able to execute with such confidence that they can deliver software this way.
The way we get there is,
we look at our delivery pipeline.
If you're delivering continuously,
that is, automatically deploying code multiple times a day,
this might be called the continuous delivery pipeline.
The idea is that the pipeline begins whenever new code is committed by a developer,
and the end of the pipeline is that
we're delivering software out to users,
actual real users, in production.
Even if we're not delivering it to all of production,
we're delivering it to some real-life users.
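As a rough sketch, assuming a hypothetical project whose build, test, and deploy steps are just shell commands (real pipelines usually live in a CI tool's configuration), the stages might be wired together like this:

    # A conceptual sketch of a continuous delivery pipeline, assuming a
    # hypothetical project with make targets for each stage; real setups
    # usually define this in a CI tool rather than a script.
    import subprocess
    import sys

    STAGES = [
        ("build", ["make", "build"]),
        ("unit tests", ["make", "test-unit"]),
        ("integration tests", ["make", "test-integration"]),
        ("system tests", ["make", "test-system"]),
        ("deploy", ["make", "deploy"]),
    ]

    def run_pipeline():
        # Each new commit would trigger this; any failing stage stops the
        # release, which is how "does it break?" gets answered automatically.
        for name, command in STAGES:
            print(f"Running stage: {name}")
            result = subprocess.run(command)
            if result.returncode != 0:
                sys.exit(f"Pipeline failed at stage: {name}")
        print("All stages passed; software delivered to real users.")

    if __name__ == "__main__":
        run_pipeline()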
The things that we do as we go through this pipeline are:
we do good unit test coverage.
These are the atomic, individual tests that test very discrete functions in
the software. Integration tests test the way functions interact with each other
and with other subsystems, like databases.
Then the system tests are where our prior work is particularly important,
because it helps us understand which happy paths to test
at the system level and allows us to prioritize that work.
This is probably, generally speaking,
the most expensive and most fragile part of the testing,
so we really want a lot of focus on what we should test.
We may have some manual validation, and then we release.
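To make those levels concrete, here is a hedged sketch, with entirely hypothetical module names, and with db_connection and running_app_url standing in for fixtures the test harness would provide, of how one imaginary "orders" feature might be covered at each level:

    # Hypothetical examples of the three test levels for an imaginary
    # "orders" feature; all names and APIs here are illustrative only.
    import requests

    def test_compute_total_unit():
        # Unit test: one discrete function, no external subsystems.
        from orders.pricing import compute_total
        assert compute_total([10.0, 5.0]) == 15.0

    def test_save_order_integration(db_connection):
        # Integration test: functions interacting with a real subsystem
        # (here, a database reached through an assumed fixture).
        from orders.repository import save_order, load_order
        order_id = save_order(db_connection, items=[10.0, 5.0])
        assert load_order(db_connection, order_id).total == 15.0

    def test_place_order_system(running_app_url):
        # System test: the prioritized happy path through the deployed
        # application, the most expensive level to run.
        response = requests.post(f"{running_app_url}/orders",
                                 json={"items": [10.0, 5.0]})
        assert response.status_code == 201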
Really, the key thing here is you want to
avoid hand-offs and you want to maximize automation.
It's great to say,
"Hey, this is what teams do and it's exciting."
My advice, and this is what you'll learn how to do over the course of this module, is
to understand what practices are in place today and talk with your team about,
"Hey, what does our process look like here?"
Even if it's manual, or nowhere near where we want it to be,
let's think about what it is right now, and
then let's make a top-three list, for example,
of things that we could do that would make it even better.
So, regardless of where you are on your journey toward continuous delivery,
I think you'll get some valuable perspective and
some specific ideas that you can apply to this question of does it break,
to improve both how reliably and how
economically you deliver software, and also to
cultivate that culture of experimentation.