Monday, July 20, 2015

DevOps Days 2015

This year I was fortunate enough to attend the Australian 2015 DevOps Days, held at Melbourne's Exhibition Centre. This is a community event run by volunteers who are interested in promoting the tools, technologies and practices that make up DevOps. This was my first DevOps Days event and it was a different format from what I had been used to at previous conferences - two days made up of a number of presentations in the morning, ignite talks (15 minute lightning talks) in the afternoon and finally open spaces, where topics are suggested and voted on by the attendees and then discussed in an open format.

Day 1 - Presentations

Nigel Dalton - REA - Keynote 
A talk about some of REA's journey towards DevOps from a management point of view. This was underpinned by a competition, presented as an IP address on Nigel's shirt. It felt like this split the audience a little: those with laptops and an interest were trying to solve the treasure hunt, while the rest listened to his talk. Some interesting points of view, especially from the management perspective. A takeaway from this talk was to mix your own teams and processes to find something that works for you, rather than trying to buy something off the shelf.

Javier Turegano - The DevOps Lab
This talk was all about changing team structure to introduce a mix of dev and ops in product teams. Although I had seen this presentation previously there was still something to take away from it - experimentation. Experiment with teams and what works for your company and don't be afraid to learn and shake things up. 

Lindsey Holmwood - Continuous Deployment for infrastructure
This talk started with the principles underpinning DevOps, with some examples - such as the CI/CD pipeline, infrastructure as code, testing as a first class citizen and measurement. Lindsey did talk about changing and testing one thing at a time - for example, change the web tier, test the web tier - however he didn't touch on how this works when there are dependencies between tiers. One other takeaway from this talk was fast feedback: it's necessary to make sure your changes are validated.

Day 1 - Ignite
Accenture - Maturity models - Interesting in that my current company does the same thing - a maturity model allows you to focus on where your efforts will be best spent

IOOF - Centralised logging - Using Logstash and the scaling problems associated with it; nothing really new here, just that logging is very necessary (but we already knew that, right?)

Thoughtworks - Mobile Dev and microservices - Interesting in that the focus was on the development of a new product, with some solutions around how different versions of the code can coexist by ignoring events they don't recognise. No automation of infrastructure - surprising, but maybe not, depending on the focus of the project.

Day 1 - OpenSpaces
There weren't many talks proposed, as I think the audience was not yet comfortable with the OpenSpaces format. I suggested one myself, on microservices, and it was scheduled towards the end of the day. A good number of people came to my room, and I was able to hear how they were solving problems such as monitoring, versioning and dependencies between services.

Day 1 - Afterparty
This was really good, having a chance to relax and mingle with the audience and talk about some of the presentations that day. All coupled with good pizza and arcade games - bonus nerd points. 

Day 2 - Presentations

Panel - The platform roadmap

Questions were posed by a facilitator and answered by representatives from a number of companies around Australia, covering how they were solving these problems.

One of the key takeaways, and indeed one which was shared around on Twitter, was that security is not only for the security team; it's a shared responsibility owned by everyone.

Steve Pereira - DevOps Traction
I really enjoyed this talk; it was all about the relationships that you need to have in order to be successful in DevOps. It was very good as it didn't concentrate on the tools but on the cultural significance of DevOps - which in my opinion is not given enough attention. Quote of the conference from this one as well: "Empathy is a large part of DevOps" - when attempting to understand another person's point of view, whether it's dev or ops.

Mujtaba Hussain - Quit your job as a dev and go do Ops
Mujtaba is a very good speaker: a funny guy who keeps you engaged with a small amount of text on his slides, and a good mix of experience and a call to action - put yourself outside of your comfort zone and do something that you're not good at in order to learn and become a better engineer.

Shiva Narayanaswamy - Event driven infrastructure
FaaS - Function as a Service - where you just write code (functions) that has no dedicated servers but runs on services where the whole infrastructure is managed for you. Very much tailored to the AWS set of services, specifically AWS Lambda. The most interesting thing here was the potential for infrastructure to respond quickly to events.
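
Not from the talk itself, but as a rough illustration of the model: a Lambda handler is just a function that gets handed an event. A minimal Python sketch might look like the following (the backup bucket and the use of S3 upload events are made up for the example):

import boto3

s3 = boto3.client("s3")
BACKUP_BUCKET = "my-backup-bucket"   # placeholder name

def handler(event, context):
    # S3 invokes the function with one or more event records
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # React to the upload by copying the new object to a backup bucket -
        # no servers to manage, just the function
        s3.copy_object(Bucket=BACKUP_BUCKET, Key=key,
                       CopySource={"Bucket": bucket, "Key": key})
    return {"copied": len(event.get("Records", []))}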

Day 2 - OpenSpaces
Certainly more interest on the second day, with about three times as many talks proposed and some people suggesting several topics. One that I was really interested in was "DevOps can't work from the ground up, it has to be from the top down" - a provocative title and something I don't particularly believe in. It was a good session about how we can start to implement DevOps practices even if it's just between two small teams - it's still dev and ops working together more closely.

The other interesting talk was around leadership and how technical people can struggle with leadership and management when stepping into that role. Some interesting points of view, and Dan Pink's Drive will be the next book that I read!

Conclusion
Overall a good experience at this conference. Nothing was overly eye-opening in terms of what is out there, as it felt like the company that I work for is reasonably mature in its DevOps practices. That is worth something in itself: it's a spur on to find that others in the group are solving the same problems that I am.

The format worked well, with the second day's open spaces much more popular than the first day's as everyone got more comfortable with the format. A good event overall, and I certainly have some ideas to take away and use in my own DevOps journey.

Sunday, June 28, 2015

Plugin Development Environment Done right


I'm not normally one to endorse a particular product or company but in this case I really have to take my hat off to Atlassian and their plugin development environment for their products. I have had previous experience with contributing to a plugin but this was the first time I had started from scratch on a new plugin.

Getting going

First off, the SDK creates a skeleton project for whichever product you are creating the plugin for. This includes example tests that you can run (very important), and right away you can build your plugin, ready to install into the product.
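
For example, if I remember the SDK commands correctly, a JIRA plugin skeleton comes from a single command (each product has its own equivalent):

atlas-create-jira-plugin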

Running in situ 

At this point you already have some working code, but the best thing is that you can run the full product right there: your plugin is built and installed, and you can test it out on a running server. This allows extremely fast development - no downloading packages from a website, setting up a database or seeding it with data. One command and it's all up and running in a couple of minutes.
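
From memory, that one command is the SDK's run goal, which builds the plugin and boots the host product with it installed:

atlas-run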

Rapid Development cycle

Since starting up the server takes quite some time, the good people at Atlassian also provide a quick way to install the plugin once the server is running. All you need to do is run another command, which compiles, packages and automatically installs your plugin into the running server in a matter of seconds, allowing you to test your changes in a very small amount of time.
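
If I recall correctly this is the SDK's install command, run from the plugin directory while the server started by atlas-run is still up (the exact command may vary between SDK versions):

atlas-install-plugin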

The above coupled with very good API documentation really eases the pain of development. 

And Finally...

All this means you get to focus hard on what your plugin is doing, and removes the cruft of having to worry about the environment that your plugin will run in. It is dead easy to get up and running and this is largely due to the work put in by the company in order to facilitate this. 

Being involved in DevOps, I want to be creating these types of tools for developers in my organisation so that they can also develop quickly and easily, without worrying about how hard it may be to get their code deployed to a running environment.

Reference

https://developer.atlassian.com/docs/getting-started
https://developer.atlassian.com/static/
https://developer.atlassian.com/docs/getting-started/set-up-the-atlassian-plugin-sdk-and-build-a-project

Thursday, March 19, 2015

Why versioning is hard, but a good thing...


When you see versioning done well it looks like a no-brainer: surely everyone is doing it, and it is well understood by the people developing the software being produced? Well, that hasn't been my experience.

Maybe I'm expecting a little too much here. I'm from a developer background and moved into a build engineering / DevOps role to fill a gap in both a company requirement and my own build and release knowledge. Devs want to churn out code, release new features and see people use them. That's all well and good, however having versioning down helps hugely when it comes to managing the product once it's out in the wild.

Dependencies are another part of this. I'll be talking about Java projects, but I'm pretty sure this applies to most other packaging ecosystems I've had experience with, such as Ruby gems, Python packages, Node modules and Red Hat packages (OK, that last one isn't a programming language, but the principles are the same).

Release versions are a one-time build of your code base at a specific point in time. This is good because I can then rely on that version as a dependency. Furthermore, I can be assured that I can develop my code on top of this version and it won't change. This is different to snapshot versions which, by definition, are not fixed but in a state of flux - good for development and for getting the latest changes.

I've worked in a number of enterprise companies where versioning (especially of in-house artefacts) has been really poor. It makes such a difference as well: left uncontrolled, your build times can exceed 50 minutes in some cases. This defeats the whole reason for breaking the code base into smaller independent modules in the first place, and ends in frustration and skipped tests.

Nowadays there really is no excuse for this, with lightweight repositories such as Git and Mercurial and binary repository managers like Sonatype's Nexus and JFrog's Artifactory. Couple that with a build tool such as Maven, with its ability to create releases, tag SCM and increment versions, and the process doesn't have to be laborious.
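
For example, assuming the POM already has its SCM and distribution repository details configured, the Maven release plugin can tag, bump the version and publish a release with two goals:

mvn release:prepare release:perform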

With the above it is then possible to start thinking about build pipelines, moving the same version of the code through different environments and eventually pushing that release to production without building it again. Admittedly there are plenty of other things that need to be done before this can be completed but it's a good start.

In reality, it seems that versioning and dependency management is the meagre allowance of the few, not the integral knowledge of the many.

Wednesday, February 25, 2015

Automating the Automation Tool

Automation tools are excellent, no doubt, but as you get acclimatised to how good they are you're always thinking of ways to trim the fat from the unnecessary time you spend using them. Take your CI (Continuous Integration) tool for example - at my current job we make heavy use of Atlassian's Bamboo for the majority of our automation. We have developed a number of scripts to do the actual work, but we use Bamboo's interface to pass parameters to the build.

Personally I'm really poor at doing repeatable, mundane tasks that don't really engage my brain but still require a decent level of concentration. One such activity is going to Bamboo's UI, finding the correct plan and entering a bunch of parameters in order to run our scripts in a particular way.

This click-happy process doesn't do much for my sanity, especially when you are doing it numerous times a day and are prone to mistakes due to complete boredom.

Enter scriptural, my little repository of scripts that I'm putting together for mundane tasks that I can automate. One such task is running Bamboo builds from a little Python script, which then checks the progress for up to a certain length of time. This allows me to define the parameters in a file for future use and change them as required. I don't even have to go to the Bamboo UI now, as my script waits until the build is complete and reports back to me on success or failure.
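
The core of it is not much more than a couple of calls to Bamboo's REST API. Roughly along these lines - a simplified sketch rather than the actual script, with the server URL, credentials, plan key and polling interval all placeholders:

import time
import requests

BAMBOO = "https://bamboo.example.com"    # placeholder server
AUTH = ("build-user", "secret")          # placeholder credentials

def run_build(plan_key, variables=None, timeout=1800):
    # Queue the build, passing any custom plan variables as bamboo.variable.* parameters
    params = {"bamboo.variable.%s" % name: value
              for name, value in (variables or {}).items()}
    resp = requests.post("%s/rest/api/latest/queue/%s.json" % (BAMBOO, plan_key),
                         params=params, auth=AUTH)
    resp.raise_for_status()
    build_number = resp.json()["buildNumber"]

    # Poll the result endpoint until the build finishes or we give up
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = requests.get("%s/rest/api/latest/result/%s-%s.json"
                              % (BAMBOO, plan_key, build_number), auth=AUTH).json()
        if result.get("lifeCycleState") == "Finished":
            return result["buildState"]      # "Successful" or "Failed"
        time.sleep(30)
    raise RuntimeError("Build %s-%s did not finish in time" % (plan_key, build_number))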

Now you might say I'm being overly precious of my time here, but the next time you're clicking through a UI to complete a task that you've done twenty times before, copy-pasting values and maybe making a mistake - think to yourself: is there a better way to do this? Sure, UIs are fancy and nice to look at, but if I've seen it before am I that interested? I wonder, is there an API sitting behind this UI...

With the APIs that most tools now expose, chances are that you too can be more productive by creating a little script that runs in a fraction of the time you would have spent clicking away with your mouse.

Check out my examples in the repository to see how I've done it. Suggestions welcome...
    

Thursday, November 20, 2014

Pessimism and Optimism - Finding the balance

Pessimism is often seen as the negative in many situations, however, is too much optimism just as bad?

In an uncharacteristic departure from my usual blogs on technical problems and possible solutions, I'd like to offer some food for thought on the social factors that affect projects, timelines and expectations.

As someone who tries wherever possible to be optimistic about whether or not I can deliver outcomes to the customer, I sometimes wonder whether or not being too optimistic is detrimental. 
For example, if I say that I'll get something done by a specific date (the optimist in me) but it turns out to be a larger body of work than I originally thought, I'd rather go above and beyond to meet the date than let people down. Now, I'm not saying that doing extra to meet deadlines is incorrect, far from it, but if it's happening constantly it may mean I'm not learning from the experience. Am I masking an underlying problem - that I didn't understand the work to be done, or that I'm poor at estimating the time needed to complete a body of work? Is optimism in these cases taking precedence over realism?

Enter pessimism. Should I be more pessimistic about what can be done, so that people's expectations are met without me having to constantly do extra to get it done? Or will this lead to bad relations with the people I'm working with?

Like everything, it's the balance that counts. Without a healthy dose of pessimism I think our promises to get work done will end up letting people down, or in burnout from trying to make up for poor estimation. In contrast, without optimism we would not have the drive to reach beyond our current situation and make things better than the status quo.
Therefore, I think the balance of the two ends up being realism - the ability to deliver the things you say you'll deliver, in a time frame that's achievable in your working day. I'd be really interested in what others think on this topic, whether this rambling is right or wrong.

Let's cover pragmatism another day...

Tuesday, September 16, 2014

AWS RDS MySQL SSL

Following my previous post on SSL with ELBs and instances, I'd like to write a quick post about SSL and RDS MySQL. This is worth another blog post because it behaves slightly differently from your normal Apache HTTP or Tomcat servers.

This is because SSL is enforced at the user level and not at the server port level. Typically, when you disable insecure traffic to a particular server you disable the port it is listening on for that traffic - such as port 80 for the Apache web server.

MySQL is different: you still connect through the default port (3306), or whatever port you have configured it to run on, but the difference is the way you connect. In order to ensure secure communication you must first create a user which requires SSL to connect.

First connect to the RDS instance as the root user:
mysql -h myinstance.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u root -p
You can then create users in this specific way:
GRANT SELECT, INSERT, UPDATE, DELETE ON db_name.* TO 'encrypted_user'@'%' IDENTIFIED BY 'suprsecret' REQUIRE SSL;
FLUSH PRIVILEGES;
You can now test this with the mysql client that you used earlier to connect to the database:
mysql -h myinstance.123456789012.us-east-1.rds.amazonaws.com -u encrypted_user -p --ssl-ca=mysql-ssl-ca-cert.pem --ssl-verify-server-cert
The above should prompt you for the password that you set up above, and allow you to connect to the database securely. 

That completes the database server end. The next part is to configure your application to use SSL to connect to the database securely. There are a number of ways in which you can do this; for this example I'm going to configure a Java application.

For this example we're going to configure the JDK to allow the secure connection. There are a number of options here, including application-container-specific configurations, but this way has the advantage that all Java applications (container or otherwise) will be able to connect.

First get the RDS MySQL server certificate from AWS:
wget https://rds.amazonaws.com/doc/mysql-ssl-ca-cert.pem
Now import this into the java trust store (replace $JAVA_HOME as necessary):
$JAVA_HOME/bin/keytool -importcert -alias rds -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit -file ./mysql-ssl-ca-cert.pem -noprompt
Connecting to the MySQL database is the same as before, however you now specify three new options in your connection string to connect using SSL:
jdbc:mysql://myinstance.123456789012.us-east-1.rds.amazonaws.com:3306/db_name?useSSL=true&verifyServerCertificate=true&requireSSL=true
For brevity I will not include the rest of the code that you use in java in order to connect.

Thursday, September 4, 2014

AWS and SSL - ELB to instance, Instance to ELB

At a glance

Here are the top tips to remember when attempting to enable SSL from an ELB to an instance, and from that instance to another ELB:


  • ELBs don't like backend self-signed intermediate/root certificates; they need the actual certificate that the instance is presenting
  • Instances are quite happy to use the root certificate when connecting to a server presenting a signed certificate.
  • OpenSSL and curl are your friends for testing (explained in detail below):

openssl s_client -connect <elb_dns>:443 -CAfile server.crt

  • You can redirect your ELB listeners to listen on an insecure port but communicate with the instance securely, which is handy for testing
  • Use ELB health checks with SSL in order to get continuous verification that SSL is working


In the detail - What we're attempting to achieve


SSL (TLS) is the standard for encrypting traffic between a client and a server. One of the best explanations that I've seen about it is here so I'll leave you to read through (I know I found it useful) and get an overview of SSL. 

In this exercise we'll be attempting to enable encrypted traffic from an ELB to an instance, and from that instance to another ELB. For example, say I have a web tier and an application tier; I would typically have an ELB in front of each, so I need to configure both front end and back end certificates on each ELB and also on the instances.

I'll be using a certificate which has been signed by an internal root CA, as it's a little more complicated and more like the scenario you'll be presented with when setting up SSL with your own server. OpenSSL has docs on creating signing requests and self-signed certificates.


ELB configuration

A great guide for creating HTTPS ELBs is already provided by the kind people at AWS, so I'll only concentrate on the parts that tripped me up.

Front End

Front end certificates in AWS are stored in IAM, allowing you to choose them when creating new ELBs. This is pretty smart, as you'll typically use the same certificates for many ELBs. Here, it's just a matter of selecting an existing certificate or uploading a new one as per the guide.


Back End

This was a real pain. I assumed that ELBs would work in the same way that the bundle file does in your OS - that is, you could upload the root certificate to the backend of the ELB and it would be enough to work with the certificate being presented by the instance. However this was not the case, so my number one tip is to forgo the certificate chain and just upload the certificate that is being presented by the instance; uploading the certificate to the backend is described in the AWS guide.


Testing

Finally we get to do some testing. I completed this using openssl to check the front end certificate and then curl to test getting a page from my web server once it was configured (see below).

openssl s_client -connect <elb_dns_name>:443 -CAfile /tmp/root.crt
This will connect to your ELB and verify that the root certificate you have specified locally will work with the certificate being presented by your ELB. You should get an OK if everything is configured correctly.


Instance Configuration

In this case I'm going to go with a simple example and use an Apache web server configuration for SSL. There are numerous detailed posts out there about configuring Apache for SSL, so I'll only go over the bare minimum that pertains to the AWS ELB specifics.


Front End

Your instance (in this case Apache) will need to be configured to present an SSL certificate to the client (the ELB). In order to do this, edit the httpd.conf or vhost conf (more info here) that you have configured to listen on 443 to have these values:

SSLEngine on
SSLCertificateFile /path/to/www.example.com.crt
SSLCertificateKeyFile /path/to/www.example.com.key

As we learned in the explanation of SSL, the certificate, which is just a fancy public key, only works with the corresponding private key file, which is needed to decrypt the data that the client is sending to the server. After restarting the Apache server you should be able to view the certificates that the server is presenting:


openssl s_client -showcerts -connect localhost:443


Back End

This is where the configuration is a bit easier and you can generally use a root certificate as opposed to the certificate that is being presented by the ELB. The advantage of this is that the root certificate will generally be valid for longer meaning you don't have to change them as often. 

For this example I'm importing the root certificate into the operating system bundle file that most tools, including curl, use by default. Working with RHEL 6.5, we can run the following command to import the certificate:

openssl x509 -text -in /tmp/server.crt >> /etc/pki/tls/certs/ca-bundle.crt

This can then be verified by the following command:

openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt /tmp/server.crt

Testing

Front end testing of the Apache web server can be completed by redirecting your insecure (port 80) listener on your web ELB to talk to the secure port of your instance, like the following:
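
If you prefer the CLI to the console, the listener mapping for a classic ELB might look something like this (the load balancer name is a placeholder):

aws elb create-load-balancer-listeners --load-balancer-name my-web-elb --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTPS,InstancePort=443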


You can then connect to your ELB via HTTP and know that it's connecting to your instance securely. This is just an option when you want to test individual configurations.

For the backend, not only will you want to verify that the certificate is installed correctly (using openssl verify), but you'll also want to check that the updated bundle can be used to connect to the ELB. This can be done with the following:

openssl s_client -connect <elb_dns_name>:443 -CAfile /etc/pki/tls/certs/ca-bundle.crt

This should return you a bunch of text (including the certificate presented by the ELB) but at the bottom you should get an OK if everything is configured correctly. 

Finally: End to End testing

Assuming you have a page on your web server (like an index.html file or something), you can use curl to get the file securely through the ELB. I completed this on a VM where I had already installed the root certificate into the ca-bundle, with the following command:

curl -v https://<elb_dns_name>/index.html

If everything is configured correctly then you should not notice anything different from using plain HTTP, and you should get your page displayed on the command line.

Added Extras

Constant SSL certificate Health Check

One tip that I got from a colleague of mine is to configure the health check on your AWS ELB to use SSL, meaning that you'll get continuous verification that the SSL certificate between your ELB and instance is valid. This is particularly useful when you consider that certificates expire:
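
As an illustration, with the classic ELB CLI the health check might be configured along these lines (the name, path and thresholds are placeholders):

aws elb configure-health-check --load-balancer-name my-web-elb --health-check Target=HTTPS:443/index.html,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2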


Verify the SSL key matches the certificate

Another handy thing to verify, if you're not generating the public/private key yourself, is to make sure that the certificate matches the key. This is explained very well in another article.
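
One common way to do the check (assuming an RSA key pair) is to compare the modulus of the certificate and the key - the two hashes should be identical:

openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5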