For a while now, I'd been accustomed to deploying Haskell applications by building them in a CI service like CircleCI or TravisCI, compressing the resulting binaries with UPX, and copying them to S3, from where Ansible copied them over to the target servers - a very nice and simple strategy we employed back when I worked at Zalora.
At Anapi, we have some applications deployed as microservices, and I didn't want to burn time on deployments, so we decided to run these apps on AWS Elastic Beanstalk, which gives us nice features like autoscaling and relieves me of tasks like managing load balancing. The workflow I configured in CircleCI is pretty simple:
```yaml
version: 2
jobs:
  build:
    # .. build and test instructions here
  deploy_staging:
    # ...
    docker:
      - image: kenny/foo:latest
    steps:
      - checkout
      - run: stack setup
      - run: stack install --local-bin-path .
      - setup_remote_docker
      - run: eval $(aws ecr get-login --region ap-southeast-1 --no-include-email)
      - run: docker build --rm=false -t foo.dkr.ecr.ap-southeast-1.amazonaws.com/kenny/foo:$CIRCLE_SHA1 -f Dockerfile .
      - run: docker push foo.dkr.ecr.ap-southeast-1.amazonaws.com/kenny/foo:$CIRCLE_SHA1
      - run: aws/staging.sh $CIRCLE_SHA1
workflows:
  version: 2
  pipeline:
    jobs:
      - build
      - deploy_staging:
          requires:
            - build
          filters:
            branches:
              only:
                - staging
```
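The `aws/staging.sh` script at the end of the pipeline isn't shown here, but as a rough sketch of what it could do (the application name, environment name, bucket, and port below are all hypothetical placeholders, not my actual setup): it registers the image tagged with the commit SHA as a new Beanstalk application version and points the staging environment at it.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of aws/staging.sh - not the actual script.
# Takes the commit SHA ($CIRCLE_SHA1) and deploys that image tag to staging.
set -euo pipefail

SHA="$1"
APP_NAME="foo"             # assumed Beanstalk application name
ENV_NAME="foo-staging"     # assumed staging environment name
BUCKET="foo-deployments"   # assumed S3 bucket for version bundles

# Minimal Dockerrun.aws.json pointing Beanstalk at the freshly pushed image
cat > Dockerrun.aws.json <<EOF
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "foo.dkr.ecr.ap-southeast-1.amazonaws.com/kenny/foo:${SHA}",
    "Update": "true"
  },
  "Ports": [{ "ContainerPort": "8080" }]
}
EOF

# Bundle it, upload, register the new application version, and deploy
zip "${SHA}.zip" Dockerrun.aws.json
aws s3 cp "${SHA}.zip" "s3://${BUCKET}/${SHA}.zip"

aws elasticbeanstalk create-application-version \
  --application-name "$APP_NAME" \
  --version-label "$SHA" \
  --source-bundle S3Bucket="$BUCKET",S3Key="${SHA}.zip"

aws elasticbeanstalk update-environment \
  --environment-name "$ENV_NAME" \
  --version-label "$SHA"
```

Keeping the deploy logic in a small script like this also means the same flow works from a laptop in a pinch, not just from CI.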
For the Docker images I tend to start with a very minimal distro and maintain separate images for CI and Beanstalk. The only caveat is that reloading the application environments takes a bit longer compared to the "S3 copy and deploy via Ansible" route, but it's a tradeoff I can live with.
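For the runtime side, a minimal image can be little more than a slim Debian base plus the shared libraries a stack-built binary links against. A sketch along those lines (the binary name `foo` and the port are assumptions for illustration):

```dockerfile
# Hypothetical sketch of the Beanstalk runtime image.
# GHC-built binaries typically need libgmp at runtime; everything else
# comes in via the copied binary itself.
FROM debian:stretch-slim

RUN apt-get update && \
    apt-get install -y --no-install-recommends libgmp10 ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Binary produced in CI by `stack install --local-bin-path .`
COPY foo /usr/local/bin/foo

EXPOSE 8080
CMD ["/usr/local/bin/foo"]
```

The CI image, by contrast, carries the full toolchain (stack, GHC, build deps), which is exactly why keeping the two images separate pays off in final image size.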