I’ve wanted to post about this project for a while, but since I wasn’t actually using the setup, I never felt motivated to write about it. Here in spring 2020, SARS-CoV-2 (COVID-19) means canceled weekend plans, no trips, and no commute. All of that means plenty of time to build a new exercise habit!
I’ve reinvigorated my love of the setup and the classes, along with my motivation to share how I put it all together. Now that I’ve been pretty much relegated to home for 9 weeks, I’ve built a new habit and made some tweaks to the setup. So let’s dive in!
Peloton has become an at-home exercise giant, but it also comes with a significant price tag. For a while now, they’ve offered their digital content: almost all of the cycling workouts, the strength training, and, to the best of my knowledge, every new program they’ve added to their All-Access Membership. Take a look around the internet, and you’ll see lots of people who use the Peloton Digital Membership via tablets, phones, or the web.
From my perspective, this DIY setup was a great way to save some money and see if this style of exercise would work for me. I had a hunch that it would, given that I used to enjoy spin classes at a local gym (though I never built that habit either, despite enjoying them). This investment felt far lower risk than the almost $3,000 first year for the Peloton bike, accessories, and 12 months of the subscription.
For the basic bike, I followed many internet recommendations and got the Sunny Health & Fitness 49 Lb Chromed Flywheel, Silent Belt Drive Indoor Cycle Bike with Leather Resistance Pad. Despite that insanely long name, the bike is solid and affordable (or at least it was much more affordable when I purchased it).
Overall, it is a sturdy bike with relatively quiet operation for friction resistance; it is easy to maintain, easy to enhance, and feels like it will last a long time.
Just having the bike is the bare minimum for a great workout. There are a few items you’ll want to add right away to make the most of your bike and the Peloton Digital Membership.
Unless you can place your bike in front of a TV with compatible streaming support, you’ll need a tablet or phone in front of you. I opted to place both an old iPad I had already, and my phone, on the handlebars of the bike.
This allows me to stream the Peloton classes as well as have my phone for another task. I use the phone and a couple Bluetooth accessories to help with measuring my effort and keeping in sync with the class.
The cadence sensor is critical to measuring my effort during the class, since the instructors call out an expected cadence while the class is ongoing. Using the Wahoo app on my phone keeps the cadence display in front of me while the class content streams to the iPad. The heart rate monitor measures how hard I’m working, and helps the Wahoo app estimate the calories burned. Since neither the iPad app nor the Wahoo app can measure the resistance setting of the bike, heart rate is the best measure of output I’ve got.
Finally, I want to keep the iPad and my phone charged, so a simple USB charger and some long 6ft cables zip-tied to the bike frame make that easy.
The end result looks pretty good!
The Peloton Digital apps are pretty good. On my iPad the app provides a decent view of all the classes, good filtering options, and the ability to preload a previously recorded class so you don’t need to stream it on demand. With the Peloton Digital Membership, I’ve even been able to participate in live classes. My iPad is old enough that it no longer receives iOS updates from Apple, so I believe I’m no longer getting updates to the Peloton app itself, but so far it still works.
On Android, the app is very similar and will meet your basic needs. It doesn’t appear to allow preloading of classes, and the profile stats are more limited. The Android app does support Chromecast, which is a handy way to get the video content onto a TV. I’ve used this approach to leverage some of the other content (stretching, strength, etc.).
One other thing to know: the Peloton Digital Membership covers only a single member, while the Peloton All-Access Membership supports multiple users.
If you are going to have a problem with the bike right off the bat, it is likely to be the fit. After several rides I have gotten the settings where I’d like them, but you don’t have a ton of flexibility in the various adjustments. The other thing to be aware of is that as a friction resistance bike, it can be a bit loud when you are moving quickly under higher resistance.
The largest annoyance by far is that you have no real measurement of resistance. The Peloton bike provides a numeric resistance value, and the instructors call out ranges for you to set and work within. With no numeric readout, and with this bike’s resistance range not matching the Peloton’s anyway, you are working blind in this area. The good news is that you’ll get a feel for how hard you should be working relative to other moments in the ride, and the instructors describe the effort as a flat road, moderate hill, etc.
Chromecast from Android, seriously, what? I haven’t tried this in a while, but Chromecast playback initiated from Android is very problematic: in my experience it stutters and buffers often, then rewinds about a minute before resuming. This is frustrating and makes it all but useless.
What is more infuriating is that you can also initiate Chromecast playback from the web version of the Peloton Digital app, and that plays fine. My understanding of Chromecast is that it sets up a direct stream from the content provider’s servers to the Chromecast, so the initiation method should be irrelevant to playback quality; all Peloton Chromecasting should perform equally. It doesn’t truly matter, however, as I really only use the iPad on the bike, and when I want to stream other classes to a Chromecast, I can start them from the Chrome browser on my phone.
As I started to get more active with the bike in March, I decided it would be nice to make a small enhancement. I replaced the pedals with SPD clip pedals and bought some SPD shoes, having never used clip-in pedals on any bike before. The pedals I bought are clip-in on one side only, so I don’t need to put on the shoes every time, and someone else can use the bike without SPD shoes. There is something nice about the ritual of clipping in for a workout, and it does feel more solid and effective to cycle with the clip-in shoes.
The main goal of the DIY Peloton is to save money, so let’s take a look.
At the time of purchase:
Item | Cost September 2018 | Cost May 2020 |
---|---|---|
Bike | ~$300 | ~$600 |
Tablet Holder | ~$23 | ~$14 (another option) |
Phone Holder | ~$13 | ~$28 |
Bike Mat | ~$30 | ~$33 |
Cadence Sensor | ~$40 | ~$40 |
Heart Monitor | ~$50 | ~$80 (another option) |
USB Charging Block and Cables | ~$46 | ~$22 + Cables |
iPad | Already Owned | ~$330 |
Weights | Already Owned | ~$11 |
Total | ~$502 | ~$1158 |
Enhancement Items Added Spring 2020
Item | Cost |
---|---|
SPD Shoes | ~$100 |
SPD Pedals | ~$38 |
Recurring Cost:
Item | Peloton Digital | Peloton All-Access |
---|---|---|
Monthly | ~$14 | ~$39 |
Annually | ~$168 | ~$468 |
In May 2020, the “Works Package” from Peloton, which has a similar set of gear (shoes, mat, weights, heart monitor, etc.), carries a retail price of $2,494. Adding a year of the All-Access Membership makes the first-year cost of Peloton ownership $2,494 + $468 = $2,962. My first-year cost of ownership (given my already-purchased iPad and weights) was $502 + $168 = $670. One note: the Peloton Digital Membership used to cost a bit more, but I don’t recall exactly when the price went down. Even if I had needed to purchase the weights and iPad, I’d be looking at a first-year cost of $1,011. Buying everything new today, with slightly higher prices on several items, the first year would cost $1,158 + $168 = $1,326. The DIY approach saves at least half the cost of the full retail Peloton.
This is a great option! You can get started for far less than the ‘real’ bike. There are downsides, but they don’t stop you from getting a great workout or building positive habits. I’m certain the Peloton Bike is very nice, and the software that goes with it appears to be much stronger and more engaging. Ultimately I like the exercise program that Peloton has developed, and would recommend it to anyone interested. How you get involved, and what level of investment you make, is of course up to you!
It always helps to have a gym buddy!
To accomplish this task, I set up pre-commit. Pre-commit is a really easy way to set up useful git pre-commit hooks, as well as any custom command you want to run.
Pre-commit can be installed into your Python setup, or on OSX via Homebrew by running brew install pre-commit. You’ll create a .pre-commit-config.yaml in the root of the git working directory that describes the hooks you want to fire on pre-commit. Then you can set up the actual git hook that fires everything described in .pre-commit-config.yaml by running pre-commit install. After that, you can modify the .pre-commit-config.yaml file and adjust your hooks without another install. If you ever need to skip this stuff for a commit, just add --no-verify to the commit command (but don’t! fix the problem instead!)
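Putting it all together, the day-to-day commands look like this (a quick sketch assuming Homebrew on OSX and an existing git repository):
$ brew install pre-commit        # install the pre-commit tool itself
$ pre-commit install             # register the git pre-commit hook for this repo
$ pre-commit run --all-files     # optional: run every hook against the whole repo once
$ git commit --no-verify         # escape hatch: skip the hooks for one commit (but don't!)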
I’ll start with the common hooks, defined in .pre-commit-config.yaml:
- repo: git://github.com/pre-commit/pre-commit-hooks
sha: v1.4.0
hooks:
- id: trailing-whitespace
- id: check-merge-conflict
- id: check-yaml
- id: end-of-file-fixer
- id: no-commit-to-branch
args: [-b, master, -b, production, -b, staging]
Here I’m grabbing hooks that are public and part of the pre-commit core, picking a few specific ones for things I care about:
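- trailing-whitespace: trims trailing whitespace from lines.
- check-merge-conflict: fails if files still contain merge conflict markers.
- check-yaml: validates that YAML files parse.
- end-of-file-fixer: ensures files end with a single newline.
- no-commit-to-branch: blocks committing directly to the branches listed in args (master, production, and staging here).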
So those are the basics; now onto my Elixir-specific fun.
Starting a new section in .pre-commit-config.yaml, we’ll define an array of custom hooks:
- repo: local
hooks:
First, we’ll make sure the tests run if any source or test file changes:
- id: mix-test
name: 'elixir: mix test'
entry: mix test
language: system
pass_filenames: false
files: \.exs*$
The files pattern determines which changed files trigger this hook, and we set pass_filenames to false so that we run the full suite.
Then, we’ll make sure all changed source and Elixir script files are formatted correctly. Leaving pass_filenames at its default of true will run the formatter on just the changed files.
- id: mix-format
name: 'elixir: mix format'
entry: mix format --check-formatted
language: system
files: \.exs*$
Then, we’ll make sure everything compiles without warnings.
- id: mix-compile
name: 'elixir: mix compile'
entry: mix compile --force --warnings-as-errors
language: system
pass_filenames: false
files: \.ex$
Finally, we’ll run credo on the entire project if any elixir or elixir script file changes.
- id: mix-credo
name: 'elixir: mix credo'
entry: mix credo
language: system
pass_filenames: false
files: \.exs*$
Here is the final file. It may become too slow as the project grows, and since CI performs a lot of these same checks, I’ll likely trim it down, or switch a few hooks back to taking the passed filenames rather than running against the full project when only one piece changes.
.pre-commit-config.yaml
- repo: local
hooks:
- id: mix-test
name: 'elixir: mix test'
entry: mix test
language: system
pass_filenames: false
files: \.exs*$
- id: mix-format
name: 'elixir: mix format'
entry: mix format --check-formatted
language: system
files: \.exs*$
- id: mix-compile
name: 'elixir: mix compile'
entry: mix compile --force --warnings-as-errors
language: system
pass_filenames: false
files: \.ex$
- id: mix-credo
name: 'elixir: mix credo'
entry: mix credo
language: system
pass_filenames: false
files: \.exs*$
- repo: git://github.com/pre-commit/pre-commit-hooks
sha: v1.4.0
hooks:
- id: trailing-whitespace
- id: check-merge-conflict
- id: check-yaml
- id: end-of-file-fixer
- id: no-commit-to-branch
args: [-b, master, -b, production, -b, staging]
I hope this helps you keep your Elixir projects nice!
I had a few motivations for giving GitLab a try, but the biggest one is the free hours of Continuous Integration (CI). There are several guides for setting up testing for an Elixir project, but my setup ended up a little different, so I figured I’d throw up a post about it (and have it as a reference for myself in the future, one of the best reasons to blog!)
Everything starts with a .gitlab-ci.yml file:
image: elixir:1.7.3
services:
- postgres:9.6
variables:
POSTGRES_DB: app_name_test
POSTGRES_HOST: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: "postgres"
MIX_ENV: "test"
This part is pretty straightforward: set the job to run with the right Elixir image, connect PostgreSQL (if needed), and set up a few global variables.
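On the application side, here is the matching test configuration (config/test.exs in a standard Phoenix app):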
use Mix.Config
# We don't run a server during test. If one is required,
# you can enable the server option below.
config :app_name, Web.Endpoint,
http: [port: 4001],
server: false
# Print only warnings and errors during test
config :logger, level: :warn
# Configure your database
config :app_name, AppName.Repo,
adapter: Ecto.Adapters.Postgres,
username: System.get_env("POSTGRES_USER") || "postgres",
password: System.get_env("POSTGRES_PASSWORD") || "postgres",
database: System.get_env("POSTGRES_DB") || "app_name_test",
hostname: System.get_env("POSTGRES_HOST") || "localhost",
pool: Ecto.Adapters.SQL.Sandbox
This is set up to mostly just pull the env variables from the CI run, or default to something reasonable so that your tests will still run locally.
Next up in the .gitlab-ci.yml file:
before_script:
- mix local.hex --force
- mix local.rebar --force
Here we are making sure that every job that runs (each stage of the pipeline) has hex and rebar ready to go. This is important when we add caching later, as these install outside the project directory.
Now we will add stages to the .gitlab-ci.yml file:
compile:
stage: build
script:
- apt-get update && apt-get -y install postgresql-client
- mix deps.get --only test
- mix compile --warnings-as-errors
test:
stage: test
script:
- mix ecto.create
- mix ecto.migrate
- mix test
lint:
stage: test
script:
- mix format --check-formatted
- mix credo
The setup here runs the compile job in the build stage, so this happens first. We make sure the postgres client is ready, fetch all the dependencies for the app, and compile all our Elixir code. I set --warnings-as-errors to make sure I’m not leaving anything deprecated or unused behind in my code.
The test stage has two jobs, test and lint, which can run in parallel on GitLab’s servers or any other connected runner. The test job sets up ecto and runs the test suite. The lint job makes sure everything is formatted, and clears a credo check.
Lastly, we want builds to go faster, so we add a block for caching configuration:
cache:
paths:
- _build
- deps
- assets/node_modules
This way our dependency downloads, build output, and node modules in assets are preserved rather than rebuilt from scratch each time. Mix and Yarn should still handle updating dependencies when you change them.
This approach seems to be working great! For my really simple starting Phoenix app with just a few tests, the original build took about 4.5 minutes, and each stage now runs in about 1.75 minutes. That is of course a lot more time than the tests themselves take, but there is a lot of overhead to get the tests ready to run. The formatting and credo runs (plus tests and warning-free compilation) I also do myself with git pre-commit hooks (I should do a post on that), so it’s unlikely something wrong would slip in, but I like the redundancy, and having this automated is a lot of fun!
Happy CI-ing!
Here is the whole file for reference:
image: elixir:1.7.3
services:
- postgres:9.6
variables:
POSTGRES_DB: app_name_test
POSTGRES_HOST: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: "postgres"
MIX_ENV: "test"
cache:
paths:
- _build
- deps
- assets/node_modules
before_script:
- mix local.hex --force
- mix local.rebar --force
compile:
stage: build
script:
- apt-get update && apt-get -y install postgresql-client
- mix deps.get --only test
- mix compile --warnings-as-errors
test:
stage: test
script:
- mix ecto.create
- mix ecto.migrate
- mix test
lint:
stage: test
script:
- mix format --check-formatted
- mix credo
One thing that I recently embarked on was making an Ansible version of the Chef cookbook that I use most often. The recipe helps me install the correct public keys into the authorized keys file for SSH access to servers I maintain. It uses GitHub organizations and users to fetch the public keys that GitHub users have added to their profiles. It also supports adding specific hardcoded keys, useful for deployment scripts or other types of programmatic access.
You can view the Ansible Role that I created on Galaxy and the source.
Vagrant is your friend. I’ve used Vagrant for a number of different development tasks, but it worked especially well for developing an Ansible role. It is always important to test your work, and Vagrant made it really easy to start fresh and test again and again.
Get started by reading the documentation from Ansible; it is a great guide, and pay particular attention to the section on good practices. Ansible gives some really great building blocks, so I didn’t need to write any custom code, just leverage the built-in modules.
Compared to Chef, it was a little harder to write since I couldn’t just mix in Ruby code to accomplish what I wanted; I had to stick to the building blocks Ansible provides. I think that in the end this will make it more maintainable and more resistant to version changes of Ansible, which is one of my largest issues with Chef.
The tight integration of GitHub and Ansible’s Galaxy is awesome, making it easy to publish and keep things up-to-date.
I was able to pretty easily test Chef recipes with Vagrant or Docker. Testing with Ansible on Travis-CI is pretty easy, and I was able to also leverage Docker for testing using this great guide from Jeff Geerling.
Overall it was a great experience, and I’d look forward to doing it again if the need arises. There is also room for improvement in the current role. Right now, you can only use it to install keys into a single user account, unless you include the role more than once with different variables each time. It would be nice to provide configuration variables that allow the role to be included just once to set up all the necessary user accounts. A sketch of that workaround is below.
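For illustration, here is roughly what that include-it-twice workaround looks like in a playbook. Note that the variable names below are hypothetical placeholders, not the role’s actual interface; check the role’s README on Galaxy for the real ones.
- name: Install SSH keys for each account
  hosts: all
  roles:
    # keys_user and github_users are illustrative variable names only
    - { role: smartlogic.github_keys, keys_user: deploy, github_users: [octocat] }
    - { role: smartlogic.github_keys, keys_user: admin, github_users: [defunkt] }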
I start with the basics in the first post, Web App Security Part 1: An Introduction.
The series continues, digging deeper into securing applications with best practices, and a few items beyond the basics. Check out the post, Web App Security Part 2: Digging Deeper.
The series rounds out with a walkthrough of how SmartLogic works to make security part of our every day work. Check out the concluding post, Web App Security Part 3: The SmartLogic Process.
I hope you enjoy the series!
In the presentation I cover the basics of the language. Here are the direct links for Introduction to Elixir.
I’ve used a variety of Markdown editors, and some with preview features, but none that really gave me what I wanted.
When I looked around for alternatives, I found some that looked promising, but I settled on a free and open source solution.
I’d heard of, and tried, the Atom editor before, but my development workflow is so centered on tmux and Vim that I’ve never really gotten any windowed editor to stick.
Atom is nice, very customizable, with a ton of great plugins.
It even has plugins that understand Jekyll blogs and their folder structure. The feature I use most, however, is the ability to show a great Markdown preview right next to the Markdown document I’m editing.
If you are looking for a GUI editor for markdown documents, like writing blog posts, I suggest Atom.
]]>One thing that is crucial to any server setup is ensuring that your SSH configuration is sound.
A great role that I’ve used for my Ansible SSH configuration is ssh-hardening.
A pitfall that the author points out in the Readme is that it is possible that your user account will be locked out after the role is applied. I’ve found this to be particularly true for the ubuntu account on EC2 servers.
In order to make sure I can continue to get into that user with my AWS key pair, I’ve started adding this to a role that runs right after the ssh-hardening role in my playbooks:
- name: Check if Ubuntu is locked
command: grep -q "ubuntu:!:" /etc/shadow
register: check_ubuntu_lock
ignore_errors: True
changed_when: False
become: true
- name: Unlock Ubuntu
command: usermod -p "*" ubuntu
when: check_ubuntu_lock.rc == 0
become: true
This checks the shadow file for the ! indicator that would lock the account, and sets the password hash to *, which unlocks the account while ensuring the user can only log in via SSH keys.
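For context, here is a minimal sketch of how that ordering might look in a playbook (unlock_ubuntu is just a placeholder name for a role containing the tasks above):
- name: Harden SSH without locking out ubuntu
  hosts: all
  roles:
    - { role: dev-sec.ssh-hardening, become: yes }
    # placeholder role wrapping the check/unlock tasks shown above
    - { role: unlock_ubuntu }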
I’m really enjoying using Phoenix more and more. For those that don’t know, Phoenix uses Brunch as its default way to manage JavaScript, image, and CSS assets.
I’m not opposed to Brunch, but I’d need to learn it. I’m also not opposed to Webpack, and I need to learn more about that as well.
Given that both are new to me, and neither are completely similar to the Rails asset pipeline, I felt that it was a good time to step back and decide what I wanted to learn.
Right around this time, Rails also embraced Webpack. On top of that, many of my coworkers, who are way more excited about JavaScript than I am, seem to really prefer Webpack.
There are a number of tutorials out on the web to get this set up. I thank many of them for giving me the context to figure this out, but like all things, technology moves fast and our blog posts get out of date. Nothing I found worked out of the box with Phoenix 1.3 and Webpack 3.5.5.
The good news: in Phoenix 1.3 the assets folder is at the top level, and is really distinct from how the rest of our application functions.
You can start a new Phoenix app with the --no-brunch flag, or you can remove Brunch. I’ll point out the one major difference below, but in general, if you already have a Brunch assets folder, you can remove it.
The folder structure you’ll want to have at this point in the process should look like:
assets
\_ js
\_ css
\_ static
\_ images
\_ robots.txt
Now you can create your JavaScript package file, assets/package.json:
{
"dependencies": {
"phoenix": "file:../deps/phoenix",
"phoenix_html": "file:../deps/phoenix_html"
},
"devDependencies": {
"babel-core": "^6.26.0",
"babel-loader": "^7.1.2",
"babel-plugin-transform-es2015-modules-strip": "^0.1.1",
"babel-plugin-transform-object-rest-spread": "^6.3.13",
"babel-preset-es2015": "^6.24.1",
"babel-preset-react": "^6.24.1",
"bootstrap": "^4.0.0-beta",
"copy-webpack-plugin": "^4.0.1",
"css-loader": "^0.28.0",
"extract-text-webpack-plugin": "^3.0.0",
"import-glob-loader": "^1.1.0",
"jquery": "^3.2.1",
"node-sass": "^4.5.2",
"popper.js": "1",
"react": "^15.6.1",
"react-dom": "^15.6.1",
"sass-loader": "^6.0.3",
"standard": "^10.0.2",
"style-loader": "^0.16.1",
"webpack": "^3.5.5"
},
"scripts": {
"watch": "webpack --watch --color",
"deploy": "webpack -p"
}
}
There is a lot in there, but it’s the tooling I wanted to use on this project.
I recommend you use Yarn to install and manage this stuff.
Now the Webpack config file, assets/webpack.config.js:
var path = require('path')
var ExtractTextPlugin = require('extract-text-webpack-plugin')
var CopyWebpackPlugin = require('copy-webpack-plugin')
var webpack = require('webpack')
var env = process.env.MIX_ENV || 'dev'
var isProduction = (env === 'prod')
module.exports = {
entry: {
'app': ['./js/app.js', './css/app.scss']
},
output: {
path: path.resolve(__dirname, '../priv/static/'),
filename: 'js/[name].js'
},
devtool: 'source-map',
resolve: {
extensions: ['.js', '.jsx']
},
module: {
rules: [{
test: /\.(sass|scss)$/,
include: /css/,
use: ExtractTextPlugin.extract({
fallback: 'style-loader',
use: [
{loader: 'css-loader'},
{
loader: 'sass-loader',
options: {
includePaths: [
path.resolve('node_modules/bootstrap/scss')
],
sourceComments: !isProduction
}
}
]
})
}, {
test: /\.(js|jsx)$/,
include: /js/,
use: [
{ loader: 'babel-loader' }
]
}]
},
plugins: [
new CopyWebpackPlugin([{ from: './static' }]),
new ExtractTextPlugin('css/app.css'),
new webpack.ProvidePlugin({
$: "jquery",
jQuery: "jquery",
"window.jQuery": "jquery",
Popper: ['popper.js', 'default']
})
]
}
Finally we need to tell Phoenix how to invoke Webpack to watch our assets while we develop. Since Phoenix expects our asset tooling (normally Brunch) to build into the priv/static/ folder, everything Phoenix does to serve up those files, and hot reload when they change, will still work.
In your config/dev.exs file:
config :appname, AppName.Endpoint,
http: [port: 4000],
debug_errors: true,
code_reloader: true,
check_origin: false,
watchers: [yarn: ["run", "watch",
cd: Path.expand("../assets", __DIR__)]]
If you didn’t include Brunch, then the watchers: key will be an empty list, and if you did, you can just change it to what I have above. If you aren’t using Yarn (you should be), then you’ll need to tweak this a bit.
Since this is the first time I’ve ever really dug deep into Webpack, let’s walk through the config file.
var path = require('path')
var ExtractTextPlugin = require('extract-text-webpack-plugin')
var CopyWebpackPlugin = require('copy-webpack-plugin')
var webpack = require('webpack')
var env = process.env.MIX_ENV || 'dev'
var isProduction = (env === 'prod')
Here we are just importing some things we will need, and setting up the environment, looking for our Elixir mix env, but defaulting to dev. We can use this to selectively do production optimizations like compressing and uglifying.
module.exports = {
entry: {
'app': ['./js/app.js', './css/app.scss']
},
output: {
path: path.resolve(__dirname, '../priv/static/'),
filename: 'js/[name].js'
},
devtool: 'source-map',
resolve: {
extensions: ['.js', '.jsx']
},
Here we set up the main entry point, the app, and its two major files. We will have a js/app.js that imports all other files we need for the app, and a css/app.scss that imports the CSS and SCSS for our app.
Then we define where the outputs go, specifying that they should go in the Phoenix priv/static/ folder, and that the application JavaScript bundle should go into the js folder there, with the entry point name and js file extension.
Then we enable source maps.
Finally we specify the resolve extensions for JavaScript imports, so that we can include .jsx files.
module: {
rules: [{
test: /\.(sass|scss)$/,
include: /css/,
use: ExtractTextPlugin.extract({
fallback: 'style-loader',
use: [
{loader: 'css-loader'},
{
loader: 'sass-loader',
options: {
includePaths: [
path.resolve('node_modules/bootstrap/scss')
],
sourceComments: !isProduction
}
}
]
})
}, {
test: /\.(js|jsx)$/,
include: /js/,
use: [
{ loader: 'babel-loader' }
]
}]
},
Here is the real meat. The first rule covers our scss and sass files in the css folder. We run them through the Extract Text Plugin, so that they end up in their own resultant file, with a fallback to the standard style-loader. Then we use specific loaders to read in css files and sass files. For the sass-loader, we include sourceComments if we aren’t building a production bundle, and we load up the Bootstrap 4 scss path so that we can import those files into our app.scss file.
In the second part, we are going to pass any JavaScript and jsx files through the babel-loader.
For the record, here is the .babelrc file I’ve got so far:
{
"presets":[
"es2015", "react"
]
}
Finally at the end of the Webpack config:
plugins: [
new CopyWebpackPlugin([{ from: './static' }]),
new ExtractTextPlugin('css/app.css'),
new webpack.ProvidePlugin({
$: "jquery",
jQuery: "jquery",
"window.jQuery": "jquery",
Popper: ['popper.js', 'default']
})
]
}
We set up the Copy Webpack Plugin to copy our static files, like robots.txt and images, into the priv folder. We also instantiate the Extract Text Plugin, naming css/app.css as the output file for the styles extracted by the rule above. Then finally we set up a few global namespace items so that they can be bundled correctly by Webpack even if they aren’t imported explicitly in the file that references them.
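With the build configured, the starting application files are tiny. First, js/app.js: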
import 'phoenix_html';
import 'bootstrap';
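And css/app.scss: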
@import "bootstrap";
body {
padding-top: 1.5rem;
padding-bottom: 1.5rem;
}
.container {
padding-top: 1.5rem;
padding-bottom: 1.5rem;
}
Pretty simple: you can import the JavaScript and scss for Bootstrap, and add your own items easily. You’ll still have access to the Phoenix JavaScript from your mix deps folder.
When you run mix phx.server, you should see the Webpack watcher boot and emit your bundle files.
Generated appname app
[info] Running AppName.Endpoint with Cowboy using http://0.0.0.0:4000
yarn run v0.23.4
$ webpack --watch --color
Webpack is watching the files…
Hash: 19c0661b1f8ddf6c7912
Version: webpack 3.5.5
Time: 5233ms
Asset Size Chunks Chunk Names
js/app.js 478 kB 0 [emitted] [big] app
css/app.css 234 kB 0 [emitted] app
js/app.js.map 900 kB 0 [emitted] app
css/app.css.map 88 bytes 0 [emitted] app
robots.txt 205 bytes [emitted]
If you change an asset file, you will see Webpack emit the updated bundle.
Hash: 9797a337be458d27d712
Version: webpack 3.5.5
Time: 882ms
Asset Size Chunks Chunk Names
js/app.js 478 kB 0 [emitted] [big] app
css/app.css 234 kB 0 [emitted] app
js/app.js.map 900 kB 0 [emitted] app
css/app.css.map 88 bytes 0 [emitted] app
[1] ./js/app.js 62 bytes {0} [built]
I haven’t really messed with this yet; so far I’ve just focused on local development. That said, it should be the same general approach as with Brunch: build the assets into the priv/static folder, then run phx.digest. If I get any more info on this, it will be great content for another post.
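My expectation is that a production build would look roughly like this, using the deploy script defined in the package.json above (untested on my end, so treat it as a sketch):
$ cd assets && yarn run deploy           # runs webpack -p, emitting minified bundles into priv/static
$ cd .. && MIX_ENV=prod mix phx.digest   # fingerprints and compresses the static files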
When using Chef, I would always create a git repository just for managing servers, with a combination of recipes and node files to cover all the various aspects of a project’s infrastructure.
For Ansible, I’ve found it easiest to just store the configuration alongside the application code. The bulk of the configuration goes in a deploy directory, with an ansible.cfg at the root.
app_root
|- ansible.cfg
|- deploy
|- galaxy-roles
|- .gitkeep
|- group_vars
|- all.yml
|- hosts
|- requirements.yml
|- roles
|- app
...
|- setup.yml
To start, we’ll run through the various base configuration files for the setup. This setup works well to manage both custom roles created for the project and roles downloaded from Ansible Galaxy. It also handles a mixture of servers across deployment environments like staging and production.
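First up, the ansible.cfg at the application root: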
[defaults]
inventory=deploy/hosts
remote_user=ubuntu
roles_path=deploy/galaxy-roles:deploy/roles
[ssh_connection]
ssh_args="-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=5m"
pipelining=True
scp_if_ssh=True
This file sets up some basic configuration for my setup. The inventory of servers will default to the file deploy/hosts, and by default we will connect as the ubuntu user (common with AWS and Vagrant).
Then I specify the directories for roles to be installed in. First I specify the deploy/galaxy-roles directory, so that when I install roles from Ansible Galaxy, they install into this directory; then we have deploy/roles, which is where we will put roles we write just for this deployment.
In the ssh_connection section, we set some defaults that we want. The one unusual thing here is scp_if_ssh=True, which is necessary because the SSH hardening role disables sftp, so we want to use scp for file transfers.
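Next is the deploy/hosts inventory: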
[local]
vagrant_local_vm ansible_host=127.0.0.1 ansible_port=2222 ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
[staging]
staging ansible_ssh_private_key_file=~/.ssh/aws-key.pem
[prod_web]
prod_web_1 ansible_ssh_private_key_file=~/.ssh/aws-key.pem
[prod_db]
prod_db_1 ansible_ssh_private_key_file=~/.ssh/aws-key.pem
Here we set the various environments and the servers within those groups. We do this so we can filter the inventory a playbook applies to when we invoke ansible-playbook.
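For example, to apply the setup playbook (shown later) to only the staging servers:
$ ansible-playbook deploy/setup.yml --limit staging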
For each server, if there are specific connection parameters we need, we apply them on the node’s line. I generally set these hostnames in my ~/.ssh/config so that I don’t need to set IP addresses or other configuration parameters when I want to ssh to these machines later.
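Next up is deploy/group_vars/all.yml.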
Here I set specific values for the roles I’m pulling from Ansible Galaxy. I’ve found that many roles provide nice tables of all the default variables and options you have, and I keep this file neat by alphabetizing the variables as they are usually prefixed by the role they are part of.
As defined in ansible.cfg, the roles path begins with galaxy-roles, so when we run ansible-galaxy install -r deploy/requirements.yml, the Galaxy roles install into this directory.
I add a gitignore at deploy/.gitignore to keep some files out of git within the Ansible deploy directory, and a .gitkeep file in the galaxy-roles directory to ensure we’ve got the folder.
*.retry
galaxy-roles/**
!galaxy-roles/.gitkeep
Here is a sample deploy/requirements.yml with some roles I’ve been using:
- src: franklinkim.newrelic
version: 1.6.0
- src: geerlingguy.nginx
version: 2.1.0
- src: dev-sec.ssh-hardening
version: 4.1.2
- src: https://github.com/ANXS/postgresql
version: 9446ab512ff2a7c7bae21bc7ebba515192809433
- src: jnv.unattended-upgrades
version: v1.3.0
- src: geerlingguy.ntp
version: 1.4.2
- src: smartlogic.github_keys
version: 0.1
These can be installed when you check out the repo using:
$ ansible-galaxy install -r deploy/requirements.yml
Depending on the project, I create playbook files for various situations. The most common is a setup.yml:
---
- name: Basic Setup
gather_facts: yes
hosts: all
roles:
- { role: dev-sec.ssh-hardening, become: yes }
- { role: deploy_user }
- { role: smartlogic.github_keys }
- { role: jnv.unattended-upgrades, become: yes }
- { role: postgresql, become: yes }
- { role: geerlingguy.nginx, become: yes }
- { role: geerlingguy.ntp, become: yes }
- { role: app }
- name: Production servers get newrelic
gather_facts: yes
hosts: prod_*
roles:
- { role: franklinkim.newrelic, become: yes, tags: ['newrelic'] }
The basic setup is applied to all hosts. It is a mix of roles from Ansible Galaxy and ones local to this repository. The second play applies to all servers matching the pattern prod_*, so servers whose names begin with prod_ in the deploy/hosts file get those roles. In this case we do that to install the New Relic server monitoring only on our production servers.
Other files might be deploy.yml or migrate.yml, with roles and variables set for those actions. More complex examples of these are probably another post of their own.
With this setup, any shared behavior between various deployments you may do should be handled through a shared role in git or Ansible Galaxy. That way you don’t do any copy and paste between setups you manage.
The few roles that are specific to your web application would be in the deploy/roles folder, and should be very specific to the application you are deploying. Examples of this might be creating user accounts or the specific nginx configuration files for your service.
A simple pair of tasks that installs the appropriate nginx configuration based on SSL certificate presence:
- name: Check for app certs
stat:
path: /etc/ssl/private/app.crt
register: app_cert
become: true
- name: Template app no-ssl version
copy:
src: app.nossl.conf
dest: /etc/nginx/sites-available/app.conf
owner: www-data
group: www-data
when: app_cert.stat.islnk is not defined
notify: restart nginx
become: true
- name: Template app ssl version
copy:
src: app.ssl.conf
dest: /etc/nginx/sites-available/app.conf
owner: www-data
group: www-data
when: app_cert.stat.islnk is defined
notify: restart nginx
become: true
One thing I’ve considered with this setup is that if you have a more service-oriented architecture, you might want a common Ansible repository shared across all your various services. I think that might make sense at some scale.
For most of what I do, simply creating reusable and shared roles keeps the amount of duplication between setups minimal. Yes, if you run two projects’ Ansible playbooks, there might be a lot of duplication in the actual tasks run, but they should be idempotent and guarded by the right checks to prevent duplicate work.
So far this setup has worked well for me. I’m sure it will continue to evolve as I do more and more with Ansible, but the end result is that I have a simple configuration alongside my application code that makes setting up and maintaining servers easier than it has ever been.