TinyDevCRM Update #19: Okay, taking a break...and working on another project...

This is a summary of TinyDevCRM development for the week of June 13th, 2020 to June 20th, 2020.

Goals from last week

  • [❓] Begin production Dockerization; pip install daphne and remove uwsgi, and finalize Dockerfile definitions for docker-compose.aws.yaml.
  • [❓] Publish Dockerfile images to Elastic Container Registry.
  • [❓] Update app.yaml CloudFormation stack definition in order to integrate Pushpin reverse proxy.
  • [❓] Re-deploy CloudFormation stack to release Pushpin into production.
  • [❓] Connect AWS application to AWS database, and add migration task definition.
  • [❓] Add back security group restrictions (policies and security group ingress/egress rules) for all CloudFormation resources.
  • [❓] Deploy database cluster underneath private subnets + NAT instead of a public subnet, and turn off direct SSH access and mapping of EC2 instances to IPv4 addresses.
  • [❓] Add HTTP redirect to HTTPS using ACM-defined *.tinydevcrm.com SSL certificate.
  • [❓] Alias the DNS-qualified name of the app load balancer to api.tinydevcrm.com.

What I got done this week

  • [❌] Begin production Dockerization; pip install daphne and remove uwsgi, and finalize Dockerfile definitions for docker-compose.aws.yaml.

  • [❌] Publish Dockerfile images to Elastic Container Registry.

  • [❌] Update app.yaml CloudFormation stack definition in order to integrate Pushpin reverse proxy.

  • [❌] Re-deploy CloudFormation stack to release Pushpin into production.

  • [❌] Connect AWS application to AWS database, and add migration task definition.

  • [❌] Add back security group restrictions (policies and security group ingress/egress rules) for all CloudFormation resources.

  • [❌] Deploy database cluster underneath private subnets + NAT instead of a public subnet, and turn off direct SSH access and mapping of EC2 instances to IPv4 addresses.

  • [❌] Add HTTP redirect to HTTPS using ACM-defined *.tinydevcrm.com SSL certificate.

  • [❌] Alias the DNS-qualified name of the app load balancer to api.tinydevcrm.com.

  • [✔] Published a two-part blog post on deploying Postgres as an app, in order to consolidate the chief learnings from my sabbatical. Part 1 and Part 2.

  • [✔] Stood up a landing page for a new project: FormDeploy.

I think the blog posts underlined just how friggin’ painful it'd be to lock everything down just the way I like it. It's literally a tiny stack on AWS, yet even documenting it is extremely long-winded. I think the good news is, the stack should be reproducible long into the future. It does seem a little time-capsule-ish, though.

Yeah, so I think I'm going to take a littttttle break from this project, because I haven't shipped yet and there are still more technical challenges getting in my way. At this point, I'm not really beating a dead horse; I'm beating an organic patch of soil, because the horse is long gone.

If I never come back to finish this project, then…I guess I never come back to finish this project. I can't believe I'm typing all this out. I think I will come back to finish this project; I'm not ready to give up on it yet. But I think the primary benefits could be had much more easily if I drop the realtime stuff and just kick all the underlying data towards the user, which is what a spreadsheet is supposed to do. Hence FormDeploy.

If I can do that, at least I know I'll be able to run a Docker container in production on AWS. Wouldn't that be a sight.

Metrics

  • Weeks to launch (primary KPI): 1 (15 weeks past the originally declared KPI of 1 week)
  • Users talked to total: 1

RescueTime statistics

  • 62h 50m (54% productive)
    • 21h 52m “software development”
    • 21h 6m “utilities”
    • 6h 59m “communication & scheduling”
    • 6h 30m “news & opinion”
    • 4h 32m “uncategorized”

iPhone screen time (assumed all unproductive)

  • Total: 25h 32m
  • Average: 3h 38m
  • Performance: 0% change (3m total change over the week…I'm pretty surprised it's this consistent)

Hourly journal

https://hourly-journal.yingw787.com

Goals for next week

  • [❓] See what I can ship for FormDeploy, optimizing for minimizing time instead of maximizing new learnings.

    Ideally, I'd like to ship a Heroku app + Postgres addon for the backend, and a CloudFormation-defined static site for the frontend, so that I can run it for free long-term. I'm not sure how much of this I can get done, so this is the reach goal. I'll be happy if I get an API running on Heroku and figure out the migration stuff, since the rest should be fairly easy; I've done it a million times.
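    For the migration stuff specifically, here's a minimal sketch of what a container-based Heroku deploy could look like, using a heroku.yml with a release phase. This assumes the container stack (heroku stack:set container), and the addon plan, Dockerfile, and module path are all placeholders, since FormDeploy's actual layout doesn't exist yet:

      setup:
        addons:
          - plan: heroku-postgresql      # the Postgres addon
      build:
        docker:
          web: Dockerfile                # builds the app image from a placeholder Dockerfile
      release:
        image: web
        command:
          - python manage.py migrate     # migrations run automatically on each deploy
      run:
        web: gunicorn formdeploy.wsgi    # placeholder module path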

Things I've learned this week

  • ECS task definitions are garbage. Specifically in the realm of task networking, I don't think there are any good examples out there of how to connect a container in one ECS task definition to a container in another ECS task definition, even if they're sitting on the same EC2 host. All the examples I've seen are one container in one task definition, underneath a load balancer exposed to the Internet. You have to install Consul, or some other kind of networking tool, to connect those two containers together. The best example I've seen shows how to get the private IPv4 address from an nginx Docker container, but not how to get that parameter with AWS CloudFormation. The other approach would be Route 53-based ECS service discovery, which maps an A, AAAA, or SRV record to an IPv4 address. I was going to say this was the worst blog post I've seen on the topic, but it's not too bad after glancing at it a few more times. I mean, you can register containers on the same host using a bridge network, but then you can't use a load balancer, and the autoscaling group makes less sense.

    In the end, I put both containers within the same ECS task definition, even though they weren't really related, because containers in the same task can reach each other over localhost (a rough sketch is below). All in all, this was an extremely frustrating experience, though I think network discovery is the last step towards actually scaling out a solution with a dynamic network configuration.

    This is so weird. I never would have thought Docker networking would be a problem area for me. Kind of makes you wish for an easy mesh network so that you don't have to deal with networking stuff.

    This Stack Overflow answer looks like another good resource to come back to.
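    For reference, a minimal sketch of that workaround as a CloudFormation snippet. Everything here (resource name, family, images, ports, memory reservations) is a placeholder, and it assumes the awsvpc network mode, which is what gives all containers in one task a shared network namespace:

      AppTaskDefinition:
        Type: AWS::ECS::TaskDefinition
        Properties:
          Family: tinydevcrm-app            # placeholder family name
          NetworkMode: awsvpc               # one shared network namespace per task
          ContainerDefinitions:
            - Name: app
              Image: tinydevcrm-app:latest  # placeholder; in practice an ECR image URI
              Essential: true
              MemoryReservation: 256
              PortMappings:
                - ContainerPort: 8000
            - Name: pushpin
              Image: fanout/pushpin         # assumption: the public Pushpin image
              Essential: true
              MemoryReservation: 256
              PortMappings:
                - ContainerPort: 7999
              # Pushpin's routes file would point at localhost:8000, since the
              # app container is reachable over localhost within the task.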

  • No-code isn't that easy to set up. If I wanted to build my own no-code stack, I think PostgreSQL + PostgREST + PostGUI would be a great stack, but it still involves some amount of devops orchestration. Which would be fine, if orchestration weren't exactly the painful part (see the next bullet).
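    As a sketch, here's roughly what that stack could look like in docker-compose; the service names, credentials, and ports here are illustrative assumptions, not a vetted configuration:

      version: "3"
      services:
        db:
          image: postgres:12
          environment:
            POSTGRES_PASSWORD: devpassword    # dev-only credential
        api:
          image: postgrest/postgrest
          depends_on:
            - db
          ports:
            - "3000:3000"
          environment:
            PGRST_DB_URI: postgres://postgres:devpassword@db:5432/postgres
            PGRST_DB_SCHEMA: public
            PGRST_DB_ANON_ROLE: postgres      # a real deploy needs a dedicated role
        # PostGUI is a React frontend that points at the PostgREST URL; it
        # would be hosted separately as a static site.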

  • Low-code ops tools kind of suck… I have a custom local SaaS stack that I define using docker-compose, and translating that stack to Heroku or Elastic Beanstalk is such a painful experience. I really do wish I could just deploy a container by itself, but even building and shipping a single container results in weird failures (like my entrypoint script not being found…even though I built the image locally, so the script should be part of the Docker image definition).

    AWS ECS has this “uncanny valley” feel where you have to modify an existing docker-compose.yaml file in just the right way to run the cluster (a taste of that is sketched below). I don't think that's the right answer. I'm kind of surprised that AWS doesn't have a straight-up aws ecs deploy --file docker-compose.yaml that stands up the Docker containers on every EC2 host via the ECS agent, with load balancing and networking and all that enabled, and gives you the CloudFormation template underlying it. It wouldn't be the most efficient solution, but it'd be so much more powerful than anything we have now.

    So, yeah. Heroku-like stuff for multi-container docker-compose, with logging and monitoring for each VM instance and container runtime. That'd solve my pain point.
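    To make the “modify it just the right way” complaint concrete, here's the flavor of ECS-specific edits that creep into a compose file; the image URI, log group, and region are placeholder values:

      version: "3"
      services:
        app:
          # placeholder ECR URI; a local build would just reference a local image
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest
          logging:
            driver: awslogs               # ship logs to CloudWatch instead of local json-file
            options:
              awslogs-group: /ecs/app
              awslogs-region: us-east-1
              awslogs-stream-prefix: app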
