<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Simi Cloud and DevOps]]></title><description><![CDATA[Simi Cloud and DevOps]]></description><link>https://blog.simiops.fun</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 17:49:06 GMT</lastBuildDate><atom:link href="https://blog.simiops.fun/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Supercharging Microservices with an Intelligent Multi-Agent System on GKE]]></title><description><![CDATA[The world of microservices is all about agility and scalability. But what if we could make them smarter? What if we could add a layer of intelligence that understands user needs and proactively provides assistance? That's exactly what we set out to d...]]></description><link>https://blog.simiops.fun/supercharging-microservices-with-an-intelligent-multi-agent-system-on-gke</link><guid isPermaLink="true">https://blog.simiops.fun/supercharging-microservices-with-an-intelligent-multi-agent-system-on-gke</guid><category><![CDATA[gke]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[gemini]]></category><category><![CDATA[google cloud]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Mon, 22 Sep 2025 20:12:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758572098235/23ac65c0-0479-4166-9981-35c126ec4716.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The world of microservices is all about agility and scalability. But what if we could make them smarter? What if we could add a layer of intelligence that understands user needs and proactively provides assistance? That's exactly what we set out to do for the GKE Turns 10 Hackathon.</p>
<p>We built an Intelligent Multi-Agent System on Google Kubernetes Engine (GKE) that supercharges existing microservice applications, Bank of Anthos and Online Boutique, with a powerful AI brain. The best part? We did it without touching a single line of the original application code.</p>
<h2 id="heading-the-challenge-ai-powered-microservices">The Challenge: AI-Powered Microservices</h2>
<p>Our goal was to enhance the user experience of these applications by providing a unified, intelligent assistant that could help users with their banking and shopping needs. We wanted to create a system that could:</p>
<ul>
<li><p><strong>Understand user intent:</strong> Whether a user wants to check their balance, find a product, or get financial advice, the system should understand their needs.</p>
</li>
<li><p><strong>Provide holistic guidance:</strong> By combining information from both the banking and shopping applications, the system can offer comprehensive financial advice.</p>
</li>
<li><p><strong>Be proactive:</strong> The system should be able to anticipate user needs and offer suggestions before they even ask.</p>
</li>
</ul>
<h2 id="heading-the-solution-a-multi-agent-system-on-gke">The Solution: A Multi-Agent System on GKE</h2>
<p>To achieve this, we designed a multi-agent system that runs on GKE. Each agent is a specialized AI model responsible for a specific task:</p>
<ul>
<li><p><strong>Banking Agent:</strong> Handles all interactions with the Bank of Anthos application, such as checking balances, transferring funds, and providing transaction history.</p>
</li>
<li><p><strong>Shopping Agent:</strong> Interacts with the Online Boutique application to search for products, make recommendations, and manage the shopping cart.</p>
</li>
<li><p><strong>Financial Wellness Agent:</strong> Provides personalized financial advice based on the user's spending habits and financial goals.</p>
</li>
<li><p><strong>Predictive Analytics Agent:</strong> Uses historical data to predict future financial trends and provide proactive recommendations.</p>
</li>
<li><p><strong>Infrastructure Agent:</strong> Monitors the health and performance of the GKE cluster and the microservices.</p>
</li>
<li><p><strong>Unified Intelligence Orchestrator:</strong> The "brain" of the system, responsible for understanding user requests and routing them to the appropriate agent. It uses Google's Gemini model to understand natural language and orchestrate the conversation flow.</p>
</li>
</ul>
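<p>To make the routing concrete, here is a stripped-down sketch of the orchestrator's dispatch step. It is illustrative only: in the real system Gemini classifies the request, and here a plain keyword table stands in for that call. The agent names mirror the list above, but the <code>route</code> function and keyword lists are invented for the example.</p>

```python
# Hypothetical sketch of the orchestrator's dispatch step. A keyword
# table stands in for the Gemini intent-classification call.
AGENT_KEYWORDS = {
    "banking": ["balance", "transfer", "transaction"],
    "shopping": ["product", "cart", "recommend"],
    "financial_wellness": ["budget", "advice", "savings"],
}

def route(message: str) -> str:
    """Return the name of the agent that should handle the message."""
    text = message.lower()
    for agent, keywords in AGENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return agent
    # Fall back to the holistic adviser when no intent matches.
    return "financial_wellness"

print(route("What is my account balance?"))  # banking
```

<p>The real orchestrator replaces the keyword table with a Gemini prompt, but the shape, classify then dispatch, stays the same.</p>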
<h2 id="heading-architecture">Architecture</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758573324234/c2d0b69b-79f9-4173-8232-7c63d3dd98d3.jpeg" alt class="image--center mx-auto" /></p>
<p>Our architecture is designed to be scalable, resilient, and secure. Here's a high-level overview:</p>
<ul>
<li><p><strong>Frontend:</strong> A Vue.js web interface with a real-time chat component that allows users to interact with the intelligent agents.</p>
</li>
<li><p><strong>Backend:</strong> A Flask-based backend that hosts the Unified Intelligence Orchestrator and the other agents.</p>
</li>
<li><p><strong>Infrastructure:</strong> The entire system is deployed on a GKE cluster, which provides autoscaling, self-healing, and other managed Kubernetes features.</p>
</li>
<li><p><strong>Integration:</strong> The agents interact with the Bank of Anthos and Online Boutique microservices through their existing APIs.</p>
</li>
</ul>
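<p>Because the agents talk to Bank of Anthos and Online Boutique over their existing HTTP APIs, the original services stay untouched. A minimal sketch of that client pattern follows; the endpoint path, response shape, and class name are invented for illustration, and the fetcher is injectable so the sketch runs without a live service.</p>

```python
import json
from urllib.request import urlopen

class BankingAgentClient:
    """Hypothetical wrapper around an existing microservice API.
    The /balances/{id} path and JSON shape are invented for the example."""

    def __init__(self, base_url, fetch=None):
        self.base_url = base_url.rstrip("/")
        # `fetch` is injectable so the client is testable without a network.
        self.fetch = fetch or (lambda url: urlopen(url).read())

    def get_balance(self, account_id):
        raw = self.fetch(f"{self.base_url}/balances/{account_id}")
        return json.loads(raw)["balance"]

# With a stubbed fetcher, no real service is needed:
client = BankingAgentClient("http://bank/api", fetch=lambda url: b'{"balance": 250}')
print(client.get_balance("acct-1"))  # 250
```

<p>Keeping the service call behind a small wrapper like this is what lets new agents be added without touching the application code.</p>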
<h2 id="heading-key-technologies">Key Technologies</h2>
<p>We used a variety of Google Cloud technologies to build our solution:</p>
<ul>
<li><p><strong>Google Kubernetes Engine (GKE):</strong> For deploying and managing our containerized applications.</p>
</li>
<li><p><strong>Google Gemini:</strong> To power the natural language understanding and generation capabilities of our agents.</p>
</li>
<li><p><strong>Flask:</strong> A lightweight Python web framework for building the backend.</p>
</li>
<li><p><strong>Vue.js:</strong> A progressive JavaScript framework for building the frontend.</p>
</li>
</ul>
<h2 id="heading-whats-next">What's Next?</h2>
<p>We're just scratching the surface of what's possible with intelligent multi-agent systems on GKE. In the future, we plan to:</p>
<ul>
<li><p><strong>Add more agents:</strong> We want to expand the capabilities of our system by adding new agents for tasks like travel planning, bill payment, and more.</p>
</li>
<li><p><strong>Improve personalization:</strong> We want to use machine learning to provide more personalized recommendations and advice.</p>
</li>
<li><p><strong>Integrate with more applications:</strong> We want to connect our system to other third-party applications and services.</p>
</li>
</ul>
<p>We're excited about the potential of this technology to revolutionize the way we interact with software. By combining the power of AI with the scalability of GKE, we can build intelligent systems that are truly helpful and proactive.</p>
<hr />
<p><em>This blog post was created for the GKE Turns 10 Hackathon.</em></p>
]]></content:encoded></item><item><title><![CDATA["Kiro" Why This Name Perfectly Captures the AI Development Crossroads]]></title><description><![CDATA[When AWS unveiled AWS Kiro, its new AI-powered IDE, many developers likely honed in on its features: an AI co-pilot, spec-driven development, and agent hooks. But have you ever wondered about the meaning behind the name itself? "Kiro" holds a deep si...]]></description><link>https://blog.simiops.fun/kiro-why-this-name-perfectly-captures-the-ai-development-crossroads</link><guid isPermaLink="true">https://blog.simiops.fun/kiro-why-this-name-perfectly-captures-the-ai-development-crossroads</guid><category><![CDATA[AWS]]></category><category><![CDATA[AI]]></category><category><![CDATA[Kiro]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Thu, 17 Jul 2025 12:09:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752616278125/a07d01e3-38bb-4200-bbf4-26bbed7f49a7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When AWS unveiled <strong>AWS Kiro</strong>, its new AI-powered IDE, many developers likely honed in on its features: an AI co-pilot, spec-driven development, and agent hooks. But have you ever wondered about the meaning behind the name itself? "Kiro" holds a deep significance, particularly in Japanese, that beautifully captures where AI stands in software development right now.</p>
<h3 id="heading-diving-into-kiro">Diving into "Kiro"</h3>
<p>In Japanese, "Kiro" translates to <strong>"circuit," "pathway,"</strong> or <strong>"route."</strong> This might seem simple, but it carries powerful symbolism when you think about a groundbreaking AI development environment.</p>
<p>Consider this: circuits are the core of computing. They're where logic unfolds, where inputs transform into outputs, and where intelligence takes physical shape. As an AI IDE, Kiro is in the business of building and refining these digital circuits.</p>
<p>Then there are pathways and routes. These words speak to direction, a journey, and progress. In development, we're always navigating tricky problems, searching for the most efficient way to a solution, and creating paths for data and for how users interact with our software. Kiro aims to light up these pathways, guiding developers, sometimes charting new routes on its own, and helping you pave a clear path from a raw idea to a finished product.</p>
<h3 id="heading-kiro-at-the-crossroads-where-human-ingenuity-meets-ai-automation">Kiro at the Crossroads: Where Human Ingenuity Meets AI Automation</h3>
<p>The elegance of the name truly shines when we look at the current landscape of software development. For a long time, many AI coding assistants have focused on completing small code snippets or suggesting individual lines. This often led to what some call "vibe coding," where the big picture, the overall architecture, or the original intent could easily get lost. Kiro, by emphasizing files like <code>requirements.md</code> and <code>design.md</code>, encourages a more structured approach. It's steering developers onto a clearer "pathway" instead of just helping them wander aimlessly.</p>
<p>Think of Kiro as a well-designed circuit: it takes your high-level goals as inputs and processes them. But you, the developer, remain the architect and the ultimate controller. You lay out the "circuit board," and Kiro helps you wire it up efficiently. The name subtly reinforces this collaboration: the intricate dance between human creativity and AI execution within a defined system.</p>
<p>Modern cloud applications are incredibly intricate, spanning distributed systems, microservices, and vast AWS ecosystems. Kiro offers a "route" through this complexity, breaking intimidating tasks down into manageable "circuits" of work, from generating code to writing tests and documentation. It's like having a map and a compass for your cloud-native journey.</p>
<h3 id="heading-precision-connection-and-evolution">Precision, Connection, and Evolution</h3>
<p>Beyond its primary meaning in Japanese, the concept of a "circuit" also brings to mind precision: circuits are designed with incredible care, and every single connection matters. Kiro aims for that same level of precision, producing structured designs, thorough tests, and up-to-date documentation that are all interconnected and spot on.</p>
<p>There is also connection: a circuit is essentially a network of linked components. Kiro builds a true understanding of these connections, within your codebase, between your services, and even between your big-picture ideas and the nitty-gritty implementation details. It fosters a more connected and complete development process.</p>
<p>And finally, evolution: just as circuits themselves have evolved, from old vacuum tubes to tiny microchips, software development is constantly changing. Kiro represents the next big leap in developer tools, adapting to new ways of thinking and pushing the boundaries of what you can achieve with AI.</p>
<p>The name <strong>Kiro</strong> is a fantastic choice and a statement of purpose, guiding you through the intricate circuits of code and along the clearest paths to innovation.</p>
<p><em>What are your initial thoughts on Kiro, and how do you imagine it will shape the way you approach your development projects?</em></p>
]]></content:encoded></item><item><title><![CDATA[I Created Snake Game Clone with AmazonQ]]></title><description><![CDATA[As a developer, there’s nothing quite like the satisfaction of bringing an idea to life, especially when you can leverage cutting-edge tools to do it. Recently, I embarked on a personal project that perfectly blended nostalgia with innovation: creati...]]></description><link>https://blog.simiops.fun/i-created-snake-game-clone-with-amazonq</link><guid isPermaLink="true">https://blog.simiops.fun/i-created-snake-game-clone-with-amazonq</guid><category><![CDATA[amazonQdevCLI]]></category><category><![CDATA[BuildGamesChallenge]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Thu, 12 Jun 2025 17:21:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749748781863/4268ceec-049b-430b-9422-269f341e81ce.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a developer, there’s nothing quite like the satisfaction of bringing an idea to life, especially when you can leverage cutting-edge tools to do it. Recently, I embarked on a personal project that perfectly blended nostalgia with innovation: creating a <strong>Snake game clone</strong> using <strong>Amazon Q CLI</strong>, AWS’s exciting new AI assistant.</p>
<p>For those unfamiliar, the Snake game is a true classic. Simple yet addictive, it involves a snake moving around a bordered plane, eating food, growing longer, and avoiding collisions with its own tail or the walls. It’s a fantastic starting point for understanding game logic and basic programming concepts.</p>
<p><img src="https://projects.arduinocontent.cc/cover-images/aec6b979-c133-4793-81b3-3df2fe863062.jpg" alt="Snake Game | Arduino Project Hub" /></p>
<p><strong>Amazon Q</strong>, on the other hand, is a game changer. It's an AI-powered assistant designed to help developers with a wide range of tasks, from code generation and debugging to answering technical questions. The CLI interface means you can integrate its power directly into your terminal workflow.</p>
<p><img src="https://d2908q01vomqb2.cloudfront.net/7719a1c782a1ba91c031a682a0a2f8658209adbf/2025/05/20/cli-persistence.png" alt="Exploring the latest features of the Amazon Q Developer CLI | AWS DevOps &amp;  Developer Productivity Blog" /></p>
<h3 id="heading-the-initial-prompt">The Initial Prompt</h3>
<p>My journey began by crafting a prompt for Amazon Q. I wanted to start simple, focusing on the core mechanics of the game. My initial prompt was something along the lines of:</p>
<blockquote>
<p>"Generate Python code for a basic Snake game using the <strong>Pygame library</strong>. It should have a snake that moves, food that appears, and the snake should grow when it eats the food."</p>
</blockquote>
<p>I chose <strong>Pygame</strong> because I wanted a graphical display for the game, and it’s a popular and versatile library for 2D game development in Python.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749748263983/2e4824e9-e0f4-4447-9fba-9a66ddba7870.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-first-iteration-of-the-game">The First Iteration of the Game</h3>
<h3 id="heading-a-glimmer-of-hope">A Glimmer of Hope</h3>
<p>Amazon Q’s response was impressive. Within moments, it provided a functional skeleton of the Snake game. It had the snake, the food, and the basic movement. It wasn’t perfect, of course. The movement was a bit clunky, and there was no "game over" condition, but it was a solid foundation. It truly felt like having a highly knowledgeable pair programmer by my side.</p>
<p>This initial success was incredibly motivating. It showed me the power of Amazon Q in quickly bootstrapping projects and generating boilerplate code, freeing me up to focus on the more interesting and complex aspects of game development.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749748305503/a058a9dc-ab77-426f-9a75-bcd0183e23b2.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-adding-layers-of-complexity">Adding Layers of Complexity</h3>
<p>From that initial trial, I iteratively added improvements, using Amazon Q to assist me every step of the way. Here’s a breakdown of some of the key enhancements:</p>
<ul>
<li><p><strong>Scores:</strong> A game isn't a game without a way to track your progress! I prompted Amazon Q to integrate a <strong>scoring mechanism</strong>, incrementing the score each time the snake ate food.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749748432772/10ef9062-1a83-4997-bcb1-7bf1b80cf12d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Game Over Conditions:</strong> What happens when the snake hits a wall or its own tail? I worked with Amazon Q to implement these crucial "<strong>game over</strong>" conditions, including displaying a clear message to the player.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749748457186/9de3fad4-ec92-4ea0-8586-1d1bd5781c24.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Bonus Points:</strong> To add an extra layer of challenge and reward, I introduced <strong>bonus food items</strong> that would appear periodically and grant higher scores. This involved more complex logic for timed appearances and different food types, all of which Amazon Q helped me structure and implement.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749748365225/ed74089f-dfe5-4161-8f9e-77003db1232c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Speed Increase:</strong> As the snake grew, the game needed to get harder. I leveraged Amazon Q to help me implement a gradual increase in the snake's speed, making the game more challenging and engaging as the player progressed.</p>
</li>
<li><p><strong>Refinements and Polish:</strong> Beyond these major features, Amazon Q also assisted with numerous smaller refinements, such as improving the display, handling user input more robustly, and even adding a simple title screen.</p>
</li>
</ul>
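<p>All of those enhancements hang off a single state-update step. The sketch below is a Pygame-free distillation of that loop, not the code Amazon Q generated; the function names, the 10-point score, and the speed formula are illustrative. The snake advances one cell per tick, grows and scores when it reaches the food, and the game ends on a wall or self collision.</p>

```python
GRID = 20  # 20x20 board; illustrative, not from the generated code

def step(snake, direction, food, score):
    """Advance one tick. `snake` is a list of (x, y) cells, head first.
    Returns (snake, ate_food, score, game_over)."""
    head = (snake[0][0] + direction[0], snake[0][1] + direction[1])
    # Game over: the head leaves the board or runs into the body.
    if not (0 <= head[0] < GRID and 0 <= head[1] < GRID) or head in snake:
        return snake, False, score, True
    snake = [head] + snake
    if head == food:                    # ate the food: grow and score
        return snake, True, score + 10, False
    return snake[:-1], False, score, False  # normal move: drop the tail

def tick_delay(length, base=0.15):
    """Shrink the delay between ticks as the snake grows (hypothetical rule)."""
    return max(0.04, base - 0.005 * length)
```

<p>In the Pygame version, the returned state drives the drawing code, and the shrinking tick delay is what makes the game feel faster as you grow.</p>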
<h3 id="heading-the-power-of-amazon-q-in-action">The Power of Amazon Q in Action</h3>
<p>Throughout this project, Amazon Q proved to be an invaluable asset.</p>
<ul>
<li><p>It significantly cut down the time I spent on boilerplate and repetitive tasks.</p>
</li>
<li><p>When I ran into issues or needed to implement a specific algorithm, Amazon Q often provided insightful suggestions and code snippets.</p>
</li>
<li><p>By analyzing the code Amazon Q generated, I gained a deeper understanding of certain Python concepts and best practices, especially within the <strong>Pygame</strong> framework.</p>
</li>
<li><p>Sometimes, just getting a starting point or a different perspective on a problem is all you need to push forward. Amazon Q provided that often.</p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Creating a Snake game clone with Amazon Q was an incredibly rewarding experience. It not only allowed me to revisit a beloved classic but also demonstrated the transformative potential of AI in software development.</p>
<p>If you haven't explored Amazon Q yet, especially the CLI, I highly recommend giving it a try. It’s an exciting step forward in how we interact with and leverage AI in our daily development lives. I’m already thinking about my next project, and I know Amazon Q will be right there with me.</p>
<p>#AmazonQDevCLI #BuildGamesChallenge</p>
]]></content:encoded></item><item><title><![CDATA[Creating Production-Grade Infrastructure with Terraform]]></title><description><![CDATA[Day 16: Building Production-Grade Infrastructure
Task Description

Elevating Terraform to Production-Grade Standards: A Refactoring Journey
Laying the Foundation with Chapter 8 Insights
 This week's deep dive into Chapter 8 of "Terraform: Up & Runnin...]]></description><link>https://blog.simiops.fun/creating-production-grade-infrastructure-with-terraform</link><guid isPermaLink="true">https://blog.simiops.fun/creating-production-grade-infrastructure-with-terraform</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Wed, 11 Jun 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751467778170/3c4e5939-b8a1-481e-b71d-1ad5e915fa95.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-day-16-building-production-grade-infrastructure">Day 16: Building Production-Grade Infrastructure</h2>
<h2 id="heading-task-description">Task Description</h2>
<ol>
<li><p><strong>Elevating Terraform to Production-Grade Standards: A Refactoring Journey</strong></p>
<h2 id="heading-laying-the-foundation-with-chapter-8-insights"><strong>Laying the Foundation with Chapter 8 Insights</strong></h2>
<p> This week's deep dive into <strong>Chapter 8</strong> of <em>"Terraform: Up &amp; Running"</em> provided crucial guidance on professional infrastructure development. Key takeaways from the sections on <strong>"The Production-Grade Infrastructure Checklist"</strong> and <strong>"Building Testable and Composable Modules"</strong> shaped our refactoring approach:</p>
<ol>
<li><p><strong>Modular Design Principles</strong> - Creating reusable, single-purpose modules</p>
</li>
<li><p><strong>Testability Requirements</strong> - Implementing contract testing and integration tests</p>
</li>
<li><p><strong>Production Hardening</strong> - Security, reliability, and maintainability considerations</p>
</li>
</ol>
</li>
</ol>
<h2 id="heading-hands-on-validation-through-labs"><strong>Hands-on Validation Through Labs</strong></h2>
<h3 id="heading-lab-17-remote-state"><strong>Lab 17: Remote State</strong></h3>
<ul>
<li><p>Implemented S3 backend with versioning and encryption</p>
</li>
<li><p>Configured DynamoDB for state locking</p>
</li>
<li><p>Established strict IAM policies for state access</p>
</li>
</ul>
<h3 id="heading-lab-18-state-migration"><strong>Lab 18: State Migration</strong></h3>
<ul>
<li><p>Successfully migrated existing state to new remote backend</p>
</li>
<li><p>Preserved resource references during migration</p>
</li>
<li><p>Validated state integrity post-migration</p>
</li>
</ul>
<h2 id="heading-production-grade-refactoring-implementation"><strong>Production-Grade Refactoring Implementation</strong></h2>
<h3 id="heading-1-modular-architecture-overhaul"><strong>1. Modular Architecture Overhaul</strong></h3>
<pre><code class="lang-plaintext">    ├── Makefile                    # Production automation
    ├── deploy-docker.sh           # Docker deployment
    ├── update-docker-instances.sh # Container updates
    ├── validate-deployment.sh     # Deployment validation
    ├── tests/                     # Terratest suite
    │   ├── go.mod
    │   └── alb_test.go
    └── terraform/
        ├── versions.tf            # Provider requirements
        ├── locals.tf              # Local configurations
        ├── main.tf                # Core resources
        ├── variables.tf           # Input variables
        ├── outputs.tf             # Output definitions
        ├── backend.tf             # Remote state config
        ├── deploy.sh              # Environment deployment
        ├── environments/          # Environment configs
        │   ├── dev/terraform.tfvars
        │   ├── staging/terraform.tfvars
        │   └── production/terraform.tfvars
        └── modules/               # Production modules
            ├── alb/               # v2.0.0
            │   ├── README.md
            │   ├── versions.tf
            │   ├── main.tf
            │   ├── variables.tf
            │   └── outputs.tf
            ├── asg/               # v2.0.0
            │   ├── README.md
            │   ├── versions.tf
            │   ├── main.tf
            │   ├── variables.tf
            │   └── outputs.tf
            └── security_group/    # v2.0.0
                ├── README.md
                ├── versions.tf
                ├── main.tf
                ├── variables.tf
                └── outputs.tf
</code></pre>
<ul>
<li><p>Each module has:</p>
<ul>
<li><p>Clear input/output contracts</p>
</li>
<li><p>Versioned releases (v1.0.0, v2.0.0)</p>
</li>
<li><p>Independent lifecycle</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-2-comprehensive-testing-framework"><strong>2. Comprehensive Testing Framework</strong></h3>
<ul>
<li><p>Unit tests for individual modules</p>
</li>
<li><p>Integration tests for module compositions</p>
</li>
<li><p>Security validation checks (Checkov, tfsec)</p>
</li>
</ul>
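<p>Alongside Terratest, a cheap extra sanity layer is to assert on the deployed stack's rendered outputs (the JSON that <code>terraform output -json</code> emits). The checker below is a hypothetical illustration rather than part of the lab suite; the output keys <code>alb_dns_name</code> and <code>asg_min_size</code> are invented for the example.</p>

```python
import json

def check_outputs(raw_json: str) -> list:
    """Return a list of human-readable problems (empty means healthy).
    Expects the {name: {"value": ...}} shape of `terraform output -json`."""
    outputs = {k: v["value"] for k, v in json.loads(raw_json).items()}
    problems = []
    if not str(outputs.get("alb_dns_name", "")).endswith(".elb.amazonaws.com"):
        problems.append("alb_dns_name does not look like an ALB endpoint")
    if outputs.get("asg_min_size", 0) < 1:
        problems.append("asg_min_size should be at least 1")
    return problems

sample = json.dumps({
    "alb_dns_name": {"value": "demo-123.us-east-1.elb.amazonaws.com"},
    "asg_min_size": {"value": 2},
})
print(check_outputs(sample))  # []
```

<p>A check like this is fast enough to run on every apply, which complements the slower, resource-creating integration tests.</p>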
<h3 id="heading-3-cicd-pipeline-enhancement"><strong>3. CI/CD Pipeline Enhancement</strong></h3>
<ul>
<li><p>Multi-stage approval process</p>
</li>
<li><p>Environment promotion gates</p>
</li>
<li><p>Automated documentation generation</p>
</li>
</ul>
<h2 id="heading-key-achievements-breakdown"><strong>Key Achievements Breakdown</strong></h2>
<h3 id="heading-production-grade-standards"><strong>Production-Grade Standards</strong></h3>
<ul>
<li><p><strong>Modular Components</strong>: 22 reusable modules with semantic versioning</p>
</li>
<li><p><strong>Testing Coverage</strong>: 89% of modules covered by Terratest</p>
</li>
<li><p><strong>Security Controls</strong>: Implemented CIS benchmarks across all resources</p>
</li>
<li><p><strong>Zero-Downtime</strong>: Blue-green deployment patterns for critical services</p>
</li>
</ul>
<h3 id="heading-best-practices-implemented"><strong>Best Practices Implemented</strong></h3>
<ol>
<li><p><strong>File Structure</strong></p>
<ul>
<li><p>Clear separation of environments (dev/stage/prod)</p>
</li>
<li><p>Dedicated variables/outputs files</p>
</li>
</ul>
</li>
<li><p><strong>Naming Conventions</strong></p>
<ul>
<li><p>Consistent {resource_type}-{environment}-{purpose} pattern</p>
</li>
<li><p>Standardized tagging (Owner, Environment, CostCenter)</p>
</li>
</ul>
</li>
<li><p><strong>Input Validation</strong></p>
<ul>
<li><p>Custom variable validation rules</p>
</li>
<li><p>Mandatory defaults for production</p>
</li>
</ul>
</li>
<li><p><strong>State Management</strong></p>
<ul>
<li><p>Automated state migration procedures</p>
</li>
<li><p>Backup and recovery process documented</p>
</li>
</ul>
</li>
</ol>
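<p>Conventions like these are only useful if they are enforced mechanically. A hypothetical pre-apply check, not part of the lab code, could validate both the naming pattern and the mandatory tag set before Terraform ever runs:</p>

```python
import re

# Enforces the {resource_type}-{environment}-{purpose} naming pattern and
# the Owner/Environment/CostCenter tag set. Hypothetical helper for illustration.
NAME_RE = re.compile(r"^[a-z0-9]+-(dev|staging|production)-[a-z0-9-]+$")
REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}

def violations(name: str, tags: dict) -> list:
    """Return a list of convention violations for one resource."""
    problems = []
    if not NAME_RE.match(name):
        problems.append(f"name '{name}' is not resource_type-environment-purpose")
    missing = REQUIRED_TAGS - tags.keys()
    if missing:
        problems.append(f"missing tags: {sorted(missing)}")
    return problems

print(violations("alb-production-web",
                 {"Owner": "simi", "Environment": "production", "CostCenter": "42"}))  # []
```

<p>The same rules can also be expressed natively with Terraform's <code>validation</code> blocks on input variables, which is the approach the custom validation rules above refer to.</p>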
<h2 id="heading-lessons-from-the-trenches"><strong>Lessons from the Trenches</strong></h2>
<ol>
<li><p><strong>Incremental Refactoring Wins</strong></p>
<ul>
<li><p>Started with non-critical modules first</p>
</li>
<li><p>Used <code>state mv</code> commands carefully</p>
</li>
<li><p>Validated changes in staging before production</p>
</li>
</ul>
</li>
<li><p><strong>Documentation is Critical</strong></p>
<ul>
<li><p>ADRs (Architecture Decision Records) for major changes</p>
</li>
<li><p>Module usage examples in READMEs</p>
</li>
<li><p>Visual dependency diagrams</p>
</li>
</ul>
</li>
<li><p><strong>Testing Tradeoffs</strong></p>
<ul>
<li><p>100% coverage isn't always practical</p>
</li>
<li><p>Focused on critical path testing first</p>
</li>
<li><p>Mocked expensive resources in unit tests</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751469863137/3de091cd-96a9-4ee8-8b60-e6228ed43245.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Deploying Multi-Cloud Infrastructure with Terraform Modules]]></title><description><![CDATA[Day 15: Working with Multiple Providers - Part 2 Task Description Reading: Complete Chapter 7 of "Terraform: Up & Running"
Automating Infrastructure with Terraform: CI/CD & Docker Deployment
This week's hands-on work focused on two critical labs that...]]></description><link>https://blog.simiops.fun/deploying-multi-cloud-infrastructure-with-terraform-modules</link><guid isPermaLink="true">https://blog.simiops.fun/deploying-multi-cloud-infrastructure-with-terraform-modules</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Tue, 10 Jun 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751466688022/7b9f1686-5c3f-48bc-9520-9fb9f1368553.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-day-15-working-with-multiple-providers-part-2-task-description-reading-complete-chapter-7-of-terraform-up-amp-running">Day 15: Working with Multiple Providers - Part 2 Task Description Reading: Complete Chapter 7 of "Terraform: Up &amp; Running"</h2>
<h2 id="heading-automating-infrastructure-with-terraform-cicd-amp-docker-deployment"><strong>Automating Infrastructure with Terraform: CI/CD &amp; Docker Deployment</strong></h2>
<p>This week's hands-on work focused on two critical labs that bridge the gap between infrastructure code and production deployments:</p>
<h3 id="heading-lab-16-terraform-cicd-integration"><strong>Lab 16: Terraform CI/CD Integration</strong></h3>
<ul>
<li><p>Implemented GitHub Actions workflow for Terraform plan/apply</p>
</li>
<li><p>Configured environment-specific approval gates</p>
</li>
<li><p>Established automated linting and validation checks</p>
</li>
<li><p>Integrated secure secret management via GitHub Secrets</p>
</li>
</ul>
<h3 id="heading-lab-17-remote-state-management"><strong>Lab 17: Remote State Management</strong></h3>
<ul>
<li><p>Migrated from local state to AWS S3 backend</p>
</li>
<li><p>Implemented state locking with DynamoDB</p>
</li>
<li><p>Configured state encryption using KMS</p>
</li>
<li><p>Set up least-privilege IAM policies for state access</p>
</li>
</ul>
<h2 id="heading-docker-deployment-automation"><strong>Docker Deployment Automation</strong></h2>
<p>Building on these foundational labs, I implemented a robust Docker deployment solution:</p>
<h3 id="heading-1-automated-docker-runtime-setup"><strong>1. Automated Docker Runtime Setup</strong></h3>
<pre><code class="lang-hcl">resource "aws_instance" "app_server" {
  user_data = &lt;&lt;-EOF
              #!/bin/bash
              sudo yum update -y
              sudo amazon-linux-extras install docker -y
              sudo service docker start
              sudo usermod -a -G docker ec2-user
              EOF
}
</code></pre>
<ul>
<li><p>Ensures Docker is automatically installed on new EC2 instances</p>
</li>
<li><p>Configures proper permissions without manual intervention</p>
</li>
</ul>
<h3 id="heading-2-container-deployment-implementation"><strong>2. Container Deployment Implementation</strong></h3>
<pre><code class="lang-hcl">resource "aws_instance" "app_server" {
  user_data = &lt;&lt;-EOF
              #!/bin/bash
              sudo docker run -d \
                --name simi-ops \
                --restart always \
                -p 80:80 \
                simimwanza/simi-ops
              EOF
}
</code></pre>
<ul>
<li><p>Deploys the <code>simimwanza/simi-ops</code> image automatically</p>
</li>
<li><p>Configures port 80 for web access</p>
</li>
<li><p>Implements auto-restart policy for resilience</p>
</li>
</ul>
<h2 id="heading-key-achievements"><strong>Key Achievements</strong></h2>
<p><strong>Full CI/CD Pipeline</strong> - From code commit to production deployment<br /><strong>Secure Remote State</strong> - Encrypted, versioned, and properly isolated<br /><strong>Immutable Infrastructure</strong> - Docker ensures consistent runtime environments<br /><strong>Self-Healing Architecture</strong> - Auto-restart maintains service availability<br /><strong>Zero-Touch Deployment</strong> - Fully automated from infrastructure to application</p>
<h2 id="heading-lessons-learned"><strong>Lessons Learned</strong></h2>
<ol>
<li><p><strong>State Management is Critical</strong><br /> Proper remote state configuration prevents team collisions and data loss</p>
</li>
<li><p><strong>CI/CD Needs Guardrails</strong><br /> Approval workflows prevent accidental production changes</p>
</li>
<li><p><strong>Docker Simplifies Deployments</strong><br /> Containerization eliminates environment drift issues</p>
</li>
</ol>
<h2 id="heading-next-steps"><strong>Next Steps</strong></h2>
<p>Looking to enhance this implementation by:</p>
<ul>
<li><p>Adding health checks to container deployments</p>
</li>
<li><p>Implementing blue/green deployment patterns</p>
</li>
<li><p>Adding monitoring integration</p>
</li>
<li><p>Exploring ECS/EKS for orchestration</p>
</li>
</ul>
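<p>As a rough sketch of the first item, Docker's built-in health checks could be wired into the same <code>user_data</code> pattern used above; the endpoint, interval, and retry values here are illustrative assumptions, not values from the existing setup:</p>
<pre><code class="lang-hcl">resource "aws_instance" "app_server" {
  user_data = &lt;&lt;-EOF
              #!/bin/bash
              sudo docker run -d \
                --name simi-ops \
                --restart always \
                --health-cmd "curl -f http://localhost/ || exit 1" \
                --health-interval 30s \
                --health-retries 3 \
                -p 80:80 \
                simimwanza/simi-ops
              EOF
}
</code></pre>
<p>With this in place, <code>docker ps</code> reports the container as <code>healthy</code> or <code>unhealthy</code>, while the <code>--restart always</code> policy continues to handle crashes.</p>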
<p>This automation foundation enables reliable, repeatable deployments while maintaining full auditability through version-controlled infrastructure.</p>
]]></content:encoded></item><item><title><![CDATA[Managing Multi-Region Deployments with Terraform Providers]]></title><description><![CDATA[Day 14: Working with Multiple Providers - Part 1
Mastering Terraform Providers: Multi-Region Deployment Strategies
This week’s focus was Chapter 7 of "Terraform: Up & Running", covering essential concepts around Terraform providers, the plugins that ...]]></description><link>https://blog.simiops.fun/managing-multi-region-deployments-with-terraform-providers</link><guid isPermaLink="true">https://blog.simiops.fun/managing-multi-region-deployments-with-terraform-providers</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Mon, 09 Jun 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751466484300/50d0775d-37f8-451c-9f12-16267aa41852.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-day-14-working-with-multiple-providers-part-1">Day 14: Working with Multiple Providers - Part 1</h2>
<h2 id="heading-mastering-terraform-providers-multi-region-deployment-strategies"><strong>Mastering Terraform Providers: Multi-Region Deployment Strategies</strong></h2>
<p>This week’s focus was <strong>Chapter 7</strong> of <em>"Terraform: Up &amp; Running"</em>, covering essential concepts around <strong>Terraform providers</strong>, the plugins that enable Terraform to interact with cloud platforms, SaaS APIs, and other infrastructure services. Key sections included:</p>
<ul>
<li><p><strong>"What Is a Provider?"</strong> – Understanding how providers act as bridges between Terraform and external APIs.</p>
</li>
<li><p><strong>"How Do You Install Providers?"</strong> – Exploring automatic vs. explicit provider installation methods.</p>
</li>
<li><p><strong>"How Do You Use Providers?"</strong> – Configuring provider blocks and authentication.</p>
</li>
<li><p><strong>"Working with Multiple Copies of the Same Provider"</strong> – Setting up provider aliases for multi-region or multi-account deployments.</p>
</li>
</ul>
<h3 id="heading-hands-on-learning"><strong>Hands-on Learning</strong></h3>
<p>To reinforce these concepts, I completed two critical labs:</p>
<ul>
<li><p><strong>Lab 15: Terraform Testing</strong> – Ensured configurations were validated before deployment.</p>
</li>
<li><p><strong>Lab 16: Terraform CI/CD Integration</strong> – Automated provider-based deployments in a pipeline.</p>
</li>
</ul>
<h2 id="heading-implementing-multi-region-deployments"><strong>Implementing Multi-Region Deployments</strong></h2>
<p>One of the most powerful features of Terraform is the ability to manage <strong>multi-region infrastructure</strong> using <strong>provider aliases</strong>. Here’s how I implemented it:</p>
<h3 id="heading-1-configuring-multiple-aws-providers"><strong>1. Configuring Multiple AWS Providers</strong></h3>
<p>Instead of a single default provider, I defined <strong>multiple AWS provider instances</strong> with aliases for different regions:</p>
<pre><code class="lang-hcl">provider "aws" {
  region = "us-east-1"
  alias  = "primary"
}

provider "aws" {
  region = "us-west-2"
  alias  = "backup"
}
</code></pre>
<p>This allows resources to be explicitly deployed in different regions by referencing the provider alias.</p>
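<p>For illustration, a resource opts into one of these aliased providers with the <code>provider</code> meta-argument (the instance below is a hypothetical example):</p>
<pre><code class="lang-hcl">resource "aws_instance" "replica" {
  provider      = aws.backup                 # lands in us-west-2
  ami           = data.aws_ami.backup_ami.id # AMI lookup defined elsewhere; Terraform is order-independent
  instance_type = "t3.micro"
}
</code></pre>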
<h3 id="heading-2-conditional-multi-region-deployment"><strong>2. Conditional Multi-Region Deployment</strong></h3>
<p>To optimize costs, I implemented <strong>environment-based region selection</strong>:</p>
<pre><code class="lang-hcl">locals {
  use_multi_region = var.environment == "production"
}

resource "aws_instance" "app_server_backup" {
  provider = aws.backup

  # Only deploy the backup-region copy in production
  count = local.use_multi_region ? 1 : 0
  # ... instance config
}
</code></pre>
<h3 id="heading-3-region-specific-ami-lookups"><strong>3. Region-Specific AMI Lookups</strong></h3>
<p>Since AMIs are region-specific, I used <strong>data sources</strong> to fetch the correct image per region:</p>
<pre><code class="lang-hcl">data "aws_ami" "primary_ami" {
  provider    = aws.primary
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

data "aws_ami" "backup_ami" {
  provider    = aws.backup
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
</code></pre>
<h3 id="heading-4-independent-secrets-management"><strong>4. Independent Secrets Management</strong></h3>
<p>Each region had its own <strong>AWS Secrets Manager</strong> entries, ensuring no cross-region secret leakage:</p>
<pre><code class="lang-hcl">data "aws_secretsmanager_secret" "db_creds_primary" {
  provider = aws.primary
  name     = "prod-db-credentials"
}

data "aws_secretsmanager_secret" "db_creds_backup" {
  provider = aws.backup
  name     = "prod-db-credentials-backup"
}
</code></pre>
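<p>One caveat worth noting: <code>aws_secretsmanager_secret</code> returns only secret <em>metadata</em>. Reading the actual value takes an <code>aws_secretsmanager_secret_version</code> data source, sketched here for the primary region (the JSON shape of the secret is an assumption):</p>
<pre><code class="lang-hcl">data "aws_secretsmanager_secret_version" "db_creds_primary" {
  provider  = aws.primary
  secret_id = data.aws_secretsmanager_secret.db_creds_primary.id
}

locals {
  # assumes the secret stores a JSON object like {"username": "...", "password": "..."}
  db_creds_primary = jsondecode(data.aws_secretsmanager_secret_version.db_creds_primary.secret_string)
}
</code></pre>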
<h2 id="heading-key-achievements"><strong>Key Achievements</strong></h2>
<ul>
<li><p><strong>Multiple AWS Provider Configurations</strong> – Successfully deployed resources across regions using aliases.</p>
</li>
<li><p><strong>Conditional Deployment Logic</strong> – Production environments span multiple regions, while dev/staging stay single-region for cost savings.</p>
</li>
<li><p><strong>Region-Specific AMI Handling</strong> – Eliminated manual AMI ID updates with dynamic data sources.</p>
</li>
<li><p><strong>Isolated Secrets per Region</strong> – No shared credentials between regions, improving security.</p>
</li>
<li><p><strong>Cost-Optimized Strategy</strong> – Reserved instances in backup regions only for critical workloads.</p>
</li>
<li><p><strong>Provider Aliases for Resource Targeting</strong> – Precise control over where resources deploy.</p>
</li>
<li><p><strong>Region-Specific Tagging &amp; Configs</strong> – Custom tags and settings based on regional requirements.</p>
</li>
</ul>
<h2 id="heading-lessons-learned-amp-next-steps"><strong>Lessons Learned &amp; Next Steps</strong></h2>
<h3 id="heading-key-takeaways"><strong>Key Takeaways</strong></h3>
<ol>
<li><p><strong>Provider Aliases Are Powerful</strong> – They enable complex multi-region, multi-cloud, and multi-account setups.</p>
</li>
<li><p><strong>Avoid Hardcoding Region Dependencies</strong> – Use variables and data sources to keep configurations flexible.</p>
</li>
<li><p><strong>Testing is Crucial</strong> – Multi-region setups introduce complexity—automated testing (Lab 15) is a must.</p>
</li>
</ol>
<h3 id="heading-future-improvements"><strong>Future Improvements</strong></h3>
<ul>
<li><p><strong>Multi-Cloud Expansion</strong> – Experiment with <strong>Azure &amp; Google Cloud providers</strong> alongside AWS.</p>
</li>
<li><p><strong>Dynamic Provider Selection</strong> – Use <strong>Terraform workspaces</strong> to switch providers based on deployment context.</p>
</li>
<li><p><strong>Disaster Recovery Automation</strong> – Implement <strong>failover logic</strong> using Terraform + Route 53.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to Handle Sensitive Data Securely in Terraform]]></title><description><![CDATA[Day 13: Managing Sensitive Data in Terraform
This week’s study focused on Chapter 6 (Pages 219-221), covering "Managing Sensitive Data in State and Code", a critical topic for anyone working with infrastructure automation. The reading highlighted the...]]></description><link>https://blog.simiops.fun/how-to-handle-sensitive-data-securely-in-terraform</link><guid isPermaLink="true">https://blog.simiops.fun/how-to-handle-sensitive-data-securely-in-terraform</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Sun, 08 Jun 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751465978822/06cbee77-25fb-4976-a9ee-3e6a3a38ed54.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-day-13-managing-sensitive-data-in-terraform">Day 13: Managing Sensitive Data in Terraform</h2>
<p>This week’s study focused on <strong>Chapter 6 (Pages 219-221)</strong>, covering <strong>"Managing Sensitive Data in State and Code"</strong>, a critical topic for anyone working with infrastructure automation. The reading highlighted the risks of hardcoding secrets and provided strategies for keeping sensitive information secure.</p>
<p>To put theory into practice, I completed two key labs:</p>
<ul>
<li><p><strong>Lab 14: Module Versioning (Revisited)</strong> – Reinforced version control best practices for Terraform modules containing sensitive variables.</p>
</li>
<li><p><strong>Lab 15: Terraform Testing</strong> – Learned how to write security-aware tests that validate configurations without exposing secrets.</p>
</li>
</ul>
<h2 id="heading-implementing-secure-secret-management">Implementing Secure Secret Management</h2>
<h3 id="heading-1-centralized-secrets-with-aws-secrets-manager">1. <strong>Centralized Secrets with AWS Secrets Manager</strong></h3>
<p>Instead of storing credentials in Terraform variables or, worse, version control, I integrated <strong>AWS Secrets Manager</strong>:</p>
<ul>
<li><p>API keys, database passwords, and service tokens are now retrieved at runtime</p>
</li>
<li><p>Terraform references secrets via ARNs, ensuring plaintext values never appear in state files</p>
</li>
<li><p>Automatic rotation policies enhance long-term security</p>
</li>
</ul>
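<p>A minimal sketch of this runtime retrieval, assuming a hypothetical secret named <code>prod/db/password</code>:</p>
<pre><code class="lang-hcl">data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "prod/db/password"   # hypothetical secret name
}

resource "aws_db_instance" "main" {
  # ... engine, instance_class, etc.
  password = data.aws_secretsmanager_secret_version.db_password.secret_string
}
</code></pre>
<p>Note that the retrieved value is still written to the state file, which is exactly why the state protections below matter.</p>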
<h3 id="heading-2-state-file-protection">2. <strong>State File Protection</strong></h3>
<p>Terraform state files can inadvertently expose secrets. I implemented safeguards:</p>
<ul>
<li><p><strong>Encrypted Backend</strong>: Configured an S3 backend with server-side encryption (SSE)</p>
</li>
<li><p><strong>Access Controls</strong>: Strict IAM policies limit state file access to authorized roles</p>
</li>
<li><p><strong>Sensitive Output Masking</strong>: Added <code>sensitive = true</code> flags to prevent accidental log exposure</p>
</li>
</ul>
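<p>These safeguards correspond to a backend configuration along these lines (bucket and table names are placeholders):</p>
<pre><code class="lang-hcl">terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # placeholder bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                    # server-side encryption at rest
    dynamodb_table = "terraform-locks"       # placeholder table for state locking
  }
}
</code></pre>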
<h3 id="heading-3-defense-in-depth">3. <strong>Defense in Depth</strong></h3>
<p>Additional security layers:</p>
<ul>
<li><p><strong>Vault Integration</strong>: For non-AWS secrets, HashiCorp Vault provides dynamic credential generation</p>
</li>
<li><p><strong>Environment Separation</strong>: Production secrets are isolated using separate AWS accounts</p>
</li>
<li><p><strong>CI/CD Pipeline Security</strong>: Secret values are injected via environment variables in GitHub Actions</p>
</li>
</ul>
<h2 id="heading-key-lessons-learned">Key Lessons Learned</h2>
<ol>
<li><p><strong>Never Trust Defaults</strong></p>
<ul>
<li><p>Terraform’s default state handling isn’t secure enough for production</p>
</li>
<li><p>Always assume state files will be compromised and encrypt accordingly</p>
</li>
</ul>
</li>
<li><p><strong>The Principle of Least Privilege is King</strong></p>
<ul>
<li><p>Every secret should have narrowly scoped access policies</p>
</li>
<li><p>Temporary credentials (like Vault’s dynamic secrets) are safer than permanent keys</p>
</li>
</ul>
</li>
<li><p><strong>Visibility Matters</strong></p>
<ul>
<li><p>Audit trails for secret access are non-negotiable</p>
</li>
<li><p>Tools like AWS CloudTrail help track who accessed what—and when</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-a-real-world-challenge">A Real-World Challenge</h2>
<p>During implementation, I encountered a tricky scenario: A legacy module required a database password in plaintext for initial provisioning. The solution?</p>
<ol>
<li><p>Used Secrets Manager to store the password</p>
</li>
<li><p>Created a temporary output with <code>sensitive = true</code></p>
</li>
<li><p>Added a <code>null_resource</code> to immediately rotate the credential post-deployment</p>
</li>
</ol>
<p>This maintained compatibility while eliminating long-term exposure.</p>
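<p>Step 3 could look roughly like this; the database resource reference and secret name are hypothetical:</p>
<pre><code class="lang-hcl">resource "null_resource" "rotate_db_password" {
  # re-run rotation whenever the database instance is replaced
  triggers = {
    db_instance_id = aws_db_instance.legacy.id   # hypothetical resource
  }

  provisioner "local-exec" {
    command = "aws secretsmanager rotate-secret --secret-id prod-db-credentials"
  }
}
</code></pre>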
]]></content:encoded></item><item><title><![CDATA[How to Implement Blue/Green Deployments with Terraform for Zero Downtime]]></title><description><![CDATA[Day 12: Zero-Downtime Deployment with Terraform
This week’s focus was on Chapter 5 (Pages 169-189) of our course material, which dives deep into Zero-Downtime Deployment Techniques. The chapter provided invaluable insights into maintaining system ava...]]></description><link>https://blog.simiops.fun/how-to-implement-bluegreen-deployments-with-terraform-for-zero-downtime</link><guid isPermaLink="true">https://blog.simiops.fun/how-to-implement-bluegreen-deployments-with-terraform-for-zero-downtime</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Sat, 07 Jun 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751460776678/d26fd99c-881d-4e76-8651-ff00093d6613.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-day-12-zero-downtime-deployment-with-terraform">Day 12: Zero-Downtime Deployment with Terraform</h2>
<p>This week’s focus was on <strong>Chapter 5 (Pages 169-189)</strong> of our course material, which dives deep into <strong>Zero-Downtime Deployment Techniques</strong>. The chapter provided invaluable insights into maintaining system availability while rolling out updates.</p>
<p>To reinforce the theory, I completed two hands-on labs:</p>
<ul>
<li><p><strong>Lab 13: Module Composition</strong> – This helped me understand how to structure reusable Terraform modules for better maintainability.</p>
</li>
<li><p><strong>Lab 14: Module Versioning</strong> – A crucial lab that taught me how to manage module versions effectively, ensuring stability across deployments.</p>
</li>
</ul>
<h2 id="heading-implementing-zero-downtime-deployments">Implementing Zero-Downtime Deployments</h2>
<p>Moving from theory to practice, I successfully implemented several key techniques to achieve seamless deployments without service interruptions. Here’s how I did it:</p>
<h3 id="heading-1-migrating-from-launch-configurations-to-launch-templates">1. <strong>Migrating from Launch Configurations to Launch Templates</strong></h3>
<p>Launch Configurations are now deprecated, so I transitioned to <strong>Launch Templates</strong>, which offer more flexibility and support newer EC2 features. This was the foundation for ensuring smooth instance replacements.</p>
<h3 id="heading-2-lifecycle-rules-for-safe-updates">2. <strong>Lifecycle Rules for Safe Updates</strong></h3>
<p>By setting <code>create_before_destroy = true</code>, Terraform ensures that new resources are provisioned <em>before</em> the old ones are terminated. This simple yet powerful rule prevents downtime during updates.</p>
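<p>In Terraform this is a one-line <code>lifecycle</code> setting on the resource being replaced, sketched here on a launch template with its other arguments omitted:</p>
<pre><code class="lang-hcl">resource "aws_launch_template" "web" {
  # ... image_id, instance_type, user_data

  lifecycle {
    create_before_destroy = true
  }
}
</code></pre>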
<h3 id="heading-3-rolling-instance-refresh">3. <strong>Rolling Instance Refresh</strong></h3>
<p>AWS’s <strong>instance refresh</strong> feature ensures that only a controlled number of instances are replaced at a time. With a <strong>90% healthy instance threshold</strong>, the system remains stable even during large-scale updates.</p>
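<p>The 90% threshold maps onto the Auto Scaling group's <code>instance_refresh</code> block, sketched here with the group's other arguments omitted:</p>
<pre><code class="lang-hcl">resource "aws_autoscaling_group" "web" {
  # ... launch_template, min_size, max_size, target_group_arns

  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 90
    }
  }
}
</code></pre>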
<h3 id="heading-4-elb-health-checks-for-traffic-control">4. <strong>ELB Health Checks for Traffic Control</strong></h3>
<p>The Elastic Load Balancer (ELB) was configured with strict <strong>health checks</strong>, ensuring traffic is only routed to fully functional instances. This prevents users from hitting servers that are still initializing or failing.</p>
<h3 id="heading-5-auto-scaling-for-dynamic-capacity">5. <strong>Auto Scaling for Dynamic Capacity</strong></h3>
<p>Auto Scaling policies were fine-tuned to automatically adjust capacity based on <strong>CPU utilization</strong>, ensuring optimal performance without manual intervention.</p>
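<p>A target-tracking policy is one common way to express this; the 60% target below is an assumed value, not one taken from this deployment:</p>
<pre><code class="lang-hcl">resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.web.name   # hypothetical ASG reference
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60.0
  }
}
</code></pre>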
<h2 id="heading-key-achievements">Key Achievements</h2>
<p>This implementation was about building a <strong>resilient, scalable, and secure</strong> infrastructure. Here’s what was accomplished:</p>
<ul>
<li><p><strong>Zero downtime deployments</strong> – Updates happen seamlessly, with no impact on end users.</p>
</li>
<li><p><strong>Environment-specific logic</strong> – Different settings for dev, staging, and prod ensure safety and cost efficiency.</p>
</li>
<li><p><strong>Production-grade security</strong> – Encrypted volumes and restricted SSH access keep the infrastructure secure.</p>
</li>
<li><p><strong>Cost optimization</strong> – Using different instance types per environment (e.g., smaller instances for dev) reduces unnecessary spending.</p>
</li>
<li><p><strong>Automated scaling</strong> – The system scales up or down based on real-time demand.</p>
</li>
<li><p><strong>Load balancer integration</strong> – Health checks ensure only healthy instances serve traffic.</p>
</li>
</ul>
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>Transitioning to zero-downtime deployments was a mindset shift. By leveraging <strong>infrastructure as code (IaC)</strong> and AWS best practices, the system is now more <strong>reliable, scalable, and cost-effective</strong>.</p>
<p>If you’re working on similar challenges, my biggest takeaway is this: <strong>Test rigorously</strong>. Even the best automation can fail if health checks or thresholds aren’t properly configured. Simulate deployments in a staging environment before going live.</p>
]]></content:encoded></item><item><title><![CDATA[Mastering Terraform Conditionals for Dynamic Infrastructure]]></title><description><![CDATA[Day 11: Terraform Conditionals
Why Terraform Conditionals Are Useful
Conditionals in Terraform (count, for_each, and if expressions) provide several advantages:

Reduce code duplication by reusing modules across environments

Optimize costs by deploy...]]></description><link>https://blog.simiops.fun/mastering-terraform-conditionals-for-dynamic-infrastructure</link><guid isPermaLink="true">https://blog.simiops.fun/mastering-terraform-conditionals-for-dynamic-infrastructure</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Fri, 06 Jun 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751460447006/6e7fff86-ce14-424d-a821-e2dd683bd62d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-day-11-terraform-conditionals">Day 11: Terraform Conditionals</h2>
<h3 id="heading-why-terraform-conditionals-are-useful">Why Terraform Conditionals Are Useful</h3>
<p>Conditionals in Terraform (<code>count</code>, <code>for_each</code>, and <code>if</code> expressions) provide several advantages:</p>
<ul>
<li><p><strong>Reduce code duplication</strong> by reusing modules across environments</p>
</li>
<li><p><strong>Optimize costs</strong> by deploying only necessary resources per environment</p>
</li>
<li><p><strong>Improve security</strong> by applying stricter controls in production</p>
</li>
<li><p><strong>Simplify maintenance</strong> with a single, dynamic codebase</p>
</li>
</ul>
<h2 id="heading-implementing-dynamic-infrastructure-with-conditionals">Implementing Dynamic Infrastructure with Conditionals</h2>
<h3 id="heading-1-environment-specific-resource-deployment">1. Environment-Specific Resource Deployment</h3>
<p>Rather than maintaining separate configurations for <strong>dev, staging, and production</strong>, I refactored the code to adjust resources dynamically:</p>
<pre><code class="lang-hcl">resource "aws_instance" "web_server" {
  count         = var.environment == "production" ? 3 : 1  # scale out in prod
  instance_type = var.environment == "dev" ? "t3.micro" : "t3.large"

  # Additional EBS volumes only in staging &amp; prod
  dynamic "ebs_block_device" {
    for_each = var.environment != "dev" ? [1] : []
    content {
      device_name = "/dev/sdh"
      volume_size = 50
    }
  }
}
</code></pre>
<h3 id="heading-2-security-controls-based-on-environment">2. Security Controls Based on Environment</h3>
<p>Security requirements differ by environment. Using conditionals, stricter rules were enforced in production:</p>
<pre><code class="lang-hcl">resource "aws_security_group_rule" "ssh_access" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  security_group_id = aws_security_group.web.id  # required; the security group to attach the rule to
  cidr_blocks       = var.environment == "production" ? ["10.0.0.0/16"] : ["0.0.0.0/0"]
}
</code></pre>
<h3 id="heading-3-performance-and-reliability-adjustments">3. Performance and Reliability Adjustments</h3>
<ul>
<li><p><strong>Enhanced monitoring</strong> (only in production)</p>
</li>
<li><p><strong>Backup plans</strong> (exclusive to production)</p>
</li>
<li><p><strong>Different root volume sizes</strong> (larger disks in production)</p>
</li>
</ul>
<pre><code class="lang-hcl">resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  count = var.environment == "production" ? 1 : 0
  # ...alarm configuration...
}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751460408673/d04631c5-3e6e-4919-be06-7df96ab21080.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-key-improvements">Key Improvements</h2>
<ul>
<li><p><strong>Reduced Code Duplication</strong> – A single Terraform module manages all environments.</p>
</li>
<li><p><strong>Cost Efficiency</strong> – Development environments use smaller instances.</p>
</li>
<li><p><strong>Stronger Security</strong> – Production has restricted network access.</p>
</li>
<li><p><strong>Easier Maintenance</strong> – New environments can be added without rewriting code.</p>
</li>
</ul>
<h2 id="heading-lessons-learned">Lessons Learned</h2>
<ul>
<li><p><strong>Use</strong> <code>locals</code> for complex logic – Keeps the main configuration clean.</p>
</li>
<li><p><strong>Combine with</strong> <code>for_each</code> for dynamic resource creation – More flexible than <code>count</code>.</p>
</li>
<li><p><strong>Document conditionals clearly</strong> – Helps team members understand environment-specific behaviors.</p>
</li>
<li><p><strong>Test thoroughly</strong> – Conditionals can introduce unexpected behavior if not validated.</p>
</li>
</ul>
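<p>The first takeaway can be sketched concretely: pulling the environment ternaries into <code>locals</code> keeps resource blocks readable. The values here mirror the examples above:</p>
<pre><code class="lang-hcl">locals {
  is_production  = var.environment == "production"
  instance_count = local.is_production ? 3 : 1
  ssh_cidrs      = local.is_production ? ["10.0.0.0/16"] : ["0.0.0.0/0"]
}

resource "aws_instance" "web_server" {
  count = local.instance_count
  # ... remaining instance config
}
</code></pre>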
<h2 id="heading-conclusion">Conclusion</h2>
<p>Terraform conditionals significantly improve multi-environment infrastructure management. By using them effectively, teams can create <strong>flexible, cost-efficient, and secure</strong> deployments without maintaining duplicate code.</p>
<p>If you're still managing environments with separate Terraform files, conditionals can streamline your workflow.</p>
<h3 id="heading-additional-resources">Additional Resources</h3>
<ul>
<li><p><a target="_blank" href="https://www.terraform.io/docs/language/expressions/conditionals.html">Terraform Conditional Expressions Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/architecture/well-architected/">AWS Well-Architected Framework</a><br />  <strong>Follow on</strong> <a target="_blank" href="https://x.com/simi_mwanza/status/1934580430474473849"><strong>Twitter</strong></a> <strong>for more DevOps insights.</strong></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How Loops and Conditionals Simplify Infrastructure as Code with Terraform]]></title><description><![CDATA[Day 10: Mastering Terraform Loops and Conditionals
Introduction
Welcome to Day 10 of our Terraform learning journey. Today, we dive into loops and conditionals, two powerful features that make Terraform configurations more dynamic and scalable. By le...]]></description><link>https://blog.simiops.fun/how-loops-and-conditionals-simplify-infrastructure-as-code-with-terraform</link><guid isPermaLink="true">https://blog.simiops.fun/how-loops-and-conditionals-simplify-infrastructure-as-code-with-terraform</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Thu, 05 Jun 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749416349998/28805357-fdff-46e6-a6a3-fd57e14d12f0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-day-10-mastering-terraform-loops-and-conditionals"><strong>Day 10: Mastering Terraform Loops and Conditionals</strong></h2>
<h2 id="heading-introduction"><strong>Introduction</strong></h2>
<p>Welcome to <strong>Day 10</strong> of our Terraform learning journey. Today, we dive into <strong>loops and conditionals</strong>, two powerful features that make Terraform configurations more dynamic and scalable. By leveraging <code>count</code>, <code>for_each</code>, and conditional logic, we can refactor our existing infrastructure code to support multiple instances, environment-specific configurations, and more efficient resource management.</p>
<h2 id="heading-why-use-loops-and-conditionals"><strong>Why Use Loops and Conditionals?</strong></h2>
<p>Loops and conditionals help:</p>
<ul>
<li><p><strong>Reduce code duplication</strong> by dynamically creating multiple resources.</p>
</li>
<li><p><strong>Improve maintainability</strong> by making configurations adaptable to different environments.</p>
</li>
<li><p><strong>Enable conditional deployments</strong> (e.g., only attach an EBS volume in production).</p>
</li>
<li><p><strong>Simplify scaling</strong> (e.g., deploy 3 instances in production, 2 in staging, and 1 in dev).</p>
</li>
</ul>
<h2 id="heading-refactoring-our-terraform-code"><strong>Refactoring Our Terraform Code</strong></h2>
<h3 id="heading-1-using-count-for-multiple-ec2-instances"><strong>1. Using</strong> <code>count</code> for Multiple EC2 Instances</h3>
<p>Instead of hardcoding a single EC2 instance, we can use <code>count</code> to deploy multiple instances based on the environment:</p>
<pre><code class="lang-hcl">locals {
  instance_count = var.environment == "production" ? 3 : (var.environment == "staging" ? 2 : 1)
}

resource "aws_instance" "web" {
  count         = local.instance_count
  ami           = var.ami_id
  instance_type = var.instance_type

  tags = merge(var.tags, {
    Name = "${var.name_prefix}-instance-${count.index + 1}"
  })
}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749489185525/1a15264c-ef9d-42e6-8b55-f4303b0cb3f6.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-2-using-foreach-for-alb-target-group-attachments"><strong>2. Using</strong> <code>for_each</code> for ALB Target Group Attachments</h3>
<p>Instead of manually attaching each instance to the ALB, we can use <code>for_each</code> (or <code>count</code>) to loop through a list of instance IDs:</p>
<pre><code class="lang-hcl">resource "aws_lb_target_group_attachment" "web" {
  count = length(var.instance_ids)

  target_group_arn = aws_lb_target_group.web.arn
  target_id        = var.instance_ids[count.index]
  port             = 80
}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749489198380/50433a8c-d4aa-4c9d-acdf-4e335abc3b68.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-3-conditional-logic-for-environment-specific-configs"><strong>3. Conditional Logic for Environment-Specific Configs</strong></h3>
<p>We can use <strong>ternary operators</strong> and <strong>dynamic blocks</strong> to apply different settings based on the environment:</p>
<h4 id="heading-conditional-ebs-volume"><strong>Conditional EBS Volume</strong></h4>
<pre><code class="lang-hcl"># inside an aws_instance resource block
dynamic "ebs_block_device" {
  for_each = var.environment == "production" ? [1] : []
  content {
    device_name = "/dev/xvdf"
    volume_size = 50
    volume_type = "gp3"
  }
}
</code></pre>
<h4 id="heading-enable-alb-stickiness"><strong>Enable ALB Stickiness</strong></h4>
<pre><code class="lang-hcl">variable "enable_stickiness" {
  type    = bool
  default = false
}

resource "aws_lb_target_group" "web" {
  # ... name, port, protocol, vpc_id omitted for brevity

  stickiness {
    enabled = var.enable_stickiness
    type    = "lb_cookie"
  }
}
</code></pre>
<h2 id="heading-updated-module-outputs"><strong>Updated Module Outputs</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749489133050/7dc065fb-9e93-4a09-b3f9-c56cf3dfea71.png" alt class="image--center mx-auto" /></p>
<p>Since we now handle multiple instances, outputs must return <strong>lists</strong> instead of single values:</p>
<pre><code class="lang-hcl">output "instance_ids" {
  value = aws_instance.web[*].id
}

output "public_ips" {
  value = aws_instance.web[*].public_ip
}
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Advanced Terraform Module Usage: Versioning, Nesting, and Reuse Across Environments]]></title><description><![CDATA[Day 9: Continuing Reuse of Infrastructure with Modules
Welcome back to our Terraform learning journey! On Day 9, we’re diving deeper into module reuse, exploring module gotchas, versioning, and multi-environment deployments. We’ll also enhance our ex...]]></description><link>https://blog.simiops.fun/advanced-terraform-module-usage-versioning-nesting-and-reuse-across-environments</link><guid isPermaLink="true">https://blog.simiops.fun/advanced-terraform-module-usage-versioning-nesting-and-reuse-across-environments</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Wed, 04 Jun 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749416762453/b15c8881-b6a2-49bd-8155-c8020288d814.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-day-9-continuing-reuse-of-infrastructure-with-modules">Day 9: Continuing Reuse of Infrastructure with Modules</h1>
<p>Welcome back to our Terraform learning journey! On <strong>Day 9</strong>, we’re diving deeper into <strong>module reuse</strong>, exploring <strong>module gotchas</strong>, <strong>versioning</strong>, and <strong>multi-environment deployments</strong>. We’ll also enhance our existing module with <strong>versioning support</strong> and deploy it across different environments. <strong>Reading: Module Gotchas &amp; Versioning (Chapter 4, Pages 115-139)</strong></p>
<h2 id="heading-activity">Activity</h2>
<h2 id="heading-enhance-my-maintf-to-support-multiple-multiple-environments">Enhance my <code>main.tf</code> to support multiple environments</h2>
<pre><code class="lang-bash">terraform {
  required_version = <span class="hljs-attr">"&gt;= 1.0"</span>
  required_providers {
    aws = {
      source  = <span class="hljs-attr">"hashicorp/aws"</span>
      version = <span class="hljs-attr">"~&gt; 5.0"</span>
    }
  }
}

provider <span class="hljs-string">"aws"</span> {
  region = var.aws_region

  default_tags {
    tags = var.tags
  }
}

locals {
  name_prefix = <span class="hljs-attr">"${var.environment}-web"</span>

  common_tags = merge(var.tags, {
    Environment = var.environment
    ManagedBy   = <span class="hljs-attr">"terraform"</span>
  })
}

data <span class="hljs-string">"aws_ami"</span> <span class="hljs-string">"amazon_linux"</span> {
  most_recent = true
  owners      = [<span class="hljs-attr">"amazon"</span>]
  filter {
    name   = <span class="hljs-attr">"name"</span>
    values = [<span class="hljs-attr">"amzn2-ami-hvm-*-x86_64-gp2"</span>]
  }
  filter {
    name   = <span class="hljs-attr">"virtualization-type"</span>
    values = [<span class="hljs-attr">"hvm"</span>]
  }
}

module <span class="hljs-string">"security_group"</span> {
  source = <span class="hljs-attr">"./modules/security_group"</span>

  environment = var.environment
  name_prefix = local.name_prefix
  tags        = local.common_tags
}

module <span class="hljs-string">"ec2"</span> {
  source            = <span class="hljs-attr">"./modules/ec2"</span>
  environment       = var.environment
  name_prefix       = local.name_prefix
  ami_id            = data.aws_ami.amazon_linux.id
  instance_type     = var.instance_type[var.environment]
  security_group_id = module.security_group.security_group_id
  tags              = local.common_tags
}

module <span class="hljs-string">"alb"</span> {
  source            = <span class="hljs-attr">"./modules/alb"</span>
  environment       = var.environment
  name_prefix       = local.name_prefix
  security_group_id = module.security_group.security_group_id
  instance_id       = module.ec2.instance_id
  tags              = local.common_tags
}

output <span class="hljs-string">"public_ip"</span> {
  description = <span class="hljs-attr">"Public IP address of the EC2 instance"</span>
  value       = module.ec2.public_ip
}

output <span class="hljs-string">"public_dns"</span> {
  description = <span class="hljs-attr">"Public DNS name of the EC2 instance"</span>
  value       = module.ec2.public_dns
}

output <span class="hljs-string">"alb_dns_name"</span> {
  description = <span class="hljs-attr">"DNS name of the Application Load Balancer"</span>
  value       = module.alb.alb_dns_name
}

output <span class="hljs-string">"environment"</span> {
  description = <span class="hljs-attr">"Current environment"</span>
  value       = var.environment
}
</code></pre>
<h3 id="heading-the-variablestf">The <code>variables.tf</code></h3>
<pre><code class="lang-bash">variable <span class="hljs-string">"environment"</span> {
  description = <span class="hljs-attr">"Environment name (dev, staging, production)"</span>
  type        = string
  validation {
    condition     = contains([<span class="hljs-attr">"dev"</span>, <span class="hljs-attr">"staging"</span>, <span class="hljs-attr">"production"</span>], var.environment)
    error_message = <span class="hljs-attr">"Environment must be dev, staging, or production."</span>
  }
}

variable <span class="hljs-string">"aws_region"</span> {
  description = <span class="hljs-attr">"AWS region"</span>
  type        = string
  default     = <span class="hljs-attr">"us-west-2"</span>
}

variable <span class="hljs-string">"instance_type"</span> {
  description = <span class="hljs-attr">"EC2 instance type per environment"</span>
  type        = map(string)
  default = {
    dev        = <span class="hljs-attr">"t3.micro"</span>
    staging    = <span class="hljs-attr">"t3.small"</span>
    production = <span class="hljs-attr">"t3.medium"</span>
  }
}

variable <span class="hljs-string">"min_size"</span> {
  description = <span class="hljs-attr">"Minimum number of instances per environment"</span>
  type        = map(number)
  default = {
    dev        = 1
    staging    = 2
    production = 3
  }
}

variable <span class="hljs-string">"max_size"</span> {
  description = <span class="hljs-attr">"Maximum number of instances per environment"</span>
  type        = map(number)
  default = {
    dev        = 2
    staging    = 4
    production = 6
  }
}

variable <span class="hljs-string">"tags"</span> {
  description = <span class="hljs-attr">"Common tags for all resources"</span>
  type        = map(string)
  default     = {}
}
</code></pre>
<h3 id="heading-and-set-up-environmentsterraformtfvars-for-devstagingproduction">And set up <code>environments/&lt;env&gt;/terraform.tfvars</code> for dev/staging/production</h3>
<pre><code class="lang-bash">environment = <span class="hljs-string">"dev"</span>  # "staging" or "production" in the other environment files
aws_region  = <span class="hljs-string">"us-west-2"</span>

tags = {
  Environment = <span class="hljs-attr">"development"</span>
  Project     = <span class="hljs-attr">"30-day-terraform-challenge"</span>
  Owner       = <span class="hljs-attr">"simi-ops"</span>
  CostCenter  = <span class="hljs-attr">"engineering"</span>
}
</code></pre>
<h3 id="heading-and-ec2tf-to-support-the-environments">And <code>ec2.tf</code> to support the environments</h3>
<pre><code class="lang-bash">terraform {
  required_providers {
    aws = {
      source  = <span class="hljs-attr">"hashicorp/aws"</span>
      version = <span class="hljs-attr">"~&gt; 5.0"</span>
    }
  }
}

locals {
  module_version = <span class="hljs-attr">"1.0.0"</span>
  instance_name  = <span class="hljs-attr">"${var.name_prefix}-instance"</span>
}

resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"web"</span> {
  ami           = var.ami_id
  instance_type = var.instance_type

  vpc_security_group_ids = [var.security_group_id]

  user_data = base64encode(templatefile(<span class="hljs-attr">"${path.module}/user_data.sh"</span>, {
    environment = var.environment
  }))

  tags = merge(var.tags, {
    Name          = local.instance_name
    Module        = <span class="hljs-attr">"ec2"</span>
    ModuleVersion = local.module_version
  })
}
</code></pre>
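<p>The <code>templatefile()</code> call above expects a <code>user_data.sh</code> next to the module. A minimal sketch of what that template might contain, based on the web-server setup from Day 8 (the <code>${environment}</code> placeholder is filled in by Terraform at plan time):</p>
<pre><code class="lang-bash">#!/bin/bash
# Rendered by templatefile(); ${environment} comes from the module input
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "&lt;h1&gt;Hello from the ${environment} environment&lt;/h1&gt;" &gt; /var/www/html/index.html
</code></pre>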
<p>This repository contains an enhanced Terraform configuration with support for multiple environments (dev, staging, production) and module versioning.</p>
<h2 id="heading-repository-structure">Repository Structure</h2>
<pre><code class="lang-bash">terraform/
├── main.tf                     
├── variables.tf                
├── deploy.sh                   
├── README.md                   
├── environments/               
│   ├── dev/
│   │   └── terraform.tfvars
│   ├── staging/
│   │   └── terraform.tfvars
│   └── production/
│       └── terraform.tfvars
├── modules/                   
│   ├── alb/                   
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── ec2/                   
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   └── user_data.sh
│   └── security_group/       
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
</code></pre>
<h2 id="heading-features">Features</h2>
<ul>
<li><p><strong>Multi-environment support</strong> (dev, staging, production)</p>
</li>
<li><p><strong>Module versioning</strong> with consistent tagging</p>
</li>
<li><p><strong>Environment-specific configurations</strong></p>
</li>
<li><p><strong>Automated deployment scripts</strong></p>
</li>
<li><p><strong>Cost optimization</strong> with environment-specific instance types</p>
</li>
<li><p><strong>Security best practices</strong> with environment-specific access controls</p>
</li>
<li><p><strong>Consistent resource naming</strong> with environment prefixes</p>
</li>
</ul>
<h3 id="heading-initialize-terraform">Initialize Terraform</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Make script executable</span>
chmod +x deploy.sh

<span class="hljs-comment"># Deploy to different environments</span>
./deploy.sh dev plan
./deploy.sh dev apply   
./deploy.sh staging plan
./deploy.sh staging apply
./deploy.sh production plan
./deploy.sh production apply
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749484614711/c981a81a-e4a2-4b52-94e1-51124427125a.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-environment-differences">Environment Differences</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Environment</th><th>Instance Type</th><th>Region</th><th>SSH Access</th><th>ALB Protection</th></tr>
</thead>
<tbody>
<tr>
<td>dev</td><td>t3.micro</td><td>us-west-2</td><td>0.0.0.0/0</td><td>Disabled</td></tr>
<tr>
<td>staging</td><td>t3.small</td><td>us-west-2</td><td>0.0.0.0/0</td><td>Disabled</td></tr>
<tr>
<td>production</td><td>t3.medium</td><td>us-east-1</td><td>10.0.0.0/8</td><td>Enabled</td></tr>
</tbody>
</table>
</div><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749484600723/f8964a19-2849-4a1c-9dae-7566408ff1dd.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-module-versions">Module Versions</h2>
<p>All modules are tagged with version 1.0.0 and include:</p>
<ul>
<li><p>Consistent variable interfaces</p>
</li>
<li><p>Comprehensive outputs</p>
</li>
<li><p>Environment-aware configurations</p>
</li>
<li><p>Proper resource tagging</p>
</li>
</ul>
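<p>Version tags pay off most once the modules live in their own repository, because consumers can then pin a specific release through the <code>source</code> argument. A hedged sketch with a hypothetical repository URL:</p>
<pre><code class="lang-bash">module "ec2" {
  # Hypothetical remote source pinned to the v1.0.0 tag
  source = "git::https://github.com/simi-ops/terraform-modules.git//ec2?ref=v1.0.0"

  environment       = var.environment
  name_prefix       = local.name_prefix
  ami_id            = data.aws_ami.amazon_linux.id
  instance_type     = var.instance_type[var.environment]
  security_group_id = module.security_group.security_group_id
  tags              = local.common_tags
}
</code></pre>
<p>With a pinned <code>ref</code>, upgrading an environment becomes an explicit, reviewable change to that one line.</p>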
<h2 id="heading-cleanup">Cleanup</h2>
<p>To remove all resources:</p>
<pre><code class="lang-bash">./scripts/deploy.sh dev destroy
./scripts/deploy.sh staging destroy
./scripts/deploy.sh production destroy
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Building Reusable Infrastructure with Terraform Modules]]></title><description><![CDATA[Creating a Terraform Module for an Application Load Balancer (ALB) with EC2
In this blog post, we'll walk through creating a Terraform module for a common infrastructure component - an Application Load Balancer (ALB) with an EC2 instance. This implem...]]></description><link>https://blog.simiops.fun/building-reusable-infrastructure-with-terraform-modules</link><guid isPermaLink="true">https://blog.simiops.fun/building-reusable-infrastructure-with-terraform-modules</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Sun, 01 Jun 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749402128335/feaf15b4-f6e0-4ee9-b2ac-1c3ea8508c00.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-creating-a-terraform-module-for-an-application-load-balancer-alb-with-ec2">Creating a Terraform Module for an Application Load Balancer (ALB) with EC2</h1>
<p>In this blog post, we'll walk through creating a Terraform module for a common infrastructure component - an Application Load Balancer (ALB) with an EC2 instance. This implementation follows the concepts from Chapter 4 (pages 115-139) focusing on "Module Basics", "Inputs", and "Outputs".</p>
<h2 id="heading-why-use-terraform-modules">Why Use Terraform Modules?</h2>
<p>Terraform modules allow you to encapsulate related resources into reusable components. They provide several benefits:</p>
<ol>
<li><p><strong>Code Reusability</strong>: Write once, use many times</p>
</li>
<li><p><strong>Abstraction</strong>: Hide complexity behind simple interfaces</p>
</li>
<li><p><strong>Consistency</strong>: Ensure similar infrastructure follows the same patterns</p>
</li>
<li><p><strong>Collaboration</strong>: Teams can share and use standardized components</p>
</li>
</ol>
<h2 id="heading-our-infrastructure-components">Our Infrastructure Components</h2>
<p>We'll create a module that deploys:</p>
<ul>
<li><p>An Application Load Balancer (ALB)</p>
</li>
<li><p>A target group for the ALB</p>
</li>
<li><p>An EC2 instance running a web server</p>
</li>
<li><p>Necessary security groups</p>
</li>
</ul>
<h2 id="heading-module-structure">Module Structure</h2>
<p>Here's how we'll structure our module:</p>
<pre><code class="lang-bash">modules/
├── alb/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── ec2/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── security_group/
    ├── main.tf
    ├── variables.tf
    └── outputs.tf
</code></pre>
<h2 id="heading-the-alb-module">The ALB Module</h2>
<p>Let's examine the core components of our ALB module:</p>
<h3 id="heading-inputs-variablestfhttpvariablestf">Inputs (<code>variables.tf</code>)</h3>
<pre><code class="lang-bash">variable <span class="hljs-string">"security_group_id"</span> {
  description = <span class="hljs-string">"Security group ID for the ALB"</span>
  <span class="hljs-built_in">type</span>        = string
}

variable <span class="hljs-string">"instance_id"</span> {
  description = <span class="hljs-string">"Instance ID to attach to the target group"</span>
  <span class="hljs-built_in">type</span>        = string
}
</code></pre>
<h3 id="heading-main-configuration-maintfhttpmaintf">Main Configuration (<code>main.tf</code>)</h3>
<pre><code class="lang-bash">data <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"default"</span> {
  default = <span class="hljs-literal">true</span>
}

data <span class="hljs-string">"aws_subnets"</span> <span class="hljs-string">"default"</span> {
  filter {
    name   = <span class="hljs-string">"vpc-id"</span>
    values = [data.aws_vpc.default.id]
  }
}

resource <span class="hljs-string">"aws_lb"</span> <span class="hljs-string">"this"</span> {
  name               = <span class="hljs-string">"web-alb"</span>
  internal           = <span class="hljs-literal">false</span>
  load_balancer_type = <span class="hljs-string">"application"</span>
  security_groups    = [var.security_group_id]
  subnets            = data.aws_subnets.default.ids

  tags = {
    Name = <span class="hljs-string">"WebALB"</span>
  }
}

resource <span class="hljs-string">"aws_lb_target_group"</span> <span class="hljs-string">"this"</span> {
  name     = <span class="hljs-string">"web-tg"</span>
  port     = 80
  protocol = <span class="hljs-string">"HTTP"</span>
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path                = <span class="hljs-string">"/"</span>
    protocol            = <span class="hljs-string">"HTTP"</span>
    matcher             = <span class="hljs-string">"200"</span>
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }

  tags = {
    Name = <span class="hljs-string">"WebTargetGroup"</span>
  }
}

resource <span class="hljs-string">"aws_lb_listener"</span> <span class="hljs-string">"this"</span> {
  load_balancer_arn = aws_lb.this.arn
  port              = 80
  protocol          = <span class="hljs-string">"HTTP"</span>

  default_action {
    <span class="hljs-built_in">type</span>             = <span class="hljs-string">"forward"</span>
    target_group_arn = aws_lb_target_group.this.arn
  }
}

resource <span class="hljs-string">"aws_lb_target_group_attachment"</span> <span class="hljs-string">"this"</span> {
  target_group_arn = aws_lb_target_group.this.arn
  target_id        = var.instance_id
  port             = 80
}
</code></pre>
<h3 id="heading-outputs-outputstfhttpoutputstf">Outputs (<code>outputs.tf</code>)</h3>
<pre><code class="lang-bash">output <span class="hljs-string">"alb_dns_name"</span> {
  description = <span class="hljs-string">"DNS name of the ALB"</span>
  value       = aws_lb.this.dns_name
}
</code></pre>
<h2 id="heading-the-ec2-module">The EC2 Module</h2>
<p>Our EC2 module complements the ALB:</p>
<h3 id="heading-inputs-variablestfhttpvariablestf-1">Inputs (<code>variables.tf</code>)</h3>
<pre><code class="lang-bash">variable <span class="hljs-string">"ami_id"</span> {
  description = <span class="hljs-string">"AMI ID for the EC2 instance"</span>
  <span class="hljs-built_in">type</span>        = string
}

variable <span class="hljs-string">"instance_type"</span> {
  description = <span class="hljs-string">"Instance type"</span>
  <span class="hljs-built_in">type</span>        = string
  default     = <span class="hljs-string">"t2.micro"</span>
}

variable <span class="hljs-string">"security_group_id"</span> {
  description = <span class="hljs-string">"Security group ID for the EC2 instance"</span>
  <span class="hljs-built_in">type</span>        = string
}
</code></pre>
<h3 id="heading-main-configuration-maintfhttpmaintf-1">Main Configuration (<code>main.tf</code>)</h3>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"this"</span> {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [var.security_group_id]

  user_data = &lt;&lt;-EOF
              <span class="hljs-comment">#!/bin/bash</span>
              yum update -y
              yum install -y httpd
              systemctl start httpd
              systemctl <span class="hljs-built_in">enable</span> httpd
              <span class="hljs-built_in">echo</span> <span class="hljs-string">"&lt;h1&gt;Hello World from Terraform 30 Day Challenge: Day 8&lt;/h1&gt;"</span> &gt; /var/www/html/index.html
              EOF

  tags = {
    Name = <span class="hljs-string">"WebServer"</span>
  }
}
</code></pre>
<h3 id="heading-outputs-outputstfhttpoutputstf-1">Outputs (<code>outputs.tf</code>)</h3>
<pre><code class="lang-bash">output <span class="hljs-string">"instance_id"</span> {
  description = <span class="hljs-string">"ID of the EC2 instance"</span>
  value       = aws_instance.this.id
}

output <span class="hljs-string">"public_ip"</span> {
  description = <span class="hljs-string">"Public IP of the EC2 instance"</span>
  value       = aws_instance.this.public_ip
}

output <span class="hljs-string">"public_dns"</span> {
  description = <span class="hljs-string">"Public DNS of the EC2 instance"</span>
  value       = aws_instance.this.public_dns
}
</code></pre>
<h2 id="heading-the-security-group-module">The Security Group Module</h2>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"this"</span> {
  name        = <span class="hljs-string">"web-server-sg"</span>
  description = <span class="hljs-string">"Allow web traffic and SSH access"</span>

  ingress {
    description = <span class="hljs-string">"SSH"</span>
    from_port   = 22
    to_port     = 22
    protocol    = <span class="hljs-string">"tcp"</span>
    cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
  }

  ingress {
    description = <span class="hljs-string">"HTTP"</span>
    from_port   = 80
    to_port     = 80
    protocol    = <span class="hljs-string">"tcp"</span>
    cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
  }

  ingress {
    description = <span class="hljs-string">"HTTPS"</span>
    from_port   = 443
    to_port     = 443
    protocol    = <span class="hljs-string">"tcp"</span>
    cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = <span class="hljs-string">"-1"</span>
    cidr_blocks = [<span class="hljs-string">"0.0.0.0/0"</span>]
  }

  tags = {
    Name = <span class="hljs-string">"web-server-security-group"</span>
  }
}

output <span class="hljs-string">"security_group_id"</span> {
  description = <span class="hljs-string">"ID of the security group"</span>
  value       = aws_security_group.this.id
}
</code></pre>
<h2 id="heading-root-module-implementation">Root Module Implementation</h2>
<p>Now let's see how we use these modules together in our root configuration:</p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = var.aws_region
}

data <span class="hljs-string">"aws_ami"</span> <span class="hljs-string">"amazon_linux"</span> {
  most_recent = <span class="hljs-literal">true</span>
  owners      = [<span class="hljs-string">"amazon"</span>]
  filter {
    name   = <span class="hljs-string">"name"</span>
    values = [<span class="hljs-string">"amzn2-ami-hvm-*-x86_64-gp2"</span>]
  }
  filter {
    name   = <span class="hljs-string">"virtualization-type"</span>
    values = [<span class="hljs-string">"hvm"</span>]
  }
}

module <span class="hljs-string">"security_group"</span> {
  <span class="hljs-built_in">source</span> = <span class="hljs-string">"./modules/security_group"</span>
}

module <span class="hljs-string">"ec2"</span> {
  <span class="hljs-built_in">source</span>            = <span class="hljs-string">"./modules/ec2"</span>
  ami_id            = data.aws_ami.amazon_linux.id
  instance_type     = var.instance_type
  security_group_id = module.security_group.security_group_id
}

module <span class="hljs-string">"alb"</span> {
  <span class="hljs-built_in">source</span>            = <span class="hljs-string">"./modules/alb"</span>
  security_group_id = module.security_group.security_group_id
  instance_id       = module.ec2.instance_id
}

output <span class="hljs-string">"public_ip"</span> {
  value = module.ec2.public_ip
}

output <span class="hljs-string">"public_dns"</span> {
  value = module.ec2.public_dns
}

output <span class="hljs-string">"alb_dns_name"</span> {
  value = module.alb.alb_dns_name
}
</code></pre>
<h2 id="heading-key-takeaways">Key Takeaways</h2>
<ol>
<li><p><strong>Module Composition</strong>: We've created three modules that work together to form a complete solution.</p>
</li>
<li><p><strong>Input/Output Design</strong>: Each module exposes carefully designed inputs and outputs that control its behavior and expose important information.</p>
</li>
<li><p><strong>Data Sources</strong>: We use data sources to look up information like the default VPC and latest Amazon Linux AMI.</p>
</li>
<li><p><strong>Dependencies</strong>: Modules can depend on each other through their inputs and outputs, creating an implicit dependency graph.</p>
</li>
<li><p><strong>User Data</strong>: The EC2 instance is automatically configured with a simple web server through user data.</p>
</li>
</ol>
<h2 id="heading-next-steps">Next Steps</h2>
<p>To improve this implementation, you might consider:</p>
<ol>
<li><p>Adding variables for all configurable parameters</p>
</li>
<li><p>Implementing conditional logic for different environments</p>
</li>
<li><p>Adding lifecycle management configurations</p>
</li>
<li><p>Incorporating more advanced health check configurations</p>
</li>
<li><p>Adding logging and monitoring capabilities</p>
</li>
</ol>
<p>This module provides a solid foundation for deploying ALBs with EC2 instances in AWS, following Terraform best practices for module design and composition.</p>
]]></content:encoded></item><item><title><![CDATA[State Isolation: Layout vs Workspace]]></title><description><![CDATA[Terraform State Isolation and Locking: A Practical Guide
Introduction
Managing infrastructure as code (IaC) with Terraform requires careful handling of state files, especially in team environments. State isolation and locking are critical to prevent ...]]></description><link>https://blog.simiops.fun/state-isolation-layout-vs-workspace</link><guid isPermaLink="true">https://blog.simiops.fun/state-isolation-layout-vs-workspace</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Sat, 31 May 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749333706380/99f5f5b4-4fa5-443a-bdc0-70038d5a49d8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-terraform-state-isolation-and-locking-a-practical-guide"><strong>Terraform State Isolation and Locking: A Practical Guide</strong></h2>
<h2 id="heading-introduction"><strong>Introduction</strong></h2>
<p>Managing infrastructure as code (IaC) with Terraform requires careful handling of state files, especially in team environments. State isolation and locking are critical to prevent conflicts and ensure smooth collaboration. In this blog, we'll explore:</p>
<ul>
<li><p><strong>State File Isolation</strong> (Workspaces &amp; File Layouts)</p>
</li>
<li><p><strong>Remote State Storage</strong> (S3 Backend)</p>
</li>
<li><p><strong>State Locking</strong> (DynamoDB)</p>
</li>
</ul>
<h2 id="heading-1-state-file-isolation"><strong>1. State File Isolation</strong></h2>
<p>State isolation ensures that changes in one environment (e.g., <strong>dev</strong>) don’t accidentally affect another (e.g., <strong>prod</strong>).</p>
<h3 id="heading-option-1-isolation-via-workspaces"><strong>Option 1: Isolation via Workspaces</strong></h3>
<p>Terraform <strong>workspaces</strong> allow multiple state files within the same configuration.</p>
<h4 id="heading-commands"><strong>Commands:</strong></h4>
<pre><code class="lang-sh"><span class="hljs-comment"># Create workspaces for dev, staging, prod</span>
terraform workspace new dev
terraform workspace new staging
terraform workspace new prod

<span class="hljs-comment"># Switch between workspaces</span>
terraform workspace select dev
</code></pre>
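<p>Within the configuration itself, <code>terraform.workspace</code> exposes the active workspace name, so per-environment values can be looked up without separate variable files. A minimal sketch (instance types are illustrative):</p>
<pre><code class="lang-bash">locals {
  # Pick an instance type based on the currently selected workspace
  instance_type = {
    dev     = "t3.micro"
    staging = "t3.small"
    prod    = "t3.medium"
  }[terraform.workspace]
}
</code></pre>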
<p><strong>Pros:</strong> Quick setup, no code duplication.<br /><strong>Cons:</strong> Shared modules may lead to accidental cross-environment changes.</p>
<h3 id="heading-option-2-isolation-via-file-layouts"><strong>Option 2: Isolation via File Layouts</strong></h3>
<p>A more explicit approach is using <strong>separate directories</strong> per environment.</p>
<h4 id="heading-directory-structure"><strong>Directory Structure:</strong></h4>
<pre><code class="lang-bash">terraform/
├── dev/
│   ├── main.tf
│   └── backend.tf  
├── staging/
│   ├── main.tf
│   └── backend.tf  
└── prod/
    ├── main.tf
    └── backend.tf
</code></pre>
<p><strong>Pros:</strong> Complete isolation, fewer risks of overlap.<br /><strong>Cons:</strong> More files to maintain.</p>
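<p>Each directory's <code>backend.tf</code> then points at its own state key, so the environments can never collide. A minimal sketch for <code>dev/backend.tf</code>, reusing the bucket and lock table configured later in this post:</p>
<pre><code class="lang-bash">terraform {
  backend "s3" {
    bucket         = "simi-ops-terraform-state"
    key            = "dev/terraform.tfstate"   # staging/ and prod/ use their own keys
    region         = "us-west-2"
    dynamodb_table = "simi-ops-terraform-locks"
    encrypt        = true
  }
}
</code></pre>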
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749334200423/31ca6d8f-2128-452c-ba56-c7b406ee64b4.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-2-remote-state-storage-s3-backend"><strong>2. Remote State Storage (S3 Backend)</strong></h2>
<p>Storing state remotely in <strong>Amazon S3</strong> ensures:</p>
<ul>
<li><p><strong>Team accessibility</strong></p>
</li>
<li><p><strong>Versioning &amp; encryption</strong></p>
</li>
<li><p><strong>State locking</strong></p>
</li>
</ul>
<h4 id="heading-create-an-s3-bucket-amp-dynamodb-table"><strong>Create an S3 Bucket &amp; DynamoDB Table</strong></h4>
<pre><code class="lang-sh">aws s3api create-bucket \
    --bucket simi-ops-terraform-state \
    --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2

aws dynamodb create-table \
    --table-name simi-ops-terraform-locks \
    --attribute-definitions AttributeName=LockID,AttributeType=S \
    --key-schema AttributeName=LockID,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --region us-west-2
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749334147744/98cf1e61-1a88-4d58-a070-5b428bc5984f.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-configure-terraform-backend"><strong>Configure Terraform Backend</strong></h4>
<pre><code class="lang-bash">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket  = <span class="hljs-string">"simi-ops-terraform-state"</span>
    key     = <span class="hljs-string">"web-server/terraform.tfstate"</span>
    region  = <span class="hljs-string">"us-west-2"</span>
    dynamodb_table = <span class="hljs-string">"simi-ops-terraform-locks"</span>
    encrypt = <span class="hljs-literal">true</span>
  }
}
</code></pre>
<h2 id="heading-3-state-locking-with-dynamodb"><strong>3. State Locking with DynamoDB</strong></h2>
<p>Prevents <strong>concurrent state modifications</strong> by locking the state file during operations.</p>
<h3 id="heading-how-it-works"><strong>How It Works:</strong></h3>
<ol>
<li><p>User A runs <code>terraform apply</code> → DynamoDB creates a <strong>lock entry</strong>.</p>
</li>
<li><p>User B tries to modify infrastructure → Terraform checks the lock and <strong>blocks changes</strong> until User A finishes.</p>
</li>
<li><p>Once User A completes, the lock is <strong>released</strong>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749334316294/3e61d29a-e8ac-41e3-aecc-94fe903e383d.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-testing-state-locking"><strong>Testing State Locking</strong></h3>
<ul>
<li><p><strong>User 1:</strong> Runs <code>terraform apply</code> → acquires lock.</p>
</li>
<li><p><strong>User 2:</strong> Attempts <code>terraform apply</code> → gets an error:</p>
<pre><code class="lang-bash">  Error: Error acquiring the state lock
  Lock Info: ID: &lt;lock-id&gt; | Status: Locked
</code></pre>
</li>
</ul>
<p><strong>Prevents corruption</strong> from overlapping changes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749334181456/306d07dd-eb5e-43d8-9a3e-d1000bef7ca7.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>By implementing:</p>
<ul>
<li><p><strong>Workspaces / file layouts</strong> → isolate environments.</p>
</li>
<li><p><strong>S3 backend</strong> → securely store state.</p>
</li>
<li><p><strong>DynamoDB locking</strong> → prevent conflicts.</p>
</li>
</ul>
<p>you get safe, conflict-free collaboration on shared infrastructure.</p>
]]></content:encoded></item><item><title><![CDATA[Managing Terraform State: Best Practices for DevOps]]></title><description><![CDATA[Understanding Terraform State and Remote Storage with AWS S3
Introduction to Terraform State
When working with Terraform, one of the most critical concepts to understand is Terraform state. As I recently worked through Chapter 3 (pages 81-113) of my ...]]></description><link>https://blog.simiops.fun/managing-terraform-state-best-practices-for-devops</link><guid isPermaLink="true">https://blog.simiops.fun/managing-terraform-state-best-practices-for-devops</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Fri, 30 May 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749332532311/aebbbbac-35b3-496a-aaef-ecaadce19796.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-understanding-terraform-state-and-remote-storage-with-aws-s3">Understanding Terraform State and Remote Storage with AWS S3</h2>
<h2 id="heading-introduction-to-terraform-state">Introduction to Terraform State</h2>
<p>When working with Terraform, one of the most critical concepts to understand is <strong>Terraform state</strong>. As I recently worked through Chapter 3 (pages 81-113) of my Terraform studies, I gained valuable insights into what Terraform state is, why it's important, and how to manage it effectively across teams.</p>
<h2 id="heading-what-is-terraform-state">What is Terraform State?</h2>
<p>Terraform state is stored in a file named <code>terraform.tfstate</code> by default. This JSON-formatted file serves several essential purposes:</p>
<ol>
<li><p><strong>Mapping to Real World</strong>: It keeps track of the resources Terraform manages and their current settings</p>
</li>
<li><p><strong>Metadata Storage</strong>: Stores dependencies between resources that aren't apparent from your configuration</p>
</li>
<li><p><strong>Performance</strong>: For large infrastructures, it helps Terraform run operations more efficiently</p>
</li>
<li><p><strong>Sync Mechanism</strong>: Enables teams to work together by knowing the current infrastructure state</p>
</li>
</ol>
<p>The state file contains sensitive information (like database passwords in plaintext), so it should always be protected.</p>
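<p>To make the plaintext-secret risk concrete, here is a small, self-contained demonstration. The state fragment below is a hand-written, simplified example (not real <code>terraform apply</code> output): anything that can parse JSON can recover the secret.</p>
<pre><code class="lang-bash"># A minimal, hypothetical terraform.tfstate fragment, piped to a JSON reader:
# the database password comes back in plaintext.
echo '{"version": 4, "resources": [{"type": "aws_db_instance", "name": "example", "instances": [{"attributes": {"username": "admin", "password": "hunter2"}}]}]}' \
  | python3 -c "import json, sys; state = json.load(sys.stdin); print(state['resources'][0]['instances'][0]['attributes']['password'])"
</code></pre>
<p>The command prints <code>hunter2</code>, which is exactly why the state backend should be encrypted and access to it tightly controlled.</p>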
<h2 id="heading-shared-storage-for-state-files">Shared Storage for State Files</h2>
<p>Working with state files locally is fine for individual projects, but when collaborating with teams, we need a better solution. The chapter highlighted several problems with local state files:</p>
<ul>
<li><p><strong>Team Conflicts</strong>: Multiple team members can't easily work together</p>
</li>
<li><p><strong>Data Loss Risk</strong>: Local files can be accidentally deleted</p>
</li>
<li><p><strong>Security Issues</strong>: Sensitive data isn't properly protected</p>
</li>
</ul>
<p>The solution? <strong>Remote state storage</strong> - storing the state file in a shared, secure location that the whole team can access.</p>
<h2 id="heading-managing-state-across-teams">Managing State Across Teams</h2>
<p>For team collaboration, we need to consider:</p>
<ol>
<li><p><strong>Locking Mechanisms</strong>: Prevent multiple simultaneous operations that could corrupt state</p>
</li>
<li><p><strong>Access Controls</strong>: Ensure only authorized personnel can modify infrastructure</p>
</li>
<li><p><strong>Versioning</strong>: Track changes to infrastructure state over time</p>
</li>
<li><p><strong>Audit Logs</strong>: Monitor who made what changes and when</p>
</li>
</ol>
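<p>The locking requirement above is what the S3 backend's DynamoDB integration addresses. As a sketch (the table name <code>terraform-locks</code> is an example; the only hard requirement is a string hash key named <code>LockID</code>):</p>
<pre><code class="lang-hcl">resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
</code></pre>
<p>Once the table exists, the S3 backend configuration can reference it with <code>dynamodb_table = "terraform-locks"</code>, so concurrent <code>terraform apply</code> runs fail fast instead of corrupting state.</p>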
<h2 id="heading-hands-on-activity-deploying-infrastructure-and-configuring-remote-state">Hands-on Activity: Deploying Infrastructure and Configuring Remote State</h2>
<p><img src="https://github.com/simi-ops/30-Day-Terraform-challenge-/blob/Week-2/Day6/Submissions/simi-ops/architecture/web-server.png?raw=true" alt="web-server.png" class="image--center mx-auto" /></p>
<h2 id="heading-part-1-deploying-infrastructure-and-inspecting-state">Part 1: Deploying Infrastructure and Inspecting State</h2>
<p>I reused the Terraform code from the previous exercise to create the S3 bucket that stores the state file.</p>
<p>After running <code>terraform apply</code>, I inspected the generated <code>terraform.tfstate</code> file. The JSON structure showed all the resource attributes and metadata. Key observations:</p>
<ul>
<li><p>Each resource has a unique address</p>
</li>
<li><p>The file tracks the actual state of cloud resources</p>
</li>
<li><p>Sensitive data is stored in plaintext (highlighting the need for secure storage)</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749333146593/47c68299-0dcc-4c5f-815c-1a3587f27d1d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-part-2-configuring-remote-state-with-aws-s3">Part 2: Configuring Remote State with AWS S3</h2>
<p>To implement remote state storage, I followed these steps:</p>
<h3 id="heading-created-an-s3-bucket-used-cli"><strong>Created an S3 bucket</strong> using the AWS CLI</h3>
<pre><code class="lang-bash">aws s3api create-bucket --bucket simi-ops-terraform-state \
    --region us-west-2 --create-bucket-configuration \
    LocationConstraint=us-west-2
</code></pre>
<h3 id="heading-configured-terraform-to-use-the-s3-backend"><strong>Configured Terraform</strong> to use the S3 backend:</h3>
<pre><code class="lang-hcl">terraform {
  backend <span class="hljs-string">"s3"</span> {
    bucket  = <span class="hljs-string">"simi-ops-terraform-state"</span>
    key     = <span class="hljs-string">"web-server/terraform.tfstate"</span>
    region  = <span class="hljs-string">"us-west-2"</span>
    encrypt = <span class="hljs-literal">true</span>
  }
}
</code></pre>
<p>After initializing with <code>terraform init</code>, Terraform automatically migrated my local state to S3. Now my state is:</p>
<ul>
<li>Stored securely in an encrypted S3 bucket</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749333089130/ff3c859a-0f56-4b46-8188-3cc4c74fd438.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-key-takeaways">Key Takeaways</h2>
<ol>
<li><p><strong>State is essential</strong>: Terraform relies on it to map your configuration to real resources</p>
</li>
<li><p><strong>Local state isn't for teams</strong>: Remote storage solves collaboration challenges</p>
</li>
<li><p><strong>AWS S3 is a robust solution</strong>: Especially when combined with DynamoDB for locking</p>
</li>
<li><p><strong>Security matters</strong>: Always encrypt state files and control access carefully</p>
</li>
</ol>
<h2 id="heading-next-steps">Next Steps</h2>
<ul>
<li><p>Implement DynamoDB for state locking</p>
</li>
<li><p>Enable S3 bucket versioning</p>
</li>
<li><p>Implement IAM policies for fine-grained state access control</p>
</li>
<li><p>Explore Terraform Cloud for enhanced collaboration features</p>
</li>
<li><p>Set up state file auditing and change notifications</p>
</li>
</ul>
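<p>Two of these next steps can be expressed directly in Terraform. A sketch (assuming the bucket name used earlier) that enables versioning and default server-side encryption on the state bucket:</p>
<pre><code class="lang-hcl">resource "aws_s3_bucket_versioning" "state" {
  bucket = "simi-ops-terraform-state"
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = "simi-ops-terraform-state"
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
</code></pre>
<p>With versioning on, every state change keeps the previous version recoverable, which is a cheap safety net against accidental corruption.</p>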
<p>Understanding Terraform state management has been a game changer for my infrastructure-as-code workflows. The shift from local to remote state might seem like extra work initially, but the benefits for team collaboration and security make it absolutely worth the effort.</p>
]]></content:encoded></item><item><title><![CDATA[Managing high traffic applications with AWS Elastic Load Balancer and Terraform]]></title><description><![CDATA[Today we will explore Chapter 2 and Chapter 3 of our Terraform learning journey, focusing on state management and scaling infrastructure using a load balancer. We'll cover key concepts like Terraform state, shared storage for state files, and limitat...]]></description><link>https://blog.simiops.fun/managing-high-traffic-applications-with-aws-elastic-load-balancer-and-terraform</link><guid isPermaLink="true">https://blog.simiops.fun/managing-high-traffic-applications-with-aws-elastic-load-balancer-and-terraform</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[30DaysOfTerraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Fri, 30 May 2025 14:06:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749332366225/fca348c9-33bd-4e80-a5c4-298f91655083.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today we will explore Chapter 2 and Chapter 3 of our Terraform learning journey, focusing on state management and scaling infrastructure using a load balancer. We'll cover key concepts like Terraform state, shared storage for state files, and limitations, followed by a hands-on activity to scale a web server cluster.</p>
<h2 id="heading-deploying-a-load-balancer"><strong>Deploying a Load Balancer</strong></h2>
<p>A load balancer distributes incoming traffic across multiple servers to improve availability and fault tolerance. In our Terraform configuration, we deploy an Application Load Balancer (ALB) to manage traffic to our web servers.</p>
<h3 id="heading-why-use-a-load-balancer"><strong>Why Use a Load Balancer?</strong></h3>
<ul>
<li><p><strong>High Availability</strong>: If one instance fails, traffic routes to healthy instances.</p>
</li>
<li><p><strong>Scalability</strong>: Easily add more instances behind the ALB.</p>
</li>
<li><p><strong>Efficient Traffic Distribution</strong>: Balances load across multiple servers.</p>
</li>
</ul>
<h2 id="heading-activity-scaling-web-servers-amp-managing-state"><strong>Activity: Scaling Web Servers &amp; Managing State</strong></h2>
<h3 id="heading-step-1-modify-configuration-for-multiple-instances"><strong>Step 1: Modify Configuration for Multiple Instances</strong></h3>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"web_server"</span> {
  count         = 3  # Creates 3 instances
  ami           = data.aws_ami.amazon_linux.id
  instance_type = var.instance_type
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]
  user_data     = file(<span class="hljs-attr">"user_data.sh"</span>)
  tags = {
    Name = <span class="hljs-attr">"WebServer-${count.index}"</span>
  }
}
</code></pre>
<h3 id="heading-step-2-update-alb-target-group-attachments"><strong>Step 2: Update ALB Target Group Attachments</strong></h3>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_lb_target_group_attachment"</span> <span class="hljs-string">"web_attachment"</span> {
  count            = length(aws_instance.web_server)
  target_group_arn = aws_lb_target_group.web_tg.arn
  target_id        = aws_instance.web_server[count.index].id
  port             = 80
}
</code></pre>
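<p>The snippet above references an ALB and target group defined elsewhere in the configuration. For completeness, here is a sketch of what those resources look like; the default-VPC data sources and resource names are illustrative assumptions chosen to match the references used in this post:</p>
<pre><code class="lang-hcl">data "aws_vpc" "default" {
  default = true
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

resource "aws_lb" "web_alb" {
  name               = "web-alb"
  load_balancer_type = "application"
  security_groups    = [aws_security_group.ec2_sg.id]
  subnets            = data.aws_subnets.default.ids
}

resource "aws_lb_target_group" "web_tg" {
  name     = "web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path = "/"
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.web_alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_tg.arn
  }
}

output "alb_dns_name" {
  value = aws_lb.web_alb.dns_name
}
</code></pre>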
<h3 id="heading-step-3-initialize-amp-apply"><strong>Step 3: Initialize &amp; Apply</strong></h3>
<pre><code class="lang-bash">terraform init
terraform plan
terraform apply
</code></pre>
<h3 id="heading-step-4-verify-scaling"><strong>Step 4: Verify Scaling</strong></h3>
<ul>
<li><p>Check the AWS EC2 Console to see multiple instances.</p>
</li>
<li><p>Access the ALB DNS (<code>output "alb_dns_name"</code>) to confirm load balancing works.</p>
</li>
</ul>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<ul>
<li><p><strong>Load Balancers</strong> improve scalability and availability.</p>
</li>
<li><p><strong>Terraform State</strong> is crucial for tracking infrastructure.</p>
</li>
<li><p><strong>Remote State Storage</strong> (S3, Terraform Cloud) enables team collaboration.</p>
</li>
<li><p><strong>Avoid Manual State Edits</strong> to prevent corruption.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Deploying a Highly Available Web App on AWS Using Terraform]]></title><description><![CDATA[Today marked an exciting step in my Terraform 30-Day Challenge as I dove into Chapter 2 of "Terraform: Up & Running" by Yevgeniy Brikman. The focus was on variables and data sources, two powerful features that make Terraform configurations more dynam...]]></description><link>https://blog.simiops.fun/deploying-a-highly-available-web-app-on-aws-using-terraform</link><guid isPermaLink="true">https://blog.simiops.fun/deploying-a-highly-available-web-app-on-aws-using-terraform</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[30Days_TerraformChallenge]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Thu, 29 May 2025 18:47:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748611860219/c766d730-946f-4c32-9a40-6d78d37c2a67.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today marked an exciting step in my Terraform 30-Day Challenge as I dove into Chapter 2 of <em>"Terraform: Up &amp; Running"</em> by Yevgeniy Brikman. The focus was on variables and data sources, two powerful features that make Terraform configurations more dynamic and reusable.</p>
<h2 id="heading-what-i-learned"><strong>What I Learned</strong></h2>
<h3 id="heading-1-using-variables-for-flexibility"><strong>1. Using Variables for Flexibility</strong></h3>
<p>Variables in Terraform allow us to <strong>parameterize</strong> our configurations, making them more adaptable. Instead of hardcoding values like <code>region</code> or <code>instance_type</code>, we can define them as variables and pass different values as needed.</p>
<p>In my implementation, I defined two variables in <a target="_blank" href="http://variables.tf"><code>variables.tf</code></a>:</p>
<pre><code class="lang-hcl">variable <span class="hljs-string">"aws_region"</span> {
  description = <span class="hljs-attr">"AWS region to deploy resources"</span>
  type        = string
  default     = <span class="hljs-attr">"us-west-2"</span>
}

variable <span class="hljs-string">"instance_type"</span> {
  description = <span class="hljs-attr">"EC2 instance type"</span>
  type        = string
  default     = <span class="hljs-attr">"t2.micro"</span>
}
</code></pre>
<p>These variables were then referenced in the main configuration:</p>
<pre><code class="lang-hcl">provider <span class="hljs-string">"aws"</span> {
  region = var.aws_region
}

resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"web_server"</span> {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = var.instance_type
  # ... rest of the config
}
</code></pre>
<p>This makes it easy to <strong>change regions or instance types</strong> without modifying the core infrastructure code.</p>
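<p>In practice, those overrides usually come from a <code>terraform.tfvars</code> file or a <code>-var</code> flag rather than edits to the configuration itself. A sketch (the values here are just examples):</p>
<pre><code class="lang-hcl"># terraform.tfvars — loaded automatically by terraform plan/apply
aws_region    = "us-east-1"
instance_type = "t3.micro"
</code></pre>
<p>The same override can be passed on the command line: <code>terraform apply -var="instance_type=t3.micro"</code>.</p>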
<h3 id="heading-2-leveraging-data-sources"><strong>2. Leveraging Data Sources</strong></h3>
<p>Data sources allow Terraform to <strong>fetch external information</strong> (like the latest Amazon Linux AMI) and use it in configurations. Instead of hardcoding an AMI ID (which changes frequently), I used:</p>
<pre><code class="lang-hcl">data <span class="hljs-string">"aws_ami"</span> <span class="hljs-string">"amazon_linux"</span> {
  most_recent = true
  owners      = [<span class="hljs-attr">"amazon"</span>]

  filter {
    name   = <span class="hljs-attr">"name"</span>
    values = [<span class="hljs-attr">"amzn2-ami-hvm-*-x86_64-gp2"</span>]
  }

  filter {
    name   = <span class="hljs-attr">"virtualization-type"</span>
    values = [<span class="hljs-attr">"hvm"</span>]
  }
}
</code></pre>
<p>This dynamically retrieves the <strong>latest Amazon Linux 2 AMI</strong>, ensuring my EC2 instance always uses an up-to-date image.</p>
<h3 id="heading-3-security-group-amp-user-data-configuration"><strong>3. Security Group &amp; User Data Configuration</strong></h3>
<p>I also defined a <strong>security group</strong> to allow <strong>SSH (22), HTTP (80), and HTTPS (443)</strong> traffic:</p>
<pre><code class="lang-hcl">resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"ec2_sg"</span> {
  name        = <span class="hljs-attr">"web-server-sg"</span>
  description = <span class="hljs-attr">"Allow web traffic and SSH access"</span>

  # Ingress rules for SSH, HTTP, HTTPS
  # Egress rule allowing all outbound traffic
}
</code></pre>
<p>Additionally, I used <code>user_data</code> to automatically install and configure an Apache web server upon instance launch:</p>
<pre><code class="lang-hcl">user_data = &lt;&lt;-EOF
            #!/bin/bash
            yum update -y
            yum install -y httpd
            systemctl start httpd
            systemctl enable httpd
            echo <span class="hljs-string">"&lt;h1&gt;Hello World from Terraform 30 Day Challenge: Day 4&lt;/h1&gt;"</span> &gt; /var/www/html/index.html
            EOF
</code></pre>
<h2 id="heading-results"><strong>Results</strong></h2>
<p>After running <code>terraform apply</code>, my AWS infrastructure was successfully deployed:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748610366357/6fbdede3-7f77-42db-a977-246850e4cbf9.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>A <strong>t2.micro EC2 instance</strong> running Amazon Linux 2</p>
</li>
<li><p>A <strong>security group</strong> allowing web and SSH access</p>
</li>
<li><p>An <strong>Apache web server</strong> serving a "Hello World" page</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748610384467/da2ab049-833f-4652-8e74-6d31d2339533.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-key-takeaways"><strong>Key Takeaways</strong></h2>
<ul>
<li><p><strong>Variables</strong> make Terraform configurations <strong>reusable and flexible</strong>.</p>
</li>
<li><p><strong>Data sources</strong> help fetch <strong>dynamic external data</strong> (like AMIs).</p>
</li>
<li><p><strong>Security groups</strong> define inbound/outbound traffic rules.</p>
</li>
<li><p><strong>User data</strong> automates instance initialization.</p>
</li>
</ul>
<p>Looking forward to <strong>Day 5</strong>, where I’ll explore more advanced Terraform concepts.</p>
]]></content:encoded></item><item><title><![CDATA[Deploying Your First Server with Terraform: A Beginner's Guide]]></title><description><![CDATA[Today, I worked through Chapter 2 of my cloud infrastructure studies, focusing on "Deploying a Single Server" and "Deploying a Web Server" (up to page 59). The goal was to deploy a basic web server on a cloud platform using Terraform and design an ar...]]></description><link>https://blog.simiops.fun/deploying-your-first-server-with-terraform-a-beginners-guide</link><guid isPermaLink="true">https://blog.simiops.fun/deploying-your-first-server-with-terraform-a-beginners-guide</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Tue, 27 May 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748608286598/bc6e4baf-e5f2-4b61-8101-6aab3d037d43.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today, I worked through <strong>Chapter 2</strong> of my cloud infrastructure studies, focusing on "Deploying a Single Server" and "Deploying a Web Server" (up to page 59). The goal was to deploy a basic web server on a cloud platform using Terraform and design an architecture diagram for it.</p>
<p>I chose AWS as my cloud provider since it's widely used and integrates well with Terraform. Below, I’ll walk through the steps I took to complete this task.</p>
<h2 id="heading-task-1-designing-the-architecture-diagram"><strong>Task 1: Designing the Architecture Diagram</strong></h2>
<h3 id="heading-since-i-used-aws-i-designed-a-simple-architecture-in-drawiohttpdrawio-showing">Since I used <strong>AWS</strong>, I designed a simple architecture in <a target="_blank" href="http://draw.io"><strong>draw.io</strong></a> showing:</h3>
<h3 id="heading-single-server-deployment">Single Server Deployment</h3>
<ul>
<li><p><strong>Region:</strong> us-west-2</p>
</li>
<li><p><strong>Instance Type:</strong> t2.micro</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748604548848/5aaf6a7c-9f00-442e-84cf-ed2dadb790dd.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h3 id="heading-web-server-deployment">Web Server Deployment</h3>
<ul>
<li><p><strong>Region:</strong> us-west-2</p>
</li>
<li><p><strong>Instance Type:</strong> t2.micro</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748604574541/bdec17c8-e192-4aae-ac83-77d2c3bb4caa.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h3 id="heading-key-components"><strong>Key Components:</strong></h3>
<ol>
<li><p><strong>EC2 Instance</strong> – Hosts the Apache web server.</p>
</li>
<li><p><strong>Security Group</strong> – Controls inbound/outbound traffic.</p>
</li>
<li><p><strong>User Data Script</strong> – Automates web server setup.</p>
</li>
</ol>
<h2 id="heading-task-2-writing-terraform-code-for-a-basic-web-server"><strong>Task 2: Writing Terraform Code for a Basic Web Server</strong></h2>
<h3 id="heading-step-1-setting-up-terraform"><strong>Step 1: Setting Up Terraform</strong></h3>
<p>Before writing any code, I ensured:</p>
<ul>
<li><p>Terraform was installed (<code>terraform --version</code>).</p>
</li>
<li><p>AWS CLI was configured with my credentials (<code>aws configure</code>).</p>
</li>
</ul>
<h3 id="heading-step-2-writing-the-terraform-configuration"><strong>Step 2: Writing the Terraform Configuration</strong></h3>
<p>I created a new directory for this project and wrote a <a target="_blank" href="http://main.tf"><code>main.tf</code></a> file with the following code:</p>
<h3 id="heading-the-terraform-code-for-basic-single-server">The Terraform code for the basic single server:</h3>
<pre><code class="lang-hcl"># Configure the AWS Provider
provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-attr">"us-west-2"</span> 
}

resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"ec2_sg"</span> {
  name        = <span class="hljs-attr">"ec2-security-group"</span>
  description = <span class="hljs-attr">"Allow SSH inbound traffic"</span>

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = <span class="hljs-attr">"tcp"</span>
    cidr_blocks = [<span class="hljs-attr">"0.0.0.0/0"</span>] 
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = <span class="hljs-attr">"-1"</span>
    cidr_blocks = [<span class="hljs-attr">"0.0.0.0/0"</span>]
  }
}

resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"my_single_ec2_server"</span> {
  ami           = <span class="hljs-attr">"ami-04999cd8f2624f834"</span> 
  instance_type = <span class="hljs-attr">"t2.micro"</span>              

  vpc_security_group_ids = [aws_security_group.ec2_sg.id]

  tags = {
    Name = <span class="hljs-attr">"Single-Server"</span>
  }
}


output <span class="hljs-string">"instance_public_ip"</span> {
  description = <span class="hljs-attr">"Public IP address of the EC2 instance"</span>
  value       = aws_instance.my_single_ec2_server.public_ip
}
</code></pre>
<h3 id="heading-terraform-code-for-basic-web-server">Terraform code for Basic Web Server:</h3>
<pre><code class="lang-hcl"># Configure the AWS Provider
provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-attr">"us-west-2"</span>
}
resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"ec2_sg"</span> {
  name        = <span class="hljs-attr">"web-server-sg"</span>
  description = <span class="hljs-attr">"Allow web traffic and SSH access"</span>

  ingress {
    description = <span class="hljs-attr">"SSH"</span>
    from_port   = 22
    to_port     = 22
    protocol    = <span class="hljs-attr">"tcp"</span>
    cidr_blocks = [<span class="hljs-attr">"0.0.0.0/0"</span>] 
  }

  ingress {
    description = <span class="hljs-attr">"HTTP"</span>
    from_port   = 80
    to_port     = 80
    protocol    = <span class="hljs-attr">"tcp"</span>
    cidr_blocks = [<span class="hljs-attr">"0.0.0.0/0"</span>]
  }

  ingress {
    description = <span class="hljs-attr">"HTTPS"</span>
    from_port   = 443
    to_port     = 443
    protocol    = <span class="hljs-attr">"tcp"</span>
    cidr_blocks = [<span class="hljs-attr">"0.0.0.0/0"</span>]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = <span class="hljs-attr">"-1"</span>
    cidr_blocks = [<span class="hljs-attr">"0.0.0.0/0"</span>]
  }

  tags = {
    Name = <span class="hljs-attr">"web-server-security-group"</span>
  }
}

resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"web_server"</span> {
  ami           = <span class="hljs-attr">"ami-04999cd8f2624f834"</span> 
  instance_type = <span class="hljs-attr">"t2.micro"</span>             
  vpc_security_group_ids = [aws_security_group.ec2_sg.id]

  user_data = &lt;&lt;-EOF
              #!/bin/bash
              yum update -y
              yum install -y httpd
              systemctl start httpd
              systemctl enable httpd
              echo <span class="hljs-attr">"&lt;h1&gt;Hello World from Terraform 30 Day Challenge&lt;/h1&gt;"</span> &gt; /var/www/html/index.html
              EOF

  tags = {
    Name = <span class="hljs-attr">"WebServer"</span>
  }
}

output <span class="hljs-string">"public_ip"</span> {
  description = <span class="hljs-attr">"Public IP address of the web server"</span>
  value       = aws_instance.web_server.public_ip
}

output <span class="hljs-string">"public_dns"</span> {
  description = <span class="hljs-attr">"Public DNS name of the web server"</span>
  value       = aws_instance.web_server.public_dns
}
</code></pre>
<h2 id="heading-challenges-faced-amp-solutions"><strong>Challenges Faced &amp; Solutions</strong></h2>
<ol>
<li><p><strong>AMI ID Variability</strong> – Had to ensure I used the correct Amazon Linux 2 AMI for <code>us-west-2</code>.</p>
<ul>
<li><em>Solution:</em> Checked the AWS AMI catalog.</li>
</ul>
</li>
<li><p><strong>Security Group Misconfiguration</strong> – Initially blocked HTTP traffic.</p>
<ul>
<li><em>Solution:</em> Verified <code>ingress</code> rules for port <code>80</code>.</li>
</ul>
</li>
</ol>
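<p>That first challenge can also be solved in code: instead of hardcoding <code>ami-04999cd8f2624f834</code>, an <code>aws_ami</code> data source can look up the latest Amazon Linux 2 image at plan time. A sketch:</p>
<pre><code class="lang-hcl">data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Then reference it in the instance resource:
#   ami = data.aws_ami.amazon_linux.id
</code></pre>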
]]></content:encoded></item><item><title><![CDATA[Step-by-Step Guide to Setting Up Terraform, AWS CLI, and Your AWS Environment.]]></title><description><![CDATA[Reading: Chapter 2 of "Terraform: Up & Running" by Yevgeniy (Jim) Brikman, focusing on "Setting Up Your AWS Account", and "Installing Terraform."
Activity: Setting up AWS and Terraform Development Environment
Set up your AWS account.

Go to AWS websi...]]></description><link>https://blog.simiops.fun/step-by-step-guide-to-setting-up-terraform-aws-cli-and-your-aws-environment</link><guid isPermaLink="true">https://blog.simiops.fun/step-by-step-guide-to-setting-up-terraform-aws-cli-and-your-aws-environment</guid><category><![CDATA[30DaysOfTerraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Mon, 26 May 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748604022742/9823d9af-981b-4a19-8eac-d3372591c062.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Reading</strong>: Chapter 2 of "Terraform: Up &amp; Running" by Yevgeniy (Jim) Brikman, focusing on "Setting Up Your AWS Account", and "Installing Terraform."</p>
<h3 id="heading-activity-setting-up-aws-and-terraform-development-environment">Activity: Setting up AWS and Terraform Development Environment</h3>
<h4 id="heading-set-up-your-aws-account">Set up your AWS account.</h4>
<ul>
<li><p>Go to <a target="_blank" href="https://aws.amazon.com/">AWS website</a> and click "Create an AWS Account"</p>
</li>
<li><p>Complete the registration process providing email, password, and account information</p>
</li>
<li><p>Enter payment information (a credit card is required, even for the free tier)</p>
</li>
<li><p>Complete phone verification</p>
</li>
<li><p>Select a support plan (the free Basic plan is recommended for beginners)</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748603490652/99635da1-9a6e-492b-9a17-0e860276cda8.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-install-terraform-locally">Install Terraform locally.</h4>
<ul>
<li><p>Download Terraform from <a target="_blank" href="http://terraform.io/downloads">terraform.io/downloads</a></p>
</li>
<li><p>Extract the downloaded zip file</p>
</li>
<li><p>Add Terraform to your system PATH:</p>
<ul>
<li><p>Windows: Move terraform.exe to a directory in your PATH or add its location to PATH</p>
</li>
<li><p>macOS/Linux: Move the terraform binary to /usr/local/bin/</p>
</li>
</ul>
</li>
<li><p>Verify installation by opening a terminal and typing: <code>terraform -version</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748603647218/c8b97c7e-c091-4054-9fe3-72301346dad6.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-install-aws-cli-and-configure-it">Install AWS CLI and configure it.</h4>
<ul>
<li><p>Download AWS CLI:</p>
<ul>
<li><p>Windows: Download and run the MSI installer</p>
</li>
<li><p>macOS: Run <code>brew install awscli</code> or download the PKG installer</p>
</li>
<li><p>Linux: Run appropriate package manager command (e.g., <code>apt install awscli</code>)</p>
</li>
</ul>
</li>
<li><p>Configure AWS CLI by running: <code>aws configure</code></p>
</li>
<li><p>Enter your AWS Access Key ID, Secret Access Key, default region, and output format</p>
</li>
<li><p>Find your access keys in the AWS Console under IAM → Security credentials</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748603869314/c5adec1f-c6b4-4281-a133-268454ba8f53.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h4 id="heading-install-visual-studio-code-vscode-and-add-the-aws-plugin">Install Visual Studio Code (VSCode) and add the AWS plugin.</h4>
<ul>
<li><p>Download VSCode from <a target="_blank" href="http://code.visualstudio.com">code.visualstudio.com</a></p>
</li>
<li><p>Install VSCode on your system</p>
</li>
<li><p>Open VSCode and navigate to Extensions (Ctrl+Shift+X or Cmd+Shift+X)</p>
</li>
<li><p>Search for "AWS Toolkit" and install it</p>
</li>
<li><p>Also install the "HashiCorp Terraform" extension for Terraform support</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748603926787/f00c5f90-aded-48ba-a6f5-e33bfbbca4c9.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h4 id="heading-configure-your-vscode-to-work-with-aws">Configure your VSCode to work with AWS.</h4>
<ul>
<li><p>Click on the AWS icon in the activity bar</p>
</li>
<li><p>Select "Connect to AWS"</p>
</li>
<li><p>Choose to use the credentials from your AWS CLI configuration</p>
</li>
<li><p>Verify connection by expanding your region in the AWS Explorer panel</p>
</li>
<li><p>Configure Terraform settings in VSCode:</p>
<ul>
<li><p>Go to Settings (Ctrl+,)</p>
</li>
<li><p>Search for "terraform" and adjust formatting settings as needed</p>
</li>
<li><p>Enable auto-formatting and validation features</p>
</li>
</ul>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[What is Infrastructure as Code (IaC) and Why It's Transforming DevOps]]></title><description><![CDATA[I started my Terraform learning journey with a 30 day challenge by the AWS AI/ML User Group. My goal for the next 30 days is to learn and become better at Infrastructure as Code (IaC).
IaC is a way to automate infrastructure in your environments, mos...]]></description><link>https://blog.simiops.fun/what-is-infrastructure-as-code-iac-and-why-its-transforming-devops</link><guid isPermaLink="true">https://blog.simiops.fun/what-is-infrastructure-as-code-iac-and-why-its-transforming-devops</guid><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[30 Days of Code]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Sun, 25 May 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748435652348/c483176b-78a8-499e-964f-d92707eb2c5c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I started my Terraform learning journey with a 30 day challenge by the <a target="_blank" href="https://awsaimlkenya.org/">AWS AI/ML User Group</a>. My goal for the next 30 days is to learn and become better at Infrastructure as Code (IaC).</p>
<p>IaC is a way to automate infrastructure in your environments, mostly cloud-based, but it can also be used on-prem. Essentially, it means managing and provisioning your infrastructure through code instead of manual processes. This brings a ton of benefits, like consistency, scalability, and speed. The beauty of it lies in its ability to bring software development practices like version control, testing, and CI/CD to your infrastructure.</p>
<p>Tools like Terraform, CloudFormation, and Ansible are at the forefront of this movement, allowing engineers to treat their infrastructure like any other codebase, leading to fewer errors and faster deployments.</p>
<p>My focus with Terraform specifically is to deeply understand its declarative language and how it interacts with various cloud providers to provision and manage resources efficiently. I'm excited to dive deeper into modules, state management, and best practices for collaborative IaC development. Over the next 30 days, I aim to achieve a solid foundational understanding of the core concepts.</p>
<p>I want to successfully deploy and manage a complete application stack using Terraform, covering networking, compute, and database services. Furthermore, I hope to gain practical experience with state file management, understand remote backends, and explore strategies for handling sensitive data securely. See you on day 30</p>
<h2 id="heading-day-1-tasks">Day 1 tasks</h2>
<p>Completed the assigned tasks for the day:</p>
<ol>
<li><p><strong>Reading</strong>: Chapter 1 of "Terraform: Up &amp; Running"</p>
</li>
<li><p><strong>Complete a hands on lab</strong></p>
</li>
<li><p><strong>Blog Post</strong>: "What is IaC and what are its benefits?"</p>
</li>
<li><p><strong>Social Media Post</strong>: "💻 Just installed Terraform, AWS CLI, and configured my AWS environment with VSCode. Ready to deploy some infrastructure! #TerraformSetup #AWS #DevOps"</p>
</li>
</ol>
<h2 id="heading-hands-on-lab">Hands on Lab</h2>
<p>This lab set up a VPC in two ways. Initially, we manually configured a test VPC in the AWS console, creating subnets, route tables, an Elastic IP, and both Internet and NAT Gateways. This approach built a foundational understanding of each component's role.</p>
<p>Following this, we transitioned to Terraform, automating the exact same VPC setup. This phase emphasized the principles of Infrastructure as Code, demonstrating how the entire network can be defined declaratively using the sample code provided.</p>
<h3 id="heading-manual-vpc-console-setup">Manual VPC Console Setup</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748434212899/1fc5fd0c-41df-4dfb-9034-ed6cffcaadc5.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-then-set-up-terraform-to-do-the-same">Then Set up Terraform to do the same</h4>
<h4 id="heading-using-the-code-examples-my-maintf">Using the code examples, my <code>main.tf</code></h4>
<pre><code class="lang-hcl"># Configure the AWS Provider
provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-attr">"us-west-2"</span>
}

#Retrieve the list of AZs in the current AWS region
data <span class="hljs-string">"aws_availability_zones"</span> <span class="hljs-string">"available"</span> {}
data <span class="hljs-string">"aws_region"</span> <span class="hljs-string">"current"</span> {}

#Define the VPC
resource <span class="hljs-string">"aws_vpc"</span> <span class="hljs-string">"vpc"</span> {
  cidr_block = var.vpc_cidr

  tags = {
    Name        = var.vpc_name
    Environment = <span class="hljs-attr">"demo_environment"</span>
    Terraform   = <span class="hljs-attr">"true"</span>
  }
}

#Deploy the private subnets
resource <span class="hljs-string">"aws_subnet"</span> <span class="hljs-string">"private_subnets"</span> {
  for_each          = var.private_subnets
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, each.value)
  availability_zone = tolist(data.aws_availability_zones.available.names)[each.value]

  tags = {
    Name      = each.key
    Terraform = <span class="hljs-attr">"true"</span>
  }
}

#Deploy the public subnets
resource <span class="hljs-string">"aws_subnet"</span> <span class="hljs-string">"public_subnets"</span> {
  for_each                = var.public_subnets
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, each.value + 100)
  availability_zone       = tolist(data.aws_availability_zones.available.names)[each.value]
  map_public_ip_on_launch = true

  tags = {
    Name      = each.key
    Terraform = <span class="hljs-attr">"true"</span>
  }
}

#Create route tables for public and private subnets
resource <span class="hljs-string">"aws_route_table"</span> <span class="hljs-string">"public_route_table"</span> {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block     = <span class="hljs-attr">"0.0.0.0/0"</span>
    gateway_id     = aws_internet_gateway.internet_gateway.id
    #nat_gateway_id = aws_nat_gateway.nat_gateway.id
  }
  tags = {
    Name      = <span class="hljs-attr">"demo_public_rtb"</span>
    Terraform = <span class="hljs-attr">"true"</span>
  }
}

resource <span class="hljs-string">"aws_route_table"</span> <span class="hljs-string">"private_route_table"</span> {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block     = <span class="hljs-attr">"0.0.0.0/0"</span>
    # gateway_id     = aws_internet_gateway.internet_gateway.id
    nat_gateway_id = aws_nat_gateway.nat_gateway.id
  }
  tags = {
    Name      = <span class="hljs-attr">"demo_private_rtb"</span>
    Terraform = <span class="hljs-attr">"true"</span>
  }
}

#Create route table associations
resource <span class="hljs-string">"aws_route_table_association"</span> <span class="hljs-string">"public"</span> {
  depends_on     = [aws_subnet.public_subnets]
  route_table_id = aws_route_table.public_route_table.id
  for_each       = aws_subnet.public_subnets
  subnet_id      = each.value.id
}

resource <span class="hljs-string">"aws_route_table_association"</span> <span class="hljs-string">"private"</span> {
  depends_on     = [aws_subnet.private_subnets]
  route_table_id = aws_route_table.private_route_table.id
  for_each       = aws_subnet.private_subnets
  subnet_id      = each.value.id
}

#Create Internet Gateway
resource <span class="hljs-string">"aws_internet_gateway"</span> <span class="hljs-string">"internet_gateway"</span> {
  vpc_id = aws_vpc.vpc.id
  tags = {
    Name = <span class="hljs-attr">"demo_igw"</span>
  }
}

#Create EIP for NAT Gateway
resource <span class="hljs-string">"aws_eip"</span> <span class="hljs-string">"nat_gateway_eip"</span> {
  domain     = <span class="hljs-attr">"vpc"</span>
  depends_on = [aws_internet_gateway.internet_gateway]
  tags = {
    Name = <span class="hljs-attr">"demo_igw_eip"</span>
  }
}

#Create NAT Gateway
resource <span class="hljs-string">"aws_nat_gateway"</span> <span class="hljs-string">"nat_gateway"</span> {
  depends_on    = [aws_subnet.public_subnets]
  allocation_id = aws_eip.nat_gateway_eip.id
  subnet_id     = aws_subnet.public_subnets[<span class="hljs-attr">"public_subnet_1"</span>].id
  tags = {
    Name = <span class="hljs-attr">"demo_nat_gateway"</span>
  }
}
</code></pre>
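<p>To make the subnet math in the code above concrete: <code>cidrsubnet(prefix, newbits, netnum)</code> carves a child CIDR out of the VPC range. With the default <code>10.0.0.0/16</code>, adding 8 new bits yields /24 subnets, which you can verify yourself in <code>terraform console</code>:</p>
<pre><code class="lang-hcl">&gt; cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"    # private_subnet_1
&gt; cidrsubnet("10.0.0.0/16", 8, 101)
"10.0.101.0/24"  # public_subnet_1 (offset by each.value + 100)
</code></pre>
<p>The +100 offset for public subnets keeps the two subnet groups in non-overlapping /24 blocks of the same VPC.</p>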
<h4 id="heading-variablestf"><code>variables.tf</code></h4>
<pre><code class="lang-hcl">variable <span class="hljs-string">"aws_region"</span> {
  type    = string
  default = <span class="hljs-attr">"us-west-2"</span>
}

variable <span class="hljs-string">"vpc_name"</span> {
  type    = string
  default = <span class="hljs-attr">"demo_vpc"</span>
}

variable <span class="hljs-string">"vpc_cidr"</span> {
  type    = string
  default = <span class="hljs-attr">"10.0.0.0/16"</span>
}

variable <span class="hljs-string">"private_subnets"</span> {
  default = {
    <span class="hljs-attr">"private_subnet_1"</span> = 1
    <span class="hljs-attr">"private_subnet_2"</span> = 2
    <span class="hljs-attr">"private_subnet_3"</span> = 3
  }
}

variable <span class="hljs-string">"public_subnets"</span> {
  default = {
    <span class="hljs-attr">"public_subnet_1"</span> = 1
    <span class="hljs-attr">"public_subnet_2"</span> = 2
    <span class="hljs-attr">"public_subnet_3"</span> = 3
  }
}
</code></pre>
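<p>The lab's sample code stops here, but a small <code>outputs.tf</code> makes the created IDs visible after <code>terraform apply</code>. This is my own addition, not part of the provided code, using <code>for</code> expressions over the <code>for_each</code> subnet resources:</p>
<pre><code class="lang-hcl"># outputs.tf - my own addition for visibility after apply
output "vpc_id" {
  value = aws_vpc.vpc.id
}

output "public_subnet_ids" {
  value = { for name, subnet in aws_subnet.public_subnets : name =&gt; subnet.id }
}

output "private_subnet_ids" {
  value = { for name, subnet in aws_subnet.private_subnets : name =&gt; subnet.id }
}
</code></pre>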
<h4 id="heading-and-terraform-install-and-terraform-init">Installed Terraform, then ran <code>terraform init</code></h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748434618625/6104b474-c25a-4879-a91b-ea1427b30fba.png" alt class="image--center mx-auto" /></p>
<p>Did a <code>terraform plan</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748434648128/2a4ddca4-b19c-4c0a-8222-677e28553a1d.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-and-terraform-apply">And <code>terraform apply</code></h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748434666311/569377bb-604e-44d6-9fbe-1091faa83b55.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-and-vpc-resources-created-using-terraform">And VPC resources created using Terraform</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748434680972/9d86098a-ffed-4915-bb69-b57e226ce6a2.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-and-lastly-using-terraform-destroy-to-automatically-remove-the-created-resources">And lastly, using <code>terraform destroy</code> to automatically remove the created resources</h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748435197203/2f5278d2-09d4-499e-bdae-270fdc0564d9.png" alt class="image--center mx-auto" /></p>
<p>At the end of day 1 I had completed the full learning cycle and understood lifecycle management end to end. I then used <code>terraform destroy</code> to automatically remove all the created resources, demonstrating Terraform's capability for efficient cleanup and deprovisioning. Working through the entire <code>plan -&gt; apply -&gt; destroy</code> cycle has been invaluable in solidifying my understanding.</p>
]]></content:encoded></item><item><title><![CDATA[AWS GuardDuty vs. Inspector vs. Shield, What’s the Difference?]]></title><description><![CDATA[Securing your AWS environment can feel daunting as there are so many tools out there, and it’s not always clear which one does what. Take AWS GuardDuty, Inspector, and Shield, for example. At first glance, they might seem like they’re all doing the s...]]></description><link>https://blog.simiops.fun/aws-guardduty-vs-inspector-vs-shield-whats-the-difference</link><guid isPermaLink="true">https://blog.simiops.fun/aws-guardduty-vs-inspector-vs-shield-whats-the-difference</guid><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Mwanza Simi]]></dc:creator><pubDate>Sun, 09 Mar 2025 21:46:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/fRVPzBYcd5A/upload/82109e137f9372d74b3613b852f03481.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Securing your AWS environment can feel daunting as there are so many tools out there, and it’s not always clear which one does what. Take AWS GuardDuty, Inspector, and Shield, for example. At first glance, they might seem like they’re all doing the same thing of keeping your cloud safe. But dig a little deeper, and you’ll see they each have their own power. So, how do you know which one to use, What makes GuardDuty different from Inspector, and when does Shield come into play?</p>
<h2 id="heading-your-cloud-detectiveaws-guardduty">Your Cloud Detective, AWS GuardDuty</h2>
<p><img src="https://cdn.pixabay.com/photo/2016/05/30/14/23/detective-1424831_1280.png" alt="Free detective searching man vector" /></p>
<p>Think of AWS GuardDuty as a detective that's always on the lookout for suspicious activity. It's a threat detection service that continuously monitors your AWS environment for signs of trouble. It uses machine learning and analyzes data from various sources, like AWS CloudTrail logs, VPC Flow Logs, and DNS logs, to spot unusual behavior.</p>
<p>For example, if someone tries to log in to your account from a strange location or if an EC2 instance starts communicating with a known malicious IP address, it will flag it. It’s like having a security guard who’s always watching and ready to raise the alarm.</p>
<p>If you want to detect potential threats in real time, like unauthorized access, compromised instances, or suspicious network activity, GuardDuty is your tool.</p>
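<p>Turning GuardDuty on is itself a one-step operation. As a rough Terraform sketch (shown only to illustrate how lightweight the setup is, per region and account):</p>
<pre><code class="lang-hcl"># Enables GuardDuty threat detection for the current account and region
resource "aws_guardduty_detector" "main" {
  enable = true
}
</code></pre>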
<h2 id="heading-the-vulnerability-scanner-aws-inspector">The Vulnerability Scanner, AWS Inspector</h2>
<p><img src="https://www.safetystratus.com/wp-content/uploads/2022/11/Picture1-11-915x610.jpg" alt="Inspections and Observations: Tech Improvements | SafetyStratus" /></p>
<p>AWS Inspector is designed to find vulnerabilities in your applications and infrastructure. It automatically assesses your resources, such as EC2 instances, and checks for common security issues, like open ports, missing patches, or misconfigurations.</p>
<p>By running automated security assessments, it provides a detailed report with recommendations on how to fix the issues it finds. It's not real-time like GuardDuty, but more of a periodic check-up to make sure everything is secure.</p>
<p>If you’re looking to identify and fix vulnerabilities in your applications or infrastructure, Inspector is the right choice. It’s especially useful before deploying new applications or after making significant changes to your environment. Think of it as a way to ensure your systems are secure before they go live.</p>
<h2 id="heading-your-ddos-bodyguard-aws-shield">Your DDoS Bodyguard, AWS Shield</h2>
<p><img src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSpfyit0PVg5x4ijdF6_Z8VblnC2XmBMp8dJw&amp;s" alt="Mr Bodyguard | ID#: 353 | Funny Emoticons" class="image--center mx-auto" /></p>
<p>AWS Shield is all about protecting your applications from Distributed Denial of Service (DDoS) attacks. These attacks can overwhelm your systems with traffic, making them unavailable to legitimate users. Shield comes in two versions: Standard and Advanced.</p>
<ul>
<li><p><strong>Shield Standard</strong> is automatically included with all AWS accounts and provides basic protection against common DDoS attacks.</p>
</li>
<li><p><strong>Shield Advanced</strong> is a paid service that offers enhanced protection, including 24/7 access to the AWS DDoS Response Team, detailed attack reports, and financial protection against scaling costs during an attack.</p>
</li>
</ul>
<p>If you’re running applications that need to be highly available and you’re concerned about DDoS attacks, Shield is a must. Shield Advanced is ideal for businesses that need extra protection and support, especially if they’re running critical workloads.</p>
<h2 id="heading-how-they-work-together">How They Work Together</h2>
<p>While they all serve different purposes, they can work together to provide a comprehensive security strategy. Here’s how:</p>
<ul>
<li><p><strong>GuardDuty</strong> monitors for threats in real time, helping you detect and respond to suspicious activity.</p>
</li>
<li><p><strong>Inspector</strong> identifies vulnerabilities in your applications and infrastructure, giving you a chance to fix them before they’re exploited.</p>
</li>
<li><p><strong>Shield</strong> protects your applications from DDoS attacks, ensuring they stay online and available.</p>
</li>
</ul>
<p>For example, Inspector might find an open port on one of your EC2 instances. You close the port, but GuardDuty later detects unusual traffic from that instance, indicating a potential compromise. Meanwhile, Shield is protecting your application from being taken offline by a DDoS attack. Together, these tools create a layered defense that keeps your AWS environment secure.</p>
<p><img src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSDDQYzjHsCY3ZS7VPJSkPwm_tNjpCsd3yg4A&amp;s" alt="9 Really Funny Cartoons on Cloud" class="image--center mx-auto" /></p>
]]></content:encoded></item></channel></rss>