[{"content":" About Me # I\u0026rsquo;m an experienced IT professional with 18years experience in finance \u0026amp; manufacturing. Working as an automation focussed Azure consultant designing and deploying enterprise scale infrastructure as code as well as platform hardening with Sentinel, Policy \u0026amp; Defender XDR adhering to well architected framework.\nKey Skills: # AZURE SECURITY/IDENTITY AZURE NETWORKING SCRIPTING PLATFORM Sentinel Defender XDR Policy Monitor Entra ID RBAC Entra Domain Services Graph/KQL Log Management Landing Zone ExpressRoute vWAN Firewall vNet/Subnet NSG/vNet Flow logs Private Link Entra DS DNS Azure DevOps PowerShell Ansible Terraform Packer Logic Apps/Playbooks Python Azure M365 Power BI vSphere Windows XP/7/10 Windows Server 08/12/16/19 RedHat Ubuntu Experience: # Azure Consultant Quorum July 2021 - March 2026 - Edinburgh, UK Quorum a large MSPs and is based in Edinburgh. My role as a consultant in the enterprise consultancy team has me working on engineering and architectural projects in UK banks. My duties include: Augment Cloud Platform team to manage and develop Azure cloud platform Configure and remediate regulatory compliance policy initiatives Azure DevOps IaC for infrastructure deployment into vWAN Landing Zones Collaborate with projects providing technical and security advice for configuration Creation of various Power BI infrastructure reports via Azure Resource Graph \u0026 LAW Continual and enthusiastic self-development with new certifications and training Knowledge share via “Communities of Practice” for growth and learning Delivery of full legacy protocol detection PS script for large, multi-domain enterprise estates Delivery of large infrastructure upgrades for Express Route, Azure Hub Firewall, WebApps Wintel SME IBM UK September 2017 - July 2021 - Edinburgh, UK Working as part of the Lloyds Banking Group retail it division as a Wintel SME, my role was a platform owner for retail Wintel servers. 
My duties included Wintel project delivery in the retail finance space, plus incident, problem and RCA management 24x7 on-call support across a multi-domain enterprise estate with a physical/virtual/cloud mix Manage \u0026 co-ordinate 3rd party vendors, datacentre technicians and off-shore resources Wintel Technician Lloyds Banking Group June 2014 - September 2017 - Edinburgh, UK Starting out as a desktop support technician before moving into the Wintel Server support team after internal promotion. My duties included: Project work to deliver new platforms to users, including testing and documentation Additional support roles including change management, patching, service monitoring etc Moved into Windows Server infrastructure deployment, delivering servers, services and advice. Service Desk Analyst Heineken UK January 2011 - June 2014 - Edinburgh, UK Supporting the UK Heineken manufacturing business 24x7 as a service desk analyst. My duties included: Achieve a high FTF rate at first line with re-installs, profile fixes etc at initial contact Supporting desktop apps/VPN connections and BlackBerry/Domino support Hardware builds for laptops/desktops/BlackBerrys/routers 2nd Line Remote Support Atos Origin September 2007 - January 2011 - Livingston, UK Supporting BNP Paribas as part of a managed service contract with Atos Origin. Started my IT career on the service desk before promotion to remote desktop support after 9 months. 
My duties included: Initially identifying faults and attempting FTF before logging detailed incidents to support teams Troubleshooting desktop apps, MS Suite, Windows faults, LOB apps, software deployment User administration in Active Directory and Lotus Notes Training and development of a new “1.5 Line” team to meet first-time-fix targets Professional Certifications # HashiCorp Certified: Terraform Associate (003) Azure Administrator Associate Azure Security Engineer Associate DevOps Engineer Expert Security Operations Analyst Associate Cybersecurity Architect Expert M365 Administrator Expert Identity and Access Administrator Associate Azure Solutions Architect Expert Teams Administrator Associate Azure Virtual Desktop Specialty Azure Network Engineer Associate Azure Data Engineer Associate Security Administrator Associate ","date":"7 March 2026","externalUrl":null,"permalink":"/about/","section":"Ben Stalker","summary":"","title":"About Me","type":"page"},{"content":"Contact form\nName: Email: Message:\nSend Message ","date":"7 March 2026","externalUrl":null,"permalink":"/contact/","section":"Ben Stalker","summary":"","title":"Contact Me","type":"page"},{"content":"","date":"7 March 2026","externalUrl":null,"permalink":"/resources/","section":"Ben Stalker","summary":"","title":"Resources","type":"page"},{"content":"","date":"14 March 2026","externalUrl":null,"permalink":"/categories/ai/","section":"Categories","summary":"","title":"AI","type":"categories"},{"content":"","date":"14 March 2026","externalUrl":null,"permalink":"/tags/antigravity/","section":"Tags","summary":"","title":"Antigravity","type":"tags"},{"content":"Welcome to my professional blog where I break down Security Infrastructure in Azure and M365.\n","date":"14 March 2026","externalUrl":null,"permalink":"/","section":"Ben Stalker","summary":"","title":"Ben Stalker","type":"page"},{"content":"","date":"14 March 
2026","externalUrl":null,"permalink":"/blog_posts/","section":"Blog_posts","summary":"","title":"Blog_posts","type":"blog_posts"},{"content":"","date":"14 March 2026","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"14 March 2026","externalUrl":null,"permalink":"/tags/claude-code/","section":"Tags","summary":"","title":"Claude Code","type":"tags"},{"content":"","date":"14 March 2026","externalUrl":null,"permalink":"/tags/proxmox/","section":"Tags","summary":"","title":"Proxmox","type":"tags"},{"content":"","date":"14 March 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"14 March 2026","externalUrl":null,"permalink":"/categories/tutorial/","section":"Categories","summary":"","title":"Tutorial","type":"categories"},{"content":"","date":"14 March 2026","externalUrl":null,"permalink":"/tags/ubuntu/","section":"Tags","summary":"","title":"Ubuntu","type":"tags"},{"content":"","date":"14 March 2026","externalUrl":null,"permalink":"/categories/vibe-coding/","section":"Categories","summary":"","title":"Vibe Coding","type":"categories"},{"content":" Overview # As mentioned in my previous blog posts, I\u0026rsquo;ve found myself really enjoying building websites and apps again with coding agents or \u0026ldquo;Vibe Coding\u0026rdquo;. I\u0026rsquo;m very far from a competent web developer or any developer for that matter so it\u0026rsquo;s interesting to see how far I can get by simply vibe coding along.\nI\u0026rsquo;ve read about the concept of a \u0026ldquo;Dark Factory\u0026rdquo; described where you don\u0026rsquo;t need to necessarily care about the exact quality of the code, so long as you clearly define your requirements, constraints and tools and the end product passes the testing suite then you can use. 
How true that turns out to be is essentially what I\u0026rsquo;m testing.\nI started vibe coding with the GitHub Copilot extension in VSCode, having back-and-forth conversations and incrementally implementing features in a somewhat chaotic way. This moved on to creating very large and comprehensive prompts with clearly defined technologies, layouts, user experiences, end results etc., which did slightly better.\nI then ultimately found a system called \u0026ldquo;BMAD\u0026rdquo; (https://github.com/bmad-code-org), which I started using from about v2 on a larger webapp I\u0026rsquo;m working on, which I will eventually post about. But essentially it is using Google\u0026rsquo;s Antigravity, with a Google AI Pro subscription, to work through the BMAD method epic by epic, story by story, in the Gemini chat window of Antigravity. The main drawback is that I must have my laptop open and be in constant chat, as Antigravity requires constant approval for tasks, something I\u0026rsquo;m looking to avoid.\nThe Plan # I\u0026rsquo;m looking at starting a new website and I\u0026rsquo;d like to try a newer workflow using Claude Code over SSH on a virtual machine on my local hypervisor. The basic concept is:\nBuild an Ubuntu Virtual Machine on my Proxmox host Install the necessary tools such as Claude Code, Antigravity, VSCode Web, Git, TMUX etc Perform the initial exploratory and doc creation stages of the BMAD method in Antigravity with Gemini 3.1 Pro Perform the architecture and epic creation stage of the BMAD method in Antigravity with Opus 4.6 (thinking) Swap to headless coding in Claude Code, in multiple TMUX sessions over SSH, that can be re-connected to from any machine including my phone. This will allow me to kick off a \u0026ldquo;Story\u0026rdquo; defined by BMAD and hopefully come back to a complete feature.\nThe Build # The first step is to build an Ubuntu Desktop 24.04 VM on my Proxmox host. 
This is a custom build from a Super Micro Dual Socket motherboard with 2x Xeon E5-26xx CPUs, 192GB RAM, and some HDD and SSD storage via a TrueNAS VM.\nVM Properties # The specific properties I used for the VM were:\nItem Value Name UbuDev01 ISO Image ubuntu-24.04.3-desktop Guest OS Type Linux Guest OS Version 6.x - 2.6 Kernel Graphics Card VirtIO-GPU Machine q35 Qemu Agent Checked Disk size 200GB Disk Discard Checked Disk SSD emulation Checked CPU Type Host Sockets 1 Cores 8 Enabled NUMA Checked PCID On Memory 16384 Ballooning Device Unchecked The Build Process # Once the Ubuntu OS was installed, I connected to the desktop via the Proxmox console using Virt-Viewer for Windows. I collect the IP address assigned via DHCP with the ip a command and then update and install some software:\n# Update all packages sudo apt update \u0026amp;\u0026amp; sudo apt upgrade -y # For good measure, check for dist upgrade sudo apt dist-upgrade Next, we will install SSH so I can perform the rest of this work over SSH for convenience:\n# Install the package sudo apt install openssh-server -y # Enable the service sudo systemctl enable --now ssh With this done, I can now close the Virt-Viewer window and continue in the terminal.\nOnce connected with ssh user@IP, I continue the setup by mapping my network drives where I keep my code repositories:\nsudo mkdir /mnt/docs sudo vim /etc/fstab # add \u0026lt;IP\u0026gt;:\u0026lt;TrueNasSharePath\u0026gt; /mnt/docs nfs auto,nofail 0 0 sudo mount -a WARNING: I can\u0026rsquo;t run off an NFS share and must develop locally, using GitHub to sync files instead of using a central file share. Next, we will install some basic packages to get working with:\nsudo apt -y install curl git qemu-guest-agent nodejs npm At this point, I confirm that the option is set in Proxmox under VM \u0026gt; Options \u0026gt; QEMU Guest Agent and then reboot the VM to allow Proxmox to pick up the guest configuration now that the qemu-guest-agent package is 
installed and configured.\nNext, I like to assign static IPs to my VMs once they are created, and we do this by updating the correct netplan file. I start by listing the available files, as this seems to change between Ubuntu versions:\nThe output of ls /etc/netplan I view both and can see that the IP configuration is in 50-cloud-init.yaml. It currently shows as:\nnetwork: version: 2 ethernets: enp6s18: dhcp4: true I update it to show the following:\nnetwork: ethernets: enp6s18: addresses: [\u0026lt;IP\u0026gt;/24] nameservers: addresses: [\u0026lt;nameserverip\u0026gt;] routes: - to: default via: \u0026lt;gatewayip\u0026gt; version: 2 I then apply these changes with:\nsudo netplan apply and reboot again.\nConfiguring the Dev Tools # Now the basics are configured, we can look at installing the tools needed to enable vibe coding. The software I plan to use is:\nTMUX - https://github.com/tmux/tmux/wiki - A terminal multiplexer allowing easy switching between several programs at once, with the ability to detach and reattach for background tasks Claude Code - https://code.claude.com/docs/en/cli-reference - A CLI tool I can use via SSH to control the Coding Agents Gemini CLI - https://geminicli.com/ - The Google Gemini CLI tool I can use via SSH if needed. Google AntiGravity - https://antigravity.google/ - A VSCode/Windsurf fork with a built-in chat panel for Gemini Agents VSCode Server - https://code.visualstudio.com/docs/remote/vscode-server - Allows me to view the codebase via a browser-hosted version of VSCode from any device. Other Useful Utilities - There are several smaller packages I will install such as: BTOP - Process monitor like top Claude Code Monitor - View Claude Code usage These tools will also be installed per project folder as they will generate a folder and file structure that the AI Agents will use.\nBMAD: https://github.com/bmad-code-org - A web development framework for use with coding agents. 
BMAD Claude: https://github.com/aj-geddes/claude-code-bmad-skills - A version of BMAD optimized for use with Claude Code TMUX # Link: https://github.com/tmux/tmux/wiki\nTMUX Installation # To install TMUX, we simply use the apt package manager for Ubuntu:\nsudo apt install tmux -y TMUX Basic Commands # Conceptually understanding TMUX is helpful when navigating nested screens and menus. At the top level, we have a TMUX session. This is the overall container, which we can name. Within this session we have windows. These are displayed along the bottom of a TMUX session and can be swapped between. Within each TMUX window we can \u0026ldquo;split\u0026rdquo; it into different panes.\nWith the package installed, we can start multiplexing our terminal. The basic commands I will be using to control TMUX are:\ntmux new -s \u0026lt;sessionname\u0026gt; - Creates a new TMUX session with a specified name tmux attach -t \u0026lt;sessionname\u0026gt; - Attaches to the named session tmux attach - Attaches to the most recent session tmux kill-session -t \u0026lt;sessionname\u0026gt; - Kills the named session tmux ls - Lists currently active sessions TMUX Hot Key Combos # There are also hot keys that must be pressed in sequence. Typically they start with holding Ctrl and pressing b, then releasing all keys and pressing the hotkey you want. To be hyper-specific: in a few instances, when using special symbols like \u0026quot; or %, you must hold Shift and press the corresponding number key. 
So hold Ctrl, press b, release, hold Shift, press 2 and you\u0026rsquo;ll split the terminal horizontally.\nCtrl + b then d - Detaches from the currently active session Ctrl + b then % or \u0026quot; - Splits the terminal vertically for % and horizontally for \u0026quot; Ctrl + b then Arrow Keys - Moves between split panes Ctrl + b then PgUp or PgDn - Scrolls the window up and down to see previous output Ctrl + b then Alt + Arrow - Resizes the pane Claude Code # Link: https://code.claude.com/docs/en/quickstart\nClaude Code Installation # Claude Code is the foundational tool that I will be using for the actual development of the websites. I find their models ideal. I only subscribe to the Pro subscription (maybe if my side project websites generate an income I\u0026rsquo;ll upgrade to Max), so I\u0026rsquo;m limited to the Sonnet 4.6 model at the moment. I do, however, also have a Google Pro subscription that gives me access to Opus 4.6 within the Antigravity tool, so I will use that to plan the architecture and write the Epics/Stories/Tasks.\nClaude Code CLI is installed on Ubuntu with:\ncurl -fsSL https://claude.ai/install.sh | bash Once installed, it requested I run this command to add it to my PATH:\necho \u0026#39;export PATH=\u0026#34;$HOME/.local/bin:$PATH\u0026#34;\u0026#39; \u0026gt;\u0026gt; ~/.bashrc \u0026amp;\u0026amp; source ~/.bashrc Claude Code Usage # Once installed, we can start a Claude Code session with claude. There are various switches we can add, but this isn\u0026rsquo;t a Claude Code guide. You will need to run through a quick setup where you pick your theme and then connect to your account. 
It\u0026rsquo;s worth noting that this command would typically be run inside your project directory and not in the Ubuntu user\u0026rsquo;s home directory.\nOnce configured, we can move on.\nGemini CLI # Link: https://geminicli.com/\nGemini CLI Installation # First, we must install some prerequisites to use Gemini CLI, namely NodeJS and the NPM package manager. They are installed as follows:\n# Install latest LTS version of Node JS curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash - sudo apt-get install -y nodejs npm Now that nodejs and npm are installed, we can simply run the command to install Gemini CLI:\nsudo npm install -g @google/gemini-cli This will install the gemini command.\nGemini CLI Usage # With the node package installed, we can run Gemini CLI with the command gemini. When we do so, we will be prompted to trust the current directory and then log in. We would typically run this in the context of the project directory also.\nWARN: After logging in, the Gemini CLI restarted and hung my SSH session; I had to disconnect and reconnect, which then allowed me to run and interact with it.\nGoogle Antigravity # Link: https://antigravity.google/download/linux\nGoogle Antigravity Installation # To install on Linux, we will run the following commands. 
First, we need to add the repository to sources.list.d:\nsudo mkdir -p /etc/apt/keyrings curl -fsSL https://us-central1-apt.pkg.dev/doc/repo-signing-key.gpg | \\ sudo gpg --dearmor --yes -o /etc/apt/keyrings/antigravity-repo-key.gpg echo \u0026#34;deb [signed-by=/etc/apt/keyrings/antigravity-repo-key.gpg] https://us-central1-apt.pkg.dev/projects/antigravity-auto-updater-dev/ antigravity-debian main\u0026#34; | \\ sudo tee /etc/apt/sources.list.d/antigravity.list \u0026gt; /dev/null Then, we update the package cache and install the package:\nsudo apt update sudo apt install antigravity -y Google Antigravity Usage # After installation, as it\u0026rsquo;s not a CLI tool, I signed back into the desktop with Windows Virt-Viewer, opened it, pinned it to the dock and then ran through the setup. This involved selecting the dark theme (obviously) and then signing into Google.\nVSCode Server # Link: https://github.com/coder/code-server\nVSCode Server Installation # From the repo, there is a setup script that can simply be run to install. The command is:\ncurl -fsSL https://code-server.dev/install.sh | sh After installation, we are prompted to enable it as a service by running:\nsudo systemctl enable --now code-server@$USER Next up, because I want to access it via a browser on other devices in my home via http://\u0026lt;server-ip\u0026gt;:8080, I need to bind port 8080 to 0.0.0.0 rather than loopback. I update the config file as follows (no sudo needed, as it lives in my own home directory):\nvim ~/.config/code-server/config.yaml The current bind-addr is \u0026ldquo;127.0.0.1:8080\u0026rdquo;; I will update this to read \u0026ldquo;0.0.0.0:8080\u0026rdquo;. 
I will also update the password to something more memorable.\nAfter that, I will restart the service and should be able to access it in my browser:\nsudo systemctl restart code-server@$USER VSCode Server Usage # This is used just like normal VSCode, except I can access it at http://\u0026lt;server-ip\u0026gt;:8080 from other devices in my house.\nWe are now at a stage where we can install some optional utilities to try with my intended coding workflow:\nOther Utilities Installation and Usage # The other utilities I plan to use will be somewhat experimental, and they will likely change over time. I\u0026rsquo;ve done some research and plan to start out with the following tools:\nBTOP # This is a slightly more stylised version of the top command, used to monitor system performance in a Linux terminal.\nIt is installed via the native package manager:\nsudo apt install btop -y Once installed, you get this beautiful interface when using the command btop:\nbtop Claude Code Monitor # This is a small terminal UI that will help you keep track of your Claude Code usage, useful to understand your subscription allowance and upcoming reset times. It is installed via UV, which itself needs to be installed first:\ncurl -LsSf https://astral.sh/uv/install.sh | sh Once installed, we can install Claude Code Usage Monitor by running:\nuv tool install claude-monitor Once installed, there are various aliases to run it with different arguments. In my case, with the Claude Code Pro plan, I will use the command:\nclaude-monitor --plan pro --theme dark --timezone Europe/London What\u0026rsquo;s next # This is long enough for a setup and configuration post. 
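As a parting sketch, the whole TMUX workspace described above could be brought up in one go. This is only an illustration of one possible layout: the session name, window names and project path here are my own assumptions, not anything fixed by the tools.

```shell
#!/usr/bin/env bash
# Sketch: bootstrap the tmux workspace described above in one command.
# Assumptions (illustrative, not from this post): session name "dev",
# the window names, and the project path.
set -eu

SESSION="${SESSION:-dev}"
PROJECT="${PROJECT:-$HOME/projects/site}"

# layout() only *prints* the tmux commands, so the plan can be reviewed
# first, then applied with: layout | sh
layout() {
  printf '%s\n' \
    "tmux new-session -d -s $SESSION -c $PROJECT -n claude" \
    "tmux new-window -t $SESSION -n monitor" \
    "tmux send-keys -t $SESSION:monitor 'claude-monitor --plan pro --theme dark --timezone Europe/London' C-m" \
    "tmux new-window -t $SESSION -n btop" \
    "tmux send-keys -t $SESSION:btop btop C-m"
}

layout   # review the plan; apply it with: layout | sh
```

Once applied, tmux attach -t dev reconnects to the whole workspace from any SSH client, phone included.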
In my next post, we will set up a project, use the different BMAD tools to give skills/workflows to our agents, go through a project definition and then enter the coding loop.\nStay tuned.\n","date":"14 March 2026","externalUrl":null,"permalink":"/blog_posts/post-4/","section":"Blog_posts","summary":"","title":"Vibe Coding Setup - Creating the VM","type":"blog_posts"},{"content":"","date":"7 March 2026","externalUrl":null,"permalink":"/categories/azure-functions/","section":"Categories","summary":"","title":"Azure Functions","type":"categories"},{"content":"","date":"7 March 2026","externalUrl":null,"permalink":"/tags/blowfish/","section":"Tags","summary":"","title":"Blowfish","type":"tags"},{"content":" Complicating the Blog # Before I post any actually useful content, I thought I\u0026rsquo;d continue to document my development of this website in case it helps someone out at some point. I was looking at adding a bog standard \u0026ldquo;About Me\u0026rdquo;, \u0026ldquo;Contact Me\u0026rdquo; and maybe a \u0026ldquo;Resources\u0026rdquo;, where I link to some interesting websites that I value, but found that, due to the nature of a static site, a contact form is a little trickier than you\u0026rsquo;d think. Now I do have the e-mail link underneath my creepy AI pic everywhere, but where is the fun in that?\nI had a quick look around, and it seems a fun way to do this (for extra complexity) is to use an Azure Function with a Resend.com account to send the mail to me. This page will largely detail that, as well as the \u0026ldquo;menu\u0026rdquo; component within the Hugo/Blowfish theme.\nAdding pages # The pages were actually quite simple. Hugo has built-in menus that you can append pages to by completing front matter. 
For my pages, I simply added menu and weight front matter: menu specifies that I\u0026rsquo;d like the page to appear in the main menu, and the different weights are used to order the pages:\n--- title: \u0026#34;About Me\u0026#34; description: \u0026#34;blah, blah, blah\u0026#34; layout: \u0026#34;background\u0026#34; date: 2026-03-07 menu: \u0026#34;main\u0026#34; weight: 10 --- This makes them appear at the top right of all pages, as you can likely see right now, in ascending weight order from left to right. Easy.\nAbout me # Looking at the shortcodes available in the Blowfish theme, they have a nice \u0026ldquo;timeline\u0026rdquo; component that I think will be nice for a live job history. I guess the About me page should be a live CV, so that\u0026rsquo;s what we are going to do.\nThe syntax is available here and I simply plug it in and fill it with details to present my job history. For my key skills section, I ended up using lists in tabs; it doesn\u0026rsquo;t look the best and will need more work. I also ended up using a gallery for professional certifications. Contact me # Resend API Key This is where we can have some fun. First of all, I headed to www.resend.com and signed up for free, then pulled an API key. I went straight to the static webapp for this site in Azure and added it as an environment variable for the site under RESEND_API_KEY.\nThe Azure Function Next up, we will create a small Node.js function. Azure Static Web Apps automatically detects and builds APIs if you place them in a folder named api at the root of your repository. 
I create the following file in the root of the project api/src/functions/contact.js\nThe contents of this file will be:\nconst { app } = require(\u0026#39;@azure/functions\u0026#39;); app.http(\u0026#39;contact\u0026#39;, { methods: [\u0026#39;POST\u0026#39;], authLevel: \u0026#39;anonymous\u0026#39;, handler: async (request, context) =\u0026gt; { try { // Parse the incoming JSON from your website const { name, email, message } = await request.json(); if (!email || !message) { return { status: 400, body: \u0026#34;Email and message are required.\u0026#34; }; } // Send the email via Resend\u0026#39;s API const res = await fetch(\u0026#39;https://api.resend.com/emails\u0026#39;, { method: \u0026#39;POST\u0026#39;, headers: { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39;, \u0026#39;Authorization\u0026#39;: `Bearer ${process.env.RESEND_API_KEY}` }, body: JSON.stringify({ from: \u0026#39;Contact Form \u0026lt;onboarding@resend.dev\u0026gt;\u0026#39;, to: \u0026#39;YOUR_PERSONAL_EMAIL@DOMAIN.COM\u0026#39;, // Change this to your actual email subject: `New message from ${name}`, text: `Reply to: ${email}\\n\\nMessage:\\n${message}` }) }); if (res.ok) { return { status: 200, jsonBody: { success: \u0026#34;Message sent!\u0026#34; } }; } else { return { status: 500, jsonBody: { error: \u0026#34;Failed to send via email provider.\u0026#34; } }; } } catch (error) { context.error(error); return { status: 500, jsonBody: { error: \u0026#34;Internal server error.\u0026#34; } }; } } }); We also need a very basic package.json file in the actual api/ folder so Azure knows it\u0026rsquo;s a Node app. 
I create api/package.json and paste the following content:\n{ \u0026#34;name\u0026#34;: \u0026#34;contact-api\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;1.0.0\u0026#34;, \u0026#34;dependencies\u0026#34;: { \u0026#34;@azure/functions\u0026#34;: \u0026#34;^4.0.0\u0026#34; }, \u0026#34;main\u0026#34;: \u0026#34;src/functions/*.js\u0026#34; } The Contact Form Now within my actual contact.md file in the content directory, I can place the following to define a contact form:\n\u0026lt;div id=\u0026#34;form-container\u0026#34;\u0026gt; \u0026lt;form id=\u0026#34;contact-form\u0026#34;\u0026gt; \u0026lt;label\u0026gt;Name: \u0026lt;input type=\u0026#34;text\u0026#34; id=\u0026#34;name\u0026#34; required\u0026gt;\u0026lt;/label\u0026gt;\u0026lt;br\u0026gt;\u0026lt;br\u0026gt; \u0026lt;label\u0026gt;Email: \u0026lt;input type=\u0026#34;email\u0026#34; id=\u0026#34;email\u0026#34; required\u0026gt;\u0026lt;/label\u0026gt;\u0026lt;br\u0026gt;\u0026lt;br\u0026gt; \u0026lt;label\u0026gt;Message:\u0026lt;br\u0026gt;\u0026lt;textarea id=\u0026#34;message\u0026#34; rows=\u0026#34;5\u0026#34; required\u0026gt;\u0026lt;/textarea\u0026gt;\u0026lt;/label\u0026gt;\u0026lt;br\u0026gt;\u0026lt;br\u0026gt; \u0026lt;button type=\u0026#34;submit\u0026#34;\u0026gt;Send Message\u0026lt;/button\u0026gt; \u0026lt;/form\u0026gt; \u0026lt;p id=\u0026#34;status-message\u0026#34;\u0026gt;\u0026lt;/p\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;script\u0026gt; document.getElementById(\u0026#39;contact-form\u0026#39;).addEventListener(\u0026#39;submit\u0026#39;, async (e) =\u0026gt; { e.preventDefault(); const status = document.getElementById(\u0026#39;status-message\u0026#39;); status.innerText = \u0026#34;Sending...\u0026#34;; const data = { name: document.getElementById(\u0026#39;name\u0026#39;).value, email: document.getElementById(\u0026#39;email\u0026#39;).value, message: document.getElementById(\u0026#39;message\u0026#39;).value }; try { const response = await 
fetch(\u0026#39;/api/contact\u0026#39;, { method: \u0026#39;POST\u0026#39;, headers: { \u0026#39;Content-Type\u0026#39;: \u0026#39;application/json\u0026#39; }, body: JSON.stringify(data) }); if (response.ok) { status.innerText = \u0026#34;Thank you! Your message has been sent.\u0026#34;; document.getElementById(\u0026#39;contact-form\u0026#39;).reset(); } else { status.innerText = \u0026#34;Oops! Something went wrong.\u0026#34;; } } catch (err) { status.innerText = \u0026#34;Error connecting to the server.\u0026#34;; } }); \u0026lt;/script\u0026gt; GitHub Actions Update Now we need to tell GitHub Actions about the above. To do this, we open the existing workflow file in .github/workflows/deploy.yml and add an api_location property to the Deploy to Azure Static Web Apps task as follows:\n# 3. Upload the pre-built site to Azure (Bypassing Oryx) - name: Deploy to Azure Static Web Apps uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ steps.get_token.outputs.swa_token }} action: \u0026#34;upload\u0026#34; app_location: \u0026#34;public\u0026#34; # Point this directly to Hugo\u0026#39;s output folder api_location: \u0026#34;api\u0026#34; # Tells Azure where function code is skip_app_build: true # CRITICAL: This tells Azure Oryx to back off When we push these changes, the function will be picked up and will work.\nResources page # I will also create a stub page for Resources, where I can collect various links I find interesting.\n","date":"7 March 2026","externalUrl":null,"permalink":"/blog_posts/post-3/","section":"Blog_posts","summary":"","title":"Complicating the Blog","type":"blog_posts"},{"content":"","date":"7 March 2026","externalUrl":null,"permalink":"/tags/hugo/","section":"Tags","summary":"","title":"Hugo","type":"tags"},{"content":"","date":"7 March 2026","externalUrl":null,"permalink":"/categories/website/","section":"Categories","summary":"","title":"Website","type":"categories"},{"content":"","date":"1 March 
2026","externalUrl":null,"permalink":"/tags/azure/","section":"Tags","summary":"","title":"Azure","type":"tags"},{"content":" Hosting and Deployment # Following on from my previous post, we now have a basic layout and first piece of content prepared. This article will deal with the setup of the Azure tenant and the deployment of both the infrastructure and website.\nOverview # For the deployment, I will start with an empty Azure tenant. This means GitHub will not have the rights to deploy the required infra (Resource Group \u0026amp; Static Website) so I will need to create the resources needed to permit this. Once in place, I can deploy the actual infra required by the blog via GitHub Actions pipeline from Terraform code, my preferred method and then use the GitHub actions runner to build and deploy the Hugo blog as follows:\nThe markdown files to hugo azure static website pipeline Please ignore the imperfect diagram, that was the best Gemini Pro 3.1 could do after 10 attempts and corrective prompts.\nInitial GitHub \u0026ndash;\u0026gt; Azure Connection # The initial \u0026ldquo;bootstrap phase\u0026rdquo; will be focused on configuring the RBAC (Role Based Access Control) to allow a GitHub Identity (an EntraID service principal) the rights to deploy infrastructure to our subscription. We will also configure a storage account in which our terraform will be able to store it\u0026rsquo;s statefile. 
This initial setup will be handled via PowerShell, as that\u0026rsquo;s my preference, but could be done via the Azure CLI or button clicking if you prefer.\nThese are the commands I will run:\n# Connect to tenant Connect-AzAccount -Tenant \u0026#34;xxxx-xxxx-xxxx-xxxx-xxxx\u0026#34; # Set some variables $location = \u0026#34;\u0026lt;deploymentregion\u0026gt;\u0026#34; $rgName = \u0026#34;\u0026lt;rgname\u0026gt;\u0026#34; $storageName = \u0026#34;\u0026lt;saname\u0026gt;\u0026#34; # Must be globally unique # Create the resource group New-AzResourceGroup -Name $rgName -Location $location # Create the storage account $storage = New-AzStorageAccount -ResourceGroupName $rgName -Name $storageName -SkuName \u0026#34;Standard_LRS\u0026#34; -Location $location -AllowBlobPublicAccess $false -MinimumTlsVersion TLS1_2 # Create the blob storage container New-AzStorageContainer -Name \u0026#34;tfstate-blog\u0026#34; -Context $storage.Context # Create the service principal $app = New-AzADApplication -DisplayName \u0026#34;sp-github-actions-mgmt\u0026#34; Start-Sleep -Seconds 10 $sp = New-AzADServicePrincipal -ApplicationId $app.AppId # Grant roles $subId = (Get-AzContext).Subscription.Id New-AzRoleAssignment -ApplicationId $app.AppId -RoleDefinitionName \u0026#34;Contributor\u0026#34; -Scope \u0026#34;/subscriptions/$subId\u0026#34; # Create the OIDC Trust $fedParams = @{ Name = \u0026#34;github-actions-blog\u0026#34; Issuer = \u0026#34;https://token.actions.githubusercontent.com\u0026#34; Subject = \u0026#34;repo:\u0026lt;githubusername\u0026gt;/\u0026lt;githubreponame\u0026gt;:ref:refs/heads/main\u0026#34; Description = \u0026#34;OIDC trust for Blog deployment\u0026#34; Audience = @(\u0026#34;api://AzureADTokenExchange\u0026#34;) } New-AzADAppFederatedCredential -ApplicationObjectId $app.Id @fedParams # Output the vars for GitHub secrets Write-Host \u0026#34;AZURE_CLIENT_ID : $($app.AppId)\u0026#34; Write-Host \u0026#34;AZURE_TENANT_ID : $((Get-AzContext).Tenant.Id)\u0026#34; 
Write-Host \u0026#34;AZURE_SUBSCRIPTION_ID : $subId\u0026#34; GitHub Secrets Setup # I will go to my GitHub Repository -\u0026gt; Settings -\u0026gt; Secrets and variables -\u0026gt; Actions. I will then add the following \u0026ldquo;Repository secrets\u0026rdquo;:\nAZURE_CLIENT_ID: The $app.AppId value output by the bootstrap script. AZURE_TENANT_ID: Your Azure Tenant ID (az account show --query tenantId -o tsv). AZURE_SUBSCRIPTION_ID: Your Subscription ID ($subId). Completed GitHub secrets section Terraform # Back in VSCode, I will create a new folder called terraform, create 2x files inside and populate them as below:\nterraform/providers.tf Defines the Terraform \u0026ldquo;backend\u0026rdquo;, which comprises:\nWhich provider to use Where to store the state file How to connect terraform { required_providers { azurerm = { source = \u0026#34;hashicorp/azurerm\u0026#34; version = \u0026#34;~\u0026gt; 3.0\u0026#34; } } backend \u0026#34;azurerm\u0026#34; { resource_group_name = \u0026#34;\u0026lt;rgname\u0026gt;\u0026#34; storage_account_name = \u0026#34;\u0026lt;saname\u0026gt;\u0026#34; # Must match Phase 1 container_name = \u0026#34;tfstate-blog\u0026#34; key = \u0026#34;prod.terraform.tfstate\u0026#34; use_oidc = true } } provider \u0026#34;azurerm\u0026#34; { features {} use_oidc = true } terraform/main.tf Defines the two resources that will be deployed (the resource group and the static web app) and an output exposing the deployment token, fed back into GitHub so the GitHub Actions workflow can push the Hugo site on update.\nresource \u0026#34;azurerm_resource_group\u0026#34; \u0026#34;blog\u0026#34; { name = \u0026#34;rg-securityblog-prod\u0026#34; location = \u0026#34;westeurope\u0026#34; } resource \u0026#34;azurerm_static_web_app\u0026#34; \u0026#34;blog_app\u0026#34; { name = \u0026#34;swa-benstalker-tech\u0026#34; resource_group_name = azurerm_resource_group.blog.name location = azurerm_resource_group.blog.location sku_tier = \u0026#34;Free\u0026#34; sku_size = \u0026#34;Free\u0026#34; 
} # We output the deployment token so GitHub Actions can use it to push the Hugo site output \u0026#34;swa_api_key\u0026#34; { value = azurerm_static_web_app.blog_app.api_key sensitive = true } At this stage, I will commit and merge to GitHub again.\nGitHub Actions Pipeline # Next, I will define the GitHub Actions workflow by creating a file at .github/workflows/deploy.yml. This workflow logs into Azure securely via OIDC, runs Terraform to build the infrastructure, extracts the SWA token dynamically, and deploys the Hugo site.\nYAML\nname: Deploy Infra and Blog on: push: branches: [\u0026#34;main\u0026#34;] # REQUIRED for OIDC Authentication permissions: id-token: write contents: read jobs: build-and-deploy: runs-on: ubuntu-latest steps: - name: Checkout Code uses: actions/checkout@v4 with: submodules: true fetch-depth: 0 - name: Azure Login via OIDC uses: azure/login@v2 with: client-id: ${{ secrets.AZURE_CLIENT_ID }} tenant-id: ${{ secrets.AZURE_TENANT_ID }} subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} - name: Setup Terraform uses: hashicorp/setup-terraform@v3 - name: Terraform Init \u0026amp; Apply id: tf working-directory: ./terraform run: | terraform init terraform apply -auto-approve env: ARM_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }} ARM_SUBSCRIPTION_ID: ${{ secrets.AZURE_SUBSCRIPTION_ID }} ARM_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }} ARM_USE_OIDC: true - name: Get SWA Deployment Token id: get_token working-directory: ./terraform run: | SWA_TOKEN=$(terraform output -raw swa_api_key) echo \u0026#34;::add-mask::$SWA_TOKEN\u0026#34; echo \u0026#34;swa_token=$SWA_TOKEN\u0026#34; \u0026gt;\u0026gt; $GITHUB_OUTPUT - name: Build and Deploy Hugo to Azure Static Web Apps uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ steps.get_token.outputs.swa_token }} repo_token: ${{ secrets.GITHUB_TOKEN }} action: \u0026#34;upload\u0026#34; app_location: \u0026#34;/\u0026#34; output_location: \u0026#34;public\u0026#34; I then 
committed and pushed these changes to GitHub. When the workflow was detected, it was immediately kicked off by GitHub:\nIn Progress GitHub Actions Workflow The workflow completed successfully and we have our very first version of this blog published.\nFQDN \u0026amp; DNS # The last step is to buy the domain, configure name servers to point at Azure, then set up the necessary DNS records in Azure. I have got into the habit of using GoDaddy for all my domain names and this site will be no different.\nDomain Purchased Now I\u0026rsquo;ve purchased it, I will need to update the configuration for the Azure Static Website as well as in GoDaddy. First up, GoDaddy:\nGoDaddy Configuration # I have the default Azure hostname for this website: agreeable-grass-08e492903.1.azurestaticapps.net. In GoDaddy, I will edit the existing CNAME record for www pointing at this URL.\nTerraform # Next up is adding this custom domain name via Terraform to the static web app. I will add this resource block to my main.tf file:\nimport { to = azurerm_static_web_app_custom_domain.www_domain id = \u0026#34;/subscriptions/***/resourceGroups/\u0026lt;resource_group\u0026gt;/providers/Microsoft.Web/staticSites/\u0026lt;webappname\u0026gt;/customDomains/www.benstalker.co.uk\u0026#34; } # Adding CNAME record to static web app resource \u0026#34;azurerm_static_web_app_custom_domain\u0026#34; \u0026#34;www_domain\u0026#34; { static_web_app_id = azurerm_static_web_app.blog_app.id domain_name = \u0026#34;www.benstalker.co.uk\u0026#34; validation_type = \u0026#34;cname-delegation\u0026#34; } This will import the config into the Terraform state file, after which I can remove the import block. With this in place, I can commit and push the code changes to GitHub, triggering a fresh GitHub Actions run. When completed, I see the following:\nCustom Domain Verified Now the root domain. Due to limitations, this must be done by button clicking in the portal. Not great, but it\u0026rsquo;s what we have when using Azure. 
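As an aside, the id Terraform needs when importing an existing custom domain is just the standard ARM resource-ID path: subscription, resource group, provider, resource type, resource, then child resource. A quick sketch of how it is composed (the subscription GUID here is a placeholder; the group and site names are the ones used in this post):

```shell
# ARM resource IDs are hierarchical paths; assembling one from its parts
# makes the import-block ids in this post easier to read and to adapt.
sub_id="00000000-0000-0000-0000-000000000000"  # placeholder subscription
rg_name="rg-securityblog-prod"
site_name="swa-benstalker-tech"
domain="www.benstalker.co.uk"

id="/subscriptions/${sub_id}/resourceGroups/${rg_name}/providers/Microsoft.Web/staticSites/${site_name}/customDomains/${domain}"
echo "$id"
```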
I will button click in the following manner:\n1. Generate the Token in Azure\nGo to Azure Static Web App in the portal. Click Custom domains -\u0026gt; + Add -\u0026gt; Custom domain on other DNS. Enter benstalker.co.uk and click Next. Azure will generate a TXT validation token. 2. Add the TXT Record to GoDaddy\nGo to GoDaddy DNS management. Add a new record. Type: TXT Name: @ (This represents the root domain) Value: Paste the token from Azure. Save it. 3. Validate and Route\nI waited overnight as the validation took some time and was completed in the morning. I had to grab the IP from the JSON view under the property \u0026ldquo;stableInboundIP\u0026rdquo;. Back in GoDaddy, I created a new \u0026ldquo;A\u0026rdquo; record with a name of @ and a value of this IP. Also, in GoDaddy, I had to delete another \u0026ldquo;A\u0026rdquo; record, pointing at \u0026ldquo;Website Builder\u0026rdquo;. Adding Apex domain to Terraform # Now we have it connected, we can update the Terraform to identify this and \u0026ldquo;Document as Code\u0026rdquo;, which is as good as we can get with \u0026ldquo;Click-Ops\u0026rdquo;. To allow Terraform to see the new domain that was button clicked, we must first import it with the upper of the two blocks and then define it with updates to main.tf.\nI will commit and push this update, allow the pipeline to run and can later comment out the import block as it\u0026rsquo;s no longer needed.\nimport { to = azurerm_static_web_app_custom_domain.apex_domain id = \u0026#34;/subscriptions/\u0026lt;sub_id\u0026gt;/resourceGroups/\u0026lt;rg_name\u0026gt;/providers/Microsoft.Web/staticSites/swa-benstalker-tech/customDomains/benstalker.co.uk\u0026#34; } resource \u0026#34;azurerm_static_web_app_custom_domain\u0026#34; \u0026#34;apex_domain\u0026#34; { static_web_app_id = azurerm_static_web_app.blog_app.id domain_name = \u0026#34;benstalker.co.uk\u0026#34; validation_type = \u0026#34;dns-txt-token\u0026#34; } Conclusion # And that\u0026rsquo;s us. 
A personal blog fully developed in Hugo and markdown, stored in GitHub, deployed to an Azure Static Web App via GitHub Actions, on infrastructure deployed as code via Terraform. I will maybe do a few updates on this site discussing the cost, but I imagine it\u0026rsquo;ll be pennies.\nI still have some work to do, such as populating and linking the about me page, which will be a live CV, and a contact page with maybe a webform, but that\u0026rsquo;s for future Ben to decide.\nAll in, not bad for a weekend project. As stated in the original post, I plan to use this site to:\nDetail some of the larger scale pieces of work I do professionally Detail my larger journey through AI usage Discuss AI topics and various AI workflows as and when I use them Talk about any new hobby/portfolio websites I deploy and how I developed, deployed and host them. If this sounds good, feel free to check back every so often.\n","date":"1 March 2026","externalUrl":null,"permalink":"/blog_posts/post2/","section":"Blog_posts","summary":"","title":"Deploying my blog with Hugo","type":"blog_posts"},{"content":"","date":"1 March 2026","externalUrl":null,"permalink":"/tags/github-actions/","section":"Tags","summary":"","title":"Github Actions","type":"tags"},{"content":"","date":"1 March 2026","externalUrl":null,"permalink":"/tags/static-web-app/","section":"Tags","summary":"","title":"Static Web App","type":"tags"},{"content":"","date":"1 March 2026","externalUrl":null,"permalink":"/tags/terraform/","section":"Tags","summary":"","title":"Terraform","type":"tags"},{"content":" Background # I\u0026rsquo;ve been tinkering with websites for almost 20 years at this point and have gone through many \u0026ldquo;phases\u0026rdquo; of website building and ownership. 
My interest in websites first started when I was at school in the late 90s and I created a \u0026ldquo;geocities\u0026rdquo; page that I ultimately turned into a tutorial, allowing my teachers to issue it as work to other students, making me very popular.\nYears later, I stumbled across the world of \u0026ldquo;SEO\u0026rdquo; and went down a rabbit hole of small micro sites, made manually from HTML/CSS, that generated income from Google AdWords, later switching to larger semi-authority affiliate sites based on WordPress.\nMore recently, I made an attempt to learn a full webapp stack with HTML, CSS, JavaScript, jQuery, Node.js, Express.js and React.js, along with some SQL and MongoDB. I made a site from scratch that was ultimately planned to be a scoreboard system for a game server that I was involved in running for a game called DayZ, but it never made it to the finish line. Ultimately, that knowledge and those skills rotted with no chance to use them.\nWith the advent of AI coding agents and the lowering of the bar of entry into coding webapps, my interest in website building has been piqued again, with a few sites in development and some more ideas floating around. I\u0026rsquo;d like, however, somewhere to share my work so that others may benefit from it (and I can re-read to remember), and that\u0026rsquo;s the purpose of a personal blog website.\nThe Goal # The goal is to build a personal website that will provide the following:\nThe space to post technical breakdowns of large pieces of professional work Share guides/scripts/walkthroughs that other techies may find useful Share passion projects and the learning that they give me Document my journey through AI and the various tools/processes I use to develop Act as something of a portfolio of my work. After looking through the various front and backend technologies available, I\u0026rsquo;ve landed on using quite a simple static website, using Hugo to build HTML/CSS from simple markdown files. 
I plan to store the markdown files in GitHub and use GitHub Actions to deploy them to Azure Static Web Apps, as this aligns well with my professional role as a DevOps \u0026amp; infrastructure engineer.\nIt will look something like this:\nThe markdown files to hugo azure static website pipeline 1. Bootstrap Phase (Local PowerShell) # This is the foundational, one-time setup performed securely from the local machine before any automation begins.\nTools: Native PowerShell on the Local Engineer machine. Identity Configuration: The script connects to the Entra ID Tenant and creates an App Registration (OIDC Identity). This establishes a passwordless, federated trust with GitHub. State Storage: It provisions a Central Management Resource Group inside the Azure Subscription, containing an Azure Storage Account. This keeps the Terraform remote state safely separated from the production environment. 2. Terraform Infra (Automated CI/CD) # This phase turns infrastructure into code, triggered automatically by a code push to the repository.\nTrigger: A Git Push of the Terraform files to the GitHub Repo initiates the GitHub Actions pipeline. Authentication: The pipeline securely logs into Azure using the OIDC Auth established in Phase 1, acquiring the state lock from the remote State Storage. Provisioning: GitHub Actions executes a terraform apply. This deploys the actual production environment into the Azure Subscription: a Production Resource Group (rg-securityblog-prod) and an empty Azure Static Web App (swa-benstalker-tech). 3. Hugo App (Build \u0026amp; Deployment) # This phase handles the actual blog content, intentionally bypassing Azure\u0026rsquo;s default Oryx build engine to avoid library conflicts and speed up deployments.\nThe Build Engine: When raw Markdown content is pushed, the GitHub Actions pipeline spins up an Ubuntu runner and executes a Hugo Extended Build locally. 
The Artifact: This process compiles the Markdown, processes the theme\u0026rsquo;s SCSS, and generates a final public/ folder containing all static HTML, CSS, and JS assets. Direct Deployment: The pipeline performs an \u0026ldquo;Upload Only\u0026rdquo; action, pushing the pre-built public/ folder directly to the waiting Azure Static Web App. End Result: Microsoft Azure globally hosts the static files, serving fast and secure content to readers at benstalker.co.uk. The Prerequisites # This is going to be a long blog post where I detail the steps I\u0026rsquo;ve taken to set this blog up from scratch, including the tools I used, the commands I typed, etc. I\u0026rsquo;ll start with the required packages to get started with development:\n| Package | Install | Purpose |\n| Git | https://github.com/git-guides/install-git | Required to push to the GitHub repo for deployment |\n| Go | https://go.dev/dl/ | Hugo is written in Go and needs the package installed |\n| Hugo | https://gohugo.io/installation/ | The Hugo CLI, required to initiate the project and run the local test server |\nOnce these packages are installed, ensure Hugo is accessible in the PATH environment variable.\nTo ensure all prerequisites are installed and working, run the following commands:\ngit version go version hugo version Each command should give you information on the version you have installed if working correctly.\nThe Basic Setup # Now we have the required packages installed, we can start to create the files for content and config. I will use VS Code for this, but any text editor or your favourite IDE will be suitable.\nNavigate to the folder in which you\u0026rsquo;d like to store the project; I use a \u0026ldquo;repos\u0026rdquo; folder for all projects I use with GitHub. 
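The three version checks from the prerequisites section can also be rolled into one small guard script. A sketch that only reports whether each binary is resolvable on PATH (the tool list simply mirrors the table; nothing else is assumed about your machine):

```shell
# Check each prerequisite is resolvable on PATH before starting.
missing=0
for tool in git go hugo; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "MISSING: $tool"
    missing=$((missing + 1))
  fi
done
echo "checked 3 tools, $missing missing"
```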
Within the terminal at this location, run the following command: hugo new site \u0026lt;websitename\u0026gt; # for me, this is benstalker Change directory into this folder, where we can also initialize a git repository: git init # If this is the first time you are using git, you may need to populate the following configuration items: git config --global user.name \u0026#34;Your Name\u0026#34; git config --global user.email \u0026#34;your.email@example.com\u0026#34; Next, I will open up my GitHub, create a new, private repo, add it as the remote origin and push content:\ngit remote add origin https://github.com/ben-stalker/\u0026lt;newrepo\u0026gt;.git git branch -M main git add . git commit -m \u0026#34;Initial commit: Hugo site structure\u0026#34; git push -u origin main Now with git set up, we can move on to installing a Hugo theme.\nHugo Configuration # The first part of configuring Hugo will be selecting and installing a theme. There are many Hugo themes available here: https://themes.gohugo.io/. I\u0026rsquo;m opting to go with a theme called Blowfish as I like the style and it has many shortcodes that I can use to enrich the visuals and add functionality to my site.\nhttps://blowfish.page/\nEach Hugo theme may follow a different installation and initial configuration method. For Blowfish, the instructions are here: https://blowfish.page/docs/installation. 
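If git is new to you, the local half of the init/config/commit sequence above can be rehearsed safely in a throwaway directory before touching a real remote. A sketch (no remote is configured; the name and email are obvious placeholders):

```shell
# Rehearse the local git workflow in a temp directory we can discard.
dir=$(mktemp -d)
cd "$dir" || exit 1

git init -q
git config user.name  "Your Name"                 # local config only,
git config user.email "your.email@example.com"    # not --global

echo "# scratch repo" > README.md
git add README.md
git commit -q -m "Initial commit"

# Show the single commit we just made.
git log --oneline
```

Using local (rather than `--global`) config keeps the placeholder identity confined to the scratch repository.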
I performed the following actions:\nAdd the Blowfish submodule from git: git submodule add -b main https://github.com/nunocoracao/blowfish.git themes/blowfish Once installed, we set up the theme. The first step is to delete the default hugo.toml file and replace it by copying all the *.toml config files from: \\websitename\\themes\\blowfish\\config\\_default\nand place them in the newly created folder:\n\\websitename\\config\\_default\nOnce copied, we must manually specify the theme in the copied hugo.toml file near the top by uncommenting line 5: theme = \u0026quot;blowfish\u0026quot;\nI will also work down the languages.toml file and uncomment the following lines:\nLine 14 for my site description Lines 17 - 76 for the appropriate detail and socials Warning! At line 6, I will leave the baseURL as \u0026ldquo;/\u0026rdquo;. This is required until we have properly configured the domain. Within the params.toml file, I will update the following lines:\nLine 8 setting colorScheme to \u0026ldquo;github\u0026rdquo; Line 9 setting defaultAppearance to \u0026ldquo;dark\u0026rdquo; At this stage, I will run the local hugo \u0026ldquo;webserver\u0026rdquo; to see how we are looking so far. 
I do this by returning to the terminal and running:\nhugo server --disableFastRender --noHTTPCache The two switches are useful for local development, turning off performance optimisations so we see all errors and correct previews, and disabling caching so we always see the latest version after any changes.\nSome smaller tweaks # I added a file _index.md in the root of the content folder to allow me to have a preamble before my articles are listed.\nWithin the params.toml I:\nSet header.layout = \u0026quot;fixed-fill-blur\u0026quot; Set homepage.layout = \u0026quot;background\u0026quot; Set homepage.homepageImage = \u0026quot;background.svg\u0026quot; # I\u0026rsquo;ll discuss this below Set homepage.showRecent = true Set homepage.cardView = true Added a horrific AI generated image of me as author.jpg, placed it in the /assets folder and linked it in the languages.en.toml file in params.author.image = \u0026quot;author.jpg\u0026quot; and a bunch of other fields to be relevant to me.\nThe Blowfish theme has a cool animated background that I really liked and wanted to use for my site. This is achieved by using [this](https://blowfish.page/img/background.svg) SVG as the background image. I simply downloaded it, popped it in the assets folder and set it as the background image.\nAt this stage the initial Hugo config is complete, so I will commit and push the changes.\nAdding initial content # Now we have a basic layout and configuration complete, we can write some initial content. I have 2x blog posts planned, a home page to list them on and stubs for \u0026ldquo;Contact me\u0026rdquo; and \u0026ldquo;About me\u0026rdquo; pages to get the basic layout.\nIt\u0026rsquo;s worth noting that Hugo uses \u0026ldquo;Front Matter\u0026rdquo; that will allow you to define metadata about a post or page. 
This is between 2x sets of --- with key value pairs such as Title: \u0026quot;Page Title\u0026quot; or Tags: [\u0026quot;tag1\u0026quot;, \u0026quot;tag2\u0026quot;]\nHome Page # To properly use some of the background functions and add some preamble text before the list of recent posts, we will create a file _index.md in the /content/ folder with the following content:\n--- title: \u0026#34;Building my blog with Hugo\u0026#34; description: \u0026#34;A step by step overview of my process to create this blog including installing the required software, creating the site, choosing a theme and configuring the style.\u0026#34; layout: \u0026#34;background\u0026#34; categories: [\u0026#39;tutorial\u0026#39;, \u0026#39;website\u0026#39;] tags: [\u0026#39;hugo\u0026#39;, \u0026#39;blowfish\u0026#39;] date: 2026-02-28 draft: false --- Welcome to my professional blog where I break down Security Infrastructure in Azure and M365. This allows me to define the page title, meta description and use the background layout from the Blowfish theme. The preamble text will change over time, but we just need a placeholder.\nStub Pages # Also within the /content/ folder, I will create pages contact.md and about.md, populate them with similar content to the above and save them.\nPosts # For the posts, I want to be able to use the thumbnails method in the Blowfish theme noted [here](https://blowfish.page/docs/thumbnails/). 
Essentially, instead of just a list of posttitle.md files in a subfolder of the /content/ folder, we need to create a folder for each one, drop in the posttitle.md (renaming it to index.md) along with a .jpg or .png file titled featured.xxx, like this:\ncontent\n└── awesome_article\n    ├── index.md\n    └── featured.png\nIn my case, my folder structure is:\ncontent\n├── _index.md\n├── about.md\n├── contact.md\n└── blog_posts\n    └── post1\n        ├── index.md\n        └── featured.jpg\nWhen it comes time for blog post 2, I will create a subfolder post2 in the blog_posts folder, create an index.md for the content and another featured.jpg for the thumbnail.\nWith this, I have committed and merged the changes to GitHub and will now work on the GitHub Actions pipeline and the Azure hosting. Join me for Part 2 of this blog walkthrough where I describe that process.\n","date":"28 February 2026","externalUrl":null,"permalink":"/blog_posts/post1/","section":"Blog_posts","summary":"","title":"Building my blog with Hugo","type":"blog_posts"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]