<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://darrenbell.net/atom.xml" rel="self" type="application/atom+xml" /><link href="https://darrenbell.net/" rel="alternate" type="text/html" /><updated>2026-04-10T17:07:21+00:00</updated><id>https://darrenbell.net/atom.xml</id><title type="html">Darren Bell</title><subtitle>Practical insights on cloud architecture, Microsoft 365, IT leadership, and building secure, cost-efficient technology operations in regulated environments— written by a practitioner who has done it.</subtitle><author><name>Darren Bell</name><email></email></author><entry><title type="html">Copilot Is Ready. Your Tenant Isn’t.</title><link href="https://darrenbell.net/blog/2026/04/10/microsoft-365-ai-readiness-permissions/" rel="alternate" type="text/html" title="Copilot Is Ready. Your Tenant Isn’t." /><published>2026-04-10T00:00:00+00:00</published><updated>2026-04-10T00:00:00+00:00</updated><id>https://darrenbell.net/blog/2026/04/10/microsoft-365-ai-readiness-permissions</id><content type="html" xml:base="https://darrenbell.net/blog/2026/04/10/microsoft-365-ai-readiness-permissions/"><![CDATA[<p>Your Microsoft 365 environment has MFA enabled. Roles are assigned. You’ve done the security basics. On paper, it looks ready for Copilot.</p>

<p>It probably isn’t.</p>

<p>Not because your intentional security configuration is wrong — but because underneath it, there’s a layer of sharing decisions nobody made deliberately. Links that were created and never expired. Guest accounts provisioned for a contractor six months ago and never disabled. SharePoint libraries with default permissions that nobody changed because nobody knew the default was the problem.</p>

<p>Copilot respects the Microsoft 365 permissions model exactly. It won’t surface a file to someone who doesn’t have access. The problem is that in most Microsoft 365 environments, more people have access to more files than anyone intended — and nobody audited it before turning on a tool that can search and summarize everything.</p>

<hr />

<h2 id="how-oversharing-happens-without-anyone-trying">How Oversharing Happens Without Anyone Trying</h2>

<p>The most common blind spot I see across Microsoft 365 environments isn’t a misconfiguration someone made deliberately. It’s the configuration nobody changed.</p>

<p><strong>“People in your organization” links are often set as the org-level default — and many tenants have never changed it.</strong> When a user shares a document in SharePoint or OneDrive, the default link type in tenants that haven’t been hardened is one that grants access to any authenticated member of the organization. The user clicks share, copies the link, pastes it into an email, and sends it to one person. What they actually created is a link that any colleague can redeem.</p>

<p><strong>The default permission on those links is Edit, not Read.</strong> The employee who shared a budget spreadsheet with their manager didn’t intend to give write access to any colleague who receives that link. But that’s what the link does.</p>

<p>Here’s how the exposure plays out. That link is in an email. The email gets forwarded — to a broader team, to someone cc’d by mistake, to a distribution list. Any org member who clicks it can redeem it and gain access. Once they do, Copilot can surface that document for them. An employee who was never supposed to see that file asks Copilot a question about budgets. Copilot finds it. Copilot answers with it.</p>

<p>No one breached anything. The permissions worked exactly as configured. That’s the problem.</p>

<hr />

<h2 id="guest-accounts-are-the-quiet-side-of-the-same-risk">Guest Accounts Are the Quiet Side of the Same Risk</h2>

<p>Shared links are visible when you know to look. Stale guest accounts are quieter.</p>

<p><strong>Every contractor, vendor, and partner who was granted guest access to your Microsoft 365 tenant is still there until someone removes them.</strong> Most organizations provision guest accounts when the relationship starts and forget them when it ends. The account stays active. The group memberships stay intact. The SharePoint library permissions set up for that engagement are still assigned.</p>

<p>Copilot with Graph access traverses your SharePoint environment without distinguishing between an active employee and a guest account that hasn’t been used in eight months. If the permissions say accessible, it’s accessible.</p>

<p>The engineers in your organization who manage Microsoft 365 already know this. The audit hasn’t happened because it’s not on the priority list — and it won’t be until leadership makes it one.</p>

<hr />

<h2 id="what-ai-ready-actually-looks-like">What AI-Ready Actually Looks Like</h2>

<p>Getting ready for Copilot isn’t a one-time project. It’s a posture shift that requires ongoing discipline. Here’s what the organizations that do it right actually change:</p>

<p><strong>Harden the global sharing configuration.</strong> Set the default link type to Specific people. Change the default permission to Read. Microsoft changed the out-of-the-box default to Specific people in July 2024 — but any tenant created before that may still be running the old default, and even new tenants need this verified. These two settings stop new oversharing before it accumulates. They don’t fix what’s already there — but they stop it from getting worse.</p>
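<p>A minimal sketch of what verifying those two settings can look like — assuming a settings export from your own admin tooling whose keys mirror the SharePoint Online <code>Set-SPOTenant</code> parameter names (the field names and example values here are illustrative):</p>

```python
# Sketch: diff exported tenant sharing settings against a hardened baseline.
# Keys mirror the SharePoint Online Set-SPOTenant parameter names; the export
# itself is assumed to come from your own admin tooling.

HARDENED_BASELINE = {
    "DefaultSharingLinkType": "Direct",  # "Specific people" links
    "DefaultLinkPermission": "View",     # Read, not Edit
}

def sharing_gaps(tenant_settings: dict) -> list[str]:
    """Return the settings that deviate from the hardened baseline."""
    return [
        f"{key}: expected {expected!r}, found {tenant_settings.get(key)!r}"
        for key, expected in HARDENED_BASELINE.items()
        if tenant_settings.get(key) != expected
    ]

# Example: a tenant still running the old, permissive defaults.
legacy_tenant = {"DefaultSharingLinkType": "Internal", "DefaultLinkPermission": "Edit"}
print(sharing_gaps(legacy_tenant))
```

<p>Run against a hardened tenant, the check returns an empty list — which is the evidence you want on file before a Copilot rollout.</p>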

<p><strong>Set link expiration policies.</strong> Sharing links should have a maximum lifetime. External links especially. Most tenants have no expiration configured, which means links created three years ago are still live and still granting access.</p>

<p><strong>Audit and disable unused guest accounts.</strong> Pull the guest account report, filter for accounts with no sign-in activity in 90 days, and disable them. Then make this a quarterly process, not a cleanup you do once before a Copilot rollout.</p>
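<p>The filtering step is simple enough to script. A sketch, assuming guest records pulled from your identity provider’s report — the field names here are illustrative, not a real API schema:</p>

```python
# Sketch: flag guest accounts with no sign-in activity in the last 90 days.
# Each record is assumed to come from your identity provider's guest report;
# the field names are illustrative, not a real API schema.
from datetime import datetime, timedelta

def stale_guests(guests: list[dict], as_of: datetime, days: int = 90) -> list[str]:
    """Return UPNs of guests whose last sign-in is older than the cutoff (or missing)."""
    cutoff = as_of - timedelta(days=days)
    flagged = []
    for g in guests:
        last = g.get("lastSignIn")  # None means the guest never signed in
        if last is None or last < cutoff:
            flagged.append(g["userPrincipalName"])
    return flagged

now = datetime(2026, 4, 10)
report = [
    {"userPrincipalName": "contractor@partner.example", "lastSignIn": datetime(2025, 8, 1)},
    {"userPrincipalName": "auditor@vendor.example", "lastSignIn": datetime(2026, 4, 1)},
]
print(stale_guests(report, as_of=now))  # → ['contractor@partner.example']
```

<p>Note that accounts with <em>no</em> sign-in record get flagged too — a guest who was provisioned and never logged in is still standing access.</p>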

<p><strong>Disable resharing.</strong> If you send someone a link, they shouldn’t be able to forward it to someone else and extend that access further. This is often enabled by default.</p>

<p><strong>Require admin consent for third-party applications.</strong> Any app requesting Microsoft Graph permissions to read SharePoint or OneDrive data — including AI tools and the third-party model providers now available through Copilot — should require explicit approval before it gets access.</p>

<p><strong>Treat least privilege as a standing operating principle, not a project.</strong> Every permission decision should start with the question: does this person actually need this access? That mindset doesn’t take hold from the engineering team up. It has to come from leadership down.</p>

<hr />

<h2 id="the-reason-this-cant-wait">The Reason This Can’t Wait</h2>

<p>Permission sprawl in Microsoft 365 has always been a governance problem. Most organizations have tolerated it because the practical consequences were manageable — the occasional file accessible to someone who shouldn’t have it, noticed when it caused a problem.</p>

<p>Copilot changes the consequence, not the root cause.</p>

<p>A forgotten “People in your organization” link used to mean one document was more accessible than intended — a risk that materialized only if someone stumbled across it. With Copilot, the risk materializes the moment that link gets forwarded to someone who wasn’t supposed to have it. They redeem it. They have access. Copilot can now answer their questions using content they were never meant to see.</p>

<p>The data exposure risk here isn’t about Microsoft reading your files or an LLM provider training on your content. It’s internal. It’s an employee in one department using Copilot to surface information from another department that was never meant to be shared — because someone sent an email with the wrong default link three years ago.</p>

<p>That’s the AI readiness problem. It isn’t glamorous. It’s an audit and a settings review and a governance conversation that most organizations have been deferring. The right time to have it is before you roll out the tool that makes the consequences visible.</p>

<hr />

<p><em>Getting ready for Copilot or trying to understand where your Microsoft 365 permissions actually stand? <a href="/contact/">I’m happy to walk through what a readiness assessment looks like.</a></em></p>]]></content><author><name>Darren Bell</name></author><category term="AI &amp; Automation" /><category term="Microsoft 365" /><category term="Copilot" /><category term="SharePoint" /><category term="AI readiness" /><category term="security" /><category term="permissions" /><category term="governance" /><category term="OneDrive" /><summary type="html"><![CDATA[Everyone thinks their Microsoft 365 environment is locked down enough to deploy AI tools. MFA is on. Permissions are set. What they haven't looked at is the sharing sprawl underneath — the Anyone links that never expired, the contractor guest accounts still sitting in SharePoint groups. Copilot follows Microsoft's permissions exactly. The problem is the permissions you forgot you set.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://darrenbell.net/assets/images/og-default.svg" /><media:content medium="image" url="https://darrenbell.net/assets/images/og-default.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Azure in a Healthcare Environment: Architecture Decisions That Actually Matter</title><link href="https://darrenbell.net/blog/2026/04/09/multi-cloud-architecture-patterns/" rel="alternate" type="text/html" title="Azure in a Healthcare Environment: Architecture Decisions That Actually Matter" /><published>2026-04-09T00:00:00+00:00</published><updated>2026-04-09T00:00:00+00:00</updated><id>https://darrenbell.net/blog/2026/04/09/multi-cloud-architecture-patterns</id><content type="html" xml:base="https://darrenbell.net/blog/2026/04/09/multi-cloud-architecture-patterns/"><![CDATA[<p>Cloud architecture in a regulated environment changes the decision calculus in ways that are hard to appreciate until you’ve owned the compliance posture 
yourself.</p>

<p>In a typical enterprise environment, an architecture decision is evaluated on performance, cost, reliability, and operational complexity. In healthcare, every one of those dimensions has a compliance overlay. The question isn’t just “does this work?” — it’s “does this work <em>and</em> can we audit it <em>and</em> is it covered by our BAAs <em>and</em> can we explain it to a HIPAA compliance officer who doesn’t care about the technical elegance?”</p>

<p>I’ve spent six years making those decisions in a healthcare IT environment, and the patterns that held up consistently were the ones that treated compliance as a first-class architectural constraint — not something layered on afterward.</p>

<hr />

<h2 id="start-with-identity-not-infrastructure">Start with Identity, Not Infrastructure</h2>

<p>The most consequential architecture decision in a Microsoft-first healthcare environment is not what Azure services you use. It’s how you structure identity.</p>

<p><strong>Entra ID (formerly Azure AD) is the foundation for everything.</strong> Access to M365, access to Azure resources, Conditional Access policy enforcement, device compliance through Intune — all of it runs through Entra ID. If your identity architecture is loose, your compliance posture is loose, regardless of how well everything else is configured.</p>

<p>In a healthcare environment, this means:</p>

<p><strong>Conditional Access policies are non-negotiable, not optional.</strong> Every user accessing PHI-adjacent systems needs MFA. Compliant device requirements should be enforced wherever clinically feasible. Location-based policies add a meaningful layer of control for systems with access to sensitive data.</p>

<p><strong>Privileged Identity Management (PIM) for administrative access.</strong> Standing admin privileges are a HIPAA risk. Just-in-time elevation with approval workflows and audit logging gives you the access control story auditors want to see.</p>

<p><strong>Group-based licensing and access provisioning.</strong> In a 12-person IT organization, manual licensing and access management doesn’t scale and doesn’t produce the consistent audit trail you need. Dynamic groups based on HR attributes keep provisioning accurate without manual maintenance.</p>
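<p>The mechanics of attribute-driven provisioning reduce to rules evaluated against HR attributes. A sketch — the department names, group names, and rule logic here are hypothetical:</p>

```python
# Sketch: attribute-driven provisioning. Group membership is derived from HR
# attributes, so access follows the HR record instead of manual updates.
# Department names and group mappings are hypothetical.

GROUP_RULES = {
    "grp-clinical-apps": lambda u: u["department"] == "Clinical" and u["employeeType"] == "Employee",
    "grp-it-admins":     lambda u: u["department"] == "IT",
}

def groups_for(user: dict) -> list[str]:
    """Evaluate every rule against the user's HR attributes."""
    return [group for group, rule in GROUP_RULES.items() if rule(user)]

nurse = {"department": "Clinical", "employeeType": "Employee"}
print(groups_for(nurse))  # → ['grp-clinical-apps']
```

<p>The audit benefit is the point: membership is always explainable by pointing at the rule and the HR attribute, not at whoever happened to click a button.</p>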

<p>The identity architecture is where healthcare compliance either holds together or falls apart. Get this right before building anything else on top of it.</p>

<hr />

<h2 id="cloud-analytics-infrastructure-for-clinical-reporting">Cloud Analytics Infrastructure for Clinical Reporting</h2>

<p>One of the more impactful projects I delivered at Premier Health was building the Azure infrastructure that enabled a clinical data analytics solution.</p>

<p>The business problem: clinical staff were waiting unacceptably long to generate reports that directly informed care decisions. The underlying cause was a combination of legacy on-premises infrastructure and data spread across systems that didn’t communicate efficiently.</p>

<p><strong>My role was the infrastructure layer:</strong></p>

<p>Designing the Azure environment that a cloud analytics solution could actually run on in a HIPAA-regulated context. That meant: storage architecture with the right access tiers, networking and private endpoints, Entra ID integration for role-based access control, encryption at rest and in transit, and the audit logging configuration that HIPAA requires. In healthcare, none of this is optional — it’s table stakes before you can run anything clinical on top of it.</p>

<p>The compliance architecture work is what makes the project possible. You can have the best analytics platform in the world; if the infrastructure underneath it can’t pass a HIPAA audit, it doesn’t get deployed.</p>

<p><strong>The outcome:</strong> the clinical reporting solution went live on infrastructure that was compliant, auditable, and operationally maintainable by a small IT team. Clinical staff got faster access to the data they needed to make care decisions.</p>

<p>The lesson: in healthcare IT, the infrastructure and compliance layer isn’t the interesting part of the project — but it’s the part that determines whether the project happens at all.</p>

<hr />

<h2 id="cold-storage-and-data-tiering-where-the-savings-are">Cold Storage and Data Tiering: Where the Savings Are</h2>

<p>Healthcare organizations accumulate data at scale. Clinical records, imaging, diagnostic data — retention requirements are long, access patterns are predictable, and storage costs compound over time.</p>

<p>Most healthcare environments have significant data on the wrong storage tier: high-cost active storage for data that hasn’t been accessed in years, simply because migrating it requires effort and no one has owned the work.</p>

<p><strong>Azure’s cold storage tiers exist for exactly this use case.</strong></p>

<p>The architectural pattern is straightforward: classify data by access frequency and retention requirement, then implement lifecycle policies that automatically transition data to Archive or Cold access tiers as it ages. Build the policy once, and the cost reduction is ongoing.</p>
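<p>In Azure, that policy is a JSON document attached to the storage account. A sketch of the shape, expressed here as a Python dict — the rule name and the 90/365-day thresholds are illustrative and should come from your own classification and retention requirements:</p>

```python
# Sketch: an Azure Blob lifecycle management policy that transitions aging
# data to cheaper tiers. The rule name and day thresholds are illustrative —
# yours come from your data classification and retention obligations.
import json

lifecycle_policy = {
    "rules": [
        {
            "enabled": True,
            "name": "age-out-archives",
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"]},
                "actions": {
                    "baseBlob": {
                        # Move to Cool after 90 days without modification...
                        "tierToCool": {"daysAfterModificationGreaterThan": 90},
                        # ...and to Archive after a year.
                        "tierToArchive": {"daysAfterModificationGreaterThan": 365},
                    }
                },
            },
        }
    ]
}

print(json.dumps(lifecycle_policy, indent=2))
```

<p>Once applied, the transitions run automatically — the one-time effort is the classification work that tells you the thresholds are safe.</p>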

<p>In practice, implementing this across an environment with years of accumulated data that had never been tiered produced $1,500 per month in storage savings — $18,000 annually — from a one-time architecture effort. That’s not transformational, but it’s the kind of defensible, documented savings that build credibility with finance and fund larger initiatives.</p>

<p><strong>The operational discipline required:</strong> data classification. You cannot implement effective tiering policies without knowing what data you have, how old it is, and what your retention obligations are. In a healthcare environment, that classification work is also required for HIPAA compliance — so the compliance work and the cost optimization work are the same work.</p>

<hr />

<h2 id="when-to-eliminate-rather-than-optimize">When to Eliminate Rather Than Optimize</h2>

<p>Not every Azure cost problem has an optimization solution. Sometimes the right answer is elimination.</p>

<p>Azure Virtual Desktop is a useful service for specific use cases: providing secure remote access to desktop environments, supporting contractors or vendors who need access to internal systems, enabling BYOD scenarios in regulated environments.</p>

<p>It is not the right answer for every remote access requirement, and the costs add up quickly when it’s deployed beyond its appropriate use case.</p>

<p>After auditing our AVD environment against actual usage patterns, the finding was clear: the deployment was substantially oversized relative to actual utilization, and a significant portion of the use cases it was solving could be addressed through better Conditional Access policies and Intune-managed devices at a fraction of the cost.</p>

<p>Eliminating the over-deployed AVD environment and replacing the legitimate use cases with appropriate alternatives produced $79,000 in annual savings.</p>

<p><strong>The principle:</strong> before optimizing an Azure service, ask whether the service is solving the right problem. Sometimes the most cost-effective architectural decision is the decommission.</p>

<hr />

<h2 id="the-compliance-audit-as-architecture-review">The Compliance Audit as Architecture Review</h2>

<p>The most useful feedback mechanism I’ve found for cloud architecture in healthcare is the compliance audit.</p>

<p>An auditor reviewing your HIPAA technical controls is, in effect, reviewing your architecture. They’re asking: where is PHI stored? Who can access it? How do you know who accessed it? What happens if someone’s credentials are compromised? How would you detect unauthorized access?</p>

<p>If your architecture has gaps, the audit finds them. If your architecture is solid, the audit is documentation.</p>

<p>The organizations I’ve seen struggle with HIPAA audits are usually the ones that built the technical controls first and tried to map them to compliance requirements afterward. The ones that pass audits cleanly built the compliance requirements into the architecture from the beginning.</p>

<p><strong>The questions to design against:</strong></p>

<ul>
  <li>Can you produce an access log for any PHI system, for any time range, for any user?</li>
  <li>Can you demonstrate that terminated employees lost access within a defined window?</li>
  <li>Can you show that encryption is enforced in transit and at rest?</li>
  <li>Can you demonstrate that your vendors handling PHI have active, signed BAAs?</li>
</ul>

<p>If you can answer those questions clearly, with evidence, your architecture is in good shape. If you can’t, the audit will tell you where the gaps are — ideally before the external auditor does.</p>
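<p>Several of those questions can be checked mechanically. For the offboarding question, a sketch — the records, field names, and 24-hour window are illustrative, standing in for a join between your HR feed and your identity audit log:</p>

```python
# Sketch: evidence for the offboarding question above. Compare each
# termination date against the date access was actually revoked. The records
# and the 24-hour window are illustrative.
from datetime import datetime, timedelta

def offboarding_violations(records: list[dict], window_hours: int = 24) -> list[str]:
    """Return users whose access outlived the allowed revocation window."""
    window = timedelta(hours=window_hours)
    return [
        r["user"]
        for r in records
        if r["accessRevoked"] is None or r["accessRevoked"] - r["terminated"] > window
    ]

hr_feed = [
    {"user": "jdoe", "terminated": datetime(2026, 3, 1, 9), "accessRevoked": datetime(2026, 3, 1, 11)},
    {"user": "asmith", "terminated": datetime(2026, 3, 5, 9), "accessRevoked": None},  # never revoked
]
print(offboarding_violations(hr_feed))  # → ['asmith']
```

<p>An empty result, produced on a schedule and retained, is exactly the kind of evidence an auditor accepts.</p>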

<hr />

<h2 id="bottom-line">Bottom Line</h2>

<p>Healthcare cloud architecture is not fundamentally different from enterprise cloud architecture. The services are the same. The principles are the same.</p>

<p>What’s different is the weight of compliance requirements, the consequences of getting it wrong, and the need to translate technical decisions into outcomes that clinical and executive stakeholders understand.</p>

<p>The architects who thrive in regulated environments are the ones who see compliance not as a constraint on architecture but as a design requirement — one that, when taken seriously, produces systems that are more auditable, more secure, and easier to defend when it matters.</p>

<hr />

<p><em>Building cloud infrastructure in a healthcare environment? The compliance requirements are real but navigable. <a href="/contact/">I’m happy to talk through the specifics.</a></em></p>]]></content><author><name>Darren Bell</name></author><category term="Cloud Architecture" /><category term="Azure" /><category term="healthcare IT" /><category term="HIPAA" /><category term="Entra ID" /><category term="cloud architecture" /><category term="compliance" /><category term="analytics infrastructure" /><summary type="html"><![CDATA[Cloud architecture in healthcare isn't just about what works technically — it's about what holds up under a compliance audit, what your BAAs actually cover, and what you can defend to clinical leadership. Here's the framework I use.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://darrenbell.net/assets/images/og-default.svg" /><media:content medium="image" url="https://darrenbell.net/assets/images/og-default.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The Gap Between Technical Expert and Operational Leader</title><link href="https://darrenbell.net/blog/2026/04/02/from-engineer-to-director-lessons-in-tech-leadership/" rel="alternate" type="text/html" title="The Gap Between Technical Expert and Operational Leader" /><published>2026-04-02T00:00:00+00:00</published><updated>2026-04-02T00:00:00+00:00</updated><id>https://darrenbell.net/blog/2026/04/02/from-engineer-to-director-lessons-in-tech-leadership</id><content type="html" xml:base="https://darrenbell.net/blog/2026/04/02/from-engineer-to-director-lessons-in-tech-leadership/"><![CDATA[<p>Technical expertise gets you into leadership. It is not what keeps you there.</p>

<p>I’ve watched technically strong engineers step into IT management roles and struggle — not because they couldn’t architect the systems, but because the job changed and they didn’t realize it. The problems that mattered most weren’t architecture problems. They were people problems, budget problems, communication problems, and operational discipline problems.</p>

<p>I’ve also watched people with less technical depth succeed at the manager and director level because they understood something essential: <strong>at some point, your value shifts from your own output to the output of the organization you lead.</strong></p>

<p>This post is about that shift — what it actually looks like, why it’s hard, and what it means for anyone building toward executive-level IT leadership.</p>

<hr />

<h2 id="what-changes-when-you-lead-a-team">What Changes When You Lead a Team</h2>

<p>When you’re an individual contributor, your output is largely under your control. You design the system, write the script, configure the service. When you have a productive day, you know it.</p>

<p>When you’re leading a 12-person IT organization — as I did in a healthcare environment — your output is the team. You move at the speed of other people’s growth, which is slow, non-linear, and only partially under your influence. The feedback loop on your decisions stretches from weeks to years.</p>

<p>This is uncomfortable if you’re used to the fast feedback of technical work. The instinct is to drop into the technical problems because that’s where you can see immediate impact. You know the answer. You can fix it faster than explaining it to someone else.</p>

<p>That instinct is worth resisting.</p>

<p>Every time you solve a problem that someone on your team could have solved — even more slowly, even less elegantly — you’ve chosen your own short-term productivity over their long-term capability. Do that consistently and you’ve built a team that waits for direction instead of developing judgment.</p>

<p><strong>The shift to make:</strong> from being the best technical resource in the room, to being the environment in which good technical decisions get made without you.</p>

<hr />

<h2 id="the-problems-that-actually-consume-your-time">The Problems That Actually Consume Your Time</h2>

<p>Nobody tells you how much of IT leadership is non-technical.</p>

<p>In a healthcare environment, I spent significant time on:</p>

<ul>
  <li><strong>Vendor management</strong> — negotiating contracts, holding vendors accountable under BAAs, managing relationships when something goes wrong</li>
  <li><strong>Budget ownership</strong> — building and defending the annual budget, explaining variances, making the case for investment</li>
  <li><strong>Compliance accountability</strong> — owning the organization’s HIPAA posture, not just the technical controls but the policies, the training, the audit documentation</li>
  <li><strong>Stakeholder communication</strong> — translating technical risk into language that means something to clinical leadership and finance</li>
  <li><strong>Personnel decisions</strong> — hiring, performance management, having difficult conversations, developing people toward their next level</li>
</ul>

<p>None of that is architecture work. But all of it directly determines whether your IT organization earns the trust and resources it needs to do good technical work.</p>

<p>The leader who is only comfortable with technical problems will treat everything else as distraction. The leader who understands that these <em>are</em> the job will develop the operational and organizational skills that define executive effectiveness.</p>

<hr />

<h2 id="budget-ownership-changes-how-you-think">Budget Ownership Changes How You Think</h2>

<p>Owning a budget is clarifying in a way nothing else is.</p>

<p>When you have budget accountability, cost stops being an abstract concern and becomes a concrete one. You stop thinking “that’s expensive” and start thinking “is that the best use of these dollars compared to the alternatives?” You develop instincts about where money gets wasted, what vendors actually deliver value, and how to make a business case that leadership will approve.</p>

<p>Managing cost in a healthcare IT environment taught me that the discipline of cost optimization is almost entirely operational, not technical. Most waste exists because no one owns the outcome. Licenses accumulate because there’s no process to reclaim them. Environments stay up because decommissioning requires effort and no one has assigned the work.</p>

<p>When I eliminated a $79K annual Azure Virtual Desktop environment and documented $48K in licensing savings through a rigorous audit, the technical work was straightforward. What made it possible was having the organizational authority to own the outcome, the budget visibility to identify the waste, and the stakeholder relationships to make the change without friction.</p>

<p>Technical skill got me to the point where I could identify those opportunities. Budget ownership and operational authority are what let me capture them.</p>

<hr />

<h2 id="translating-between-technical-and-executive">Translating Between Technical and Executive</h2>

<p>One of the most undervalued skills in IT leadership is the ability to move fluently between technical detail and business framing.</p>

<p>Most technical people can explain what they built. Fewer can explain why it matters to someone who doesn’t care about the technology.</p>

<p>When I presented the case for a cloud analytics platform at Premier Health, the conversation with clinical leadership wasn’t about the underlying architecture. It was about: clinical staff currently wait [X amount of time] to generate reports that inform patient care decisions. This solution reduces that to [Y amount of time]. Here’s what that’s worth to the organization.</p>

<p>The technical architecture enabled the outcome. The business case got the investment approved.</p>

<p>This translation capability — understanding the technical depth well enough to be credible with engineers, and understanding the business impact well enough to be credible with executives — is what separates IT leaders who get resources from those who complain about not having them.</p>

<p><strong>The discipline to develop:</strong> every significant technical initiative should have a clear answer to “what business problem does this solve, and how will we know we solved it?”</p>

<hr />

<h2 id="building-a-team-that-outlasts-you">Building a Team That Outlasts You</h2>

<p>The most durable thing a leader builds is a team with the judgment to operate effectively without constant direction.</p>

<p>This means investing in people in ways that don’t pay off immediately:</p>
<ul>
  <li>Taking time to explain the reasoning behind decisions, not just the decisions</li>
  <li>Giving people problems that stretch them before they’re fully ready</li>
  <li>Letting people make mistakes in controlled contexts and learn from them</li>
  <li>Creating explicit development paths so people can see where the work leads</li>
</ul>

<p>In a healthcare IT environment, this also means building the operational documentation and processes that survive personnel transitions. A 12-person team running HIPAA-compliant operations cannot be dependent on any individual’s knowledge. The organization needs to be able to function when someone leaves, gets sick, or moves on.</p>

<p>That kind of institutional durability is an executive concern, not a technical one. It requires thinking about the organization as a system — one that needs to be reliable and resilient, just like the infrastructure it manages.</p>

<hr />

<h2 id="the-mindset-shift-that-matters-most">The Mindset Shift That Matters Most</h2>

<p>The engineers I’ve seen struggle in leadership roles tend to have one thing in common: they still measure their own value by their personal technical output.</p>

<p>The leaders I’ve seen succeed at the manager and director level share a different orientation: they measure their value by the outcomes the organization achieves, and they’re willing to do whatever the organization needs — technical or otherwise — to drive those outcomes.</p>

<p>That’s a mindset shift, not a skill upgrade. And it’s one worth making deliberately, before a leadership role forces the issue.</p>

<p>The technology is usually the easy part. The operational and organizational work is where it gets hard — and where the real leverage is.</p>

<hr />

<p><em>I write about technology leadership and cloud architecture from the perspective of someone building toward operational executive roles — connecting technical decisions to business outcomes, and sharing what I’ve learned along the way. <a href="/contact/">Get in touch</a> if you’re navigating a similar path.</em></p>]]></content><author><name>Darren Bell</name></author><category term="Technology Leadership" /><category term="leadership" /><category term="IT management" /><category term="career development" /><category term="team building" /><category term="executive thinking" /><summary type="html"><![CDATA[Technical expertise gets you into IT leadership. The ability to think about business outcomes, own operational risk, and develop other people is what determines whether you succeed once you're there.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://darrenbell.net/assets/images/og-default.svg" /><media:content medium="image" url="https://darrenbell.net/assets/images/og-default.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">FinOps in Practice: What Actually Works When Cloud Costs Get Out of Control</title><link href="https://darrenbell.net/blog/2026/03/26/finops-cloud-cost-optimization/" rel="alternate" type="text/html" title="FinOps in Practice: What Actually Works When Cloud Costs Get Out of Control" /><published>2026-03-26T00:00:00+00:00</published><updated>2026-03-26T00:00:00+00:00</updated><id>https://darrenbell.net/blog/2026/03/26/finops-cloud-cost-optimization</id><content type="html" xml:base="https://darrenbell.net/blog/2026/03/26/finops-cloud-cost-optimization/"><![CDATA[<p>Cloud cost problems rarely start with architecture. They start with operations.</p>

<p>Most organizations don’t have a cost problem because they chose the wrong services or instance types. They have a cost problem because <strong>no one owns the outcome</strong>. Engineers provision resources without cost visibility. Finance tracks spend but can’t influence decisions. Waste accumulates because there’s no accountability.</p>

<p>The pattern is consistent: tooling gets added, dashboards get built, but nothing changes. Costs continue to climb.</p>

<p>Fixing cloud spend is not a tooling problem. It’s an operational discipline problem.</p>

<p>I’ve applied the framework below across healthcare and managed services environments and documented over <strong>$127K in annual savings</strong> — from licensing audits, environment rationalization, and storage tier optimization — without changing any core architecture. The work is organizational, not technical.</p>

<hr />

<h2 id="the-reality-of-cloud-cost-optimization">The Reality of Cloud Cost Optimization</h2>

<p>Cloud cost optimization is mostly behavioral, not technical.</p>

<p>If the people creating resources don’t understand cost, and the people tracking cost can’t enforce change, optimization will fail. It doesn’t matter how advanced the tooling is.</p>

<p>The foundation requires four things to be true simultaneously:</p>

<ul>
  <li><strong>Visibility</strong> exists at the team level</li>
  <li><strong>Ownership</strong> is clearly defined</li>
  <li><strong>Waste</strong> has consequences</li>
  <li><strong>Cost</strong> is treated as an engineering metric — not a finance problem</li>
</ul>

<p>Without all four, everything else is noise.</p>

<hr />

<h2 id="phase-1-establish-visibility">Phase 1: Establish Visibility</h2>

<p>Before anything can be optimized, it has to be understood. In practice, most environments lack even basic visibility.</p>

<p><strong>The common failure point is tagging.</strong></p>

<p>Without consistent tagging, there is no way to attribute cost. You cannot answer basic questions like:</p>

<ul>
  <li>What does a specific system cost to run?</li>
  <li>Which team is responsible for a spend spike?</li>
  <li>Where is non-production spend occurring?</li>
</ul>

<p>Once tagging is enforced and tied to accountability, cost visibility becomes actionable instead of theoretical. A useful forcing mechanism: route untagged spend to a separate cost center owned by the VP of Engineering. Nothing motivates tagging compliance faster than a CFO asking questions.</p>
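<p>The roll-up logic is trivial once tags exist, which is the point: attribution is a data-hygiene problem, not an analytics problem. A minimal sketch, using hypothetical line items rather than any provider's actual billing export schema:</p>

```python
from collections import defaultdict

# Hypothetical billing line items; real exports (Azure Cost Management,
# AWS CUR) carry tags in a similar key/value shape.
line_items = [
    {"cost": 1200.0, "tags": {"team": "platform", "env": "prod"}},
    {"cost": 340.0,  "tags": {"team": "data", "env": "dev"}},
    {"cost": 910.0,  "tags": {}},  # untagged: no way to attribute this
]

def attribute(items, tag_key="team", fallback="UNTAGGED"):
    """Roll spend up by a tag; untagged spend lands in its own bucket."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get(tag_key, fallback)] += item["cost"]
    return dict(totals)

print(attribute(line_items))
```

<p>The <code>UNTAGGED</code> bucket is exactly the number to route to that separate cost center. When it stops shrinking, tagging enforcement has stalled.</p>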

<p>At the same time, enable anomaly detection early. Unexpected spend is common — forgotten GPU clusters, misconfigured autoscaling, runaway data transfer — and catching it quickly prevents avoidable loss.</p>
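<p>Managed anomaly detection in the cloud platforms does this for you, but the underlying idea is simple enough to sketch: flag any day whose spend is far outside its recent trailing window. Toy numbers, illustrative thresholds:</p>

```python
from statistics import mean, stdev

def flag_anomalies(daily_spend, window=14, threshold=3.0):
    """Flag indices where spend exceeds the trailing mean by
    `threshold` standard deviations over the prior `window` days."""
    flagged = []
    for i in range(window, len(daily_spend)):
        history = daily_spend[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # max() guards against a perfectly flat (zero-variance) history
        if daily_spend[i] > mu + threshold * max(sigma, 0.01):
            flagged.append(i)
    return flagged

# Steady ~$500/day, then a forgotten GPU cluster shows up on day 20.
spend = [500 + (i % 3) * 5 for i in range(20)] + [2400]
print(flag_anomalies(spend))
```

<p>The value is not the statistics; it is that someone gets paged on day 20 instead of discovering the cluster on next month's invoice.</p>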

<blockquote>
  <p>This phase is less about optimization and more about creating a reliable view of reality. You cannot improve what you cannot see.</p>
</blockquote>

<hr />

<h2 id="phase-2-eliminate-obvious-waste">Phase 2: Eliminate Obvious Waste</h2>

<p>Once visibility exists, the first meaningful reductions come from removing waste that should never have existed.</p>

<p>Typical patterns in almost every environment:</p>

<table>
  <thead>
    <tr>
      <th>Waste Category</th>
      <th>Common Cause</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Idle compute running indefinitely</td>
      <td>No lifecycle policy</td>
    </tr>
    <tr>
      <td>Detached storage still accruing cost</td>
      <td>Forgotten after instance termination</td>
    </tr>
    <tr>
      <td>Snapshots with no retention limit</td>
      <td>Default settings never changed</td>
    </tr>
    <tr>
      <td>Temporary infrastructure never cleaned up</td>
      <td>No ownership, no expiry</td>
    </tr>
    <tr>
      <td>Non-production environments running 24/7</td>
      <td>No scheduled shutdown</td>
    </tr>
  </tbody>
</table>
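<p>Every category in that table can be detected mechanically from a resource inventory. A minimal sketch of the scan, with illustrative field names rather than any provider's actual schema:</p>

```python
from datetime import date

TODAY = date(2026, 3, 26)

# Hypothetical inventory export; field names are illustrative only.
resources = [
    {"id": "vm-build-01", "type": "vm", "state": "stopped", "last_used": date(2025, 11, 2)},
    {"id": "disk-0042", "type": "disk", "attached_to": None},
    {"id": "snap-legacy", "type": "snapshot", "created": date(2024, 1, 15)},
    {"id": "vm-prod-api", "type": "vm", "state": "running", "last_used": TODAY},
]

def find_waste(items, idle_days=30, snapshot_max_days=90):
    """Flag the common waste patterns: idle compute, detached storage,
    and snapshots past a sane retention limit."""
    waste = []
    for r in items:
        if r["type"] == "vm" and r["state"] == "stopped" and (TODAY - r["last_used"]).days > idle_days:
            waste.append((r["id"], "idle compute, no lifecycle policy"))
        elif r["type"] == "disk" and r.get("attached_to") is None:
            waste.append((r["id"], "detached storage still accruing cost"))
        elif r["type"] == "snapshot" and (TODAY - r["created"]).days > snapshot_max_days:
            waste.append((r["id"], "snapshot past retention limit"))
    return waste

for rid, reason in find_waste(resources):
    print(rid, "->", reason)
```

<p>The script is the easy part. The hard part is who receives the list, and whether anything is deleted as a result.</p>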

<p>These are not complex problems. They are operational gaps.</p>

<p>Cleaning this up does not require architectural changes. It requires discipline, communication, and enforcement. This phase consistently produces fast, defensible cost reduction — typically 15–25% of total spend — with minimal risk.</p>

<p><strong>Real-world example:</strong> A licensing audit in a healthcare environment identified $48,000 in annual spend on unassigned or inactive licenses — accounts that had accumulated through turnover, role changes, and service trials that were never cleaned up. In the same environment, an Azure Virtual Desktop deployment audit found a $79,000 annual cost with utilization patterns that didn’t justify the deployment; the legitimate use cases were addressed through better Conditional Access policies at a fraction of the cost. Neither required new tooling. They required visibility and the willingness to act on what the data showed.</p>

<hr />

<h2 id="phase-3-align-spend-with-reality">Phase 3: Align Spend With Reality</h2>

<p>After waste is removed, the next step is aligning pricing models with actual usage.</p>

<p>Cloud providers charge a premium for flexibility. Most production workloads are not fully variable — they have predictable baselines. You are paying on-demand rates for stability you already know you need.</p>

<p>Optimizing this layer involves:</p>

<ol>
  <li><strong>Understanding steady-state usage</strong> — measure actual consumption patterns over 60–90 days after rightsizing</li>
  <li><strong>Applying commitment-based discounts</strong> where usage is stable and predictable</li>
  <li><strong>Maintaining flexibility</strong> for variable workloads — don’t over-commit</li>
</ol>
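<p>The sizing math for steps 1 and 2 is worth seeing concretely: commit to the observed floor of usage, leave the variable remainder on demand. The discount rate below is illustrative; actual commitment discounts vary by service, region, and term:</p>

```python
def commitment_plan(hourly_usage, discount=0.40):
    """Size a commitment at the never-idle floor of measured usage and
    leave the burst above it on demand. `discount` is a placeholder,
    not any provider's actual rate."""
    baseline = min(hourly_usage)           # steady state: safe to commit
    committed_cost = baseline * (1 - discount) * len(hourly_usage)
    on_demand_cost = sum(u - baseline for u in hourly_usage)
    all_on_demand = sum(hourly_usage)
    blended = committed_cost + on_demand_cost
    return baseline, blended, 1 - blended / all_on_demand

# 24 hours of usage in on-demand dollars/hour: stable overnight floor,
# daytime burst. Measured, not guessed.
usage = [80] * 8 + [120] * 8 + [95] * 8
base, cost, savings = commitment_plan(usage)
print(f"commit at {base}/hr, blended cost {cost:.0f}, savings {savings:.1%}")
```

<p>Note that committing to <code>min()</code> rather than <code>mean()</code> is the conservative choice: it guarantees the commitment is always consumed, at the price of leaving some discountable usage on demand.</p>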

<p><strong>A critical sequencing note:</strong> commit to pricing <em>after</em> rightsizing, not before. A common and expensive mistake is locking in commitment-based discounts before workloads are properly sized. That locks in inefficiency at a discount — you still overpay, just less visibly.</p>

<p>Done correctly, this phase can reduce cost 30–70% on committed workloads without changing any architecture — only how it is paid for. Azure Reserved Instances alone offer up to 72% savings versus on-demand pricing on eligible VM families.</p>

<hr />

<h2 id="phase-4-optimize-architecture-where-it-matters">Phase 4: Optimize Architecture Where It Matters</h2>

<p>The deeper savings require engineering changes. This is where effort increases and trade-offs become real.</p>

<p><strong>Common high-impact areas:</strong></p>

<p><strong>Data transfer costs</strong> are frequently the hidden driver of bills that don’t respond to compute optimization. Poor resource placement, cross-AZ traffic, and uncompressed origin responses are typical culprits. Audit data movement before assuming compute is the problem.</p>

<p><strong>Storage tiering</strong> is consistently overlooked. Not all data is hot. Object storage lifecycle policies and archival tiers exist for a reason — use them. Most environments keep years of data on the highest-cost tier by default. In a healthcare data environment, implementing lifecycle policies that moved clinical data older than 90 days to cold storage tiers produced $1,500 per month in immediate, ongoing savings — from a one-time configuration effort against data that had been accumulating for years.</p>
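<p>The shape of the savings math is simple. The per-GB prices below are hypothetical, chosen only to illustrate the calculation; check your provider's current rates and tier access charges before committing to a policy:</p>

```python
# Hypothetical per-GB monthly prices, NOT actual provider rates.
HOT_PER_GB, COOL_PER_GB = 0.020, 0.010

def tiering_savings(objects, cutoff_days=90):
    """Monthly savings from moving objects older than the cutoff to a
    cooler tier. `objects` is a list of (size_gb, age_days) pairs."""
    movable_gb = sum(size for size, age in objects if age > cutoff_days)
    return movable_gb * (HOT_PER_GB - COOL_PER_GB)

# 180 TB total, of which 150 TB is older than 90 days.
data = [(30_000, 10), (150_000, 400)]
print(f"${tiering_savings(data):,.0f}/month")
```

<p>The calculation omits retrieval and early-deletion charges, which matter if cold data is actually read; factor those in before moving anything compliance-sensitive.</p>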

<p><strong>Managed service economics</strong> cut both ways. Sometimes managed services save money once operational overhead is factored in. Sometimes they’re egregiously overpriced at scale compared to self-hosted alternatives. Evaluate each category on its own merits rather than applying a blanket policy.</p>

<p><strong>Over-engineered systems</strong> carry overhead that compounds. A distributed system with more moving parts than the problem requires costs more to run and more to maintain. Simplicity is a cost optimization strategy.</p>

<p>These optimizations are selective, not universal. Focus effort where it materially impacts total spend, not where it’s technically interesting.</p>

<hr />

<h2 id="what-makes-it-stick">What Makes It Stick</h2>

<p>Most cost optimization efforts fail because they are treated as one-time projects instead of ongoing operational practices. The cost comes back. Sometimes faster than it left.</p>

<p>Sustainable outcomes require structural change:</p>

<ul>
  <li><strong>Teams see and own their cost</strong> — weekly visibility, not quarterly reports</li>
  <li><strong>Cost is reviewed regularly</strong>, not reactively after a budget breach</li>
  <li><strong>Optimization is built into engineering cycles</strong> — not a separate initiative</li>
  <li><strong>Cross-team collaboration exists</strong> to share patterns and decisions</li>
  <li><strong>Cost sits alongside performance and reliability</strong> as a first-class engineering metric</li>
</ul>

<p>When cost becomes part of how systems are built and operated, improvements persist. When it doesn’t, regression is guaranteed — usually within two quarters of the project being declared complete.</p>

<hr />

<h2 id="bottom-line">Bottom Line</h2>

<p>Cloud cost optimization is not a technical challenge. It is an operational one.</p>

<p>If ownership, visibility, and accountability are not in place first, no amount of tooling or architectural sophistication will produce lasting results. I’ve seen organizations spend six figures on FinOps platforms while their bill continued to grow — because the platform couldn’t fix the fact that no one was accountable for the outcome.</p>

<p>Get the operational foundation right. The technical optimizations are straightforward once you do.</p>

<p><strong>The question to answer before starting any cost reduction effort:</strong> <em>Who is accountable for this number, and what can they actually change?</em> If you can’t answer that clearly, start there.</p>

<hr />

<p><em>Have a cost problem that’s resisted previous optimization attempts? The cause is almost always organizational, not architectural. <a href="/contact/">I’m happy to talk through it.</a></em></p>]]></content><author><name>Darren Bell</name></author><category term="FinOps" /><category term="FinOps" /><category term="cloud costs" /><category term="Azure" /><category term="cost optimization" /><category term="reserved instances" /><category term="tagging" /><category term="licensing" /><summary type="html"><![CDATA[Most cloud cost problems don't start with architecture. They start with ownership. Here's the four-phase operational framework that actually delivers lasting results.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://darrenbell.net/assets/images/og-default.svg" /><media:content medium="image" url="https://darrenbell.net/assets/images/og-default.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Multi-Tenant Doesn’t Have to Mean Multi-Login</title><link href="https://darrenbell.net/blog/2026/03/19/cross-tenant-sync-mfa-trust/" rel="alternate" type="text/html" title="Multi-Tenant Doesn’t Have to Mean Multi-Login" /><published>2026-03-19T00:00:00+00:00</published><updated>2026-03-19T00:00:00+00:00</updated><id>https://darrenbell.net/blog/2026/03/19/cross-tenant-sync-mfa-trust</id><content type="html" xml:base="https://darrenbell.net/blog/2026/03/19/cross-tenant-sync-mfa-trust/"><![CDATA[<p>A holding company running multiple subsidiaries on separate Microsoft 365 tenants has a collaboration problem that looks simple on the surface and isn’t.</p>

<p>The people doing the work need files from multiple tenants. They don’t know or care what a tenant boundary is — they just want access. What they get instead is repeated MFA prompts every time they cross a tenant boundary, a confusing login experience that varies depending on which subsidiary’s resources they’re trying to reach, and IT fielding a steady stream of access requests and guest account issues.</p>

<p>The obvious fixes are all expensive in different ways. License users in every tenant they need access to — now you’re paying for the same person three times. Consolidate the tenants — a project that takes months, disrupts operations, and may not be feasible if the subsidiaries need to remain operationally distinct. Use guest access everywhere — you’ve just traded licensing cost for management burden and you still haven’t fixed the MFA friction.</p>

<p>There’s a fourth option. It’s just not the first thing most people reach for.</p>

<hr />

<h2 id="the-business-problem">The Business Problem</h2>

<p>The holding company I worked with had subsidiaries on separate M365 tenants. Each subsidiary had its own SharePoint environment with files that holding company staff needed to access regularly — not occasionally, not for a specific project, but as part of their normal workday.</p>

<p>Two things were non-negotiable:</p>

<ul>
  <li>Users should not be licensed in multiple tenants. The cost wasn’t justified when the same person already had an M365 license in the holding company tenant.</li>
  <li>There should be one MFA prompt. Getting challenged every time you cross a tenant boundary trains users to resent security controls. That resentment eventually becomes pressure on IT to weaken them.</li>
</ul>

<p>The goal was a single sign-on experience with a single access point, where holding company staff could reach files in any subsidiary SharePoint without re-authenticating.</p>

<hr />

<h2 id="the-architecture-three-components-working-together">The Architecture: Three Components Working Together</h2>

<p><strong>Cross-tenant synchronization</strong> starts in Entra ID. You configure a sync from the holding company tenant outbound to each subsidiary tenant. The holding company’s users are provisioned in each subsidiary tenant as B2B collaboration users with member-level access — not guests. They show up in directory search, they’re manageable at the tenant level, and when someone leaves the holding company, the sync removes their presence in every subsidiary automatically. No manual guest account cleanup across four tenants.</p>

<p><strong>Inbound MFA trust</strong> is what eliminates the re-authentication problem. In each subsidiary tenant’s cross-tenant access policies, you configure inbound trust to accept MFA claims from the holding company tenant. When a synced user completes MFA in their home environment — the holding company tenant — that authentication is honored when they access resources in any subsidiary. The subsidiary tenants aren’t skipping MFA. They’re trusting that it already happened, which it did, in the right place.</p>
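<p>This can be configured in the Entra admin center, or via Microsoft Graph's cross-tenant access policy. The sketch below only builds the partner-configuration body rather than calling the API; the property names follow the Graph <code>crossTenantAccessPolicy</code> inbound trust schema as I understand it, so verify against current Microsoft Graph documentation before use:</p>

```python
import json

def inbound_trust_payload(accept_mfa=True, accept_compliant_device=False):
    """Build the partner-configuration body that tells a subsidiary
    tenant to honor MFA claims from the home tenant. Property names per
    the Graph crossTenantAccessPolicy schema; verify before use."""
    return {
        "inboundTrust": {
            "isMfaAccepted": accept_mfa,
            "isCompliantDeviceAccepted": accept_compliant_device,
        }
    }

# PATCH this against /policies/crossTenantAccessPolicy/partners/{tenantId}
# in each subsidiary tenant, using the holding company's tenant ID.
print(json.dumps(inbound_trust_payload(), indent=2))
```

<p>The direction matters: this is configured in each <em>subsidiary</em> tenant, accepting claims <em>from</em> the holding company tenant, not the other way around.</p>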

<p><strong>The SharePoint landing page</strong> is the piece that makes this feel like a single system to the end user. Each subsidiary has its own SharePoint site with its own content. In the holding company tenant, we built a SharePoint hub that functions as the access point — tiles and links out to the relevant subsidiary SharePoint sites, organized the way the business is organized. Users log in once, land on the holding company SharePoint, and navigate to files in any subsidiary from there. The underlying tenant boundaries are invisible.</p>

<hr />

<h2 id="what-the-experience-looks-like">What the Experience Looks Like</h2>

<p>Before: a user needs a document in a subsidiary’s SharePoint. They navigate to an unfamiliar URL, get prompted to sign in again, get an MFA challenge they weren’t expecting, and either complete the friction or submit a help desk ticket asking why they need to log in twice.</p>

<p>After: they go to the holding company SharePoint landing page they already have bookmarked. They see tiles for each subsidiary’s content area. They click through. The files are there. No additional sign-in. No second MFA prompt. The tenant boundary is invisible.</p>

<p>For IT, the management surface simplifies. User provisioning and deprovisioning is managed in one place — the holding company tenant. The sync handles propagation. Offboarding a user removes their access across all subsidiary tenants as part of the standard process, not as a separate checklist item.</p>

<hr />

<h2 id="what-to-watch">What to Watch</h2>

<p>This architecture is not a permanent set-it-and-forget-it configuration. A few things to maintain:</p>

<p>The cross-tenant access policies and sync scope need to match organizational reality. If a subsidiary is acquired, divested, or restructured, the identity trust relationship needs to be updated. Trust that outlives the organizational relationship that justified it becomes an unnecessary access vector.</p>

<p>Scope the sync carefully. Not every holding company user needs to be provisioned in every subsidiary tenant. Sync the users who actually need access to each tenant’s resources. The principle of least privilege applies at the tenant level, not just the file level.</p>

<p>The SharePoint landing page architecture requires governance like any other SharePoint environment — someone needs to own it, keep the links current, and manage what gets surfaced where. It’s not complex, but it’s not self-maintaining.</p>

<hr />

<p><em>Running a multi-tenant M365 environment and dealing with cross-tenant access friction? The licensing and authentication architecture matters more than most people realize before they’ve tried to untangle it. <a href="/contact/">Happy to talk through the specifics.</a></em></p>]]></content><author><name>Darren Bell</name></author><category term="Microsoft 365" /><category term="Microsoft 365" /><category term="Entra ID" /><category term="cross-tenant sync" /><category term="MFA trust" /><category term="SharePoint" /><category term="identity" /><category term="collaboration" /><category term="holding company" /><summary type="html"><![CDATA[A holding company with multiple subsidiaries needed their people to access SharePoint files across tenants without multiple logins, duplicate licenses, or MFA fatigue. Here's the architecture that got them there.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://darrenbell.net/assets/images/og-default.svg" /><media:content medium="image" url="https://darrenbell.net/assets/images/og-default.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Why File Servers Are a Leadership Problem, Not a Storage Problem</title><link href="https://darrenbell.net/blog/2026/03/12/sharepoint-digital-transformation-file-servers/" rel="alternate" type="text/html" title="Why File Servers Are a Leadership Problem, Not a Storage Problem" /><published>2026-03-12T00:00:00+00:00</published><updated>2026-03-12T00:00:00+00:00</updated><id>https://darrenbell.net/blog/2026/03/12/sharepoint-digital-transformation-file-servers</id><content type="html" xml:base="https://darrenbell.net/blog/2026/03/12/sharepoint-digital-transformation-file-servers/"><![CDATA[<p>Every organization knows file servers are a problem. 
The evidence is visible in daily work: shared drives full of folders named “OLD,” documents circulated via email because no one trusts the shared drive to have the current version, remote workers who need a VPN and a specific mapped drive to access files they should be able to reach from anywhere.</p>

<p>The solution is well understood. SharePoint Online, as part of Microsoft 365, provides the document management, collaboration, and accessibility that file servers don’t. The technology is available. Most organizations already have the licensing.</p>

<p>The migration doesn’t happen — or happens badly — because it’s treated as an IT project.</p>

<p>IT can migrate the files. IT cannot migrate the behavior. And behavior is what determines whether the new environment works.</p>

<hr />

<h2 id="what-file-servers-actually-cost-organizations">What File Servers Actually Cost Organizations</h2>

<p>The visible costs of file servers are the storage infrastructure, the backup systems, and the VPN access that remote work made more expensive and more fragile. These are real but not the primary cost.</p>

<p>The invisible costs are larger.</p>

<p><strong>Knowledge accessibility.</strong> Files on a shared drive are only accessible to people who know where to look and who have been given explicit permission to look there. Institutional knowledge lives in folder structures that made sense to the person who created them years ago. New employees spend weeks learning where things are. People who leave take navigational knowledge with them.</p>

<p><strong>Version chaos.</strong> “Final,” “Final_v2,” “Final_USE THIS ONE,” “Final_ACTUALLY FINAL” — every organization with file servers has this problem. The cost isn’t just the frustration; it’s the time spent confirming which version is current, the errors that come from working on the wrong version, and the meetings that exist primarily to establish which document is the authoritative one.</p>

<p><strong>Collaboration friction.</strong> Two people cannot edit the same document simultaneously on a file server. The workflow becomes: download, edit, re-upload, notify. Or email the document back and forth. Or put it in a Teams chat. The result is version proliferation and collaboration that’s slower and more error-prone than it needs to be.</p>

<p><strong>Remote access dependency.</strong> File servers were designed for on-premises access. Remote access requires VPN, which is a performance and security overhead that became much more expensive when remote work normalized. Organizations that didn’t address this before 2020 discovered the hard way how brittle file server-dependent workflows are.</p>

<p>None of these costs show up cleanly in IT budgets. They show up in productivity, in employee frustration, and in the quiet organizational dysfunction that comes from people working around their tools rather than with them.</p>

<hr />

<h2 id="why-migrations-fail">Why Migrations Fail</h2>

<p>The technical migration — moving files from a file server to SharePoint — is not the hard part. Tools exist. The process is well-documented. Microsoft provides migration tooling that handles the mechanics.</p>

<p>Migrations fail for organizational reasons:</p>

<p><strong>Migrating the folder structure instead of redesigning it.</strong> The instinct is to replicate the existing shared drive structure in SharePoint. This is the worst outcome — it preserves all the organizational dysfunction of the file server, minus the benefits of SharePoint’s search, metadata, and collaboration features. If you migrate a broken folder structure, you get a broken SharePoint.</p>

<p><strong>No information architecture decision before migration.</strong> SharePoint is not just a place to store files. It’s a platform for organizing and surfacing information. How you structure site collections, document libraries, and metadata determines how usable the result is. This is a design decision that requires input from the people who will use it — not just IT.</p>

<p><strong>Treating it as a project with an end date.</strong> The migration might have an end date. The adoption work doesn’t. People who’ve been using shared drives for years will default back to old behaviors unless there’s consistent reinforcement, training, and — most importantly — leadership that uses the new environment visibly.</p>

<p><strong>No champions outside IT.</strong> A SharePoint migration that is owned entirely by IT will be perceived as an IT initiative that was done to people, not with them. The organizations that succeed have champions in each business unit — people who were involved in the design, understand the value, and can support adoption within their teams.</p>

<hr />

<h2 id="what-a-successful-migration-actually-requires">What a Successful Migration Actually Requires</h2>

<p>The starting point is not file inventory. It’s stakeholder conversations.</p>

<p>Who are the heaviest users of the current shared drives? What do they actually need to do with documents — who needs to access them, who needs to create and edit them, how do they need to find them? What does the governance model look like — who is accountable for keeping content current and organized?</p>

<p>The answers to those questions drive the information architecture. The architecture drives the migration design. The migration is the last step, not the first.</p>

<p>The technical implementation — site structure, library design, permissions through security groups rather than direct assignments, metadata taxonomy, search configuration — follows from knowing what the organization actually needs rather than from copying what it had.</p>

<p>The permissions model deserves particular attention. File server permissions tend to accumulate complexity over years of individual exceptions. A SharePoint migration is an opportunity to clean this up: move to security group-based permissions managed through Entra ID, establish a governance model for who can create new sites and libraries, and document the model so it can be maintained.</p>

<hr />

<h2 id="the-outcome-is-organizational-not-technical">The Outcome Is Organizational, Not Technical</h2>

<p>A successful SharePoint migration is visible in how people work, not in a dashboard.</p>

<p>People stop emailing documents back and forth because co-authoring in SharePoint is easier. Remote workers stop complaining about VPN because SharePoint is browser-accessible from anywhere. Search starts returning useful results because files are named and organized consistently. Onboarding new employees takes less time because institutional knowledge is findable rather than requiring someone to walk you through the folder structure.</p>

<p>The technology enables all of this. But the technology doesn’t create it on its own. The information architecture decisions, the adoption work, and the organizational commitment to using the new environment — these are what separate a file migration from a transformation.</p>

<p>That’s why this is a leadership problem. IT can build the right platform. Leadership has to make the organizational decision to actually use it differently.</p>

<hr />

<p><em>Planning a SharePoint migration or trying to revive one that stalled? The organizational design questions are usually where the project is stuck. <a href="/contact/">I’m happy to think through the approach.</a></em></p>]]></content><author><name>Darren Bell</name></author><category term="Technology Leadership" /><category term="SharePoint" /><category term="Microsoft 365" /><category term="digital transformation" /><category term="file servers" /><category term="migration" /><category term="knowledge management" /><summary type="html"><![CDATA[Every organization knows file servers are a problem. Most treat it as a storage or technology decision. The ones that successfully migrate to SharePoint treat it as a leadership and organizational design decision first.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://darrenbell.net/assets/images/og-default.svg" /><media:content medium="image" url="https://darrenbell.net/assets/images/og-default.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Scaling MSP Operations Without Scaling Headcount: Standardizing Microsoft 365 with CIPP</title><link href="https://darrenbell.net/blog/2026/03/05/standardizing-msp-client-environments-cipp/" rel="alternate" type="text/html" title="Scaling MSP Operations Without Scaling Headcount: Standardizing Microsoft 365 with CIPP" /><published>2026-03-05T00:00:00+00:00</published><updated>2026-03-05T00:00:00+00:00</updated><id>https://darrenbell.net/blog/2026/03/05/standardizing-msp-client-environments-cipp</id><content type="html" xml:base="https://darrenbell.net/blog/2026/03/05/standardizing-msp-client-environments-cipp/"><![CDATA[<p>Every MSP reaches the same inflection point: you’ve grown the client base, but the way you manage each environment hasn’t changed. Every new client is onboarded slightly differently. Security policies that should be consistent vary by who did the setup. 
Something changes in a client tenant — a conditional access policy gets modified, a licensing assignment shifts — and you find out when the client calls, not before.</p>

<p>At that point, you’re not running a scalable business. You’re running a collection of individual consulting relationships, each requiring its own institutional knowledge, each carrying its own risk of configuration drift. Growth makes it worse, not better.</p>

<p>The operational problem isn’t lack of capability. It’s lack of standardization.</p>

<hr />

<h2 id="the-business-problem-with-manual-multi-tenant-management">The Business Problem with Manual Multi-Tenant Management</h2>

<p>In a managed services environment, the hidden cost isn’t the time spent on each task — it’s the cognitive overhead of managing environments that don’t conform to a standard.</p>

<p>When every client is configured differently, every support interaction starts with “let me check how this tenant is set up.” Onboarding a new team member means teaching them the quirks of each environment rather than handing them a playbook. A security incident in one tenant triggers a manual audit of all tenants, because there’s no automated way to know which ones have the affected configuration.</p>

<p>The deeper problem is that configuration drift is invisible until it isn’t. Tenants that were set up correctly at onboarding accumulate changes — admin settings toggled for troubleshooting, security defaults modified for a specific use case, licensing changes that create coverage gaps. Without a system to detect and surface drift, you’re operating on the assumption that things are still configured the way you think they are.</p>

<p>That assumption fails at the worst possible times.</p>

<hr />

<h2 id="the-approach-centralized-management-and-drift-detection-with-cipp">The Approach: Centralized Management and Drift Detection with CIPP</h2>

<p>CIPP (the CyberDrain Improved Partner Portal) addresses the core operational problem directly: it provides a single management interface across all Microsoft 365 tenants, with the ability to deploy standardized configuration templates and monitor for drift from those standards.</p>

<p>The operational shift this enables:</p>

<p><strong>Standardized onboarding templates.</strong> Rather than configuring each new client tenant from scratch, onboarding becomes a matter of applying a baseline template — conditional access policies, security defaults, licensing assignments, Teams settings, Exchange configurations — and then customizing from a known starting point. The work shifts from configuration to exception management.</p>

<p><strong>Drift detection and alerting.</strong> When a setting changes in a client tenant — whether through admin error, a vendor making changes, or a user with elevated privileges — CIPP surfaces that deviation from the standard. You know the configuration drifted before the client does, and before it becomes a security issue.</p>
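<p>Stripped of the tooling, drift detection is a diff between a baseline and each tenant's effective settings. A toy sketch with invented setting names, to show the shape of what runs continuously across every tenant:</p>

```python
def detect_drift(baseline, tenant_config):
    """Compare a tenant's effective settings against the standard
    baseline. Returns (setting, expected, actual) for each deviation."""
    drift = []
    for setting, expected in baseline.items():
        actual = tenant_config.get(setting, "<missing>")
        if actual != expected:
            drift.append((setting, expected, actual))
    return drift

baseline = {
    "security_defaults": "enabled",
    "legacy_auth": "blocked",
    "ca_policy_require_mfa": "enabled",
}
# Someone toggled legacy auth "temporarily" during troubleshooting.
tenant = {
    "security_defaults": "enabled",
    "legacy_auth": "allowed",
    "ca_policy_require_mfa": "enabled",
}
for setting, expected, actual in detect_drift(baseline, tenant):
    print(f"DRIFT: {setting} expected {expected}, found {actual}")
```

<p>The hard work is not the diff; it is agreeing on the baseline in the first place and keeping it authoritative as the standard evolves.</p>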

<p><strong>Cross-tenant operations at scale.</strong> Tasks that previously required logging into each tenant individually — password resets, license assignments, security group changes — can be executed from a single interface across multiple tenants simultaneously.</p>

<hr />

<h2 id="what-actually-changed">What Actually Changed</h2>

<p>The operational impact was most visible in two places.</p>

<p>First, onboarding. The time to stand up a new client environment dropped significantly, and more importantly, the result became consistent. The first hour after a new client signs isn’t figuring out what they need — it’s applying what we already know they need, plus their specific requirements on top.</p>

<p>Second, the reactive-to-proactive shift. Drift detection changed the nature of client interactions around security and compliance. Instead of responding to configuration issues when clients notice something wrong, the conversation becomes: “We detected a change to your conditional access policy last week — here’s what it was, here’s why it matters, here’s what we did.” That’s a fundamentally different client relationship.</p>

<p>From a business perspective, that shift matters beyond the operational efficiency. Clients who see proactive management perceive more value in the relationship. The same technical work — catching a configuration change — lands differently when you surface it before it causes a problem.</p>

<hr />

<h2 id="the-broader-principle">The Broader Principle</h2>

<p>CIPP is a tool. The underlying principle — that scalable MSP operations require standardization before automation, and monitoring before incidents — applies regardless of tooling.</p>

<p>Any MSP managing more than a handful of client environments needs to answer two questions honestly: Do we have a defined standard that every client environment should conform to? And do we have visibility into which environments have drifted from that standard?</p>

<p>If the answer to either question is no, growth is working against you. Every new client adds complexity rather than leverage, and the team that felt manageable at 20 clients feels overwhelmed at 50.</p>

<p>Standardization isn’t a constraint on serving clients well. It’s what makes serving clients well at scale possible.</p>

<hr />

<p><em>Managing Microsoft 365 environments across multiple clients or tenants? The standardization and drift detection problems are solvable. <a href="/contact/">I’m happy to compare approaches.</a></em></p>]]></content><author><name>Darren Bell</name></author><category term="Infrastructure Ops" /><category term="MSP" /><category term="CIPP" /><category term="Microsoft 365" /><category term="standardization" /><category term="automation" /><category term="operational discipline" /><category term="multi-tenant" /><summary type="html"><![CDATA[Every MSP reaches the same inflection point: the number of client environments outpaces the team's ability to manage them individually. The answer isn't more people — it's operational standardization. Here's how we used CIPP to get there.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://darrenbell.net/assets/images/og-default.svg" /><media:content medium="image" url="https://darrenbell.net/assets/images/og-default.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">PowerShell as Operational Discipline: Automation Lessons from MSP-Scale IT</title><link href="https://darrenbell.net/blog/2026/02/26/infrastructure-as-code-at-scale/" rel="alternate" type="text/html" title="PowerShell as Operational Discipline: Automation Lessons from MSP-Scale IT" /><published>2026-02-26T00:00:00+00:00</published><updated>2026-02-26T00:00:00+00:00</updated><id>https://darrenbell.net/blog/2026/02/26/infrastructure-as-code-at-scale</id><content type="html" xml:base="https://darrenbell.net/blog/2026/02/26/infrastructure-as-code-at-scale/"><![CDATA[<p>PowerShell gets underestimated.</p>

<p>In conversations about automation and infrastructure as code, the tools that get attention are usually the ones with cloud-native integrations, declarative configurations, and CI/CD pipelines. PowerShell — the workhorse of Microsoft-first environments — tends to get positioned as a legacy automation tool rather than a strategic capability.</p>

<p>That framing is wrong, and expensive.</p>

<p>In a Microsoft-first environment — M365, Azure, Entra ID, Intune — PowerShell is not just the most practical automation tool. It’s often the <em>only</em> tool that provides the access depth the work requires. And in a managed services context, where you’re operating across multiple client environments, the discipline of PowerShell automation is what separates teams that scale from teams that hire more people to keep up with volume.</p>

<hr />

<h2 id="the-operational-problem-automation-solves">The Operational Problem Automation Solves</h2>

<p>The promise of automation is usually framed as time savings: “this task takes 45 minutes manually, but 2 minutes with the script.” That framing undervalues what automation actually delivers.</p>

<p>The real value of automation is <strong>operational consistency</strong>.</p>

<p>When a process runs from a script, it runs the same way every time. The same security groups get assigned. The same licenses get applied. The same compliance policies get attached. The same documentation gets generated. There’s no variation because someone was rushing, forgot a step, or applied the process slightly differently than the last person who did it.</p>

<p>In an MSP context, where you’re onboarding users and managing configurations across multiple client environments with different standards, that consistency is the difference between an operation that scales and one that depends on institutional knowledge that walks out the door when someone leaves.</p>

<p>At ClowdCover, building consistent PowerShell-based provisioning workflows reduced onboarding time by 25%. The reduction in hours was real, but the more important outcome was that the process became reliable enough to delegate — any team member with the script could execute an onboarding without tribal knowledge.</p>

<hr />

<h2 id="what-powershell-actually-covers-in-a-microsoft-environment">What PowerShell Actually Covers in a Microsoft Environment</h2>

<p>The Microsoft Graph API and the Exchange, SharePoint, Teams, Intune, and Azure PowerShell modules provide comprehensive programmatic access to the Microsoft ecosystem. In practice, this means you can automate nearly any administrative operation that you’d otherwise perform manually.</p>

<p><strong>The highest-leverage automation targets in M365 and Azure:</strong></p>

<p><strong>User lifecycle management.</strong> New user creation, license assignment, security group membership, Intune enrollment trigger, Teams membership — all driven from a single script that takes HR attributes as input and produces a fully provisioned user as output. The same script handles offboarding: license reclaim, group removal, mailbox conversion, account disable, litigation hold if required.</p>

<p>Lifecycle automation is particularly high-value in healthcare and regulated environments because it produces the audit trail that compliance requires. Every provisioning and deprovisioning event is logged in the script execution record, which is something auditors look for.</p>
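<p>A hedged sketch of what the provisioning half of that script looks like with the Microsoft Graph PowerShell module — the SKU ID, group variable, and attribute values below are placeholders, and a production version would take them as parameters driven by HR data:</p>

```powershell
# Sketch of Graph-based provisioning. Requires the Microsoft.Graph module;
# IDs and attribute values are placeholders, not a real tenant's.
Connect-MgGraph -Scopes 'User.ReadWrite.All', 'Group.ReadWrite.All'

$user = New-MgUser -DisplayName 'Jane Doe' `
    -UserPrincipalName 'jane.doe@contoso.com' `
    -MailNickname 'jane.doe' `
    -AccountEnabled `
    -PasswordProfile @{ Password = (New-Guid).Guid; ForceChangePasswordNextSignIn = $true }

# License assignment (SkuId is a placeholder GUID)
Set-MgUserLicense -UserId $user.Id `
    -AddLicenses @(@{ SkuId = '00000000-0000-0000-0000-000000000000' }) `
    -RemoveLicenses @()

# Security group membership drives downstream access and Intune targeting
New-MgGroupMember -GroupId $departmentGroupId -DirectoryObjectId $user.Id
```

<p>Offboarding is the same pattern in reverse, which is why both directions belong in the same script library.</p>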

<p><strong>License management and reporting.</strong> M365 licensing is expensive and consistently over-provisioned. A weekly script that queries assigned licenses against last sign-in data and flags accounts that haven’t been active in 30+ days gives you continuous visibility into licensing waste. The same query, run before a licensing audit, is the starting point for meaningful cost reduction.</p>
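<p>The weekly query itself is short. A sketch, assuming the Microsoft.Graph module and an Entra ID P1 license (which the <code>signInActivity</code> property requires):</p>

```powershell
# Flag licensed accounts with no sign-in in the last 30 days.
# Requires User.Read.All and AuditLog.Read.All; signInActivity needs Entra ID P1.
Connect-MgGraph -Scopes 'User.Read.All', 'AuditLog.Read.All'

$cutoff = (Get-Date).AddDays(-30)
Get-MgUser -All -Property 'displayName,userPrincipalName,assignedLicenses,signInActivity' |
    Where-Object {
        $_.AssignedLicenses.Count -gt 0 -and
        $_.SignInActivity.LastSignInDateTime -lt $cutoff
    } |
    Select-Object DisplayName, UserPrincipalName,
        @{ n = 'LastSignIn'; e = { $_.SignInActivity.LastSignInDateTime } } |
    Export-Csv '.\stale-licensed-accounts.csv' -NoTypeInformation
```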

<p>When I ran a systematic licensing audit using this approach, the output was $48,000 in annual savings from reclaiming licenses on accounts that had accumulated through turnover, role changes, and service trials that were never cleaned up. The script took a few hours to write. The savings were immediate and permanent.</p>

<p><strong>Security posture reporting.</strong> Entra ID and Microsoft 365 Defender expose security configuration data through PowerShell and the Graph API. A weekly report that surfaces accounts without MFA, devices out of compliance with Intune policy, or sign-in risk flags gives the IT team the visibility to act before a problem becomes an incident.</p>

<p>In a healthcare environment, this reporting is also the evidence base for compliance conversations. Being able to pull an MFA adoption report on demand — showing percentage of users enrolled, broken down by department — is the kind of data that makes HIPAA compliance conversations concrete rather than theoretical.</p>
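<p>A sketch of that report from the Graph authentication-methods registration endpoint — the department breakdown is omitted for brevity, but in practice you'd join on user ID against <code>Get-MgUser</code> output:</p>

```powershell
# MFA registration coverage from the authentication methods report.
# Requires AuditLog.Read.All and the Microsoft.Graph.Reports module.
Connect-MgGraph -Scopes 'AuditLog.Read.All'

$details    = Get-MgReportAuthenticationMethodUserRegistrationDetail -All
$registered = ($details | Where-Object IsMfaRegistered).Count
'{0} of {1} users MFA-registered ({2:P1})' -f $registered, $details.Count,
    ($registered / $details.Count)
```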

<p><strong>Azure resource tagging and cost attribution.</strong> Cost governance in Azure depends on accurate tagging. A script that audits resources against your tagging standard and reports on untagged or incorrectly tagged resources — run weekly and piped to a shared dashboard or email report — keeps tagging compliance from drifting. Cost attribution is only as good as the tagging it’s based on.</p>
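<p>A sketch of that audit against the Az module — the required-tag list is an example standard, not a canonical one:</p>

```powershell
# Report resources missing required tags. Requires the Az.Resources module.
Connect-AzAccount

$required = 'CostCenter', 'Owner', 'Environment'   # example tagging standard

Get-AzResource | ForEach-Object {
    $resource = $_
    $missing  = $required | Where-Object {
        -not ($resource.Tags -and $resource.Tags.ContainsKey($_))
    }
    if ($missing) {
        [pscustomobject]@{
            Name          = $resource.Name
            ResourceGroup = $resource.ResourceGroupName
            MissingTags   = $missing -join ', '
        }
    }
} | Export-Csv '.\tag-audit.csv' -NoTypeInformation
```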

<hr />

<h2 id="building-scripts-that-are-operationally-maintainable">Building Scripts That Are Operationally Maintainable</h2>

<p>The failure mode for PowerShell automation isn’t writing scripts that don’t work — it’s writing scripts that work initially and then become unmaintainable.</p>

<p>Scripts that no one understands get abandoned when they break. Scripts that only one person understands create single points of failure. Scripts that work in one environment and fail in another create incidents when someone runs them in the wrong context.</p>

<p><strong>The practices that produce maintainable automation:</strong></p>

<p><strong>Write for the person who inherits the script, not for yourself.</strong> The most valuable comment in a script isn’t “this does X” — it’s “this does X because of Y, and if you change it you’ll break Z.” Business context and operational reasoning belong in comments. The code says what it does; the comments say why.</p>

<p><strong>Parameterize everything that might vary across environments.</strong> Client name, tenant ID, license SKU, group naming convention — these belong in parameters or a configuration section at the top of the script, not scattered throughout the logic. A script that requires editing the body to run against a different client is a script that will eventually run against the wrong client.</p>

<p><strong>Build in error handling and logging.</strong> A script that fails silently is worse than a script that doesn’t exist. Every significant operation should produce output that tells you whether it succeeded, and failures should be logged with enough context to diagnose. In an MSP context, this also means your client-facing reporting can be driven from the script’s own output.</p>

<p><strong>Version control your scripts.</strong> Scripts that live in a shared network folder with names like <code class="language-plaintext highlighter-rouge">provision-user-FINAL-v2-USE THIS ONE.ps1</code> are an operational risk. Even basic version control — a simple Git repository — gives you change history, the ability to roll back, and a canonical source of truth.</p>
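<p>Pulled together, the practices above look something like this skeleton — the parameter names and log path are illustrative, not prescriptive:</p>

```powershell
# Skeleton of a maintainable provisioning script: parameters up top,
# logging on every significant operation, failures loud and diagnosable.
[CmdletBinding()]
param(
    [Parameter(Mandatory)][string]$TenantId,
    [Parameter(Mandatory)][string]$ClientName,
    [Parameter(Mandatory)][string]$LicenseSkuId
)

function Write-Log {
    param([string]$Message)
    # Timestamped, per-client log doubles as the audit trail
    "$(Get-Date -Format o)  $Message" |
        Tee-Object -FilePath ".\$ClientName-provision.log" -Append
}

try {
    Write-Log "Connecting to tenant $TenantId"
    # Connect-MgGraph -TenantId $TenantId -Scopes 'User.ReadWrite.All'
    # ...provisioning steps, each logged via Write-Log...
    Write-Log 'Provisioning complete'
}
catch {
    # Never fail silently: log the error with context, then rethrow
    Write-Log "FAILED: $($_.Exception.Message)"
    throw
}
```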

<hr />

<h2 id="automation-as-a-forcing-function-for-standardization">Automation as a Forcing Function for Standardization</h2>

<p>One insight from building MSP-scale automation: <strong>you cannot automate what isn’t standardized.</strong></p>

<p>If every client has a different licensing model, a different group naming convention, and a different onboarding checklist, you cannot build a single provisioning script that works across clients. You build one script per client, which means the automation hasn’t reduced your operational complexity — it’s just encoded it differently.</p>

<p>This is where automation becomes a business conversation, not just a technical one. The discipline of building reusable, cross-client automation requires working with clients to standardize the configurations and processes the automation depends on. That’s a client management conversation, a sales conversation, and an operational design conversation — not just a scripting conversation.</p>

<p>The organizations that get the most leverage from automation are the ones that use the automation requirement as a forcing function for standardization. The discipline makes the operation more consistent and more scalable, not just faster.</p>

<hr />

<h2 id="the-strategic-case-for-powershell-investment">The Strategic Case for PowerShell Investment</h2>

<p>For IT leaders building the case for automation investment — whether that’s time to build scripts, tooling, or dedicated resources — the ROI framing that works with executives is not hours saved per task.</p>

<p>It’s this:</p>

<p><strong>Every manual process that runs from a script is a process that no longer depends on specific individuals, runs consistently at any scale, and produces an audit trail.</strong> That’s not a time savings. That’s organizational capability.</p>

<p>In a healthcare environment, the audit trail argument alone often carries the investment decision. In a managed services environment, the scaling argument does. The combination — consistent, auditable, scalable operations — is the strongest case for treating automation as an operational priority rather than a nice-to-have.</p>

<p>The tooling investment is modest. The PowerShell modules and Graph API access are included in your Microsoft licensing. What requires investment is the engineering discipline to build automation that’s maintainable, the organizational discipline to standardize the processes it runs, and the time to build the library of scripts that covers your operational baseline.</p>

<p>That investment pays back quickly, and it compounds.</p>

<hr />

<p><em>Running Microsoft 365 or Azure in an MSP or enterprise environment? PowerShell automation is where operational leverage lives. <a href="/contact/">I’m happy to compare notes.</a></em></p>]]></content><author><name>Darren Bell</name></author><category term="Cloud Architecture" /><category term="PowerShell" /><category term="automation" /><category term="Microsoft 365" /><category term="Azure" /><category term="MSP" /><category term="operational discipline" /><category term="Entra ID" /><summary type="html"><![CDATA[The value of PowerShell automation in a Microsoft-first environment isn't the hours saved on individual tasks. It's the operational consistency it produces across every client and every environment you manage. Here's how to build it right.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://darrenbell.net/assets/images/og-default.svg" /><media:content medium="image" url="https://darrenbell.net/assets/images/og-default.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">HIPAA in the Cloud: What Healthcare IT Actually Requires at the Operational Level</title><link href="https://darrenbell.net/blog/2026/02/19/sre-in-practice-from-concepts-to-culture/" rel="alternate" type="text/html" title="HIPAA in the Cloud: What Healthcare IT Actually Requires at the Operational Level" /><published>2026-02-19T00:00:00+00:00</published><updated>2026-02-19T00:00:00+00:00</updated><id>https://darrenbell.net/blog/2026/02/19/sre-in-practice-from-concepts-to-culture</id><content type="html" xml:base="https://darrenbell.net/blog/2026/02/19/sre-in-practice-from-concepts-to-culture/"><![CDATA[<p>HIPAA compliance failures rarely happen because the technology was wrong.</p>

<p>They happen because someone shared a file the wrong way. Because a terminated employee’s account wasn’t deprovisioned on the same day their badge was deactivated. Because a vendor relationship existed without a signed BAA because procurement didn’t flag it. Because audit logs existed but no one was actually reviewing them.</p>

<p>The technology can be configured correctly, and an organization can still fail an audit — because the compliance posture is not the configuration. It’s the operational discipline that keeps the configuration meaningful over time.</p>

<p>I spent six years owning the IT compliance posture in a healthcare environment. Here’s what I learned about what it actually takes.</p>

<hr />

<h2 id="compliance-is-an-operational-state-not-a-project-outcome">Compliance Is an Operational State, Not a Project Outcome</h2>

<p>The most dangerous framing in healthcare IT compliance is treating it as a project.</p>

<p>Projects have completion dates. You implement the controls, you document them, you pass the audit, you move on. But HIPAA doesn’t have a completion date. The threat landscape changes. Personnel turn over. New systems get added. Vendors change their practices. Each of these events is a potential gap in your compliance posture if you’re not actively maintaining it.</p>

<p>The organizations that maintain clean compliance postures treat it as a continuous operational discipline:</p>

<ul>
  <li><strong>Access reviews happen on a schedule</strong>, not just after incidents</li>
  <li><strong>New vendor relationships trigger a BAA review process automatically</strong>, not as an afterthought</li>
  <li><strong>Terminated employees lose access same-day</strong> because the process is automated, not because IT manually handles every offboarding</li>
  <li><strong>Policy exceptions are documented</strong> because the documentation requirement is built into the exception process</li>
</ul>

<p>None of these require expensive tooling. They require process discipline and the organizational authority to enforce it consistently.</p>

<hr />

<h2 id="microsoft-365-in-healthcare-what-the-configuration-actually-needs-to-include">Microsoft 365 in Healthcare: What the Configuration Actually Needs to Include</h2>

<p>M365 is the dominant productivity suite in healthcare, and Microsoft’s HIPAA BAA covers most M365 services. That coverage is meaningful — but it doesn’t configure compliance for you.</p>

<p>The M365 features that matter most for HIPAA operational compliance:</p>

<p><strong>Data Loss Prevention (DLP) policies.</strong> PHI takes many forms — patient names with diagnoses, dates of birth with account numbers, Social Security Numbers. DLP policies can detect and block sharing of PHI patterns through email, Teams, and SharePoint. The default policies Microsoft provides are a starting point; the policies that protect your specific organization require customization based on your data patterns.</p>

<p>The critical configuration decision: <strong>block vs. audit</strong>. A block policy stops the sharing attempt. An audit policy logs it and notifies. In my experience, starting with audit-then-block gives you visibility into existing sharing patterns before enforcement, which prevents the operational disruption that comes from blocking workflows people didn’t know were non-compliant.</p>

<p><strong>Retention policies and litigation holds.</strong> HIPAA requires a 6-year retention period for compliance documentation — policies, procedures, and training records. Medical record retention periods are governed by state law and vary, but are typically longer. M365 retention policies can enforce both, automatically preventing premature deletion and managing accumulation. Litigation hold capability is essential for legal and compliance purposes and should be tested before it’s needed.</p>

<p><strong>Microsoft Defender for M365.</strong> Phishing and business email compromise are the leading vectors for healthcare data breaches. Defender’s anti-phishing, safe links, and safe attachments controls are not optional in a healthcare environment — they’re table stakes. The configuration that matters most: impersonation protection for executive and clinical leadership, whose email accounts are the highest-value targets.</p>

<p><strong>Communication Compliance.</strong> For organizations subject to additional regulatory oversight or with specific compliance requirements, Communication Compliance provides the supervisory review capability that HIPAA-covered entities sometimes need. This is not a universal requirement, but it’s one to understand before an auditor asks whether you have it.</p>

<hr />

<h2 id="conditional-access-the-identity-layer-that-makes-everything-else-defensible">Conditional Access: The Identity Layer That Makes Everything Else Defensible</h2>

<p>The most important security control in a Microsoft-first healthcare environment is Conditional Access.</p>

<p>A correctly configured Conditional Access policy answers the auditor’s core access control questions: who can access clinical systems, from what devices, under what conditions, and what happens when those conditions aren’t met?</p>

<p><strong>The policies that matter most in a healthcare environment:</strong></p>

<p><strong>Require MFA for all users accessing clinical systems.</strong> This is not optional. The authentication event creates the audit record that demonstrates the person accessing the system was verified at the time of access. Multi-factor authentication is the most effective single control against credential-based compromise.</p>

<p><strong>Require compliant or Microsoft Entra hybrid joined devices.</strong> A compliant device policy ensures that devices accessing clinical systems meet a minimum security standard: encrypted, up to date, managed by Intune. This prevents the scenario where a personal device — potentially compromised, potentially shared — accesses patient data without any organizational visibility.</p>

<p><strong>Block legacy authentication protocols.</strong> Basic Auth doesn’t support MFA. Any device or application authenticating via Basic Auth — including IMAP and POP3 clients that haven’t been migrated to OAuth2 — is a gap in your MFA coverage regardless of how well your Conditional Access policies are configured. Blocking Basic Auth and requiring Modern Authentication closes that gap. Microsoft has been deprecating Basic Auth for Exchange Online protocols since 2022.</p>
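<p>A hedged sketch of that policy via the Graph PowerShell module, created in report-only mode first so you can see what it would block before enforcing (requires the <code>Policy.ReadWrite.ConditionalAccess</code> scope):</p>

```powershell
# Conditional Access policy blocking legacy authentication clients.
# Start in report-only mode; switch State to 'enabled' after reviewing impact.
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$policy = @{
    DisplayName   = 'Block legacy authentication'
    State         = 'enabledForReportingButNotEnforced'   # report-only
    Conditions    = @{
        ClientAppTypes = @('exchangeActiveSync', 'other') # legacy protocols
        Users          = @{ IncludeUsers = @('All') }
        Applications   = @{ IncludeApplications = @('All') }
    }
    GrantControls = @{ Operator = 'OR'; BuiltInControls = @('block') }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```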

<p><strong>Implement risk-based Conditional Access for high-risk sign-ins.</strong> Entra ID evaluates sign-in risk in real time based on factors like impossible travel, anonymous IP usage, and known malicious IP ranges. A risk-based policy that requires step-up verification for high-risk sign-ins catches credential compromise scenarios that static policies miss.</p>

<p>The configuration of these policies is not the hard part. The hard part is maintaining them as the organization changes — new applications added to the tenant, new device types introduced, new vendor access requirements. Conditional Access policy drift is a real operational risk.</p>

<p><strong>Operational discipline required:</strong> a quarterly policy review, documented, that verifies the policy set still matches the current environment and access requirements.</p>
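<p>The review is easier when each quarter's policy set is captured as a dated snapshot you can diff against the previous one — a minimal sketch:</p>

```powershell
# Snapshot all Conditional Access policies for the quarterly review record.
Connect-MgGraph -Scopes 'Policy.Read.All'

$stamp = Get-Date -Format 'yyyy-MM-dd'
Get-MgIdentityConditionalAccessPolicy -All |
    ConvertTo-Json -Depth 10 |
    Set-Content ".\ca-policies-$stamp.json"

# Diff against last quarter's snapshot to surface drift, e.g.:
# Compare-Object (Get-Content .\ca-policies-2026-01-01.json) (Get-Content ".\ca-policies-$stamp.json")
```

<p>The snapshot file itself becomes the documented evidence that the review happened.</p>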

<hr />

<h2 id="the-vendor-management-problem">The Vendor Management Problem</h2>

<p>Healthcare organizations work with dozens of vendors who have some level of access to systems containing or adjacent to PHI. Each of those relationships requires a Business Associate Agreement.</p>

<p>The compliance failure mode I’ve seen most often: a vendor relationship exists without a current, signed BAA because the relationship predated the compliance program, or because procurement didn’t understand the BAA requirement when the contract was signed.</p>

<p><strong>The operational fix is process, not technology:</strong></p>

<p>Every new vendor relationship involving access to organizational systems goes through a review that includes: does this vendor have access to PHI? If yes, is a BAA in place? If no BAA exists, the relationship doesn’t proceed until one is executed.</p>

<p>That process needs to exist independently of any individual’s knowledge of the requirement — which means it needs to be documented, communicated to everyone involved in procurement and vendor onboarding, and owned by someone with the authority to stop a vendor relationship that hasn’t met the requirement.</p>

<p>At Premier Health, building that process into the vendor onboarding workflow eliminated the category of risk entirely. The BAA question became automatic, not exceptional.</p>

<hr />

<h2 id="what-auditors-actually-look-for">What Auditors Actually Look For</h2>

<p>HIPAA audits — whether OCR audits or third-party assessments — are fundamentally documentation reviews. The auditor wants to see:</p>

<ol>
  <li><strong>That controls exist.</strong> The policies, the configurations, the technical controls.</li>
  <li><strong>That controls are maintained.</strong> Access reviews, policy updates, documentation that reflects the current state.</li>
  <li><strong>That incidents were handled appropriately.</strong> Documentation of any security incidents, the assessment of whether PHI was affected, and the breach notification decision.</li>
  <li><strong>That training happened.</strong> Evidence that employees received HIPAA training and understood their obligations.</li>
</ol>

<p>The organizations that struggle with audits are usually the ones where controls exist technically but aren’t documented consistently, or where the documentation reflects an earlier state of the environment that doesn’t match what’s actually deployed.</p>

<p><strong>The audit readiness discipline:</strong> treat every control change as a documentation event. When a Conditional Access policy changes, document why. When an access review happens, document the results. When a vendor BAA is signed, file it. The documentation burden is light if you do it continuously; it’s overwhelming if you try to reconstruct it before an audit.</p>

<hr />

<h2 id="the-leadership-dimension">The Leadership Dimension</h2>

<p>Compliance is ultimately a leadership problem, not a technical one.</p>

<p>Technical controls can be configured correctly and still fail if leadership doesn’t enforce them consistently. If a senior clinical staff member is exempt from MFA because they complained loudly enough, the MFA policy has a hole. If a vendor relationship bypasses the BAA process because a department head wants to move fast, the vendor management process has a hole.</p>

<p>The IT leader’s job is to build the processes, configure the controls, and then hold the line when the organization pushes back — which it will.</p>

<p>The most useful thing I’ve found for holding that line: framing compliance requirements as organizational protection, not IT bureaucracy. The controls exist to protect the organization, the patients, and the staff — not to create friction. When that framing is consistent and credible, the compliance culture develops. When it’s absent, compliance becomes an adversarial relationship between IT and the rest of the organization.</p>

<hr />

<p><strong>The bottom line:</strong> HIPAA compliance in a cloud environment is achievable and maintainable. The technology — M365, Azure, Entra ID — provides the controls. What keeps those controls meaningful is the operational discipline to maintain them, the process to enforce them, and the leadership to make the organization take them seriously.</p>

<p>Get the process right. The technical configuration is the easier part.</p>

<hr />

<p><em>Operating Microsoft 365 or Azure in a healthcare environment? The compliance requirements are real, but there are clear patterns that work. <a href="/contact/">I’m happy to talk through specifics.</a></em></p>]]></content><author><name>Darren Bell</name></author><category term="Security" /><category term="HIPAA" /><category term="healthcare IT" /><category term="Microsoft 365" /><category term="compliance" /><category term="Azure" /><category term="Entra ID" /><category term="Conditional Access" /><summary type="html"><![CDATA[Most healthcare IT compliance failures aren't technical failures. They're operational ones — gaps in process, gaps in documentation, and gaps in the daily discipline that keeps a compliance posture intact. Here's what it actually takes.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://darrenbell.net/assets/images/og-default.svg" /><media:content medium="image" url="https://darrenbell.net/assets/images/og-default.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Why I’m Writing This Blog</title><link href="https://darrenbell.net/blog/2026/02/12/welcome-to-my-blog/" rel="alternate" type="text/html" title="Why I’m Writing This Blog" /><published>2026-02-12T00:00:00+00:00</published><updated>2026-02-12T00:00:00+00:00</updated><id>https://darrenbell.net/blog/2026/02/12/welcome-to-my-blog</id><content type="html" xml:base="https://darrenbell.net/blog/2026/02/12/welcome-to-my-blog/"><![CDATA[<p>After more than ten years in enterprise IT — designing systems, managing teams, navigating compliance requirements, and fighting the same cost battles in different organizations — I’ve accumulated a lot of hard-won opinions.</p>

<p>Not the polished, certification-guide kind. The kind that come from being accountable for a HIPAA audit, from watching a licensing decision turn into a six-figure overspend, from trying to explain to a CFO why the Azure bill doesn’t match the forecast, from being the person whose phone rings when something stops working at 2am.</p>

<p>This blog is where I share those opinions.</p>

<h2 id="what-this-blog-is-and-isnt">What This Blog Is (and Isn’t)</h2>

<p><strong>This is a practitioner’s blog.</strong> I’m not a vendor advocate, not a conference speaker optimizing for applause, and not a consultant selling frameworks that work great in PowerPoint. I’m someone who has spent the better part of a decade doing this work in healthcare and managed services environments, and I write about what I’ve found to be true in practice.</p>

<p><strong>This is about the intersection of technology and leadership.</strong> Cloud architecture isn’t just a technical discipline — it’s an organizational one. The decisions that shape your infrastructure are shaped by budget cycles, compliance requirements, vendor relationships, team capabilities, and executive priorities. I can’t write honestly about cloud architecture without writing about those things too.</p>

<p><strong>This is not vendor-neutral marketing.</strong> My background is Microsoft-first — Azure, Microsoft 365, Entra ID, Intune, Defender. I’ll write about what I actually work with, what works, what doesn’t, and when the Microsoft way is the right way versus when it isn’t.</p>

<h2 id="why-now">Why Now</h2>

<p>I’ve resisted starting a blog for a while. Partly because the internet already has more infrastructure content than anyone can read. Partly because writing takes time I could spend building things.</p>

<p>But I keep finding myself in conversations — with engineers early in their careers, with IT managers navigating their first healthcare compliance audit, with architects trying to make a cost optimization case to leadership — where I wish I had a place to point people. Where I could say: here’s how I’ve thought about this, here’s what worked, here’s what I wish I’d known before we spent three months going in the wrong direction.</p>

<p>This blog is that place.</p>

<p>I’ve spent years operating at the intersection of technical depth and business accountability — owning budgets, driving cost outcomes, making compliance calls, and building teams that outlast any individual. That’s the lens this blog is written from. Writing in public is how I articulate and pressure-test ideas that I’m already living in the work.</p>

<h2 id="what-to-expect">What to Expect</h2>

<p>I’ll write in six areas where I have genuine, hands-on depth:</p>

<p><strong>Azure Architecture:</strong> Entra ID, cold storage design, analytics infrastructure, cost governance, and the architecture decisions that matter in regulated environments. The real trade-offs, not the vendor diagrams.</p>

<p><strong>Microsoft 365:</strong> Exchange Online, Teams, SharePoint, Intune, Defender — governance, compliance, and the operational discipline required to run M365 in a way that’s actually secure and auditable.</p>

<p><strong>Healthcare IT &amp; Compliance:</strong> HIPAA isn’t a checkbox — it’s an operational posture. I’ve spent six years in a healthcare environment and I’ll write about what compliance actually looks like in practice, what auditors actually look for, and how to build systems that hold up.</p>

<p><strong>IT Leadership:</strong> Budget ownership, team development, stakeholder communication, hiring, and the shift from technical expert to operational leader. The skills that matter at the manager and director level are different from the ones that make you a great engineer, and that gap is rarely discussed honestly.</p>

<p><strong>FinOps &amp; Cost Optimization:</strong> Cloud cost is one of the most misunderstood problems in enterprise IT. I’ve documented over $127K in savings across licensing audits, environment rationalization, and storage optimization. I’ll share frameworks and tactics that have actually moved the needle.</p>

<p><strong>Security &amp; Identity:</strong> Entra ID, Conditional Access, MFA strategy, Fortinet, and zero-trust foundations in environments where compliance requirements make security non-negotiable.</p>

<h2 id="a-note-on-perspective">A Note on Perspective</h2>

<p>I’m not the most senior person in any room, and I don’t pretend to be. What I am is someone who has owned outcomes — budgets, compliance postures, team performance, cost numbers — and has had to develop both technical depth and operational breadth to do that well.</p>

<p>The perspective here is practitioner to practitioner. If you’re an IT manager, a senior cloud architect, or a technical leader working through the same problems in healthcare or managed services, I think you’ll find something useful.</p>

<hr />

<p>If this resonates, subscribe via RSS or check back regularly. I write when I have something worth saying.</p>

<p>Let’s get into it.</p>]]></content><author><name>Darren Bell</name></author><category term="Technology Leadership" /><category term="blog" /><category term="cloud architecture" /><category term="Microsoft 365" /><category term="Azure" /><category term="leadership" /><category term="healthcare IT" /><summary type="html"><![CDATA[After 10+ years in enterprise IT across healthcare and managed services, I've accumulated lessons that took real pain to learn. This is where I share them.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://darrenbell.net/assets/images/og-default.svg" /><media:content medium="image" url="https://darrenbell.net/assets/images/og-default.svg" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>