TL;DR

AWS Lens is an open-source Electron desktop app for managing AWS resources — EC2, S3, Lambda, IAM, Cost Explorer, and more. I wanted it accessible from my browser without running a desktop app. I adapted it to run as a containerized Express server on k3s, fixed a class of runtime crashes from the Electron-to-web adapter, hardened it against three security issues, and deployed it behind Traefik and Let’s Encrypt. The changes are open source, submitted upstream as BoraKostem/AWS-Lens#21.


Why Self-Host a Desktop App?

Electron apps make trade-offs. They bundle a full Chromium runtime and Node.js process so they can run on any desktop OS without a server. For a personal tool like AWS Lens, that works fine on a laptop.

But I run a k3s cluster. My preference is to host tools as web services — always-on, accessible from any device on the internal network, no “which machine has this installed” problem. AWS Lens had everything I needed except the containerization and web server layer.

The upstream project had an incomplete web mode — the core was there (an Express server that dispatched RPC calls from the browser), but it had never been exercised seriously. A lot of the browser-to-server method plumbing was broken.


How Electron IPC Works (and Why It’s a Problem)

Electron has a two-process model: a main process (Node.js) and a renderer process (browser). They communicate via IPC:

// main process registers handlers
ipcMain.handle('ec2:list', async (_event, connection) => {
  return listEc2Instances(connection)
})

// renderer calls them via preload bridge
const result = await window.awsLens.listEc2Instances(connection)

The preload/index.ts file bridges these — it exposes named methods on window.awsLens that call ipcRenderer.invoke('ec2:list', ...) under the hood.

For web mode, the plan was:

  1. Replace ipcMain with a registry Map in an Express server
  2. Add a POST /api/rpc endpoint that looks up the channel and calls the registered handler
  3. In the browser, replace window.awsLens with a webBridge that calls /api/rpc instead
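A minimal sketch of steps 1 and 2, assuming a bare Map registry (the `handle` and `dispatch` names are illustrative, not the project’s actual API):

```typescript
// Hypothetical registry sketch: channel names map to handler functions,
// mirroring ipcMain.handle / ipcRenderer.invoke semantics in plain Node.
type Handler = (...args: unknown[]) => unknown

const registry = new Map<string, Handler>()

// web-mode stand-in for ipcMain.handle
function handle(channel: string, handler: Handler): void {
  registry.set(channel, handler)
}

// what POST /api/rpc does: look up the channel and invoke the handler
async function dispatch(channel: string, args: unknown[]): Promise<unknown> {
  const handler = registry.get(channel)
  if (!handler) throw new Error(`Unknown RPC channel: ${channel}`)
  return handler(...args)
}

// illustrative registration
handle('ec2:list', (connection) => [{ instanceId: 'i-0abc', connection }])
```

The Express endpoint then becomes a thin wrapper: parse `{ channel, args }` from the request body, call `dispatch`, and serialize the result.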

The problem: webBridge.ts had ~95 method names that didn’t match preload/index.ts. The preload exposed getRelationshipMap; webBridge implemented getOverviewRelationships. The preload exposed terminateEc2Instance; webBridge had terminateInstance. And so on across every service area.

These bugs were silent at compile time because TypeScript couldn’t catch them — window.awsLens is typed as any in web mode. They only appeared at runtime when users clicked into a panel.


The Push Event Problem

Beyond the RPC mismatch, there was a deeper structural issue: Electron supports server-push events. Some operations stream progress back to the renderer:

// ec2Ipc.ts — streams volume snapshot progress
ipcMain.handle('ec2:create-temp-volume-check', async (event, connection, volumeId) => {
  return createTempInspectionEnvironment(connection, volumeId, (progress) => {
    event.sender.send('ec2:temp-volume-progress', progress)  // push to renderer
  })
})

In Electron, event.sender.send() pushes directly to the renderer window. In web mode, there’s no window — there’s an HTTP connection. The IPC shim passed null as the event object, so event.sender.send() threw at runtime.

The fix was an EventEmitter bus on the server side:

// terraformEvents.ts
const bus = new EventEmitter()

export function broadcastEvent(channel: string, payload: unknown): void {
  bus.emit(channel, payload)
}

export function makeMockEvent() {
  return {
    sender: {
      send(channel: string, payload: unknown) {
        broadcastEvent(channel, payload)
      }
    }
  }
}

The shim now passes makeMockEvent() to every IPC handler. When a handler calls event.sender.send('ec2:temp-volume-progress', progress), it hits the bus. A single /api/events WebSocket multiplexes all push channels to the browser:

const PUSH_CHANNELS = ['terraform:event', 'ec2:temp-volume-progress']

eventsWss.on('connection', (ws) => {
  const handlers: Array<{ channel: string; fn: Function }> = []
  for (const channel of PUSH_CHANNELS) {
    const fn = (payload: unknown) => {
      if (ws.readyState === ws.OPEN) {
        ws.send(JSON.stringify({ channel, payload }))
      }
    }
    onEvent(channel, fn)
    handlers.push({ channel, fn })
  }
  ws.on('close', () => {
    for (const { channel, fn } of handlers) offEvent(channel, fn)
  })
})

Adding a new push channel in the future is one line in PUSH_CHANNELS. No new WebSocket endpoint needed.
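On the browser side, the counterpart is the same idea in reverse: one socket, demultiplexed by channel name. A sketch assuming each message arrives as the { channel, payload } JSON shown above (the `subscribe` and `routeMessage` names are mine):

```typescript
// Hypothetical client-side demux: one WebSocket message stream fans out to
// per-channel subscriber sets keyed by channel name.
type Listener = (payload: unknown) => void

const subscribers = new Map<string, Set<Listener>>()

// register interest in one push channel; returns an unsubscribe function
function subscribe(channel: string, fn: Listener): () => void {
  let set = subscribers.get(channel)
  if (!set) {
    set = new Set()
    subscribers.set(channel, set)
  }
  set.add(fn)
  return () => set!.delete(fn)
}

// called for every message received on the /api/events socket
function routeMessage(raw: string): void {
  const { channel, payload } = JSON.parse(raw) as {
    channel: string
    payload: unknown
  }
  for (const fn of subscribers.get(channel) ?? []) fn(payload)
}
```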


Security Issues Found

While reviewing the code before pushing upstream, three security issues turned up:

Shell injection in GitHub credential helper

The original implementation passed the OAuth token directly into a shell command:

execSync(`git config --global http.https://github.com/.extraheader "Authorization: token ${token}"`)

If the token contained shell metacharacters (unlikely from GitHub, but worth fixing), this would execute arbitrary shell commands. Fixed by switching to spawnSync with a proper argument array and piping credentials through git credential approve via stdin — no shell involved:

spawnSync('git', ['credential', 'approve'], {
  input: `protocol=https\nhost=github.com\nusername=x-access-token\npassword=${token}\n`
})

INI injection in profile names

AWS credential profiles are written to ~/.aws/credentials in INI format. The environment variable AWS_LENS_PROFILE_PROD creates a [prod] section. With no validation, a profile name like prod]\n[default would corrupt the credentials file. Fixed with a strict allowlist:

if (!rawName || !/^[a-z0-9-]+$/.test(rawName)) {
  console.warn(`[bootstrap] ${key}: invalid profile name "${rawName}" — skipped`)
  continue
}
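Put together, the bootstrap step reduces to a pure function over the environment. A sketch under my own assumptions (the `renderProfiles` name is illustrative, and I’m assuming each secret value carries the INI body for its section):

```typescript
// Hypothetical bootstrap sketch: scan env vars with the profile prefix,
// validate each derived name against the allowlist, and render clean INI
// sections for ~/.aws/credentials.
const PREFIX = 'AWS_LENS_PROFILE_'
const NAME_RE = /^[a-z0-9-]+$/

function renderProfiles(env: Record<string, string | undefined>): string {
  const sections: string[] = []
  for (const [key, value] of Object.entries(env)) {
    if (!key.startsWith(PREFIX) || !value) continue
    const rawName = key.slice(PREFIX.length).toLowerCase()
    // strict allowlist — anything else could smuggle in INI section headers
    if (!NAME_RE.test(rawName)) continue
    sections.push(`[${rawName}]\n${value.trim()}`)
  }
  return sections.join('\n\n')
}
```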

Error info disclosure in RPC endpoint

The RPC error handler returned the IPC channel name in the response body:

res.status(500).json({ ok: false, error: message, channel })

channel exposes internal method names like ec2:list or iam:list-users to unauthenticated callers. Removed from the response.


Tooling in the Container

The container includes a full Terraform/OpenTofu toolchain for the Terraform workspace panel:

  • tfenv — installs and manages Terraform versions
  • tofuenv — installs and manages OpenTofu versions
  • Terragrunt — downloaded from GitHub releases at build time
  • terraform-docs — for generating module documentation
  • GitHub CLI — for cloning private repositories
  • git and git-lfs — required for most Terraform module sources

On startup, the server detects which CLI is available and sets the default. Users can switch between Terraform and OpenTofu per-workspace.
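The detection logic reduces to a probe-in-order helper. A sketch with the probe injected for testability — in practice the server would shell out to something like `terraform version` (names here are mine, not the project’s):

```typescript
// Hypothetical CLI detection sketch: try each candidate binary in order and
// return the first one the probe reports as available.
function detectDefaultCli(
  candidates: string[],
  isAvailable: (bin: string) => boolean
): string | null {
  for (const bin of candidates) {
    if (isAvailable(bin)) return bin
  }
  return null
}
```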


IAM Provisioning Reference

The app’s IAM panel now includes copyable Terraform snippets for provisioning the minimum IAM permissions needed to use each feature set. Two tiers:

Cost reader — Cost Explorer and billing read-only. Enough for the billing dashboard.

resource "aws_iam_user" "aws_lens_reader" {
  name = "aws-lens-reader"
}

resource "aws_iam_user_policy" "aws_lens_reader_policy" {
  name = "aws-lens-cost-reader"
  user = aws_iam_user.aws_lens_reader.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ce:Get*", "ce:List*", "sts:GetCallerIdentity"]
      Resource = "*"
    }]
  })
}

Full read-only — the ReadOnlyAccess managed policy. Unlocks all panels. No write access possible.
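A sketch of the second tier, assuming the same aws_iam_user.aws_lens_reader resource as above — the AWS-managed policy is attached rather than inlined:

```hcl
resource "aws_iam_user_policy_attachment" "aws_lens_readonly" {
  user       = aws_iam_user.aws_lens_reader.name
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
```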


Regression Tests

The webBridge method name mismatches were invisible to TypeScript and only surfaced at runtime. To catch them going forward, the test suite now includes a contract test that validates every method name against the preload bridge:

  • ~60 window.awsLens methods checked for presence and callability
  • 34 window.terraformWorkspace short-name methods checked
  • All push subscription methods (subscribeTempVolumeProgress, subscribeTerminal, etc.) verified

If anyone renames a method in the preload without updating webBridge (or vice versa), the test fails with the exact method name. No more blank panels.
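The core of such a contract test is small. A sketch (the `missingMethods` helper is illustrative, not the suite’s actual code):

```typescript
// Hypothetical contract check: every method name one bridge exposes must
// exist as a function on the other; any leftovers are reported by name.
function missingMethods(
  reference: Record<string, unknown>,
  candidate: Record<string, unknown>
): string[] {
  return Object.keys(reference).filter(
    (name) => typeof candidate[name] !== 'function'
  )
}
```

Run in both directions, a rename like getRelationshipMap vs. getOverviewRelationships fails the suite with the exact mismatched name.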


Deployment

The container runs on k3s behind Traefik with a Let’s Encrypt cert:

aws-lens.k3s.internal.strommen.systems

AWS credentials are injected via a Kubernetes secret:

kubectl -n aws-lens create secret generic aws-lens-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=... \
  --from-literal=AWS_SECRET_ACCESS_KEY=... \
  --from-literal=AWS_DEFAULT_REGION=us-east-1

Named profiles use a convention: any secret key matching AWS_LENS_PROFILE_<NAME> is bootstrapped into ~/.aws/credentials at startup as a named profile. Switching profiles in the UI doesn’t require restarting the pod.


What’s Next

A few things are on the roadmap but not yet implemented:

GitHub Actions support

The app currently covers AWS resources. GitHub Actions has its own operational surface — workflow runs, runner health, cache usage, artifact storage costs. Adding a GHA panel would make this a more complete picture of the homelab’s CI/CD spend and status.

Azure support

Same idea for Azure. The architecture is modular enough that adding Azure resource panels is feasible. Most of the IPC plumbing is service-agnostic.

LLM infrastructure assessment

The app already has AWS Cost Explorer data, EC2 rightsizing recommendations, and IAM policy summaries. Feeding this into a Bedrock (or local) LLM to generate plain-English cost analysis and security observations is a natural next step.

Response caching

Read-only AWS API calls (list EC2 instances, describe IAM policies, etc.) are currently fetched fresh on every panel open. A TTL cache on the server side would cut latency and reduce API call volume for accounts with a lot of resources.
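A minimal sketch of what that could look like — a TTL map keyed by channel plus serialized arguments, with the clock injected for testability. Nothing here exists in the app yet; names are mine:

```typescript
// Hypothetical TTL cache sketch for read-only RPC results.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>()

  constructor(
    private ttlMs: number,
    private now: () => number = Date.now
  ) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key)
    if (!entry) return undefined
    if (this.now() > entry.expiresAt) {
      this.store.delete(key) // lazily evict expired entries
      return undefined
    }
    return entry.value
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs })
  }
}
```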

Role-based access tiers

The IAM reference panel documents access tiers (cost reader, full read-only) but the app doesn’t enforce them. Adding role awareness — show only relevant panels based on what the credentials can actually access — would reduce confusion when using restricted credentials.


The PR to upstream is at BoraKostem/AWS-Lens#21. If you’re running something similar or want to self-host it on your own cluster, the Dockerfile and k8s manifests are in the repo.