Articles (9)
Getting Started
litestream.io
Getting Started
This tutorial will get you up and running with Litestream locally and
replicating an SQLite database to an S3-compatible store called
MinIO. This works the same as Amazon S3 but it’s easier to
get started.
By the end, you’ll understand the replicate and restore commands and be able
to continuously backup your database. It assumes you’re comfortable on the
command line and have Docker installed.
⏱
You should expect this tutorial to take about 10 minutes.
Prerequisites#
Install Litestream & SQLite#
Before continuing, please install Litestream on your local machine.
You will also need SQLite installed for this tutorial. It
comes packaged with some operating systems such as macOS but you may need to
install it separately.
Setting up MinIO#
We’ll use a Docker instance of MinIO for this example. This
gets us up and running quickly but it will only persist the data for as long as
the Docker container is running. That’s good enough for this tutorial but you’ll
want to use persistent storage in a production environment.
First, start your MinIO instance:
docker run -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"
Then open a web browser to http://localhost:9001/
and enter the default credentials:
Username: minioadmin
Password: minioadmin
Next, navigate to “Buckets”, click the “Create Bucket” button in the top right corner and then click the
“Save” icon. Name your bucket, "mybkt".
⚠️
For remote MinIO servers:
If your MinIO instance is running on a different machine (not localhost), skip this tutorial. See the MinIO Configuration
section in the documentation for setup instructions. This tutorial only covers local MinIO running via Docker.
Setting up your database#
Now that our MinIO bucket is created, we can replicate data to it. Litestream
can work with any SQLite database so we’ll use the sqlite3 command line tool
to show how it works.
In a terminal window, create a new database file:
sqlite3 fruits.db
This will open the SQLite command prompt and now we can execute SQL commands.
We’ll start by creating a new table:
CREATE TABLE fruits (name TEXT, color TEXT);
And we can add some data to our table:
INSERT INTO fruits (name, color) VALUES ('apple', 'red');
INSERT INTO fruits (name, color) VALUES ('banana', 'yellow');
Replicating your database#
In a separate terminal window, we’ll run Litestream to replicate our new
database. Make sure both terminal windows are using the same working directory.
First, we’ll set our MinIO credentials to our environment variables:
export LITESTREAM_ACCESS_KEY_ID=minioadmin
export LITESTREAM_SECRET_ACCESS_KEY=minioadmin
Next, run Litesteam’s replicate command to start replication:
litestream replicate fruits.db s3://mybkt.localhost:9000/fruits.db
You should see Litestream print some initialization commands and then wait
indefinitely. Normally, Litestream is run as a background service so it
continuously watches your database for new changes so the command does not exit.
If you open the MinIO Console,
you will see there is a fruits.db directory in your bucket.
Restoring your database#
In a third terminal window, we’ll restore our database to a new file. First,
make sure your environment variables are set correctly:
export LITESTREAM_ACCESS_KEY_ID=minioadmin
export LITESTREAM_SECRET_ACCESS_KEY=minioadmin
Then run:
litestream restore -o fruits2.db s3://mybkt.localhost:9000/fruits.db
This will pull down the backup from MinIO and write it to the fruits2.db file.
You can verify the database matches by executing a query on our file:
sqlite3 fruits2.db 'SELECT * FROM fruits'
The data should show:
apple|red
banana|yellow
Continuous replication#
Litestream continuously monitors your database and backs it up to S3. We can
see this by writing some more data to our original fruits.db database.
In our first terminal window, write a new row to our table:
INSERT INTO fruits (name, color) VALUES ('grape', 'purple');
Then in your third terminal window, restore your database from our S3 backup
to a new fruits3.db file:
litestream restore -o fruits3.db s3://mybkt.localhost:9000/fruits.db
We can execute a query on this file:
sqlite3 fruits3.db 'SELECT * FROM fruits'
We should now see our new row:
apple|red
banana|yellow
grape|purple
Troubleshooting#
“The AWS Access Key Id you provided does not exist in our records”#
This error occurs when Litestream cannot authenticate with MinIO. Common causes:
Wrong credentials: Verify you’re using the correct access key and secret key.
The default MinIO Docker container uses minioadmin / minioadmin.
Missing endpoint for remote MinIO: If your MinIO server is on a different machine,
you must specify the endpoint parameter in your configuration file. See the
MinIO Configuration section.
Environment variable conflicts: Environment variables take precedence over
config files. Unset any conflicting environment variables:
unset LITESTREAM_ACCESS_KEY_ID
unset LITESTREAM_SECRET_ACCESS_KEY
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
“Cannot lookup region”#
This error typically means the region is missing or invalid. For MinIO, any region
value works (e.g., us-east-1) since MinIO ignores this parameter but still
requires it in the configuration.
MinIO console shows empty bucket but replicate command ran#
Check that you specified the correct bucket name in your replication command. The
URL format is s3://BUCKET_NAME.ENDPOINT/PATH.
For local MinIO: s3://mybkt.localhost:9000/fruits.db
For remote MinIO: You need a configuration file with the endpoint parameter—see
the MinIO Configuration section.
Changes aren’t being replicated#
Verify that:
Litestream is still running in your terminal window
The MinIO instance is still running
You can access the MinIO console at the expected address
If using a config file, ensure credentials are correct and the file is being read
by passing the -config flag to Litestream.
Further reading#
Litestream was built to run as a background service that you don’t need to worry
about—it just replicates your database all the time. To run Litestream as a
background service, please refer to the How-To Guides section to
run on your particular platform.
For security considerations including backup encryption, see the
Configuration Reference section. Note that Age
encryption is not available in v0.5.0+. If you are upgrading from v0.3.x with
Age encryption, please review the migration guide.
Has the cost of building software just dropped 90%?
martinalderson.com
I've been building software professionally for nearly 20 years. I've been through a lot of changes - the 'birth' of SaaS, the mass shift towards mobile apps, the outrageous hype around blockchain, and the perennial promise that low-code would make developers obsolete.
The economics have changed dramatically now with agentic coding, and it is going to totally transform the software development industry (and the wider economy). 2026 is going to catch a lot of people off guard.
In my previous post I delved into why I think evals are missing some of the big leaps, but thinking this over since then (and recent experience) has made me confident we're in the early stages of a once-in-a-generation shift.
The cost of shipping
I started developing just around the time open source started to really explode - but it was clear this was one of the first big shifts in cost of building custom software. I can remember eye watering costs for SQL Server or Oracle - and as such started out really with MySQL, which did allow you to build custom networked applications without incurring five or six figures of annual database licensing costs.
Since then we've had cloud (which I would debate is a cost saving at all, but let's be generous and assume it has some initial capex savings) and lately what I feel has been the era of complexity. Software engineering has got - in my opinion, often needlessly - complicated, with people rushing to very labour intensive patterns such as TDD, microservices, super complex React frontends and Kubernetes. I definitely don't think we've seen much of a cost decrease in the past few years.
AI Agents however in my mind massively reduce the labour cost of developing software.
So where do the 90% savings actually come from?
At the start of 2025 I was incredibly sceptical of a lot of the AI coding tools - and a lot of them I still am. Many of the platforms felt like glorified low code tooling (Loveable, Bolt, etc), or VS Code forks with some semi-useful (but often annoying) autocomplete improvements.
Take an average project for an internal tool in a company. Let's assume the data modelling is already done to some degree, and you need to implement a web app to manage widgets.
Previously, you'd have a small team of people working on setting up CI/CD, building out data access patterns and building out the core services. Then usually a whole load of CRUD-style pages and maybe some dashboards and graphs for the user to make. Finally you'd (hopefully) add some automated unit/integration/e2e tests to make sure it was fairly solid and ship it, maybe a month later.
And that's just the direct labour. Every person on the project adds coordination overhead. Standups, ticket management, code reviews, handoffs between frontend and backend, waiting for someone to unblock you. The actual coding is often a fraction of where the time goes.
Nearly all of this can be done in a few hours with an agentic coding CLI. I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) for a fairly complex internal tool. This would take me, or many developers I know and respect, days to write by hand.
The agentic coding tools have got extremely good at converting business logic specifications into pretty well written APIs and services.
A project that would have taken a month now takes a week. The thinking time is roughly the same - the implementation time collapsed. And with smaller teams, you get the inverse of Brooks's Law: instead of communication overhead scaling with headcount, it disappears. A handful of people can suddenly achieve an order of magnitude more.
Latent demand
On the face of it, this seems like incredibly bad news for the software development industry - but economics tells us otherwise.
Jevons Paradox says that when something becomes cheaper to produce, we don't just do the same amount for less money. Take electric lighting for example; while sales of candles and gas lamps fell, overall far more artificial light was generated.
If we apply this to software engineering, think of supply and demand. There is so much latent demand for software. I'm sure every organisation has hundreds if not thousands of Excel sheets tracking important business processes that would be far better off as a SaaS app. Let's say they get a quote from an agency to build one into an app for $50k - only essential ones meet the grade. At $5k (for a decent developer + AI tooling) - suddenly there is far more demand.
Domain knowledge is the only moat
So where does that leave us? Right now there is still enormous value in having a human 'babysit' the agent - checking its work, suggesting the approach and shortcutting bad approaches. Pure YOLO vibe coding ends up in a total mess very quickly, but with a human in the loop I think you can build incredibly good quality software, very quickly.
This then allows developers who really master this technology to be hugely effective at solving business problems. Their domain and industry knowledge becomes a huge lever - knowing the best architectural decisions for a project, knowing which framework to use and which libraries work best.
Layer on understanding of the business domain and it does genuinely feel like the mythical 10x engineer is here. Equally, the pairing of a business domain expert with a motivated developer and these tools becomes an incredibly powerful combination, and something I think we'll see becoming quite common - instead of a 'squad' of a business specialist and a set of developers, we'll see a far tighter pairing of a couple of people.
This combination allows you to iterate incredibly quickly, and software becomes almost disposable - if the direction is bad, then throw it away and start again, using those learnings. This takes a fairly large mindset shift, but the hard work is the conceptual thinking, not the typing.
Don't get caught off guard
The agents and models are still improving rapidly, which I don't think is really being captured in the benchmarks. Opus 4.5 seems to be able to follow long 10-20 minute sessions without going completely off piste. We're just starting to see the results of the hundreds of billions of dollars of capex that has gone into GB200 GPUs now, and I'm sure newer models will quickly make these look completely obsolete.
However, I've spoken to so many software engineers that are really fighting this change. I've heard the same objections too many times - LLMs make too many mistakes, it can't understand [framework], or it doesn't really save any time.
These assertions are rapidly becoming completely false, and remind me a lot of the desktop engineers who dismissed the iPhone in 2007. I think we all know how that turned out - networking got better, the phones got way faster and the mobile operating systems became very capable.
Engineers need to really lean in to the change in my opinion. This won't change overnight - large corporates are still very much behind the curve in general, lost in a web of bureaucracy of vendor approvals and management structures that leave them incredibly vulnerable to smaller competitors.
But if you're working for a smaller company or team and have the power to use these tools, you should. Your job is going to change - but software has always changed. Just perhaps this time it's going to change faster than anyone anticipates. 2026 is coming.
One objection I hear a lot is that LLMs are only good at greenfield projects. I'd push back hard on this. I've spent plenty of time trying to understand 3-year-old+ codebases where everyone who wrote it has left. Agents make this dramatically easier - explaining what the code does, finding the bug(s), suggesting the fix. I'd rather inherit a repo written with an agent and a good engineer in the loop than one written by a questionable quality contractor who left three years ago, with no tests, and a spaghetti mess of classes and methods.
Gild Just One Lily — Smashing Magazine
smashingmagazine.com
“Gilding the lily” isn’t always bad. In design, a touch of metaphorical gold — a subtle animated transition, a hint of color, or added depth in a drop shadow — can help communicate a level of care and attention that builds trust. But first? You need a lily. Nail the fundamentals. Then, gild it carefully.
The phrase “gild the lily” implies unnecessary ornamentation, the idea being that adorning a lily with superficial decoration only serves to obscure its natural beauty. Well, I’m here to tell you that a little touch of what might seem like unnecessary ornamentation in design is exactly what you need.
When your design is solid, and you’ve nailed the fundamentals, adding one layer of decoration can help communicate a level of care and attention.
First, You Need A Lily
Let’s break down the “gild the lily” metaphor. First, you need a lily. Lilies are naturally beautiful, and each is unique. They don’t need further decoration. To play in this metaphor, let’s assume your design is already great. If not, you don’t have a lily. Get back to work on the fundamentals and check back in later (or keep reading anyhow).
Now that you’ve got a lily, let’s talk gilding. To “gild” is to cover it with a thin layer of gold. We’re not talking about the inner beauty baked into the very soul of your product (that’s the lily part of the metaphor). A touch of metaphorical gold foil on the surface can send a message of delight with a hint of decadence.
This gilding might come in the form of a subtle, animated transition or through a hint of colour and added depth in a drop shadow. Before we get into specifics, let’s make sure our metaphor doesn’t carry us too far.
Gild SparinglyIf we go too far with our gilding, we can communicate indulgence and excess rather than a hint of decadence.
An over-the-top design can be particularly irritating, depending on the state of mind of the person you’re designing for. For example, a flashy animation bragging about your new AI chat feature may not sit so well with a frustrated customer who can’t get their password reset to use it in the first place.
Wink At The Audience (Once)
Not every great product design can be so obviously beautiful as a lily. Even if you have a great design, it may not be noticeable to those enjoying the benefits of that design. Our designs shouldn’t always be noticeable, but sometimes it’s fun to notice and appreciate a great design.
If you’re Apple, you don’t need to worry about your design going unnoticed. Nobody thinks the background color of the Apple website is white (#FFFFFF) because they forgot to specify one in their stylesheet (though I’m old enough to remember a time when the default background of the web was a battleship gray, #CCCCCC). It’s so clear from the general level of refinement and production quality on the Apple site that the white background is a deliberate choice.The Apple website, featuring their trademark product photos in Jony Ive’s white world. (Large preview)
You and I are not Apple. Your client is (probably) not Apple. You don’t have an army of world-class product photographers and motion designers working in a glass spaceship in Cupertino. You’re on a small team pushing up against budget and schedule constraints. Even with these limitations, you’re managing to make great products.
The great design behind your products might be so well done that it is invisible. The door handle is so well-shaped that you don’t notice how well-shaped it is. That button is so well-placed that no one thinks about where it is positioned.When you’re nailing the fundamentals, it’s ok to wink at the audience once in a while. Not only is it ok, but it can even augment your design.
By calling just a touch of attention to the thoughtfulness of your design, you may make it even more delightful to experience. Take it one inch too far, though, and you’re distracting from the experience and begging for applause. Walk this line carefully.
Digital Lilies
A metaphor — even one with gold and lilies — only takes us so far. Let’s consider some concrete examples of gilding a digital product. When it comes to the web, a few touches of polish to reach for can include the following:The Supabase site has dark and light themes, both of which are just a touch of a pure black and pure white (pure black and white are shown at the bottom of the screenshot to highlight the tiny difference). (Large preview)
Not-quite black and not-quite white: Instead of solid black (#000000) and solid white (#FFFFFF) colors on the web, find subtle variations. They may look black/white on a first glance, but there’s a subtle implication of care and customization. An off-white background also allows you to have pure white elements, like form inputs, that stand out nicely against the backdrop. Be careful to preserve enough contrast to ensure accessible text.Josh W. Comeau’s example shows how color can improve shadows. (Large preview)
Layered and color-hinted shadows: Josh Comeau writes about bringing color into shadows, including a tool to help generate shadows that just feel better.This chart from the Utopia blog shows how font sizes can scale smoothly in proportion to the viewport width. (Large preview)
Comfortable lettering: Find a comfortable line height and letter spacing for the font family you’re using. A responsive type system like Utopia can help define spacing that looks and feels comfortable across a variety of device sizes.The One React framework site includes a distinctive splash of color along the top. Note the gentle curve of the color element. (Large preview)
A touch of color: When you don’t want your brand colors to overwhelm your design or you would like a complementary color to accent an otherwise monotone site, consider adding a single, simple stripe of solid color along the top of the viewport. Even something a few pixels tall can add a nice splash of color without complicating the rest of the design. The site for the One React web framework does this nicely and goes further with a uniquely shaped yellow accent at the top of the site. It’s even more subtle if you’re seeing their dark-mode design, but it’s still there.The A List Apart site features custom illustrations for its articles and has done that long before the advent of AI image generation. Visit the seminal Responsive Web Design article and try resizing your browser for an especially apt response. (Large preview)
Illustration and photography: It’s easier than ever to find whimsical and fun illustrations for your site, but no stock image can replace a relevant illustration or photo so apt that it must have been crafted just for this case. A List Apart has commissioned a unique illustration in a consistent style for each of their articles for decades. You don’t have to be a gifted illustrator. There may be charm in your amateur scribbles. If not, hire a great artist.
Beware, Cheap Gilding
Symbols of decadence are valued because they are precious in some way. This is why we talk about gilding with gold and not brass. This is also why a business card with rounded corners may feel more premium than a simple rectangle. It feels more expensive because it is.
Printing has gotten pretty cheap, though, even with premium touches. Printing flourishes like rounded corners or a smooth finish don’t convey the same value and care as they did before they became quick up-sell options from your local (or budget online) print shop.
A well-worded and thoughtful cover letter used to be a great way to stand out from a pile of similar resumes. Now, it takes a whole different approach to stand out from a wall of AI-LLM-generated cover letters that say everything an employer might want to hear.
On the web, a landing page where new page sections slide and fade in with animation is used to imply that someone spent extra time on the implementation. Now, a page with too much motion feels more like a million other templates enabled by site-building tools like Wix, Squarespace, and Webflow.
Custom fonts have also become so easy and ubiquitous on the web that sticking to system default fonts can be as strong a statement as a stylish typeface.
Does Anyone Care?
Is everyone going to notice that the drop shadows on your website have a hint of color? No. Is anyone going to notice? Maybe not. If you get the details right, though, people will feel it. These levels of polish are cumulative, contributing one percent here and there to the overall experience. They may not notice the hue of your drop shadow, but they may impart some trust from a sense of the care that went into the design.
Most people aren’t web developers or designers. They don’t know the implementation details of CSS animations and box-shadows. Similarly, I’m not a car expert — far from it. I value reliability and affordability more than performance and luxury in a car. Even so, when I close the door on a high-quality vehicle, I can feel the difference.
On that next project, allow yourself to gild just one lily.
(gg, yk)
GitHub Actions Has a Package Manager, and It Might Be the Worst
nesbitt.io
After putting together ecosyste-ms/package-manager-resolvers, I started wondering what dependency resolution algorithm GitHub Actions uses. When you write uses: actions/checkout@v4 in a workflow file, you’re declaring a dependency. GitHub resolves it, downloads it, and executes it. That’s package management. So I went spelunking into the runner codebase to see how it works. What I found was concerning.
Package managers are a critical part of software supply chain security. The industry has spent years hardening them after incidents like left-pad, event-stream, and countless others. Lockfiles, integrity hashes, and dependency visibility aren’t optional extras. They’re the baseline. GitHub Actions ignores all of it.
Compared to mature package ecosystems:
Feature
npm
Cargo
NuGet
Bundler
Go
Actions
Lockfile
✓
✓
✓
✓
✓
✗
Transitive pinning
✓
✓
✓
✓
✓
✗
Integrity hashes
✓
✓
✓
✓
✓
✗
Dependency tree visibility
✓
✓
✓
✓
✓
✗
Resolution specification
✓
✓
✓
✓
✓
✗
The core problem is the lack of a lockfile. Every other package manager figured this out decades ago: you declare loose constraints in a manifest, the resolver picks specific versions, and the lockfile records exactly what was chosen. GitHub Actions has no equivalent. Every run re-resolves from your workflow file, and the results can change without any modification to your code.
Research from USENIX Security 2022 analyzed over 200,000 repositories and found that 99.7% execute externally developed Actions, 97% use Actions from unverified creators, and 18% run Actions with missing security updates. The researchers identified four fundamental security properties that CI/CD systems need: admittance control, execution control, code control, and access to secrets. GitHub Actions fails to provide adequate tooling for any of them. A follow-up study using static taint analysis found code injection vulnerabilities in over 4,300 workflows across 2.7 million analyzed. Nearly every GitHub Actions user is running third-party code with no verification, no lockfile, and no visibility into what that code depends on.
Mutable versions. When you pin to actions/checkout@v4, that tag can move. The maintainer can push a new commit and retag. Your workflow changes silently. A lockfile would record the SHA that @v4 resolved to, giving you reproducibility while keeping version tags readable. Instead, you have to choose: readable tags with no stability, or unreadable SHAs with no automated update path.
GitHub has added mitigations. Immutable releases lock a release’s git tag after publication. Organizations can enforce SHA pinning as a policy. You can limit workflows to actions from verified creators. These help, but they only address the top-level dependency. They do nothing for transitive dependencies, which is the primary attack vector.
Invisible transitive dependencies. SHA pinning doesn’t solve this. Composite actions resolve their own dependencies, but you can’t see or control what they pull in. When you pin an action to a SHA, you only lock the outer file. If it internally pulls some-helper@v1 with a mutable tag, your workflow is still vulnerable. You have zero visibility into this. A lockfile would record the entire resolved tree, making transitive dependencies visible and pinnable. Research on JavaScript Actions found that 54% contain at least one security weakness, with most vulnerabilities coming from indirect dependencies. The tj-actions/changed-files incident showed how this plays out in practice: a compromised action updated its transitive dependencies to exfiltrate secrets. With a lockfile, the unexpected transitive change would have been visible in a diff.
No integrity verification. npm records integrity hashes in the lockfile. Cargo records checksums in Cargo.lock. When you install, the package manager verifies the download matches what was recorded. Actions has nothing. You trust GitHub to give you the right code for a SHA. A lockfile with integrity hashes would let you verify that what you’re running matches what you resolved.
Re-runs aren’t reproducible. GitHub staff have confirmed this explicitly: “if the workflow uses some actions at a version, if that version was force pushed/updated, we will be fetching the latest version there.” A failed job re-run can silently get different code than the original run. Cache interaction makes it worse: caches only save on successful jobs, so a re-run after a force-push gets different code and has to rebuild the cache. Two sources of non-determinism compounding. A lockfile would make re-runs deterministic: same lockfile, same code, every time.
No dependency tree visibility. npm has npm ls. Cargo has cargo tree. You can inspect your full dependency graph, find duplicates, trace how a transitive dependency got pulled in. Actions gives you nothing. You can’t see what your workflow actually depends on without manually reading every composite action’s source. A lockfile would be a complete manifest of your dependency tree.
Undocumented resolution semantics. Every package manager documents how dependency resolution works. npm has a spec. Cargo has a spec. Actions resolution is undocumented. The runner source is public, and the entire “resolution algorithm” is in ActionManager.cs. Here’s a simplified version of what it does:
// Simplified from actions/runner ActionManager.cs
async Task PrepareActionsAsync(steps) {
// Start fresh every time - no caching
DeleteDirectory("_work/_actions");
await PrepareActionsRecursiveAsync(steps, depth: 0);
}
async Task PrepareActionsRecursiveAsync(actions, depth) {
if (depth > 10)
throw new Exception("Composite action depth exceeded max depth 10");
foreach (var action in actions) {
// Resolution happens on GitHub's server - opaque to us
var downloadInfo = await GetDownloadInfoFromGitHub(action.Reference);
// Download and extract - no integrity verification
var tarball = await Download(downloadInfo.TarballUrl);
Extract(tarball, $"_actions/{action.Owner}/{action.Repo}/{downloadInfo.Sha}");
// If composite, recurse into its dependencies
var actionYml = Parse($"_actions/{action.Owner}/{action.Repo}/{downloadInfo.Sha}/action.yml");
if (actionYml.Type == "composite") {
// These nested actions may use mutable tags - we have no control
await PrepareActionsRecursiveAsync(actionYml.Steps, depth + 1);
}
}
}
That’s it. No version constraints, no deduplication (the same action referenced twice gets downloaded twice), no integrity checks. The tarball URL comes from GitHub’s API, and you trust them to return the right content for the SHA. A lockfile wouldn’t fix the missing spec, but it would at least give you a concrete record of what resolution produced.
Even setting lockfiles aside, Actions has other issues that proper package managers solved long ago.
No registry. Actions live in git repositories. There’s no central index, no security scanning, no malware detection, no typosquatting prevention. A real registry can flag malicious packages, store immutable copies independent of the source, and provide a single point for security response. The Marketplace exists but it’s a thin layer over repository search. Without a registry, there’s nowhere for immutable metadata to live. If an action’s source repository disappears or gets compromised, there’s no fallback.
Shared mutable environment. Actions aren’t sandboxed from each other. Two actions calling setup-node with different versions mutate the same $PATH. The outcome depends on execution order, not any deterministic resolution.
No offline support. Actions are pulled from GitHub on every run. There’s no offline installation mode, no vendoring mechanism, no way to run without network access. Other package managers let you vendor dependencies or set up private mirrors. With Actions, if GitHub is down, your CI is down.
The namespace is GitHub usernames. Anyone who creates a GitHub account owns that namespace for actions. Account takeovers and typosquatting are possible. When a popular action maintainer’s account gets compromised, attackers can push malicious code and retag. A lockfile with integrity hashes wouldn’t prevent account takeovers, but it would detect when the code changes unexpectedly. The hash mismatch would fail the build instead of silently running attacker-controlled code. Another option would be something like Go’s checksum database, a transparent log of known-good hashes that catches when the same version suddenly has different contents.
How Did We Get Here?
The Actions runner is forked from Azure DevOps, designed for enterprises with controlled internal task libraries where you trust your pipeline tasks. GitHub bolted a public marketplace onto that foundation without rethinking the trust model. The addition of composite actions and reusable workflows created a dependency system, but the implementation ignored lessons from package management: lockfiles, integrity verification, transitive pinning, dependency visibility.
This matters beyond CI/CD. Trusted publishing is being rolled out across package registries: PyPI, npm, RubyGems, and others now let you publish packages directly from GitHub Actions using OIDC tokens instead of long-lived secrets. OIDC removes one class of attacks (stolen credentials) but amplifies another: the supply chain security of these registries now depends entirely on GitHub Actions, a system that lacks the lockfile and integrity controls these registries themselves require. A compromise in your workflow’s action dependencies can lead to malicious packages on registries with better security practices than the system they’re trusting to publish.
Other CI systems have done better. GitLab CI added an integrity keyword in version 17.9 that lets you specify a SHA256 hash for remote includes. If the hash doesn’t match, the pipeline fails. Their documentation explicitly warns that including remote configs “is similar to pulling a third-party dependency” and recommends pinning to full commit SHAs. GitLab recognized the problem and shipped integrity verification. GitHub closed the feature request.
GitHub’s design choices don’t just affect GitHub users. Forgejo Actions maintains compatibility with GitHub Actions, which means projects migrating to Codeberg for ethical reasons inherit the same broken CI architecture. The Forgejo maintainers openly acknowledge the problems, with contributors calling GitHub Actions’ ecosystem “terribly designed and executed.” But they’re stuck maintaining compatibility with it. Codeberg mirrors common actions to reduce GitHub dependency, but the fundamental issues are baked into the model itself. GitHub’s design flaws are spreading to the alternatives.
GitHub issue #2195 requested lockfile support. It was closed as “not planned” in 2022. Palo Alto’s “Unpinnable Actions” research documented how even SHA-pinned actions can have unpinnable transitive dependencies.
Dependabot can update action versions, which helps. Some teams vendor actions into their own repos. zizmor is excellent at scanning workflows and finding security issues. But these are workarounds for a system that lacks the basics.
The fix is a lockfile. Record resolved SHAs for every action reference, including transitives. Add integrity hashes. Make the dependency tree inspectable. GitHub closed the request three years ago and hasn’t revisited it.
Further reading:
The AI Wildfire Is Coming. It's Going to be Very Painful and Incredibly Healthy.
ceodinner.substack.com
At a recent CEO dinner in Menlo Park, someone asked the familiar question: Are we in an AI bubble?
One of the dinner guests, a veteran of multiple Silicon Valley cycles, reframed the conversation entirely. She argued for thinking of this moment as a wildfire rather than a bubble. The metaphor landed immediately. Wildfires don’t just destroy; they’re essential to ecosystem health. They clear the dense underbrush that chokes out new growth, return nutrients to the soil, and create the conditions for the next generation of forest to thrive.
As I reflected on the wildfire metaphor, a framework emerged that revealed something deeper, built on her reframing. It offered a taxonomy for understanding who survives, who burns, and why, with specific metrics that separate the fire-resistant from the flammable.
The first web cycle burned through dot-com exuberance and left behind Google, Amazon, eBay, and PayPal: the hardy survivors of Web 1.0. The next cycle, driven by social and mobile, burned again in 2008–2009, clearing the underbrush for Facebook, Airbnb, Uber, and the offspring of Y Combinator. Both fires followed the same pattern: excessive growth, sudden correction, then renaissance.
Now, with AI, we are once again surrounded by dry brush.
The coming correction will manifest as a wildfire rather than a bubble burst. Understanding that distinction changes everything about how to survive and thrive in what comes next.
When the brush grows too dense, sunlight can’t reach the ground. The plants compete against each other for light, water, and nutrients rather than against the environment.
That’s what Silicon Valley feels like right now.
Capital is abundant, perhaps too abundant. But talent? That’s the scarce resource. Every promising engineer, designer, or operator is being courted by three, five, ten different AI startups, often chasing the same vertical, whether it’s coding copilots, novel datasets, customer service, legal tech, or marketing automation.
The result is an ecosystem that looks lush from above: green, growing, noisy. But underneath, the soil is dry. Growth becomes difficult when everyone’s roots are tangled.
In that kind of forest, fire serves as correction rather than catastrophe.
Wildfires don’t just destroy ecosystems. They reshape them. Some species ignite instantly. Others resist the flames. A few depend on the fire to reproduce.
The same is true for startups.
These are the dry grasses and resinous pines of the ecosystem: startups that look vibrant in a season of easy money but have no resistance once the air gets hot.
They include:
AI application wrappers with no proprietary data or distribution
Infrastructure clones in crowded categories (one more LLM gateway, one more vector database)
Consumer apps chasing daily active users instead of durable users
They’re fueled by hype and ebullient valuations. When the heat rises, when capital tightens or customers scrutinize ROI, they go up in seconds.
The flammable brush serves a purpose. It attracts capital and talent into the sector. It creates market urgency. And when it burns, it releases those resources back into the soil for hardier species to absorb. The engineers from failed AI wrappers become the senior hires at the companies that survive.
Then there are the succulents, oaks, and redwoods: the incumbents that store moisture and protect their cores.
Thick bark: Strong balance sheets and enduring customer relationships.
Deep roots: Structural product-market fit in cloud, chips, or data infrastructure.
Moisture reserves: Real revenue, diversified businesses, and long-term moats.
Think Apple, Microsoft, Nvidia, Google, Amazon. They will absorb the heat and emerge stronger. When the smoke clears, these giants will stand taller, their bark charred but intact, while the smaller trees around them have burned to ash.
Some plants die back but grow again; manzanita, scrub oak, and toyon are phoenix-like. In startup terms, these are the pivots and re-foundings that follow a burn.
They’re teams with:
After the fire, they re-sprout — leaner, smarter, and better adapted to the new terrain.
This is where the real learning happens. A founder who built the wrong product with the right team in 2024 becomes the founder who builds the right product with a battle-tested team in 2027. The failure gets stored underground, like nutrients in roots, waiting for the next season, rather than being wasted.
Finally come the wildflowers. Their seeds are triggered by heat. They can’t even germinate until the old growth is gone.
These are the founders who start after the crash. They’ll hire from the ashes, build on cheaper infrastructure, and learn from the mistakes of those who burned. LinkedIn in 2002, Stripe in 2010, Slack in 2013. All are fire followers.
The next great AI-native companies will likely emerge here. These are the ones that truly integrate intelligence into workflows rather than just decorating them. And critically, the inference layer (where AI models actually run in production) represents the next major battleground. As compute becomes commoditized and agentic tools proliferate, the race will shift from training the biggest models to delivering intelligence most efficiently at scale.
Every few decades, Silicon Valley becomes overgrown. Web 1.0 and Web 2.0 both proved the same truth: too much growth chokes itself.
The Web 1.0 crash cleared away more than startups. It cleared noise. The Web 2.0 downturn, driven more by the mortgage crisis than the market itself, followed the same dynamic: overfunded competitors fell away, talent dispersed, and the survivors hired better, moved faster, and built stronger. Savvy companies even used the moment to get leaner, cutting underperformers and upgrading positions from entry-level to executive with hungry refugees from failed competitors.
That redistribution of talent may be the single most powerful outcome of any crash. Many of Google’s best early employees (the architects of what became one of the most durable business models in history) were founders or early employees of failed Web 1.0 startups.
And it went beyond talent alone. Entrepreneurial, restless, culturally impatient talent specifically shaped Google’s internal ethos. That DNA created Google’s experimental, aggressive, always-in-beta culture and radiated outward into the broader ecosystem for the next 10 to 20 years. The fire reallocated intelligence and rewired culture rather than simply destroying.
The 2000 wildfire was a full incineration. Infrastructure overbuild, easy capital, and speculative exuberance burned away nearly all profitless growth stories. Yet what remained were root systems: data centers, fiber optics, and the surviving companies that learned to grow slow and deep.
Amazon looked dead, down 95%, but emerged as the spine of digital commerce. eBay stabilized early and became the first profitable platform marketplace. Microsoft and Oracle converted their software monopolies into durable enterprise cashflows. Cisco, scorched by overcapacity, rebuilt slowly as networking became the plumbing for doing business.
By adding Apple, Google, and Salesforce, the story becomes one of succession as well as survival. Apple didn’t merely survive the fire; it changed the climate for everything that followed. Google sprouted where others burned, fueled by the very engineers and founders whose startups perished in the blaze. Salesforce took advantage of scorched corporate budgets to sell cloud-based flexibility, defining the SaaS model.
During the late 1990s, telecom firms raised roughly $2 trillion in equity and another $600 billion in debt to fuel the “new economy.” Even the stocks that symbolized the mania followed a predictable arc. Intel, Cisco, Microsoft, and Oracle together were worth around $83 billion in 1995; by 2000, their combined market cap had swelled to nearly $2 trillion. Qualcomm rose 2,700% in a single year.
That money paid for over 80 million miles of fiber-optic cable, more than three-quarters of all the digital wiring that had ever been installed in the U.S. up to that point. Then came the collapse.
By 2005, nearly 85% of those cables sat unused, strands of dark fiber buried in the ground. This was overcapacity born of overconfidence. But the fiber stayed. The servers stayed. The people stayed. And that excess soon became the backbone of modern life. Within just four years of the crash, the cost of bandwidth had fallen by 90%, and the glut of cheap connectivity powered everything that came next: YouTube, Facebook, smartphones, streaming, the cloud.
That’s the paradox of productive bubbles: they destroy value on paper but create infrastructure in reality. When the flames pass, the pipes, the code, and the talent remain — ready for the next generation to use at a fraction of the cost.
The Great Recession sparked a different kind of wildfire. Where Web 1.0’s flames had consumed speculative infrastructure, Web 2.0’s burned through business models and illusions. Venture funding froze. Advertising budgets evaporated. Credit tightened. Yet the survivors didn’t just withstand the heat. They metabolized it.
Apple turned adversity into dominance, transforming the iPhone from curiosity into cultural infrastructure. Amazon, having survived the dot-com inferno, emerged as the quiet supplier of the internet’s oxygen: AWS. Netflix reinvented itself for the streaming era, its growth literally running over the fiber laid down by the previous bubble. Salesforce proved that cloud software could thrive when capital budgets died. Google discovered that measurable performance advertising could expand even in recession. And Facebook (a seedling then) would soon root itself in the ashes, nourished by cheap smartphones and surplus bandwidth.
The 2008 fire selected for companies that could integrate hardware, software, and services into self-sustaining ecosystems rather than simply clearing space. The result was evolution, not merely recovery.
This cycle, though, introduces a new kind of fuel — the canopy fire.
In the past, the flames mostly consumed the underbrush (small, overvalued startups). Today, the heat is concentrated in the tallest trees themselves: Nvidia, OpenAI, Microsoft, and a handful of hyperscalers spending staggering sums with each other.
Compute has become both the oxygen and the accelerant of this market. Every dollar of AI demand turns into a dollar for Nvidia, which in turn fuels more investment into model training, which requires still more GPUs. This creates a feedback loop of mutual monetization.
This dynamic has created something closer to an industrial bubble than a speculative one. The capital isn’t scattered across a thousand dot-coms; it’s concentrated in a few massive bilateral relationships, with complex cross-investments that blur the line between genuine deployment and recycled capital.
When the wildfire comes (when AI demand normalizes or capital costs rise) the risk shifts. Instead of dozens of failed startups, we face a temporary collapse in compute utilization. Nvidia’s stock may not burn to ash, but even a modest contraction in GPU orders could expose how dependent the entire ecosystem has become on a few large buyers.
That’s the real canopy problem: when the tallest trees grow too close, their crowns interlock, and when one ignites, the fire spreads horizontally, not just from the ground up.
In Web 1.0, Oracle (the de facto database for all dot-coms) saw a symbolic collapse from $46 to $7 in 2000 before recovering to $79 by the launch of ChatGPT and $277 today. In Web 2.0’s wildfire, Google (the supplier of performance advertising) dropped 64% from $17 to $6 but exploded to $99 with ChatGPT’s launch and has since hit $257. In this cycle, the analog could be Nvidia. Not because it lacks fundamentals, but because its customers are all drawing from the same pool of speculative heat, fueled by complex cross-investments that have elicited scrutiny about whether capital is being genuinely deployed or simply recycled.
Here’s where the AI wildfire may prove even more productive than its predecessors: the infrastructure being overbuilt today goes beyond fiber optic cable lying dormant in the ground. We’re building compute capacity, the fundamental resource constraining AI innovation right now.
Today’s AI market is brutally supply-constrained. Startups can’t get the GPU allocations they need. Hyperscalers are rationing compute to their best customers. Research labs are queuing for months to train models. Ideas and talent aren’t the bottleneck. Access to the machinery is.
This scarcity is driving the current frenzy. Companies are signing multi-billion dollar commitments years in advance, locking in capacity at premium prices, building private data centers, and stockpiling chips like ammunition. The fear centers on being unable to participate at all because you can’t access the compute, not just missing the AI wave.
What happens, however, after the fire?
The same pattern that played out with bandwidth in 2000 is setting up to repeat with compute in 2026. Billions of dollars are pouring into GPU clusters, data centers, and power infrastructure. Much of this capacity is being built speculatively, funded by the assumption that AI demand will grow exponentially forever.
But there’s another dynamic accelerating the buildout: a high-stakes game of chicken where no one can afford to blink first. When Microsoft announces a $100 billion data center investment, Google must respond in kind. When OpenAI commits to 10 gigawatts of Nvidia chips, competitors feel compelled to match or exceed that commitment. The fear centers on being locked out of the market entirely if demand does materialize and you haven’t secured capacity, not just that AI demand might not materialize.
This creates a dangerous feedback loop. Each massive spending announcement forces competitors to spend more, which drives up the perceived stakes, which justifies even larger commitments. No executive wants to be the one who underinvested in the defining technology of the era. The cost of being wrong by spending too little feels existential; the cost of being wrong by spending too much feels like someone else’s problem — a future quarter’s write-down, not today’s strategic failure.
It’s precisely this dynamic that creates productive bubbles. The rational individual decision (match your competitor’s investment) produces an irrational collective outcome (vast overcapacity). But that overcapacity is what seeds the next forest.
Yet there’s a critical distinction being lost in the bubble debate: not all compute is the same. The market is actually two distinct pools with fundamentally different dynamics.
The first pool is training compute made up of massive clusters used to create new AI models. This is where the game of chicken is being played most aggressively. No lab has a principled way of deciding how much to spend; each is simply r
[Article truncated for readability...]
Discovering the indieweb with calm tech
alexsci.com
Blog Home
When social media first entered my life, it came with a promise of connection.
Facebook connected college-aged adults in a way that was previously impossible, helping to shape our digital generation.
Social media was our super-power and we wielded it to great effect.
Yet social media today is a noisy, needy, mental health hazard.
They push distracting notifications, constantly begging us to “like and subscribe”, and trying to trap us in endless scrolling.
They have become sirens that lure us into their ad-infested shores with their saccharine promise of dopamine.
Beware the siren's call
How can we defeat these monsters that have invaded deep into our world, while still staying connected?
StreetPass for Mastodon
A couple weeks ago I stumbled into a great browser extension, StreetPass for Mastodon.
The creator, tvler, built it to help people find each other on Mastodon.
StreetPass autodiscovers Mastodon verification links as you browse the web, building a collection of Mastodon accounts from the blogs and personal websites you’ve encountered.
StreetPass is a beautiful example of calm technology .
When StreetPass finds Mastodon profiles it doesn’t draw your attention with a notification, it quietly adds the profile to a list, knowing you’ll check in when you’re ready.
StreetPass recognizes that there’s no need for an immediate call to action.
Instead it allows the user to focus on their browsing, enriching their experience in the background.
The user engages with StreetPass when they are ready, and on their own terms.
StreetPass is open source and available for Firefox, Chrome, and Safari.
Inspired by StreetPass, I applied this technique to RSS feed discovery.
Blog Quest
Blog Quest is a web browser extension that helps you discover and subscribe to blogs.
Blog Quest checks each page for auto-discoverable RSS and Atom feeds (using rel="alternate" links) and quietly collects them in the background.
When you’re ready to explore the collected feeds, open the extension’s drop-down window.
The extension integrates with several feed readers, making subscription management nearly effortless.
Blog Quest is available for both Firefox and Chrome.
The project is open source and I encourage you to build your own variants.
Ubiquitous yet hidden
I reject the dead Internet theory: I see a vibrant Internet full of humans sharing their experiences and seeking connection.
Degradation of the engagement-driven web is well underway, accelerated by AI slop.
But the independent web works on a different incentive structure and is resistant to this effect.
Humans inherently create, connect, and share: we always have and we always will.
If you choose software that works in your interest you’ll find that it’s possible to make meaningful online connections without mental hazard.
Check out StreetPass and Blog Quest to discover a decentralized, independent Internet that puts you in control.
You can't drown out the noise of social media by shouting louder, you've got to whisper.
Image credits
Titans + MIRAS: Helping AI have long-term memory
research.google
The Transformer architecture revolutionized sequence modeling with its introduction of attention, a mechanism by which models look back at earlier inputs to prioritize relevant input data. However, computational cost increases drastically with sequence length, which limits the ability to scale Transformer-based models to extremely long contexts, such as those required for full-document understanding or genomic analysis.
The research community explored various approaches for solutions, such as efficient linear recurrent neural networks (RNNs) and state space models (SSMs) like Mamba-2. These models offer fast, linear scaling by compressing context into a fixed-size. However, this fixed-size compression cannot adequately capture the rich information in very long sequences.
In two new papers, Titans and MIRAS, we introduce an architecture and theoretical blueprint that combine the speed of RNNs with the accuracy of transformers. Titans is the specific architecture (the tool), and MIRAS is the theoretical framework (the blueprint) for generalizing these approaches. Together, they advance the concept of test-time memorization, the ability of an AI model to maintain long-term memory by incorporating more powerful “surprise” metrics (i.e., unexpected pieces of information) while the model is running and without dedicated offline retraining.
The MIRAS framework, as demonstrated by Titans, introduces a meaningful shift toward real-time adaptation. Instead of compressing information into a static state, this architecture actively learns and updates its own parameters as data streams in. This crucial mechanism enables the model to incorporate new, specific details into its core knowledge instantly.
Una battaglia dopo l'altra
share.google
Bob è un rivoluzionario fallito che vive in uno stato di paranoia, sopravvivendo isolato con la figlia Willa. Quando la sua nemesi riemerge e Willa scompare, l'uomo si affanna per ritrovarla, mentre combatte le conseguenze del suo passato.
Work Logs (10)
session-summary
2025-12-09
# Session Summary - 2025-12-09
## Work Done
- **Domini terapeutatorino**: Configurati 6 domini su Internet.bs, redirect 301 via Cloudflare Worker per 5 domini secondari verso terapeutatorino.com
- **Email bozza**: Creata email per Stefania e Giorgia con tono informale, riferimenti pop culture (Hogwarts), "baci spaziali"
- **Stefania Demichelis**: Aggiornato profilo con doppia email (work METI + personal gmail per progetto terapeuta)
- **Sistema EWAF**: Implementato rating Alchemy nel /bye - Earth/Water/Air/Fire per valutare sessioni
- **JSON ratings**: Creato log/ewaf-ratings.json per futuro grafico su brain.giobi.com
## Files Changed
- `.claude/commands/bye.md` - Aggiunto Step 5 EWAF rating
- `log/ewaf-ratings.json` - Nuovo file per tracking rating (futuro grafico)
- `database/people/stefania-demichelis.md` - Doppia email
- `database/projects/terapeutatorino.com.md` - Domini e redirect config
## EWAF Rating
🌍8 💧8 🔥2 💨9
**Note**:
- Earth 8: Domini configurati, email pronta, sistema EWAF implementato
- Water 8: Buon flow, capito al volo tono email e logica EWAF
- Fire 2: Solo fix NS da rifare, poca friction
- Air 9: Sistema EWAF = meta-pattern per tutte le sessioni future
## Next Steps
- Verificare propagazione DNS domini terapeutatorino (24-48h)
- brain.giobi.com: futuro grafico EWAF
Nexum REST API playground session
2025-12-09
## Work Done
- **Nexum REST API Playground** (`/system/estendo`):
- Created `EstendoRestApi.php` client for new Estendo REST API
- Built Livewire UI with live search (2+ chars → filter brands)
- Cascade: select brand → load products → filter further
- JSON preview panel for inspecting API responses
- 10-minute cache to avoid hammering API
- Full documentation in `docs/estendo-rest-api.md`
- **Rules Update**:
- Added SSH safety rule: MUST announce before connecting to remote servers
- Added classification table (production/staging/dev)
- Logged incident: attempted clone on prod server without asking
- **Cloudways API**:
- Found correct endpoint for reset_permissions: `POST /api/v1/app/manage/reset_permissions`
- Fixed permission denied issue on staging
## Commits (Nexum repo)
- `5253b98` - feat: Add Estendo REST API playground at /system/estendo
- `87ff783` - docs: Add Estendo REST API documentation
## Files Changed (Brain)
- `boot/rules.md` - SSH safety rules
- `database/projects/nexum.md` - System routes documentation
## EWAF Rating
🌍9 💧7 🔥3 💨8
**Note**:
- Earth 9: Playground funzionante, client REST, docs, commit pushati
- Water 7: Buon flow ma ho fatto cazzata iniziale (prod server)
- Fire 3: Correzioni su prod server e frainteso sync vs playground
- Air 8: Pattern Cloudways API, SSH safety rule, playground riutilizzabile
## Next Steps
- Implement actual REST API integration in SmartphoneStep (replace DB lookup with live API calls)
- Test with real data from Estendo production API
ChangeTower analysis + refund request
2025-12-09
## Work Done
- Analizzato utilizzo ChangeTower ($90/anno per monitoring siti Solmeri)
- Inviata richiesta rimborso a ChangeTower support
- Aggiunto HTML Diff Monitoring ai moonshots di StatusPilot
## Files Changed
- `database/projects/statuspilot/index.md` - aggiunto moonshot HTML diff monitoring
## Next Steps
- Attendere risposta ChangeTower per rimborso
- Quando si sviluppa StatusPilot, implementare HTML diff come feature
Test email da Antigravity
2025-12-09
Verificato con successo il funzionamento dell'invio email dal sistema Antigravity
TerapeutaTorino.com - Setup Redirect Domains
2025-12-09
## Summary
Configurati 5 domini per redirect 301 permanente verso `https://terapeutatorino.com`:
1. terapeutatorino.it
2. drdemichelis.com
3. drdemichelis.it
4. psicologademichelis.com
5. psicologademichelis.it
## Cloudflare Configuration
### Zones Created
| Domain | Zone ID | Nameservers |
|--------|---------|-------------|
| terapeutatorino.it | 742cd6c783a1c84e3cd0422b5186f457 | jeff.ns.cloudflare.com, roxy.ns.cloudflare.com |
| drdemichelis.com | 376f4ee5c155d0bec3232a640ad4f9fd | ainsley.ns.cloudflare.com, razvan.ns.cloudflare.com |
| drdemichelis.it | 87ac82e2036de34852e8eaf77a593914 | jeff.ns.cloudflare.com, roxy.ns.cloudflare.com |
| psicologademichelis.com | 9952844539e937aa40b05dcbbf12014a | ainsley.ns.cloudflare.com, razvan.ns.cloudflare.com |
| psicologademichelis.it | 89105ca5ca8250851c50f14fd4077b31 | jeff.ns.cloudflare.com, roxy.ns.cloudflare.com |
### Worker Deployment
**Worker Name:** `terapeutatorino-redirects`
**Script:** `/tmp/redirect_worker.js` (deployed to Cloudflare account)
**Functionality:**
- Intercepts all requests to the 5 domains
- Returns 301 permanent redirect to `https://terapeutatorino.com`
- Preserves path and query string (e.g., `/contatti?foo=bar` → `https://terapeutatorino.com/contatti?foo=bar`)
**Worker Routes (all active):**
| Domain | Route ID | Pattern |
|--------|----------|---------|
| terapeutatorino.it | dbeec346bb1f416a86044829acac90b5 | `*terapeutatorino.it/*` |
| drdemichelis.com | 2ef14d24df7e43b298dee752ed7d9f82 | `*drdemichelis.com/*` |
| drdemichelis.it | c4346effeb654116829946264e1a4c27 | `*drdemichelis.it/*` |
| psicologademichelis.com | 33c446e1144a46979f1fd7b050849b40 | `*psicologademichelis.com/*` |
| psicologademichelis.it | 80afca95065a44ae85c8e044e944833b | `*psicologademichelis.it/*` |
## Internet.bs Nameserver Update
**API Endpoint:** `https://api.internet.bs/Domain/Update`
**Parameter:** `Ns_list` (comma-separated nameserver list)
**Status:** ALL 5 domains updated successfully
**Update Time:** 2025-12-09
## DNS Propagation
**Timeline:** 24-48 hours expected
**Current Status (2025-12-09):** Propagating (no DNS responses yet)
### Check Commands
```bash
# Check nameservers
dig +short NS terapeutatorino.it
dig +short NS drdemichelis.com
dig +short NS drdemichelis.it
dig +short NS psicologademichelis.com
dig +short NS psicologademichelis.it
# Test redirects (once NS are active)
curl -I https://terapeutatorino.it
curl -I https://drdemichelis.com
curl -I https://drdemichelis.it
curl -I https://psicologademichelis.com
curl -I https://psicologademichelis.it
```
Expected response:
```
HTTP/2 301
location: https://terapeutatorino.com/
```
## Technical Details
### Worker Code
JavaScript Worker handling redirects:
```javascript
addEventListener('fetch', event => {
event.respondWith(handleRequest(event.request))
})
async function handleRequest(request) {
const url = new URL(request.url)
const targetDomain = 'terapeutatorino.com'
const redirectDomains = [
'terapeutatorino.it',
'drdemichelis.com',
'drdemichelis.it',
'psicologademichelis.com',
'psicologademichelis.it'
]
if (redirectDomains.includes(url.hostname)) {
const targetUrl = `https://${targetDomain}${url.pathname}${url.search}`
return Response.redirect(targetUrl, 301)
}
return new Response('Domain not configured for redirect', { status: 404 })
}
```
### API Authentication Issues Encountered
**Problem:** Account-level API token (`CLOUDFLARE_API_TOKEN`) doesn't support:
- Page Rules endpoint
- Dynamic Redirects (Ruleset API)
**Solution:** Cloudflare Workers (fully supported with account tokens)
**Reason:** Workers are deployed at account level, then bound to zones via routes.
## Next Actions
1. Wait 24-48h for DNS propagation
2. Monitor nameserver status with `dig +short NS domain.com`
3. Once Cloudflare nameservers are active, test redirects
4. Verify all paths and query strings are preserved
5. Update client documentation if needed
## Files Created
- `/tmp/redirect_worker.js` - Worker script
- `/tmp/cloudflare_deploy_worker.py` - Deployment script
- `/tmp/internetbs_final_update_v2.py` - Nameserver update script (working)
- `/tmp/check_ns_status.sh` - DNS status checker
## Notes
- Main site `terapeutatorino.com` unchanged (already working)
- 301 is permanent redirect (good for SEO)
- Worker has no latency impact (edge execution)
- All 5 domains now managed in same Cloudflare account
- Future DNS changes can be done via Cloudflare dashboard
## Status
- Configuration: ✅ COMPLETE
- DNS Propagation: ⏳ IN PROGRESS
- Testing: ⏳ PENDING (waiting for DNS)
StatusPilot health checks session
2025-12-08
# Session Summary - 2025-12-08 01:20
## Work Done
### StatusPilot Major Features
- **Cloudflare DNS management**: staging/production switch via API
- **Monitor events system**: logging with Discord notifications
- **Scheduled go-live**: automatic DNS switch at scheduled time
- **LCP non-blocking**: preflight can pass with LCP issues (informational)
- **SEO fix buttons**: Fix buttons for OG tags, meta description failures
- **Scan history**: 7-day mini bar chart on monitors list
- **Ping check every 5 min**: quick HTTP response check for uptime
- **Full health scan every 6h**: complete SEO/SSL/performance audit
- **Score calculation fixed**: proper pass/fail/warning counts
### UI Improvements
- Hide "Live" badge when already live (only show for dev/preflight)
- Mini chart with colored bars based on score (green/lime/yellow/orange/red)
- Background container for visibility
## Files Changed
- 41 files in statuspilot (3541 insertions)
- New: CloudflareDnsManager, MonitorEventLogger, HttpCheck implementation
- New migrations for DNS fields, events table, scheduled_live_at
- Updated views: index, show, missing-seo
## Stats
- 12 monitors all UP
- Scores: 74-87%
- Next full scan: ~4 hours
- Ping check: every 5 minutes
statuspilot-session
2025-12-07
# Session Summary - 2025-12-07 03:10
## Work Done
### StatusPilot - Health Checks & Autofix
- **PageSpeed API fix**: Corretto il problema con le categorie (accessibility, seo, best-practices) che non venivano passate correttamente - ora usa URL manuale con `&category=` ripetuto
- **H1 check migliorato**: Ora proporzionale alla lunghezza del contenuto (1 H1 per ~3000 caratteri accettabile)
- **Noindex check**: Modificato per skippare X-Robots-Tag header (normale per staging)
- **Cache purge endpoint**: Fixato per Breeze/Cloudways usando `do_action()` hooks corretti
- **Meta description & Open Graph**: Settati via RankMath API (verificato in DB, problema è cache Varnish)
- **Email check creato**: Nuovo check `email_configured` che verifica Mailgun/WP Mail SMTP/FluentSMTP/Post SMTP
- **WordPressCheck class**: Nuovo handler per check che richiedono autenticazione WP
## Files Changed
### StatusPilot Laravel App
- `app/Services/Checks/PageSpeedCheck.php` - fix categorie API
- `app/Services/Checks/HttpCheck.php` - H1 proporzionale, noindex skip
- `app/Services/Checks/WordPressCheck.php` - NEW: check autenticati WP
- `app/Services/HealthCheckRunner.php` - aggiunto handler wordpress
- `app/Services/WordPressConnector.php` - aggiunto getEmailStatus()
### StatusPilot WordPress Plugin
- `public/plugins/statuspilot-connector/statuspilot-connector.php` - endpoint email-status, fix cache purge
## Next Steps
- Aggiornare plugin su staging Cloudways (ZIP disponibile: statuspilot.giobi.com/downloads/statuspilot-connector.zip)
- Test preflight completo dopo cache purge
- Implementare check contact form con Puppeteer (futuro)
Domain Registration - seopilot.it & xpilot.it
2025-12-07
## Domain Registration via Internet.bs API
Registered two new .IT domains for the "pilot" family brand protection.
### Registered Domains
**seopilot.it**
- Registrar: Internet.bs
- Registration Date: 2025-12-07
- Expiration: 2026-12-07
- Price: €6.07
- Transaction ID: da063af13e1cace9f489bb45cd950257
- Transfer Auth Code: pYQuRNzUUu
- Auto-Renew: NO
**xpilot.it**
- Registrar: Internet.bs
- Registration Date: 2025-12-07
- Expiration: 2026-12-07
- Price: €6.07
- Transaction ID: d5485a6c72ad4577492c9d59d5785030
- Transfer Auth Code: ildTLxVoLr
- Auto-Renew: NO
### Registrant Contact
All contacts set to:
- Name: GIOVANNIBATTISTA FASOLI
- Email: giobi@giobi.com
- Phone: +39.3483697620
- Address: Via Santa Maria 15, 28831 Baveno (VB), Italy
- Organization: giobi.com
- Codice Fiscale: FSLGNN83C12L746Y
### Purpose
- **seopilot.it**: Brand protection for SEO automation products (RankPilot family)
- **xpilot.it**: Generic pilot brand protection, potential future product
### Technical Details
API Implementation:
- Used Internet.bs Domain/Create endpoint
- Required .IT-specific fields: DotITEntityType (1 = Italian individual), DotITNationality (IT), DotITRegCode (CF)
- All contacts (registrant, admin, technical, billing) set to same details
- Registration period: 1 year
- Total cost: €12.14
### Next Steps
- Auto-renew is currently OFF for both domains
- Consider enabling auto-renew if domains are strategic
- Monitor expiration: 2026-12-07 (1 year from today)
- Transfer auth codes saved in domain entity files
invoicepilot-flux-sidebar-fix
2025-12-07
## Work Done
- **InvoicePilot Budget Module**: Fixed sidebar and navigation issues
- Converted sidebar component to use official Flux components (`flux:sidebar.item`, `flux:sidebar.nav`, `flux:sidebar.header`)
- Added `->layout('components.layouts.app')` to Livewire Budget components (was missing, causing no JS)
- Fixed guest redirect: added `redirectGuestsTo('/auth/google')` in bootstrap/app.php
- Cleaned up view structure for budget-index and budget-show
## Files Changed (InvoicePilot)
- `resources/views/components/app/sidebar.blade.php` - Rewrote with Flux sidebar components
- `app/Livewire/Budgets/BudgetIndex.php` - Added layout method
- `app/Livewire/Budgets/BudgetShow.php` - Added layout method
- `bootstrap/app.php` - Added guest redirect
- `resources/views/livewire/dashboard.blade.php` - Fixed structure
- `resources/views/livewire/budgets/budget-index.blade.php` - Updated structure
- `resources/views/livewire/budgets/budget-show.blade.php` - Updated structure
## Next Steps
- Verify sidebar sticky behavior works in browser
- Test budget detail links navigation
- Continue with budget PDF generation feature
InvoicePilot: Portal cliente con token sicuri
2025-12-06
## Sessione
Implementazione completa del portale cliente per InvoicePilot con focus sulla sicurezza.
## Problema identificato
Il portale cliente usava la P.IVA nell'URL (`/portal/{vat}`), il che permetteva a un cliente di vedere fatture di altri tenant con la stessa P.IVA. Buco di sicurezza enorme.
## Soluzione implementata
### Token univoci per cliente
Ogni cliente ora ha un `portal_token` (32 caratteri random) che identifica univocamente la coppia tenant+client.
**URL nuovo**: `/portal/{token}` - niente più P.IVA esposta
### Modifiche
1. **Client model** (`app/Models/Client.php`)
- Aggiunto `portal_token` a fillable
- Metodo `getPortalToken()` - genera token se mancante
- Accessor `portal_url` - ritorna URL completo del portale
2. **PortalController** (`app/Http/Controllers/PortalController.php`)
- `findClientByToken()` - cerca client per token attraverso tutti i tenant
- Tutti i metodi ora accettano `$token` invece di `$vat`
3. **Routes** (`routes/web.php`)
- Semplificate: solo `/portal/{token}`, `/portal/{token}/budget/{id}`, `/portal/{token}/contract/{id}`
- Rimosse le vecchie route con P.IVA
4. **Views**
- Dashboard e client-index usano `$client->portal_url`
- Portal dashboard usa `$client->portal_token` per link interni
5. **Migration**
- Aggiunto `portal_token VARCHAR(64)` a tabella clients
- Generati token per tutti i 91 clienti esistenti
## Altre migliorie nella sessione
### Subject documenti
- Campo `subject` aggiunto a Document model
- FicSyncService ora importa `visible_subject` da FIC API
- Metodo `populateSubjects()` per aggiornare documenti esistenti
- 278 documenti su 409 ora hanno subject visibile nel portale
### Layout portale
- Badge P/F (proforma/fattura) allineati a destra con `text-right`
- Totali in fondo alle tabelle
- Oggetto documento in evidenza (colonna dedicata)
## File modificati
```
app/Models/Client.php
app/Models/Document.php
app/Http/Controllers/PortalController.php
app/Services/FicSyncService.php
routes/web.php
resources/views/livewire/portal/client-dashboard.blade.php
resources/views/livewire/dashboard.blade.php
resources/views/livewire/clients/client-index.blade.php
database/migrations/2025_12_06_add_portal_token_to_clients.php
```
## Esempio URL
```
Prima: https://invoicepilot.it/portal/IT02419370032
Dopo: https://invoicepilot.it/portal/mamI5aaYkOIxzoPtlqeMh4XZ1RukjOnm
```
## Note
- Token generati on-demand se mancanti (lazy generation)
- Ricerca cross-tenant per token è O(n) sui tenant ma accettabile (pochi tenant)
- Nessun rate limiting aggiunto (TODO futuro?)