I Built a Portfolio Project in One Sit-Down. Claude Code Is the Fifth Agent.
Using Claude Code and Claude CoWork to build a real Postgres-to-Kafka integration pipeline as a portfolio piece, with every commit explained, every design choice defended.
Outwork Issue #7
What this issue is
I use Claude CoWork to manage agents. In Outwork #5 I described the four-agent stack I run for Outpost Intelligence — Scout, Prospector, Reacher, Writer. Today I added a fifth: Claude Code, Anthropic’s command-line coding tool. In one sit-down — long, but one — I went from npm install -g @anthropic-ai/claude-code to fifteen Git commits of a working Postgres-to-Kafka integration pipeline ready to put on my GitHub as a senior-Java portfolio piece.
This issue is a retrospective on that session. I’m writing it for two audiences. The first is you, my readers, who want to know whether Claude Code is worth your time. The short answer is yes, and the long answer is below. The second audience is me — by writing what happened, I’m reinforcing my understanding of what got built, because the session was long enough that some of the engineering choices passed by faster than my brain could absorb. Writing is how I catch up.
Why I was doing this in the first place
Two weeks ago I was laid off from a federal contract I’d been on for the last four years. I’m seventy-one. I have a family to support. The Outpost newsletters are a credibility engine, not yet a revenue engine, so the income pivot has to come from contracting work in my actual technical specialty: senior Java and SQL, with side credentials in JDBC driver debugging and enterprise data integration.
A LinkedIn profile that says “available for contract” is the price of entry. What gets you actually hired is a portfolio piece on GitHub that lets a hiring manager read your code in three minutes and decide you’re a serious engineer. I didn’t have one. So today I built one.
What the project actually is, in plain language
The piece is called orders-pipeline and it lives at ~/projects/orders-pipeline on my Mac. It’s a small Java application that does one thing: it reads new “order” rows from a Postgres database, transforms them into JSON messages, and publishes them to a Kafka topic. That’s it. The point isn’t the cleverness of the transformation; the point is the production-shaped plumbing around the transformation, because that plumbing is what real integration engineers spend most of their time worrying about.
Let me unpack the three pieces.
Postgres is a SQL database, the kind of system you’d find as the backend of essentially every modern application. In my demo, Postgres holds a table called orders with columns like id, customer_id, amount, currency, and a status column that starts at NEW and progresses through IN_PROGRESS to SENT (or ERROR).
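To make that lifecycle concrete, here is one way the status progression could be modeled in Java. The enum and its transition rule are my illustration of the column's semantics, not code from the repo:

```java
// Illustrative model of the orders.status lifecycle described above.
// The name OrderStatus and the canTransitionTo helper are hypothetical.
enum OrderStatus {
    NEW, IN_PROGRESS, SENT, ERROR;

    /** A row only moves forward: NEW -> IN_PROGRESS -> SENT (or ERROR). */
    boolean canTransitionTo(OrderStatus next) {
        return switch (this) {
            case NEW -> next == IN_PROGRESS;
            case IN_PROGRESS -> next == SENT || next == ERROR;
            case SENT, ERROR -> false; // terminal states
        };
    }
}
```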
Apache Kafka is a distributed message bus — think of it as a buffer between systems that produce data and systems that consume it. When Bank A wants to tell Bank B about a wire transfer, or when an e-commerce front-end wants to tell the warehouse system about an order, the message often flows through Kafka. In my demo, I’m using a Kafka-compatible system called Redpanda for local development, which is lighter and easier to run than Kafka itself but speaks the same protocol.
Apache Camel is the third piece, and it’s the one most people haven’t heard of unless they work in enterprise integration. Camel is a framework for writing the routes that move data between systems. You describe the route in a few lines of Java — “read from this database, transform with this bean, write to this Kafka topic” — and Camel handles all the plumbing: the polling, the threading, the retries, the connection pooling, the error handling. It’s used heavily in banks, insurance, healthcare, and government contracting because it specializes in connecting old systems to new ones without rewriting either.
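A few lines of Java really is all a route takes. Here is a minimal sketch of what such a route could look like, assuming camel-core and camel-kafka on the classpath; the endpoint URIs, bean names, and options are my assumptions, not the repo's actual route:

```java
import org.apache.camel.builder.RouteBuilder;

// Hypothetical sketch of a Camel route in the spirit described above:
// poll, claim rows, fan out, transform, publish. Not the repo's code.
public class OrderSyncRouteSketch extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:poll?period=2000")                     // fire every 2 seconds
            .to("bean:orderClaimer?method=claimBatch")     // claim a batch of NEW rows
            .split(body())                                 // one message per claimed row
            .to("bean:orderEnricher?method=toEvent")       // row -> JSON event
            .to("kafka:orders?brokers=localhost:9092");    // publish to the topic
    }
}
```

Camel supplies the polling thread, the splitter, and the Kafka producer behind those URIs; the application code is just the two beans.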
The pipeline I built is exactly the kind of thing senior Java engineers do for a living, and it’s the kind of thing my contracting market actively hires for.
The five big design choices, and why each is defensible
Building this pipeline required making roughly five major decisions before I wrote a line of code. I want to walk through them because they’re the answers I’ll give in interviews when somebody asks “tell me about a project you’ve built.” Each decision is, by itself, a small interview answer.
Decision one: no Spring, no Spring Boot
In modern Java land, most people reach for Spring Boot as their default — a framework that handles dependency injection, configuration loading, web servers, and a thousand other things. It’s huge and ubiquitous.
I deliberately did not use Spring Boot. Apache Camel ships its own lightweight runtime, camel-main, that does the same job (classpath scanning, bean wiring, configuration loading) without dragging in the Spring dependency tree. The portfolio piece reads more interestingly because it shows that I know Camel can stand on its own. Anyone can write a Spring Boot Camel app; the fact that mine isn't one is the first signal that I'm working at the framework level, not just at the framework-user level.
The trade-off: about ten percent more boilerplate, in exchange for a much smaller dependency footprint, faster startup, and a clearer architectural statement.
Decision two: the atomic-claim pattern with UPDATE ... RETURNING
This is the design decision I’m proudest of, and it’s a moment in today’s session where the AI and I had a real engineering conversation.
The naive way to build a poller is: every two seconds, run SELECT * FROM orders WHERE status = 'NEW', process each row, then UPDATE orders SET status = 'SENT' WHERE id = ?. This works for one consumer. It fails the moment you run two consumers in parallel: both consumers see the same NEW rows and both try to process them.
Postgres has a clever solution called SELECT ... FOR UPDATE SKIP LOCKED. It means: lock the rows you select so other consumers can't touch them, and silently skip any rows another transaction has already locked. Beautiful. But it only works inside a transaction: the lock lives for the lifetime of that transaction, then releases.
I'd already committed to a Spring-Boot-free design, which meant I didn't have Spring's transaction manager available. Claude Code initially told me FOR UPDATE SKIP LOCKED would make multi-instance safety "free." That was wrong, and Claude Code caught its own error mid-session and surfaced it before writing any SQL.
The fix is a single Postgres-specific atomic statement:
UPDATE orders
SET status = 'IN_PROGRESS', claimed_at = NOW()
WHERE id IN (
SELECT id FROM orders
WHERE status = 'NEW'
ORDER BY created_at
FOR UPDATE SKIP LOCKED
LIMIT 100
)
RETURNING id, customer_id, amount, currency, created_at;
One statement. Atomic at the database level. Each consumer instance grabs a disjoint set of rows because of SKIP LOCKED in the subquery; the outer UPDATE flips them to IN_PROGRESS before anyone else can see them. No transaction manager needed.
This pattern — sometimes called “the atomic claim” or “the work-queue pattern” — is exactly how industrial systems like SQS, BullMQ, and many internal corporate work queues do their job. It is not beginner Postgres material. When a hiring manager asks me to draw on a whiteboard how I’d build a multi-instance work queue without a queue server, this is the answer.
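Because the claim is a single statement, plain JDBC with autocommit is enough to execute it; no transaction manager is required. Here is a hypothetical sketch of the calling code (the class and method names are my illustration; the SQL mirrors the statement above):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

// Sketch of running the atomic-claim statement over plain JDBC. With
// autocommit on, the single statement is its own transaction, so the
// SKIP LOCKED lock lives exactly as long as the UPDATE needs it to.
public class OrderClaimer {
    static final String CLAIM_SQL = """
        UPDATE orders
        SET status = 'IN_PROGRESS', claimed_at = NOW()
        WHERE id IN (
            SELECT id FROM orders
            WHERE status = 'NEW'
            ORDER BY created_at
            FOR UPDATE SKIP LOCKED
            LIMIT 100
        )
        RETURNING id, customer_id, amount, currency, created_at
        """;

    /** Claims up to 100 NEW rows and returns their ids. */
    public List<Long> claimBatch(Connection conn) throws Exception {
        List<Long> claimed = new ArrayList<>();
        // Postgres JDBC returns the RETURNING rows as a result set,
        // so executeQuery (not executeUpdate) is the right call here.
        try (PreparedStatement ps = conn.prepareStatement(CLAIM_SQL);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                claimed.add(rs.getLong("id"));
            }
        }
        return claimed;
    }
}
```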
Decision three: DLQ-first error handling
When something fails inside the pipeline — say, Kafka is temporarily down — there are two coherent design choices about what to do with the failed message.
Mark-and-investigate: leave the row in the database with status = 'ERROR' and write a separate investigation tool that lets a human look at error rows and decide what to do. More moving parts, more honest in some sense, but it implies you actually have a human ready to investigate.
Dead-letter queue (DLQ): define a second Kafka topic — orders.dlq in my case — and configure the route to retry three times with two-second backoffs, then send the failed message to the DLQ topic instead of the main topic. The database row gets marked SENT either way, because from the database’s perspective the message was processed, just to a different destination. Downstream investigation happens by tailing the DLQ topic.
I went with DLQ-first. The reasoning: it produces a more uniform Postgres state (every row eventually ends up SENT unless there’s a true catastrophe), it leverages Kafka’s strength as a durable message store, and it’s the pattern most modern integration teams converge on. The ERROR state in the database is reserved for the rare case where the DLQ write itself fails, which is a system-level problem requiring intervention regardless.
In an interview, this is the answer to “what happens when Kafka is down for thirty seconds.” My route retries, eventually succeeds, and the human never has to know. If Kafka is down for thirty minutes, the messages eventually flow to DLQ and the database stays clean. If both topics are down, we have a worse problem than a database status column can solve.
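In the real project, Camel's error handler implements the retry-then-DLQ policy; the logic it encodes can be sketched framework-free. The helper below is my illustration (the three-attempt and two-second figures follow the article; the names are hypothetical):

```java
import java.util.function.Consumer;

// Dependency-free sketch of the retry-then-DLQ policy: try the main
// publish up to maxAttempts times with a fixed backoff, then divert
// the message to a dead-letter publisher (orders.dlq in the article).
public class DlqPolicy {
    interface Publish { void send(String message) throws Exception; }

    public static void sendWithDlq(String message,
                                   Publish mainTopic,
                                   Consumer<String> dlqTopic,
                                   int maxAttempts,
                                   long backoffMillis) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                mainTopic.send(message);
                return; // success: the row can be marked SENT
            } catch (Exception e) {
                if (attempt == maxAttempts) {
                    dlqTopic.accept(message); // retries exhausted: divert to DLQ
                    return;                   // the row is still marked SENT
                }
                Thread.sleep(backoffMillis);  // e.g. 2000 ms between attempts
            }
        }
    }
}
```

Either exit path returns normally, which is exactly why the database row ends up SENT in both cases.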
Decision four: JSON payload for v1, Avro for v2
The Kafka message payload could be in any format. The most common choices are JSON (human-readable, schema-less) and Avro (binary, schema-enforced, requires a schema registry).
Avro is the more impressive answer on a resume. It’s also a lot more setup — you need a separate schema registry container in your local dev, you need .avsc schema files in your repo, you need different serializer libraries on the producer and consumer sides.
I shipped v1 with JSON. The README explicitly says Avro+Schema Registry is on the v2 roadmap. That gives me two stories in one repo: the v1 demonstrates the pipeline; a future v2 commit demonstrates schema evolution. Reviewers care more about a candidate who knows what’s deferred and why than one who builds the whole castle on day one.
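For concreteness, here is roughly the shape the v1 payload takes. The record mirrors the columns named earlier; the toJson method is a dependency-free stand-in for what a JSON library like Jackson does in the real project, shown only to make the payload visible:

```java
import java.math.BigDecimal;
import java.time.Instant;

// Illustrative shape of the v1 JSON payload. Field names follow the
// orders columns named earlier; toJson() is a hand-rolled stand-in for
// a real serializer, not the project's actual serialization code.
public record OrderEvent(long id, long customerId, BigDecimal amount,
                         String currency, Instant createdAt) {

    public String toJson() {
        return String.format(
            "{\"id\":%d,\"customerId\":%d,\"amount\":%s,\"currency\":\"%s\",\"createdAt\":\"%s\"}",
            id, customerId, amount.toPlainString(), currency, createdAt);
    }
}
```

A v2 move to Avro would replace this human-readable string with a binary encoding validated against a registered schema.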
Decision five: every commit small and atomic
This is the design decision that runs through the entire project. Fifteen commits, each scoped to a single concern, each with a conventional-commits prefix (chore, build, fix, feat, refactor, docs), each with an explanatory body that says why not just what.
The git history reads as a careful engineer working through a problem step by step:
docs: add README with architecture, run steps, and roadmap
feat(infra): add local-dev stack and runtime config
feat(sql): add atomic-claim and mark-done queries
refactor(domain): remove pipeline status from OrderEvent
feat(routes): add OrderSyncRoute (Postgres -> Kafka)
feat(transform): add OrderEnricher bean for row -> event mapping
feat(domain): add OrderEvent record for the Kafka payload
feat(config): bind HikariCP DataSource and Jackson ObjectMapper
feat(app): add camel-main bootstrap
build: rename groupId to io.github.leonardsibelius.orders
fix: replace copyright placeholder in LICENSE with author name
build: scaffold Maven project with Camel runtime and dependencies
chore: initialize repository
A hiring manager who reads this log learns more about my engineering judgment than from any cover letter. Each commit is reviewable on its own. Each commit message anticipates the next reviewer’s question.
This isn’t an accident of working with an AI. It’s a deliberate practice I had to instruct Claude Code to follow. The default behavior of a tool like this is to make large changes with vague messages. Saying “make atomic commits with meaningful messages — this repo will end up on my GitHub as a portfolio piece, and the commit history matters as much as the final code” was the single most important sentence I typed today.
What Claude Code, with CoWork, actually felt like to use
It was a three-person pairing session with two fast colleagues who read documentation faster than I do and remember more of it. The dynamic was:
I describe what I want at the architectural level — “build a Postgres-to-Kafka pipeline with these design choices.”
Claude Code proposes the next file or commit, walks me through it in three or four paragraphs of explanation, then waits for my approval.
CoWork and I either approve, or push back. Pushing back happened more than I expected and was always productive.
Claude Code makes the change atomic, commits it with a real message, then moves to the next file.
The pushbacks were the interesting part.
The “stop, don’t reset” correction. Mid-session, Claude Code proposed a git reset --hard that would have undone a perfectly good commit just to remake it from scratch. The reasoning was over-paranoid compliance with an instruction I’d given earlier. I told it to stop, accept the current state, and move on. Claude Code did, and apologized for the overcorrection. The lesson: when an AI is over-complying with a rule, push back. The tool defers to you.
The atomic-claim discovery. This was the most important moment of the session. Claude Code had earlier promised that FOR UPDATE SKIP LOCKED made multi-instance safe “for free.” Hours later, just before writing the SQL files, it caught its own error: the lock only holds inside a transaction, which my Spring-Boot-free design didn’t have. Rather than ship the bug, Claude Code stopped, surfaced the issue, and proposed three coherent paths forward. We picked the atomic-claim pattern, and the project ended up structurally stronger because the AI caught its own mistake.
That last moment is the one I want to highlight to anyone considering Claude Code for serious work. The tool is not just an executor. It is capable of noticing inconsistencies in its own prior reasoning and surfacing them before they ship. That is a senior-engineer behavior. Not every model does it. Mine did, today.
What I and CoWork had to do that Claude Code couldn’t
Three things, all important.
Make the architectural decisions. Spring or not. Atomic claim or transaction manager. DLQ-first or mark-and-investigate. JSON or Avro. These are judgment calls about trade-offs, not implementations of known patterns. Claude Code presented options; I picked.
Tell it what kind of git history I wanted. The atomic-commits-with-good-messages pattern was my instruction, not the default. Without it the session would have produced one giant commit titled “initial implementation” and the portfolio value would have been roughly half.
Push back when the AI was wrong or over-cautious. Three times today. Each time the project improved. An AI that you don’t ever push back on is one whose ceiling is its own training distribution. Push back, and the ceiling becomes yours.
What’s still left for tomorrow
The project isn’t finished. Three concrete next steps:
Smoke-test locally. I don’t have Docker installed on the machine where I was building today, so the actual end-to-end run (Postgres + Kafka + the Camel app talking to both) is tomorrow morning’s first task. Static validation passed for every file; the architecture is internally consistent. But static checks aren’t the same as actually running it.
Push to GitHub. Only after the smoke test passes. A broken-on-first-clone portfolio piece is worse than no portfolio piece.
A v1.1 commit that adds the "stuck-IN_PROGRESS reaper": a scheduled route that resets rows that have sat claimed for more than five minutes. It's documented in the README as a known v1 limitation; turning it into a real commit later is the portfolio-progression story I get to tell next.
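The reaper's core is one more Postgres statement. The sketch below is my guess at its shape (the five-minute interval follows the article; the class and exact SQL are hypothetical):

```java
// Hypothetical sketch of the "stuck-IN_PROGRESS reaper" SQL: return any
// row claimed more than five minutes ago to the NEW pool so a healthy
// consumer can re-claim it. A scheduled Camel route would execute this.
public class StuckOrderReaper {
    static final String REAP_SQL = """
        UPDATE orders
        SET status = 'NEW', claimed_at = NULL
        WHERE status = 'IN_PROGRESS'
          AND claimed_at < NOW() - INTERVAL '5 minutes'
        """;
}
```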
What this means for the Outpost stack
The four-agent stack from Outwork #5 — Scout, Prospector, Reacher, Writer — runs the intelligence side of Outpost. Scout reads federal court records and clinical trial registries; Prospector identifies prospects; Reacher writes outreach; Writer turns research into briefing documents.
Claude Code is the engineering side. It builds the technical assets. The portfolio piece I just shipped is one example. The next example might be the data-pipeline tooling Outpost will eventually need for its own briefing-delivery system. Or the analytics dashboards. Or the customer onboarding flow.
The stack is now five tools, not four. One person plus five AIs, still. The business is still one person. But the surface area of what one person can credibly build has just expanded by another half-dimension.
The honest reflection
If you're a peer of mine — senior engineer, decades of experience, watching younger candidates pull away in interviews because they "know AI tools" — the most useful thing I can tell you tonight is: the gap is not as wide as it feels. I went from never having opened Claude Code to a fifteen-commit working portfolio piece in one sitting. The barrier to entry is hours, not months.
The barrier to being good with the tool is longer than that. But being competent with it, enough to demo a workflow in an interview, is genuinely a one-day investment.
Tomorrow I install Docker, run the smoke test, push to GitHub, and start drafting cover letters. The project is at ~/projects/orders-pipeline. The job hunt starts in earnest.
— Walt Parkman, Outpost Intelligence

