GopherCon 2025
For the past year, I’ve been working on a small team of four in Stripe’s Developer Infrastructure organization. On Go Infra, we’re the stewards of the Go language and how Go is used at Stripe. It’s our charter to ensure that developing and deploying Go code is intuitive, fast and reliable.
Last week, I attended GopherCon 2025 in NYC on behalf of the team. It was my first GopherCon and a great opportunity to meet community members and learn about the OSS ecosystem.
I wrote some short notes (~5m read) on my experience at the conf. I’ve decided to share a version of them publicly as a blog post, in case anyone is curious about which parts we’re the most invested in at Stripe.
Who was at the conference?
A pretty diverse group of people! With the limited sample size of people I spoke to, it felt like a lot of attendees were building server software with Go in the cloud, with a minority doing data, AI, or developer productivity shaped things. There were a lot of individuals, but some companies had many people come (e.g. Google, Capital One).
I spoke with a member of the Linux Foundation trying to grow usage of Go in Nigeria, a Manchester Go meetup organizer (on the lookout for more speakers), and a maker who’s a friend of Naomi Wu. I also got the impression that a fair amount of NYC Gophers were alumni of (or otherwise linked to) Recurse Center.
I also ran into Aditya Mukerjee (early ex-Stripe) and Alan Donovan (Go team at Google), which was awesome. If GopherCon had just been running into them and nothing else, I still would have been thrilled. Some insights from my conversations with them follow…
What’s going on in the Go ecosystem?
Go was, second to Rust, the most rapidly growing language of 2024. At Stripe, we primarily use Go for infrastructure and internal tooling use cases. Out in the wild, people are using it for just about everything (from boring old “serving API traffic for $my_service in the cloud” to “rendering graphics on embedded systems with screens”).
Due to basic Go features and invariants (e.g. backwards compatibility guarantees of the SDK, general adherence to semver in Go modules, batteries-included stdlib, etc), the Go maintainers believe Go is easier for LLMs to write than other languages. At the same time, they are cognizant of the broader debate on the merits of AI in the industry, and recognize that a significant minority of the Go community are strong AI bears.
As a result, they’re taking a two-pronged AI strategy: simultaneously making Go easier for LLMs to write (e.g. through LSP <> MCP integration), while also creating defensive bulwarks to ensure that LLM-authored Go is more idiomatic and pleasant for humans to work with (e.g. through modernizers, a planned feature for 1.26+).
What did I learn?
Static analysis
We do a lot of static analysis at Stripe. To enforce code hygiene and style, we run a very large set of checks in both CI and user IDEs. These checks include the basics, like staticcheck linters, but they do lots of other things too. For example, we have custom checks that limit the set of packages highly perf-sensitive code can import, and that enforce team ownership of v0 Go modules.
Because of our heavy investment in static analysis, the highlight of my GopherCon was hearing updates on Go analyzers and gopls.
At GopherCon, we got a sneak peek into Google’s proposal for a set of high quality analyzers called modernizers, special kinds of analyzers which:
- suggest an (auto-appliable) fix
- …which uses newer Go features
- …which is safe to apply unreservedly, without introducing bugs or semantic differences at runtime
Modernizers are one proposed defensive measure against LLMs, to enforce that they produce Go code in newer style (e.g. using range iterators when available), and as a mechanism to pull up older codebases so they make use of newer Go features and aesthetics.
We also saw a sneak peek into a new go fix command which is available in preview today, and may land in a future Go release. With annotations like //go:fix inline, we’ll be able to better support whole-repo codemods using the AST, rather than just heuristics.
At Stripe, we run a very large Go monorepo with >30k packages in a single Go workspace. We’ve begun to see some popular open source tools like gopls degrade as a result. We’ve patched gopls to lazy load packages in the workspace, to avoid pitfalls like multi-minute LSP startup, and editor freezes when looking up references for core libraries in the workspace.
I see a real opportunity for Stripe to unlock even greater developer productivity gains by investing more in gopls. I’d love to live in a world where:
- gopls is fast at scale
- we consolidate all of our custom analysis runners at Stripe into just gopls
- we plug our custom analyzers into gopls
- our analyzers can take advantage of gopls’s view/snapshot caching model
- we’re able to plug other tools we operate (e.g. pre-push Git hooks, AI agents) into gopls as RPC clients
- generalizable changes we’ve made are upstreamed
We’re not quite there yet, but hearing about all the recent and upcoming changes to gopls (such as an indexed imports cache, which we’re psyched about) gives me a lot to be excited about, and reaffirms my desire (and advocacy) to invest deeper in the OSS tooling in the months and years to come.
Security
Google very recently open sourced Capslock, a security tool that allows for capability analysis at build time. You can point Capslock at a package’s source and determine whether its functions try to reach out to the network, write to the filesystem, make syscalls, etc.
You can also use Capslock to diff between Git revisions, get automatic color coded output based on the scariness of a given function using certain capabilities, and tell whether capabilities introduced into a package are direct or indirect (to defend against supply chain attacks).
This was really cool.
LLMs
As to be expected in 2025, many talks were about LLMs. Having only surface-level experience in the LLM space prior to GopherCon, my impression is that the state of the world is changing frequently, but the barrier to entry is very low (a la writing JavaScript around the time that Node and transpilers were the hot new thing). You can spin up an AI agent, MCP server, or other agentic tools with a very small amount of code.
The latest gopls release even has an MCP server built in – one more reason for us to be excited about gopls.
Performance
Datadog has invested heavily in the runtime tracing parts of the Go SDK, and thanks to their contributions, runtime/trace is now performant enough to use in many (more) production workloads.
They’ve also shipped a new feature in their SaaS offering: Datadog will automatically do critical path analysis on Go traces to determine the slowest path of your trace (denoted by the magenta arrows in the admittedly terrible photo below).
They’re looking to open source the meat and bones of this project, so that the OSS community can build on top of their research, which is awesome to hear.
Go 1.25 comes with a new flight recorder API to enable writing traces to a ring buffer. With it, most runtime traces can be dropped on the floor. You only durably write traces when events you deem significant occur, thus incentivizing the addition of tracing in places that would have been prohibitively expensive to instrument before.
Go 1.26 will likely ship with a new GC called Green Tea (named such because the engineer who prototyped it was working from Japan at the time). With Green Tea, Google reports “time spent in GC” improvements from 10-40% for many workloads. Over time, the existing Go GC will perform worse (relatively speaking) on newer hardware, due to changes in processor design like Non-Uniform Memory Access and the advent of vector instruction sets. Green Tea runs the mark-sweep algorithm over pages, not objects. This allows for decreasing the number of overall scans, more contiguous memory access, and overall better utilization of modern hardware microarchitecture. It’s neat to get performance increases for free :)
Should I (you) (not me) go to GopherCon next year?
I’d highly recommend it to anyone interested. My takes above are relevant to those of an engineer in the developer productivity space, but there was plenty to be excited about at GopherCon (for anyone using Go, regardless of skill level or industry).
Thanks!
Thanks to everyone who worked to make GopherCon 2025 happen!
This blog post summarizes my learnings from just about all the speakers on the agenda. However, I owe a special thanks to the following speakers, whose talks I drew from a little extra in my notes above:
- Cameron Balahan, Go’s Next Frontier
- Jess McClintock, The Code You Reviewed is Not the Code You Built
- Alan Donovan, Analysis and Transformation Tools for Go Codebase Modernization
- Felix Geisendörfer, Profiling Request Latency with Critical Path Analysis
I’ve archived this year’s agenda on archive.today, and I believe all the talks should be posted to GopherCon’s YouTube channel shortly.
Stripe helped pay for my ticket via the education budget that engineers receive, and management was very on board with my attending. Thanks Stripe!
Some photos
Here’s a pic of the packed Javits Center right after keynote kickoff on Wednesday:
And here’s a pic of me. Please say hi if you see me at GopherCon next year!