
Introducing Probe: A Modern, Zero-Dependency Goroutine Pool

Michael Fox · 6 Min Read

We’re happy to announce that Amplify has open-sourced our internal goroutine pool module, Probe.

Bottom Line Up Front

Probe can be used to implement clean, reusable goroutine patterns in any Go application and has been made public under the MIT license. It supports both single reusable goroutines and pools of reusable goroutines, and it allows you to bring your own logger via log/slog Handlers. Probe has zero third-party dependencies, but does require Go 1.21 or later.

Open Source is Our Culture

At Amplify, we love to build things just for the sake of building them. We also happen to love great open source projects of all types. Whether your open source project is a 1:1 replica F/A-18C Hornet simulator cockpit, a modern Bulletin Board System that supports 56K speeds on 8-bit computers, or a project that puts a Game Boy Color CPU in an original Nintendo DMG, we can nerd out on it! When we need to build pieces of the Amplify architecture that don’t include proprietary intellectual property, it only makes sense for us to open those projects up to the community. We also believe that open source should truly mean open, with permissive licensing. We’re excited to take our first step towards our open source goals by making Probe public.

Why We Built a Goroutine Pool

Go provides excellent concurrency support baked into the language. At the core of this support are goroutines and channels. Goroutines can be thought of as something like system threads, but with much less overhead. Go also multiplexes goroutines onto system threads; for example, two goroutines could run on a single system thread if one is waiting on IO or otherwise blocked. Channels allow for easy data passing and synchronization between goroutines. Calculating data on a separate goroutine in Go (and waiting for the result) generally looks something like this:


// this code will add 10+10 in a separate goroutine and print the result
r := make(chan int)
go func() {
	r <- 10 + 10
}()
result := <-r
fmt.Println(result) // 20

Easy, right? However, things get a bit more complicated if you need a goroutine daemon that runs for the lifetime of your process. In this case, you’ll generally need a for-select loop that either awaits work on a channel or waits for a timer tick:


// this code will execute a function, doWork(), every minute in a background goroutine until CTRL+C is given
ctrlChan := make(chan struct{})
doneChan := make(chan struct{})
ctrlC := make(chan os.Signal, 1)
signal.Notify(ctrlC, os.Interrupt) // register for CTRL+C (SIGINT)
ticker := time.NewTicker(1 * time.Minute)
eventLoop := func() {
	for {
		select {
		case <-ticker.C:
			// do work every minute
			doWork()
		case <-ctrlChan:
			// do shutdown work
			doShutdown()
			// signal completion
			close(doneChan)
			// exit the event loop
			return
		}
	}
}
go eventLoop()
// wait for CTRL+C
<-ctrlC
close(ctrlChan)
// wait for shutdown work to be completed
<-doneChan

Even goroutine daemons still seem fairly simple to achieve without the need for a pool. However, it’s possible to get into situations where different goroutine daemons are started at different places in the code for different reasons by different contributors. This can become a synchronization nightmare, especially if your code needs to be able to perform graceful shutdowns. It’s also difficult to tune performance when a new goroutine is created for each piece of work that needs to be run. Whether your workload is compute or IO bound, using a limited number of goroutines often results in better throughput. As a matter of code style, we prefer to have a single pool of reusable goroutines that can execute work without the overhead of creating and collecting new goroutines and without the headache of graceful shutdown issues. We built Probe to help lay a stable foundation for our concurrent Go processes.
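
For reference, the hand-rolled version of this pattern, a fixed set of long-lived goroutines draining a shared work channel, looks roughly like the sketch below (standard library only; the channel and WaitGroup here are our own scaffolding, not part of any API). Probe exists so this boilerplate doesn’t have to be rewritten, and its shutdown re-synchronized, in every service.


// a minimal sketch of the pattern Probe packages up: 4 long-lived goroutines
// pulling functions off a shared channel, with a WaitGroup for graceful shutdown
work := make(chan func())
var wg sync.WaitGroup
for i := 0; i < 4; i++ {
	wg.Add(1)
	go func() {
		defer wg.Done()
		for fn := range work {
			fn()
		}
	}()
}
// submit work from anywhere in the application
work <- func() { doWork() }
// graceful shutdown: stop accepting new work and wait for in-flight jobs
close(work)
wg.Wait()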

What Probe Can Do

Probe exposes two concepts for reusable goroutines: Probe and Pool. Each probe is a single, reusable goroutine and each pool is a configurable collection of reusable probes. Let’s take a look at the previous examples and what they would look like using Probe.


// this code will add 10+10 on a probe and print the result
r := make(chan int)
p := probe.NewProbe(&probe.ProbeConfig{})
p.WorkChan() <- func() {
	r <- 10 + 10
}
result := <-r
fmt.Println(result) // 20

This isn’t a very compelling use case for Probe to solve, but it does show the basics: executing a function on a reusable probe. Let’s also look at the example of the goroutine daemon being executed on a pool. Instead of using a channel for control signals, we’ll use a cancellable context and 4 goroutine daemons.


// this code will execute eventLoop on 4 goroutine daemons in the pool and stop them with
// a context cancel before stopping the underlying pool
ctrlC := make(chan os.Signal, 1)
signal.Notify(ctrlC, os.Interrupt) // register for CTRL+C (SIGINT)
ctx, cancel := context.WithCancel(context.Background())
ticker := time.NewTicker(1 * time.Minute)
p := pool.NewPool(&pool.PoolConfig{ 
	Size: 4,
	Ctx: ctx,
})
eventLoop := func() {
	for {
		select {
		case <-ticker.C:
			// do work every minute
			doWork()
		case <-ctx.Done():
			// exit the event loop
			return
		}
	}
}
for i := 0; i < 4; i++ {
	p.Run(eventLoop)
}
// wait for CTRL+C
<-ctrlC
cancel() // will stop all eventLoops
p.Stop(true) // will block for any unfinished goroutines while the pool shuts down

Pools can also be passed throughout the application to limit the total number of goroutines executing concurrently and provide a single place for graceful shutdown.
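
As a rough sketch of what that can look like in practice (the Service type and handleJob method below are purely illustrative and not part of Probe’s API; we’re also assuming NewPool returns a *pool.Pool):


// illustrative only: Service is an application type, not part of Probe
type Service struct {
	workers *pool.Pool // the one shared pool, injected at construction
}

func (s *Service) handleJob() {
	// all concurrent work funnels through the shared pool, so the total
	// goroutine count stays bounded by the pool's configured Size
	s.workers.Run(func() {
		doWork()
	})
}

// at shutdown, a single call in a single place drains the whole application:
// s.workers.Stop(true)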

Why Yet Another Goroutine Pool

A quick GitHub search will turn up hundreds, if not thousands, of goroutine pools available for use. The obvious question becomes, “why did we build yet another goroutine pool at Amplify?” The answer has several parts. First, we wanted a pool that we knew would have continued support for the use cases and features that matter to us at Amplify. Goroutine pools underpin many of our critical pipelines and infrastructure, and we couldn’t afford to wait for bug fixes or necessary features as we continue to scale and build out quickly. The best way we knew to get first-class support for a goroutine pool was to build it ourselves, and we’d love to help support you in your journey with Probe as well. Second, we wanted very specific features like Go structured logging support and per-goroutine IDs. Some existing pools may have one of these features, but it was difficult to find a pool that met all of our requirements. Finally, at Amplify, we take supply chain security very seriously. We wanted to ensure that our goroutine pool had zero additional dependencies and worked well with modern versions of Go. It was in the spirit of supply chain integrity that we also made Probe public under the MIT license. We built Probe for use within our own enterprise platform, and we wanted to ensure that anyone who packages Probe in closed-source or for-profit products can do so without any licensing concerns.

Wrap Up

We hope you find Probe interesting. We’ve released documentation for Probe on pkg.go.dev and will be adding more functionality to Probe in the coming weeks and months. The first release version of Probe is v0.1.0 and, while some API changes may be necessary in the coming versions as we march towards v1.0.0, we don’t anticipate major changes to functionality. Feel free to head on over to our GitHub repo, give us a star, and check out what we’ve built!
