Go Routines - Wicked Fast Concurrency in Golang

Want to write wicked fast Golang with no cost whatsoever, ever, any time? Go routines are (not) for you!


One of the features I've touted as my reason for loving Go is Go routines, Golang's first-class concurrency feature. If you're unfamiliar, many languages technically offer concurrency, but it's usually provided by third-party or bolted-on libraries. Golang treats concurrency as a first-class feature, meaning it's built into the language itself via the go keyword.

Now, if you're expecting some insane depth of explanation in this article, I'd recommend this video instead, which goes a lot deeper than I will here, or you can hang out and wait for more in-depth tutorials to come out on this blog. I wanted to softly introduce Go routines first before deep diving into concurrent and parallel programming practices, channels and the like.

I also want to make sure I understand those topics first, too...

So, what is concurrency?

Concurrency - 4x Your Processing Speed!*

Concurrency is the ability to make progress on multiple threads of execution at the same time. (Strictly speaking, running them simultaneously is parallelism; concurrency is about juggling them over overlapping time periods, but we'll blur that distinction for now.) As a simple and obvious example, let's take web servers. Would you want to use a web server that only processes one request at a time? Maybe if that web server is only being used by you, and only in a limited fashion, because even if each request is processed wicked fast, at scale you're going to run into some long wait times.

This is why modern web servers service requests concurrently: three calls to the same API endpoint are going to get serviced at the same time. It's also why it's incredibly important to design your APIs in such a way that they can feasibly operate concurrently: if each request is reading from and writing to the same file or database row, and each returned value needs to be accurate every time, you're going to run into problems! There are also certain scenarios that require blocking behavior, where execution has to wait for all threads to finish before it continues, but we will get to that. The TL;DR is that concurrency often causes dangerous or unexpected behavior if done incorrectly or carelessly.

Now, let's talk Go routines!

Golang makes it as easy as possible to implement concurrency. You simply create a function that you want to run concurrently and call that function with the go keyword before it. We're going to simulate a REST API in this example to show some of the features and quirks of concurrency.

Your first Go Routine

Let's create a function getHandler() that will simulate handling a GET request, with a route, an offset and a message. The function will print the route, wait for the number of seconds dictated by offset, and then print the message. The wait simulates some computationally or time-intensive process, like reading from a database or running a calculation.

Now let's call that function 5 times with 5 different routes, offsets and messages, then print how long it took the main function to run using the time library.

package main

import (
	"fmt"
	"time"
)

func getHandler(route string, offset int, message string) {
	fmt.Printf("[-] Route: %s\n", route)
	time.Sleep(time.Duration(offset) * time.Second)
	fmt.Printf("[-] Message: %s\n", message)
}

func main() {
	start := time.Now().UnixMilli()
	fmt.Printf("[-] Start time: %d\n", start)
	getHandler("/route1", 2, "hit route 1")
	getHandler("/route2", 4, "hit route 2")
	getHandler("/route3", 8, "hit route 3")
	getHandler("/route4", 16, "hit route 4")
	getHandler("/route5", 32, "hit route 5")
	// This all adds up to 62s of expected wait time
	end := time.Now().UnixMilli()
	fmt.Printf("[-] Total runtime: %ds\n", (end-start)/1000)
}

The output:

[-] Start time: 1684894345000
[-] Route: /route1
[-] Message: hit route 1
[-] Route: /route2
[-] Message: hit route 2
[-] Route: /route3
[-] Message: hit route 3
[-] Route: /route4
[-] Message: hit route 4
[-] Route: /route5
[-] Message: hit route 5
[-] Total runtime: 62s

Total runtime: 62 seconds

Doing the math, that's exactly as long as one would expect: run one function for 2 seconds, the next for 4, then 8, 16 and 32, and you get a total runtime of 62 seconds.

Now let's add the go keyword before three of the function calls and show the runtime again.

package main

import (
	"fmt"
	"time"
)

func getHandler(route string, offset int, message string) {
	fmt.Printf("[-] Route: %s\n", route)
	time.Sleep(time.Duration(offset) * time.Second)
	fmt.Printf("[-] Message: %s\n", message)
}

func main() {
	start := time.Now().UnixMilli()
	fmt.Printf("[-] Start time: %d\n", start)
	go getHandler("/route1", 2, "hit route 1")
	go getHandler("/route2", 4, "hit route 2")
	go getHandler("/route3", 8, "hit route 3")
	getHandler("/route4", 16, "hit route 4")
	getHandler("/route5", 32, "hit route 5")
	// This all adds up to 62s of expected wait time
	end := time.Now().UnixMilli()
	fmt.Printf("[-] Total runtime: %ds\n", (end-start)/1000)
}

The output:

[-] Start time: 1684894644833
[-] Route: /route4
[-] Route: /route1
[-] Route: /route2
[-] Route: /route3
[-] Message: hit route 1
[-] Message: hit route 2
[-] Message: hit route 3
[-] Message: hit route 4
[-] Route: /route5
[-] Message: hit route 5
[-] Total runtime: 48s

Total runtime: 48s

A lot shorter, right? The total is now just the two blocking calls: 16s + 32s = 48s. We can also see the order of the output changed: the first three handlers ran as Go routines, so they printed while the blocking calls to routes 4 and 5 were still sleeping.

Now let's add the go keyword before every function call. That'll make it run wicked fast, right?

package main

import (
	"fmt"
	"time"
)

func getHandler(route string, offset int, message string) {
	fmt.Printf("[-] Route: %s\n", route)
	time.Sleep(time.Duration(offset) * time.Second)
	fmt.Printf("[-] Message: %s\n", message)
}

func main() {
	start := time.Now().UnixMilli()
	fmt.Printf("[-] Start time: %d\n", start)
	go getHandler("/route1", 2, "hit route 1")
	go getHandler("/route2", 4, "hit route 2")
	go getHandler("/route3", 8, "hit route 3")
	go getHandler("/route4", 16, "hit route 4")
	go getHandler("/route5", 32, "hit route 5")
	// This all adds up to 62s of expected wait time
	end := time.Now().UnixMilli()
	fmt.Printf("[-] Total runtime: %ds\n", (end-start)/1000)
}

The output:

[-] Start time: 1684894749557
[-] Total runtime: 0s

Total runtime: 0s

Well... it did run wicked fast.

Now that's unexpected, right? Not necessarily. The main goroutine (the thread of execution that's running the main function) is done after the final line of code runs, but since that final line just prints the time and each handler call is happening as a Go routine, main finishes before any of the other Go routines do. When the main function returns, the program exits and takes all of the Go routines it created with it, whether they are still running or not.

We can work around this behavior (in a hack-y way) by waiting for user input using fmt.Scanln() at the end of the main function. This makes the main goroutine block on our input before ending execution, so we can simply wait until all of the Go routines have finished printing and then press Enter.

package main

import (
	"fmt"
	"time"
)

func getHandler(route string, offset int, message string) {
	fmt.Printf("[-] Route: %s\n", route)
	time.Sleep(time.Duration(offset) * time.Second)
	fmt.Printf("[-] Message: %s\n", message)
}

func main() {
	start := time.Now().UnixMilli()
	fmt.Printf("[-] Start time: %d\n", start)
	go getHandler("/route1", 2, "hit route 1")
	go getHandler("/route2", 4, "hit route 2")
	go getHandler("/route3", 8, "hit route 3")
	go getHandler("/route4", 16, "hit route 4")
	go getHandler("/route5", 32, "hit route 5")
	// This all adds up to 62s of expected wait time
	fmt.Scanln()
	end := time.Now().UnixMilli()

	fmt.Printf("[-] Total runtime: %ds\n", (end-start)/1000)
}

Now if we time it just right and press Enter as soon as we see that the fifth call has completed, there will be minimal delay and the program will functionally end when all of the Go routines have finished, leading to an expected runtime of 32s (the longest single sleep). Again... hack-y.

In Conclusion

This is your introduction to one of the many, many ways that concurrency can introduce bugs into your code, or create unforeseen edge cases that will give you headaches. Throwing concurrency at a complex problem is often more likely to create new problems than to solve the original one: you introduce uncertainty, race conditions and edge cases.

That said, concurrency can be incredibly powerful if introduced in a solid, reasonable manner. That's why this article is not going to cover everything, just the basics of concurrency.

See you in the next article!