Go Pointers: When to Use Pointers

And when not to.

Kent Rancourt
14 min read · Jan 28, 2020

It’s been a while since my last post, but this one is a special treat — an addition to my “Go Pointers” series that is actually about… pointers.

My first exposure to pointers was in my sophomore year of high school as I worked my way through one of the earlier editions of Teach Yourself C in 21 Days. The following year, I started learning C++ in my computer science class and continued with C++ in AP Computer Science my senior year and throughout college. I graduated with a BS in Computer Science, knowing what a pointer was and how to use it. I could even do pointer arithmetic with the best of them…

But what no one ever taught me was when to use a pointer — and when not to.

After graduation, I went to work in the insurance industry, programming primarily in Java for the first ten years or so of my career. I knew, thanks to an excellent professor I’d had for my “Comparative Programming Languages” course, that anyone who says “Java doesn’t have pointers” is either uninformed or telling a white lie to abate the fears of young programmers who might be intimidated by pointers. The knowledge that an object name in Java is, in fact, a reference (i.e. a pointer) was frequently useful. The shift towards Java was a new enough trend at the time that there wasn’t a great deal of institutional Java knowledge. I often had to instruct more senior developers on why a language that uses pass by value semantics was able to mutate the attributes of objects passed as arguments to their methods.

Though pointers remained in the back of my mind, Java didn’t give me much deliberate choice about when to use a pointer and when not to, so the question became largely moot. Years later, I had a brief love affair with Ruby where the pointer situation was much the same as it was with Java.

Then I started using Go and pointers — real, unadulterated pointers — came back into my life, albeit without any pointer arithmetic this time. I’ve been coding in Go as my primary language for five years now and still I’ve never happened across good guidance on when to use a pointer — and when not to.

Fortunately, I’ve developed (what I humbly suppose is) decent intuition for when to use pointers and when not to, and, if asked, I can usually defend why I opted for or against their use. But this was hard-won knowledge gained through years of experience — and a fair number of mistakes.

Too often, I encounter Go code that exhibits (what I perceive to be) haphazard use of pointers. This is not intended as criticism of those who wrote such code — after all, I’ve been arguing here that sage wisdom on this topic is hard to come by. This blog post is, therefore, my sincere attempt to share some of the insights on the topic that I wish had been accessible to me sooner.

The Hazards of Pointers

Before we begin talking about when to use pointers and when not to, we should briefly acknowledge some of the inherent dangers of working with pointers. That is, we need to establish why overusing pointers can be a bad thing.

For me, the two biggest hazards of working with pointers are:

  • Accidental nil pointer dereferences
  • Accidental mutation of something you didn’t want mutated

If these were already obvious to you, feel free to skip ahead a few sections.

Let’s talk about nil pointer dereferences first.

Nil Pointer Dereferences

Consider this trivial bit of code:
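A minimal program exhibiting the problem might look like this (the S struct is purely illustrative):

```go
package main

import "fmt"

// S is a trivial struct used for illustration.
type S struct {
	Name string
}

func main() {
	var s *S            // nil: the zero value for any pointer type
	fmt.Println(s.Name) // s is dereferenced here, but points at nothing
}
```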

Here, the variable s is initialized to nil because that is the zero value for any pointer, regardless of what type it points to. In the statement fmt.Println(s.Name), the pointer s is automatically dereferenced when we attempt to access the Name field of the S struct that s points to, except that it doesn’t point to one.

In a language that is not memory safe, the runtime behavior under these circumstances is not well-defined. This can lead to buffer overruns, wildly unpredictable behavior, and dangerous vulnerabilities. Go, however, is memory safe, meaning the behavior is well-defined: a Go program will panic and halt execution.

Running the program above yields:
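The exact memory address and line numbers vary from run to run, but the output looks something like this:

```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x...]

goroutine 1 [running]:
main.main()
	...
exit status 2
```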

This panic is certainly better than undefined behavior, but it is not without dire consequences.

Consider the following contrived example:
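A sketch of such a program (the durations are arbitrary, but match the description below):

```go
package main

import (
	"fmt"
	"time"
)

type S struct {
	Name string
}

func main() {
	// One goroutine counts, slowly, from 0 to 9...
	go func() {
		for i := 0; i < 10; i++ {
			fmt.Println(i)
			time.Sleep(time.Second)
		}
	}()
	// ...while another waits five seconds, then deliberately triggers a
	// panic via nil pointer dereference, killing the entire program.
	go func() {
		time.Sleep(5 * time.Second)
		var s *S
		fmt.Println(s.Name)
	}()
	// Sleep long enough that, absent the panic, counting would finish.
	time.Sleep(15 * time.Second)
}
```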

In this program, we count, slowly, from 0 to 9 in one goroutine while, in another, waiting for five seconds before deliberately triggering a panic via nil pointer dereference. Notice that the panic terminates the entire program; the first of the two goroutines doesn’t continue counting. This is, in fact, the best and safest possible behavior: suppose a goroutine that continued on depended on sending values to or receiving values from (over a channel) a goroutine that had encountered a nil pointer dereference and died. The program could rapidly deadlock.

There’s a major consequence to this.

If you’re writing a program for a high-concurrency use case — something Go excels at — a nil pointer dereference doesn’t only affect the goroutine in which it occurred; it terminates your entire program. Suppose your program were a web application or API server. Many in-progress requests that had a good chance of succeeding could fail because just one of them encountered a nil pointer dereference.

Of course, the examples we’ve looked at so far are trivial and the ominous nil pointer dereference should have been easy to spot. In practical use, however, this problem may not be so glaringly obvious.

Let’s examine this hypothetical bit of code where a function call is made to retrieve data from some underlying datastore.
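Such a call site might look like this (the storage package, the GetS(...) function, and the id are all hypothetical):

```go
s := storage.GetS("some-id")
fmt.Println(s.Name) // is s guaranteed not to be nil here?
```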

Where the function signature of storage.GetS(...) is as follows:
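For illustration, imagine something along these lines:

```go
func GetS(id string) *S
```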

The GetS(...) function returns a pointer, and that pointer could be nil. Consider, for instance, the distinct possibility that when no S is found in the underlying datastore, the function deliberately returns nil. To calibrate your expectations, you need to consult the documentation for storage.GetS(...), and if that documentation doesn't exist, you're going to find yourself dissecting its code.

For sure, the responsible thing to do here is to check for a nil value returned from this function, like so:
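Continuing the hypothetical:

```go
s := storage.GetS("some-id")
if s == nil {
	// handle the not-found case however is appropriate
	return
}
fmt.Println(s.Name) // safe: s is known not to be nil
```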

Although this is the responsible thing to do, imagine exactly how easy it is to forget. Developers (myself being no exception) are marvelous at forgetting to check all the places where we (perhaps quite reasonably) expected not to encounter a nil pointer, but could still be surprised by one nevertheless.

A bit later in this article, I’ll demonstrate how the author of the storage.GetS(...) function could have written the function differently to help its users avoid accidental nil pointer dereferences.

Accidental Mutation

My second reason for not overusing pointers is the possibility of unintended mutations.

Consider this program:
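A program along these lines illustrates the point (the struct and its contents are arbitrary):

```go
package main

import (
	"fmt"
	"strings"
)

type S struct {
	Name string
}

// printUpper prints the name in upper case, but irresponsibly mutates
// the struct its argument points to rather than using a local variable.
func printUpper(s *S) {
	s.Name = strings.ToUpper(s.Name)
	fmt.Println(s.Name)
}

func main() {
	s := S{Name: "Jane Doe"} // an arbitrary example value
	printUpper(&s)
	fmt.Println(s.Name) // the mutation persisted
}
```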

If you’re taking the lessons of the previous section to heart, you’ll note that the printUpper(...) function probably should check that the value of the s argument it receives isn't nil, though that isn't the main focus of this example.

The implementation of the printUpper(...) function does something that its caller may not have anticipated: it changed the value of the Name field in the struct pointed to by s instead of using a local variable to store the upper-cased string. Certainly this is a poorly implemented function, but the worse consequence is that, because the caller passed a pointer, the caller and the function shared a common struct, and the modification to the Name field will persist beyond the end of the function call.

The output of the program is this:
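Assuming the Name field started out as "Jane Doe", for instance, both lines show the upper-cased value:

```
JANE DOE
JANE DOE
```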

Again, the implementation of the printUpper(...) function is simply awful, but that's the point. When you pass a pointer to a function, you are trusting that function to handle the data the pointer references responsibly.

This problem can also occur in ways that are harder to spot.

Here’s a similar program to the previous example:
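A sketch of such a program (again with arbitrary values):

```go
package main

import (
	"fmt"
	"strings"
)

type T struct {
	Name string
}

type S struct {
	T *T
}

// printUpper receives a copy of s, but s.T in the copy points to the
// very same T struct as the caller's original.
func printUpper(s S) {
	s.T.Name = strings.ToUpper(s.T.Name)
	fmt.Println(s.T.Name)
}

func main() {
	s := S{T: &T{Name: "Jane Doe"}} // an arbitrary example value
	printUpper(s)
	fmt.Println(s.T.Name) // the mutation persisted anyway
}
```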

Here, the s variable passed from the main program to printUpper(...) is no longer a pointer, but a variable of type S. The printUpper(...) function receives a copy of the value (since Go function calls use pass by value semantics). It would be easy to assume this solves our problem; however, the T field of an S struct is a pointer (to a T struct). When the S struct is copied, the T pointer is copied... but a copy of a pointer points to the same value as the original. The irresponsible modification to the value of s.T.Name by the printUpper(...) function, therefore, is still a modification that will persist beyond the call to printUpper(...).

As before, the output of the program is:
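Again assuming a starting Name of "Jane Doe":

```
JANE DOE
JANE DOE
```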

My First Rule of Pointers

My first rule of using pointers is don’t.

I can already hear some readers protesting this advice. Have no fear. While this rule and a few others that follow caution against using pointers, some others enumerate cases where you should consider them. My goal in declaring “don’t use pointers” as my first rule is to discourage the use of pointers as a default.

Do not start with pointers and then justify your way out of using them. Start by avoiding pointers, then justify your way into using them.

A Myth About Pointers

Many programmers are prone to believing the myth that pointers are always more efficient. I am commenting specifically about Go when I say this, and making no comment about other languages. In Go, pointers are sometimes more efficient.

If the reasoning behind the myth isn’t obvious, I’ll explain it before debunking it. Since Go utilizes pass by value semantics for function calls, all arguments to a function are copied at the time the function is invoked. If a struct passed to a function occupies many bytes or kilobytes (or more) of memory, all of that is copied. By contrast, if a pointer to the same struct is passed to a function, when that pointer is copied (assuming you’re using a 64-bit operating system), only eight bytes are copied. And those eight bytes reference the location in memory where the original struct is stored.

Based on the above, it’s easy to assume that passing pointers around is always more efficient — especially when those pointers reference large structs. (And these semantics are equally applicable to a function’s return values, by the way. Anything you return is copied. In the case of large structs, it may intuitively seem better to return a pointer.)

To bust this myth, however, I invite you to do some benchmarking — which is beyond the scope of this article. If you do this, you will be surprised to discover a broad range of cases where passing a struct (even one much larger than eight bytes), which is destined to be copied, performs better than passing a pointer to the same struct. How could this be?
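For those inclined to try, a rough starting point might look like this (the struct size and the summing workload are arbitrary choices of mine):

```go
package main

import (
	"fmt"
	"testing"
)

// big is an arbitrary struct that is much larger than an 8-byte pointer.
type big struct {
	vals [512]int64 // 4 KiB
}

//go:noinline
func sumByValue(b big) int64 {
	var total int64
	for _, v := range b.vals {
		total += v
	}
	return total
}

//go:noinline
func sumByPointer(b *big) int64 {
	var total int64
	for _, v := range b.vals {
		total += v
	}
	return total
}

func main() {
	b := big{}
	// testing.Benchmark lets us run benchmarks outside of `go test`.
	byValue := testing.Benchmark(func(tb *testing.B) {
		for i := 0; i < tb.N; i++ {
			sumByValue(b) // the 4 KiB struct is copied on every call
		}
	})
	byPointer := testing.Benchmark(func(tb *testing.B) {
		for i := 0; i < tb.N; i++ {
			sumByPointer(&b) // only an 8-byte pointer is copied
		}
	})
	fmt.Println("by value:  ", byValue)
	fmt.Println("by pointer:", byPointer)
}
```

Results vary by struct size, workload, and hardware, which is precisely why benchmarking your own case matters.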

The key to understanding this requires understanding the difference between two regions of memory — the heap and the stack. An overly-simplified differentiation between the two is that memory on the stack is managed efficiently by the CPU, while memory on the heap is managed manually by your program or, depending on the language in question, by the language runtime.

Speaking broadly, the memory for copies of function arguments (or return values) is usually allocated on the stack. When the function returns, the copy goes out of scope and the CPU will reclaim the memory. By contrast, pointers to a struct (where the pointer itself may exist on the stack) reference memory allocated on the heap. Why? Because the value referenced by a pointer doesn’t necessarily go out of scope when a given function call returns — meaning it shouldn’t be on the stack where the CPU will eagerly reclaim memory. Even if a struct’s memory is originally allocated on the stack, it may be moved to the heap when a pointer to the struct is obtained. Of course, I said I was speaking broadly. It’s a good time to note that compiler optimizations can sometimes warp these rules.

But the heap comes with additional overhead — at least in the case of Go. The Go runtime (in contrast to a language like C) utilizes garbage collection to automatically reclaim memory from the heap that is no longer referenced by any pointer. Garbage collection isn’t cheap, and the CPU cycles consumed by garbage collection grow in proportion to heap use. The bottom line is that increased use of pointers equates to increased use of the heap, leading in turn to more CPU cycles spent on garbage collection and fewer CPU cycles spent executing application logic.

Does this mean that it’s always more efficient to shun pointers? Certainly not. If you’re dealing with very large structs, for instance, pointers might still be more efficient. Ultimately, only benchmarking will determine which approach is more efficient for a given use case. My point in debunking this myth, however, has not been to assert that it’s never more efficient to use pointers. My point has been to assert that using pointers in deference to the myth without benchmarking first is, at best, a premature optimization and, at worst, could actually degrade performance.

Re-read the first rule. Start without pointers and justify your way into them.

You Don’t Need Them as Badly as You Think

Here I’m going to highlight a common and tempting use for pointers that can get you into trouble. We’ve already touched on it, in fact. Recall our example of a function call made to retrieve data from some underlying datastore?
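That is, a call site shaped like this (all names hypothetical):

```go
s := storage.GetS("some-id")
fmt.Println(s.Name)
```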

Where the function signature of storage.GetS(...) is as follows:
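Something like:

```go
func GetS(id string) *S
```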

The GetS(...) function returns a pointer, and that pointer could be nil. This may have been by design. The author of the GetS(...) function may have planned ahead for the possibility of not finding the item the caller was looking for, but chose not to treat this case as an error, so instead of returning an S struct, he or she opted to return a pointer to an S struct. When the requested item isn't found, the return value is a nil pointer.

On its face, this seems clever — and indeed it works as intended — but it forces the caller to remember to check for nil return values to avoid an accidental nil pointer dereference. Unfortunately, nothing can enforce that the caller does so.

Here’s how the signature (and implementation) of the GetS(...) function can be improved to help callers avoid such peril:
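A sketch of the improved function, with a simple map standing in for the real datastore (in the article's scenario, this would live in the storage package):

```go
package main

import "fmt"

type S struct {
	Name string
}

// data stands in for the underlying datastore.
var data = map[string]S{
	"some-id": {Name: "Jane Doe"},
}

// GetS returns the S with the given id. The second return value
// indicates whether such an S was actually found; when it is false,
// the first return value is simply the zero value of S.
func GetS(id string) (S, bool) {
	s, found := data[id]
	return s, found
}

func main() {
	if s, ok := GetS("some-id"); ok {
		fmt.Println(s.Name)
	}
}
```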

Here, we always return an S struct, although in the case where the sought-after item was not found, the zero value of S is returned. To help the caller differentiate between a positive result and a zero value, the new and improved function also returns a bool indicating whether the sought-after item was found.

At a glance, this may not seem like an improvement, because all we’ve accomplished as the author of this function is transitioning callers from possibly forgetting to check for a nil value to possibly forgetting to check the value of the bool we're returning instead. It is crucial to note, however, that the compiler will prevent the caller from invoking GetS(...), which now returns two values, in a single-value context. That is, the following produces a compilation error:
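For example (with a made-up id):

```go
s := storage.GetS("some-id") // compile error: assignment mismatch:
                             // 1 variable but storage.GetS returns 2 values
```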

This, however, will compile:
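For instance:

```go
s, ok := storage.GetS("some-id")
if !ok {
	// handle the not-found case
}
fmt.Println(s.Name)
```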

The compiler also considers it an error when a variable is declared and assigned a value but never used, so the caller is effectively forced by the compiler to use the ok variable (i.e. examine its value) or else actively ignore it, like so:
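That is, using the blank identifier:

```go
s, _ := storage.GetS("some-id") // willfully discarding the found/not-found signal
```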

It’s still possible for the caller of GetS(...) to make a mistake, but where a caller could previously have innocently forgotten to handle the case where the sought-after item wasn't found, leading to a nil pointer dereference, they now have a harder time ignoring that case, and in fact could only do so willfully. And in no event is a nil pointer dereference now a possibility.

Okay… Here’s Where I DO Use Pointers

Generally, I use pointers for three specific cases:

  • When there is no other option
  • Where a function needs to modify its receiver
  • Anywhere I wish to utilize a singleton

When There is no Other Option

The json.Unmarshal(...) function from the standard library provides an instructive example of a function that expects a pointer argument for lack of any other option. Here it is in use:

The json.Unmarshal(...) function signature looks like this:
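From the encoding/json package:

```go
func Unmarshal(data []byte, v interface{}) error
```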

It might be more intuitive for users of this function if the signature looked like this:
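Something like this, perhaps, where SomeStructType stands for whatever type the caller wanted back:

```go
// Hypothetical -- this cannot work:
func Unmarshal(data []byte) (v SomeStructType, err error)
```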

But this could never work, because it puts responsibility for instantiating the struct on the json.Unmarshal(...) function, which doesn't know what type of struct you actually intend to unmarshal the JSON into. The function signature is the way it is to work around that. The caller determines what kind of struct they want to unmarshal into by instantiating one and passing it in... but because the caller expects that struct to be populated and those changes to persist beyond completion of the function call, it had better be a pointer.

Note that when writing your own functions, this is a pattern that should surely be applied as a last resort. It’s been said that one of the best measures of code quality is “wtf’s per minute.” Functions with side effects (i.e. functions that mutate the arguments that were passed to them) are reliable wtf-generators.

Functions That Mutate Their Receiver

Consider the following program:
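A program along these lines (the values are arbitrary):

```go
package main

import "fmt"

type S struct {
	Name string
}

// SetName has a value receiver, so it operates on a copy of the struct
// it was invoked on.
func (s S) SetName(name string) {
	s.Name = name
}

func main() {
	s := S{Name: "Anonymous"} // an arbitrary starting value
	s.SetName("Jane Doe")
	fmt.Println(s.Name) // prints "Anonymous"
}
```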

This produces the following output:
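Assuming the Name field started out as "Anonymous" and SetName(...) attempted to change it:

```
Anonymous
```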

Was that result surprising?

When the main program invokes s.SetName(...), the method is invoked on a copy of the struct. That is, function receivers (s in this case) observe the same pass by value semantics as any other argument to a function. The modification to s.Name is therefore applied to a copy and not to the original.

A small change to the SetName(...) function to use a pointer receiver will cause the program to work as intended.
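That is:

```go
// SetName now has a pointer receiver, so it modifies the struct the
// receiver points to rather than a copy.
func (s *S) SetName(name string) {
	s.Name = name
}
```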

Note that this change does not force us to start using pointers to S in other places where we didn’t necessarily wish to use a pointer. For instance, our main program is unchanged:
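Something like:

```go
func main() {
	s := S{Name: "Anonymous"}
	s.SetName("Jane Doe")
	fmt.Println(s.Name) // now prints "Jane Doe"
}
```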

Here, s is an S struct and not a pointer to an S struct. The compiler is aware that the SetName(...) method takes a pointer receiver and will automatically pass a reference to s without us needing to be conscious of that. I point this out only to highlight that the decision to use a pointer as a function receiver didn't have an egregiously cascading effect that forced us to use pointers in other places where we may not have wished to do so.

Singletons

There are many cases where one may wish to create one instance of some type and use that one instance in many places and, deliberately, never make a copy of that instance. There could be any number of reasons for doing this. Some that readily spring to mind:

  • Stateful components
  • Components that utilize finite or precious resources

An excellent example of a stateful component might be an in-memory data store. Suppose multiple components in a system utilize the same datastore but, because pointers were shunned, Go’s pass by value semantics left each component with its own distinct copy of the datastore. The states of the many copies would begin to diverge immediately and almost certainly lead to unanticipated results.

An example of a component that utilizes precious resources might be one that implements a database connection (or a database connection pool). Since the database the program connects to likely supports a finite number of connections, multiple copies of a connection object, inadvertently created by shunning pointers in combination with Go’s pass by value semantics, could lead to unnecessarily greedy consumption of a finite (and possibly precious) resource.

For cases such as these, I always pass pointers around — with a small caveat. For components of such importance, I like to control the terms on which others interact with them. Coding so defensively, I am likely to create an exported interface with a non-exported implementation, where pointers to the component in question implement the interface.

For example:
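A minimal sketch of the pattern (the DataStore interface and its methods are illustrative):

```go
package main

import "fmt"

// DataStore is the exported interface through which all other components
// interact with the store.
type DataStore interface {
	Put(key, value string)
	Get(key string) (string, bool)
}

// dataStore is the non-exported implementation. Note that it is
// *pointers* to dataStore that implement the DataStore interface.
type dataStore struct {
	data map[string]string
}

// NewDataStore is the only way to obtain a DataStore. Because it returns
// a pointer wrapped in the interface, every component that receives the
// return value shares the same underlying instance.
func NewDataStore() DataStore {
	return &dataStore{data: map[string]string{}}
}

func (d *dataStore) Put(key, value string) {
	d.data[key] = value
}

func (d *dataStore) Get(key string) (string, bool) {
	value, ok := d.data[key]
	return value, ok
}

func main() {
	store := NewDataStore()
	store.Put("greeting", "hello")
	if greeting, ok := store.Get("greeting"); ok {
		fmt.Println(greeting)
	}
}
```

Because callers only ever hold the interface value, they cannot accidentally copy the underlying struct and fork its state.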

Conclusion

At this point, I hope I’ve adequately formalized the intuition I’ve learned, through experience, to apply when deciding when to use (or not use) pointers. At the very least, this is the guidance I wish had been more accessible to me when I was cutting my teeth on Go.

I do implore readers not to treat these as hard and fast rules; rather, keep whatever advice you find useful and dismiss the rest. In the best case, perhaps you will find this a helpful jump-start in developing or improving your own intuition on this subject.

If I may suggest one key takeaway — it’s rule number one. Don’t start with pointers and justify your way out of using them. Start without them and justify your way into them.


Kent Rancourt

Kent is a founding engineer at Akuity, working primarily with Kubernetes, Argo CD, and other open source projects.